
MAT Working Group session
25 May 2016, 2 p.m.

CHAIR: Hello. I think it's time to get started. We have a terribly big room today for the number of people who chose to take part in this Working Group. So, come up to the front, or if you choose to sit where you are, just remember to be part of it. It would be a little bit more cosy if we could feel a little bit closer to the people we're talking to. But whatever you want to do...

It's time to get started. I am Nina, I am the co‑chair with Christian. I will be taking you through the next hour and a half, and we do have a packed agenda for the session.

First, again, welcome. And then we have a little bit of housekeeping to run through.

We have a scribe, she is sitting right there; she will take care that everything is duly noted. We have a stenographer, who is busy working already. I am impressed, I love you guys, you are so great.

Thanks.

And then the Jabber channel is also running, and for the people watching on the video stream, remember the video might be slightly delayed, so your questions will come in and will be answered, but there might be a little delay in what you see.

We really encourage questions. But please remember that when you want to ask a question, go to the microphone, wait in line, please state who you are and what your affiliation is, and then ask your question.

The next thing is, we have the minutes from the last meeting. We are supposed to approve them, so does anybody have any comments on the minutes? They were distributed on the mailing list. And there are no comments, which means they are approved, and everything is good.

So, the agenda for this session, we sent that out on the mailing list as well, and so we're going through what I'm doing right now, and then we have a number of really good presentations that I hope you are all going to enjoy.

Does anyone have any comments or additions to the agenda? Anything that needs to be talked about last minute? No... so this is the agenda we're going to go with.

So, for the first presentation, he is right there getting ready to come up here. We're really excited about this presentation, because he is using Atlas to measure other protocols than just the usual traceroutes and whatever we're doing it for. So, come up here.

MORITZ MULLER: Hello. Thanks for the introduction. We at SIDN wanted to measure DNSSEC adoption at ISP resolvers using the RIPE Atlas probes, and I want to give you the methodology I was using.

At SIDN we have around 5.6 million domain names registered; of these, 2.5 million domain names are signed with DNSSEC, which makes us the largest signed zone, as you might call it, in the world, which makes us very proud. Unfortunately, signing is only one part of making DNS more secure; validating is also quite a big issue. And this is why we implemented these measurements.

So, we wanted to encourage mostly Dutch ISPs to enable DNSSEC validation at their resolvers, and therefore we needed to know which ISPs are doing validation and which are not ‑‑ maybe they are doing permissive mode, or they don't do any validation at all ‑‑ and therefore we set up these measurements.

So, at first, we looked at other measurement approaches. Of course we know the measurements of Geoff Huston at APNIC, but we wanted to see whether we can measure this adoption also with our own data and maybe take a bit more lightweight approach.
So we looked at the traffic we see at our name servers. At the moment we store the traffic of two of our name servers, and this amounts to roughly 30% of the traffic that we see. And every resolver from which we see more than 1,000 DNSSEC‑related queries we count as a validating resolver.
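For illustration, here is a minimal Python sketch of that passive counting step, assuming a query log represented as (resolver IP, query type) tuples; the 1,000‑query threshold is the one mentioned above, while the log format and the choice of DNSSEC‑related query types are assumptions made for the example.

    from collections import Counter

    # Toy version of the passive counting described above: given
    # (resolver_ip, qtype) tuples from a name server query log
    # (hypothetical format), keep resolvers that sent more than
    # 1,000 DNSSEC-related queries.
    DNSSEC_QTYPES = {"DNSKEY", "DS"}
    THRESHOLD = 1000

    def validating_candidates(query_log):
        counts = Counter(ip for ip, qtype in query_log if qtype in DNSSEC_QTYPES)
        return {ip for ip, n in counts.items() if n > THRESHOLD}

    # tiny fake log: one busy DNSSEC-aware resolver, one that only asks for A records
    log = [("192.0.2.1", "DNSKEY")] * 1500 + [("192.0.2.2", "A")] * 5000
    print(validating_candidates(log))   # {'192.0.2.1'}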

We then show this information on our website. So here we count the number of queries that we see from resolvers that we count as validating resolvers and from those that we count as non‑validating resolvers, and we show this number for each autonomous system. But this measurement has several downsides. First of all, we are not sure who does the validation: the resolver might be a resolver of a customer, not the resolver of an ISP, and we are also not sure what this resolver is actually doing with the signatures. Does it do validation, is it in permissive mode, or does it just fetch the records for other resolvers down the line?

So, we looked at our beloved RIPE Atlas network and we set up our measurements there. We selected 500 RIPE Atlas probes, mostly in the Netherlands, but this can be adapted to any other country as well where you have quite a number of Atlas probes. And we set up two queries: one query resolves a domain name which is validly signed with DNSSEC, and the second resolves a domain which has a validation error, so validation should fail.

We make sure that the probes are dual stack and also that the probe is using its list of local resolvers, so that the probe is not resolving the DNS query itself. And then we classify the resolvers. Non‑validating resolvers are counted when the response for the validly signed domain is not authenticated and we also get a normal response for the bogus domain. Validating resolvers are counted when we get RCODE 0, so a valid response, for the validly signed domain and it is authenticated, and, importantly, the bogus domain gets a SERVFAIL. That's a sign that the resolver is validating. And the last mode of operation is permissive mode: we also get an authenticated response for the validly signed domain, but we also get RCODE 0 for the bogus domain, so although the validation fails, this error is just ignored.
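As a rough sketch of that classification logic, the following Python (using the dnspython library) asks one resolver for a validly signed name and a deliberately broken name and looks at the RCODE and the AD flag; the two domain names and the resolver address are placeholders, not the ones actually used in the measurement.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    GOOD = "valid.example.nl."   # hypothetical correctly signed name
    BAD = "bogus.example.nl."    # hypothetical name with a broken signature

    def classify(resolver_ip):
        def ask(name):
            q = dns.message.make_query(name, "A", want_dnssec=True)
            r = dns.query.udp(q, resolver_ip, timeout=5)
            return r.rcode(), bool(r.flags & dns.flags.AD)

        good_rcode, good_ad = ask(GOOD)
        bad_rcode, _ = ask(BAD)

        if good_rcode == dns.rcode.NOERROR and good_ad:
            if bad_rcode == dns.rcode.SERVFAIL:
                return "validating"      # bogus answer rejected
            if bad_rcode == dns.rcode.NOERROR:
                return "permissive"      # validates, but ignores the failure
        return "non-validating"

    print(classify("192.0.2.53"))        # placeholder resolver address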

This kind of setup has several problems, or challenges. First of all, here we have a setup: on the left side you see the probe, on the right side you see the upstream resolver whose IP we actually want to know, 74.125.47.14. But if you look at the result of the measurement, then we only see the IP address here on the right, so in this case it's the anycast IP address of Google, 8.8.8.8. Or, even worse, if we have some kind of DNS forwarder in between, then we only see the local IP address, in this case 10.10.10.10, which has even less information or value for us.

So, we set up a third measurement, where we query a special domain name, and if you query this domain name, then we get back in the response the IP address of the upstream resolver; in this case we get for the A record the 74. and so on and so forth, which is the actual IP address that we want to know.
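One way to do this kind of lookup from a script, again with dnspython, is to query a "what is my resolver" service; whoami.akamai.net is one well-known instance of such a service, but treat the exact name used here as an assumption.

    import dns.resolver

    # The A record returned by this special zone contains the address of the
    # resolver that actually asked the authoritative server the question.
    answer = dns.resolver.resolve("whoami.akamai.net", "A")
    print("upstream resolver seen by the authoritative side:", answer[0].address)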

Of course, in the future, we could also just set up our own bogus and correctly signed domains and then use the unique ID of the RIPE Atlas probe, and we could see directly which probe or which resolver was responsible for the query. But for now, that was kind of a quick solution and we used the Akamai service.

A second problem is if we have some kind of other middleboxes in between. If you look here, for example at probe A, we have some kind of validating DNS proxy in between, like dnsmasq for example, and that means that the validation is actually happening on this validating DNS proxy and not on the upstream resolver in the end. But if you look at the measurement, then we could assume that the upstream resolver is doing the validation. So, that's kind of a problem. We also get a problem if we have two probes using the same upstream resolver and one probe is behind this kind of validating DNS proxy and another probe is just behind a plain DNS forwarder or queries the upstream resolver directly. At one probe we might see that the responses are validated, and at the other one that no validation is done, and then it's a bit tricky how we count the upstream resolver.

We have decided to only count resolvers for which we have seen two independent measurements from two different probes, and these measurements give us the same results. After five weeks ‑‑ we used 500 Atlas probes, though not every probe is online all the time ‑‑ we saw 65 unique resolvers (I count unique resolvers as IP addresses) for which we got two independent measurements from two different probes. And we also required to have seen more than 1,000 queries from these resolvers, because we want to ignore the small resolvers which are only used by small organisations.

If we ignore this threshold, then we saw a total of 154 unique resolvers.

These 65 resolvers were located in nine unique autonomous systems, so we have Google resolvers, resolvers of OpenDNS, we also have resolvers of university networks in the Netherlands and we have some resolvers of ISPs. So, this is what we were actually looking for. And if you look at the numbers of how many are validating and non‑validating, we see that 24 of these resolvers are validating resolvers and 41 are non‑validating resolvers, which doesn't sound too bad. If you look then at the number of queries that we see from these resolvers, then we see that only 65% of the queries come from validating resolvers, so only a small share of the total number of queries.

So, if you compare our results with the results of the folks at APNIC, we see that APNIC measures DNSSEC validation at 12% for the Netherlands, but we only see 6%. If we take all the resolvers into account, so all the 154 resolvers, then we see a DNSSEC validation rate of 10%, but for now we did the more conservative measurement in this case.

This also has some other downsides. We only have quite a small set of resolvers. For different ISPs we also have measurements for their resolvers, but if they use, for example, a big resolver farm with many different IP addresses, then we usually only have one probe querying a given resolver and therefore we don't count this result. And also, we don't have resolvers of mobile networks, because RIPE Atlas probes are usually not in mobile networks. And if you want to look at the measurements, you can find the links here in the slides.

As future work, we want to see why we have this big gap between the APNIC measurement and our measurement. One explanation is apparently that we only count a small set of resolvers. And my colleagues are already busy with contacting ISPs in the Netherlands directly and trying to encourage them to roll out DNSSEC validation on their resolvers. If we continually run this measurement, we hope to see this effort in our measurements as well. Lastly, we are also interested in getting feedback from ISPs about our measurements: for some resolvers we see that they might be in permissive mode, which contradicts information that we hear from the ISPs directly. And that would be very helpful as well.

Lastly, I want to refer to our statistics website, where we show some other statistics that we collect on our domain name servers. I think that's it. Thank you for your attention. Questions?



CHAIR: Do we have any questions?

AUDIENCE SPEAKER: Hello, Vaibhav Bajpai ‑‑ I have a question and a comment. So, you mentioned that you use 500 probes in the Netherlands. Can you tell us what fraction of these probes are using 8.8.8.8 as their resolver?

MORITZ MULLER: I could look that up. I can't tell you right away. It's a large number. I'll come back to you.

AUDIENCE SPEAKER: I also expected that. So my second comment would be: the presentation shows it's useful to know the resolver that is going to be used by the probe, and the probe knows what resolver it is going to use. So perhaps the RIPE Atlas team can actually expose this through the probe, because you can see that it's quite a bit of work to figure out what resolver is being used, and any kind of measurement will be affected by it, depending on whether you use a local resolver or an open resolver like 8.8.8.8.

AUDIENCE SPEAKER: If I could comment on this. Philip Homburg, RIPE NCC. The probe knows where it's going to send the packets. So, as soon as you run your first measurement, then, for all the probes, you know what they're going to do. I think we have APIs for that; if we don't, then we can probably add that. The thing that you don't know is where it's going to come out on the other end, and that's something that we just don't know. So, you have to find that out yourself. There is no way that we can magically figure it out, unless we set up infrastructure like he was discussing, or maybe have special echo things, as also mentioned, but it's not a trivial thing for us to do.

AUDIENCE SPEAKER: Sebastian Castro. Nice work, I love it. I have a challenge for you. Now, using this experiment you are in a position to find the addresses of validating resolvers, and with the data you have for .nl, you can look up which of those resolvers actually query your servers, so you might be in a position to, not extrapolate, but derive a pattern: this source address, with this kind of traffic, is very likely to be a validating resolver. And you can apply that to all your past traffic, and it will give you a new view of the state of validation.

MORITZ MULLER: That's a very good comment, thank you. We already looked at that a bit, and the results from the active measurements sometimes overlap with the results that we see with our passive measurements. So this indicates that we can do this kind of research on that basis as well, definitely. Thank you.

CHAIR: Great. Thank you. Very, very interesting.

(Applause)

So the next one we have up is...

VAIBHAV BAJPAI: It's a long title. Hi, I am a Ph.D. student, supervised by Professor Jurgen Schonwalder. Today I'm going to talk about vantage point selection for v6 measurements: benefits and limitations of RIPE Atlas tags. You will figure out what I mean by the title. This is joint work with Steffie, and Robert and Emile from the RIPE NCC.

So, look at the plot; it is showing you the evolution of RIPE Atlas probes over time. And what you see is that around 9.3K probes are connected around the globe as of May 2016. This should not be news to anybody. Starting in 2013, public APIs were released by RIPE Atlas to allow us to provision measurements on probes. However, the selection of probes at that time was largely limited to geography-based filters, such as using latitude or longitude, or network-based filters, such as providing a prefix and then you select some probes.

In July 2014, a feature was added by RIPE Atlas to add tags to probes. And soon after, there was a capability to allow vantage point selection using these tags.

There are currently two types of tags in the RIPE Atlas system: system tags and user tags. System tags are tags generated by the RIPE Atlas platform using built-in measurements, so they are frequently updated, every four hours, and they are fairly stable and accurate. If you look at the plot, it is showing you the distribution of the number of probes tagged with these tags. And what you see is that popular system tags are tags that highlight the state of DNS and IP connectivity. So, they will tell you whether the probe is able to resolve A or AAAA records, or whether it is able to reach the v4 Internet and the v6 Internet successfully. You also have tags that allow you to figure out which kind of hardware is being used. So using v4 and ‑‑

So, why am I telling you all of this? My interest is in v6 measurements, and I want to know where all the dual stack probes are. In order to do this, I have two criteria. I look for probes which have the same ASN both over v4 and v6, because I want to do latency measurements and I want to make sure that the probe host actually has dual stack native connectivity ‑‑ most of the probes are hosted by geeks, and if you don't get native connectivity you sign up with a tunnel broker, so I want to eliminate those tunnels ‑‑ and I look for probes which have these v4 and v6 system tags.

So, if you apply these criteria, you will see in the plot the evolution of dual stack probes over time, and what you see is that around 2,200 probes are dual stack today, as of May 2016. And I want to stress this: this is the largest source of vantage points for IPv6 measurements across any large-scale measurement platform. So, I encourage you to use it for IPv6 measurements.
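A sketch of that selection against the RIPE Atlas v2 probe API might look like the following; the tag and field names (system-ipv4-works, system-ipv6-works, asn_v4, asn_v6) are how I understand the current API, so double-check them against the documentation before relying on this.

    import requests

    URL = "https://atlas.ripe.net/api/v2/probes/"
    params = {
        "status": 1,                                   # connected probes only
        "tags": "system-ipv4-works,system-ipv6-works", # v4 and v6 both usable
        "page_size": 500,
    }

    dual_stacked = []
    while URL:
        page = requests.get(URL, params=params).json()
        params = None                 # the "next" URL already carries the query string
        for probe in page["results"]:
            # same ASN over v4 and v6, as a proxy for native dual stack
            if probe["asn_v4"] and probe["asn_v4"] == probe["asn_v6"]:
                dual_stacked.append(probe["id"])
        URL = page.get("next")

    print(len(dual_stacked), "dual-stacked probe candidates")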

Okay. In addition to knowing the dual stack probes, you also want to know what the region-based and network-based bias is that comes with using these probes. So, the first thing you want to do is to see where all these probes are actually located geographically. What you see is that around 91 countries are covered by these probes ‑‑ when I say probes from now on I will always mean dual stack probes ‑‑ around 91 countries are spanned and around 838 ‑‑ what you see is, if you are actually spawning a measurement from Germany, it is likely you are measuring the IPv6 performance of Deutsche Telekom, because most of the probes are hosted by DTAG. Similarly, if you are measuring from the US, it's highly likely you are measuring performance from within the Comcast network. So it's good to know this.

One thing you see is that in these top five countries you don't see Belgium in the list, so why is Belgium interesting? If you see the footnote, there are the IPv6 statistics reported by Google, and Belgium is at the top with 42% IPv6 adoption as of May 2016, but you don't see it in the list. So it makes you wonder: perhaps probes do not reflect the v6 user population across the globe. I'm not the first one looking at this problem. Geoff Huston from APNIC has tried to perform this weighting of IPv6 user population against the Google IPv6 adoption statistics. I'm using this idea but applying it to RIPE Atlas probes.

So let's use this APNIC data set of IPv6 user population. If you look at the plot, red is the percentage of v6 users in each country and blue is the percentage of dual stack probes in that country. So, what this plot allows you to do is: if you do a subtraction between red and blue and sort the result, you will get the countries which have a very high v6 user population but a lower number of probes. So, now look at the table. Now Belgium pops up at the top. It likely has fewer probes than it should have. This should be seen as a contribution for people deploying probes: perhaps we should target these countries now, to allow better weighting for v6 measurements. So, for example, you see Japan, with 17 million IPv6 users and around a 15% v6 usage ratio, has only 32 probes. This can be useful for targeting the deployment of probes.
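The weighting idea itself is simple arithmetic; the sketch below compares each country's share of the world's IPv6 users with its share of dual stack probes, using made-up numbers rather than the real APNIC or Atlas figures.

    # Hypothetical inputs: IPv6 users (millions) and dual stack probe counts per country.
    v6_users = {"BE": 3.1, "JP": 17.0, "DE": 9.5, "US": 25.0}
    probes = {"BE": 40, "JP": 32, "DE": 600, "US": 350}

    total_users = sum(v6_users.values())
    total_probes = sum(probes.values())

    gap = {
        cc: 100 * v6_users[cc] / total_users - 100 * probes[cc] / total_probes
        for cc in v6_users
    }

    # The largest positive gaps are the most under-represented countries.
    for cc, g in sorted(gap.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{cc}: {g:+.1f} percentage points")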

Now let's move from region-based bias to network-based bias. The next thing I want to look at is: which are the networks hosting all the v6 probes? So if you use PeeringDB and map the ASNs, you will see in the plot the evolution of probes by network type, and the good news is that 80% of the probes are hosted by service provider networks, out of which around 60% of the probes are hosted at homes. And if you look at the table on the bottom, this provides a very even split of dual stack probes across DSL, cable and fibre links. So what you see is that this sample of probes is a very good sample for doing IPv6 measurements from service provider networks, so if you want to do some kind of study, use this platform.

Now, all this while we have been using system tags. There is one more bit, called user tags. User tags are tags you can apply yourself: if you go on the website for a probe you host, you can put any sort of tag that you want. The plot shows you the distribution of user tags and the evolution of popular user tags, and what you see is that, largely, people like to tag probes with things that are centred around the home, like NAT, no NAT, home, fibre, DSL, cable, right. So, the first thought would be: why not use some of these tags? If I want to do measurements from all cable service providers, maybe I can use this cable tag and do IPv6 measurements. But be careful. Because, if you look at the frequency of tag changes, system tags are regularly updated because that's part of the system, but we are humans. We don't regularly go to the web page and update the user tags. So, if you look at the plot you'll see that less than 2% of probe hosts have ever updated their user tags. So, it's possible that when you received the probe you tagged it as a home probe, and then you moved it to your office and you forgot to change the tag. The first thing I encourage you to do is, if you are hosting a probe, go to the probe web page and make sure that your tags are correct, because it's useful for the people who are actually using these tags for doing measurements. And try to keep them updated.

And so, why did I do all of this? What is the point of doing vantage point selection? I participated in the hackathon, where I used all these dual stack probes to measure IPv6 performance to dual stack websites. It's a toy, be careful, but you can play with it. It's at this URL, so click on it and you can see what the performance of any website is over both v4 and v6.

So, what did we see today? We saw that user tags tend to become stale over time. However, system tags are very good: they refresh every 4 hours and are stable and accurate.
We have used these system tags to identify dual stack probes. And we find that around 2,200 probes are dual stack, covering 91 countries and 813 ASNs. 80% of these probes are deployed in service provider networks, with around 700 probes connected at homes. There is a very good, even split between DSL, cable and fibre deployments. However, some countries like Belgium and Japan are under-represented in this sample, so it would be good to have some more probes deployed in these regions.

Thank you. I'm happy to take questions.

AUDIENCE SPEAKER: Hi. Brian Trammell. I thought I would continue the tradition of the next speaker coming up and asking the current speaker a question.

So, on the takeaway there, you said that user tags tend to become stale over time because the tags don't change. You actually got me to look at my probe and make sure that my tags were up to date as well, and they were. Do you have any idea about the rate of change of the underlying property that the tag is supposed to give you information about? So, you say that tags don't change, but also where the probe is doesn't change that often sometimes, right, or it moves from one IPv6-tunnelled home cable network to another IPv6-tunnelled home cable network. So, have you, and if you haven't, would you look at tags that are clearly wrong? So, something that's coming from Google's AS and says that it's home cable tunnelled v6 ‑‑ have you looked at that at all?

VAIBHAV BAJPAL: Not yet. But that's the follow‑up question.

BRIAN TRAMMELL: I look forward to your next presentation.

CHAIR: Anyone else? Thank you very much.

(Applause)

Next we have Brian. And Brian is a returning speaker in the Working Group on a returning subject as well, so it's going to be very exciting to hear what he has been working with since he gave his first presentation. Thank you.

BRIAN TRAMMELL: Hello. We're talking about Internet path transparency measurements using RIPE Atlas. Using RIPE Atlas is new since the last time I was here, in Bucharest, but the Internet path transparency measurement thing is not. This is even the same slide from MAT at RIPE 71. The problem that we're concerned with here, for those who weren't there, is the decreasing end-to-endness of the Internet. A lot of this is policy. A lot of this is: we have firewalls because we want the security that those provide, we have NATs because we're using this old IPv4 thing and would like to continue using it. But the more of these boxes that you put into the Internet, the harder and harder it is to actually be able to deploy new things.

I spend a lot of time in the IETF in the transport area, where we like to think about deploying new things. And a lot of times these discussions go towards: well, that will never work because of NATs, or because I saw this one middlebox this one time that breaks the packet in a certain way, therefore we can never, ever do that. And the goal here is to get data that can get us out of this trap. The goal is to say: how bad are these problems? Where do these problems occur? Are there things that we can do that maybe will work in 90 or 95 or 99% of the Internet, and are there fallbacks that we can use?

These same measurements are interesting in operations, because as some of these new features do actually get traction and get deployed ‑‑ ECN, which I was talking about last time ‑‑ it's another tool for troubleshooting.

The way that we do active measurement of path transparency is simple; it's a two-step methodology. You throw a bunch of packets at the Internet and see how the Internet reacts ‑‑ some of them make it to the other side, or arrive with different properties. In the ideal case, when you're looking at a single path, you want to control both end points of that path. So you send the experimental traffic, you send the control traffic, you look at the difference between what you sent and what you received, and that gives you information about what worked. And then the source and destination have to talk to some coordinator. This is hard to do if you want to cover a lot of paths in the Internet, because you need a lot of end points. You need a large set of devices where you control both end points, and you coordinate these things. It's not easy to set one of these up.

More scalable is the approach shown here, where you are basically doing one-ended testing and you're using assumptions about what each hop along the network, or what the end point, is going to send back to you as part of the protocols that are running, in order to give you that feedback channel. Traceroute does this. There are other tricks that you can use here. And when you use multiple sources, all of which are sending control traffic, and some of it works and some doesn't, then, if you take the traceroute data along with that, you can start to deduce where the brokenness is. That's the approach used by tracebox; I don't know if tracebox has been presented here, and if it hasn't, we should get them to come and talk about it. It's a pretty cool hack.

So the question that we want to ask today is: can we run the Internet over UDP? The reason we want to ask this question is that for all of these protocols that we want to run, putting traffic onto the Internet with new protocol numbers breaks a lot of the time, because all the NAT boxes expect the first four bytes of the packet to be the ports ‑‑ it's an additional piece of information that's used for forwarding ‑‑ and if they don't see a 6 or a 17 in the protocol field, then they don't know that they can do that; they go, I don't know what this is, drop it on the floor, or do a DMZ trick. It's really difficult to get new protocols to deploy this way.

So, there is lots of current work on taking new transport protocol work and wrapping it in UDP. There's wide availability of UDP APIs in user land. So, there is a lot of current work here; there is this new effort in the IETF called PLUS, which I'm part of. And the question here is: if we're doing all of this work on actually encapsulating these protocols in UDP, is it safe? Will it work? Are there operational practices that hinder it? We have this problem that when we do testing, we don't own a whole bunch of machines and we don't want to buy a botnet, and the idea from the Plenary talk yesterday, where we could just do crowdsourcing, didn't occur to us, so we used RIPE Atlas, where there is a bunch of probes.

BRIAN TRAMMELL: They are distributed. We have this RIPE Atlas thing: can we do this sort of UDP versus TCP A/B testing? And we went to the RIPE Atlas people and said we'd like to put arbitrary UDP and TCP out and see what happens. They said: have you heard of this thing called traceroute? In RIPE Atlas, you can set the initial TTL for a traceroute, so you don't actually have to start from 1 and go up to 64 or 255 or whatever. You can start the TTL at a relatively high number and it will spit out a single packet. It will have a TTL of 199, it will make it to the end and come back. Anything on the path that's reactive, that might break because of traceroute, will only see a single packet; it looks like a single connection attempt or a single UDP packet. So we went and did basic connectivity measurements with a high TTL, etc., to try and find out how many probes are affected and whether this blocking is path or access dependent.
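A sketch of that "single packet" trick through the RIPE Atlas v2 measurement API could look like this; the field names (first_hop, max_hops, packets, protocol) are my reading of the traceroute measurement definition, and the API key and target are placeholders.

    import requests

    ATLAS_KEY = "your-api-key-here"
    definition = {
        "type": "traceroute",
        "af": 4,
        "target": "anchor.example.net",
        "description": "single-packet UDP probe",
        "protocol": "UDP",
        "first_hop": 64,        # start (and stay) at a high TTL
        "max_hops": 64,
        "packets": 1,
    }
    body = {
        "definitions": [definition],
        "probes": [{"requested": 128, "type": "area", "value": "WW"}],
        "is_oneoff": True,
    }
    resp = requests.post(
        "https://atlas.ripe.net/api/v2/measurements/",
        params={"key": ATLAS_KEY},
        json=body,
    )
    print(resp.status_code, resp.json())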

And we saw something really surprising, actually. I'm still not entirely certain what's going on here. There is a lot to unpack in this graph. Here we have 128 probes and 32 anchors, and we order them from west to east. Over all the trials we did, the bluer it is, the more often the UDP attempt worked better than TCP, and the redder it is, the more often the TCP attempt worked better. We were expecting to see a fair amount of UDP blocking; what we saw was TCP blocking. We chose a port that we didn't expect to see a lot of blocking on, and the TCP firewall rules, or whatever is going on, tend to block way more than the UDP. So ‑‑ I mean, there's like one anchor here that has a little bit of a UDP connectivity issue in Asia, and one, two, three probes where UDP connectivity might be a little bit off compared to TCP. So that was interesting. Then we dug into the RTT bias, and we found that here it's mostly probe dependent. So, here again, the bluer, the better it is for UDP; the redder, the better it is for TCP. UDP is slower here and here, but otherwise there's a kind of very low background difference ‑‑ there's a bit more variance on paths, especially from Europe to Asia, which is what you would expect: the longer the path, the higher the RTT, the more variance you are going to see.

And we saw ‑‑ this is again UDP versus TCP ‑‑ a lot more interference going on and a lot more variance on TCP port 80 when we compared them. And we take that to be evidence of a lot of middleboxes on these paths that are actually trying to help the web; they are doing acceleration, proxies and so forth.

What we didn't see here was a lot of UDP blocking; we didn't see a lot of probes that just straight up blocked UDP. Wait, no, sorry, I forgot there's one more slide on the RTT biases. We looked at v4 versus v6 to see the initial RTT bias when we make an attempt over UDP and TCP. So over here UDP is faster, over here TCP is faster, and we see that the spread is a lot tighter on IPv6 than on v4. That outlier is the APNIC anchor in February of 2016. And the encouraging thing for this "can we run the Internet on UDP" question is that the median line is slightly toward UDP, so in this case UDP is just a little bit faster, which we also did not expect to see.

We didn't see a whole lot of blocked networks in these measurements. You know, we selected 128 probes and 32 anchors and we picked well ‑‑ we picked okay, good, not a lot of UDP blockage. Then we decided to look at what the population of probes is that are on networks that might block UDP. So, we went back to the database and we looked at all of the measurements that had tried to do a UDP traceroute. We took all the probes that were involved in those that had tried to do at least nine separate UDP traceroutes ‑‑ that's the median number of UDP traceroutes in the data set, so all the ones that were above average, so we have a higher number of samples ‑‑ and we took targets that we had evidence were up. A lot of these are "shooter" as opposed to "sprayer" measurements, where somebody said, oh, I have a target that's down, I'm going to use RIPE Atlas to tell me whether or not the target is down, and it turns out the target is down. So if both protocols are broken towards that target, we don't consider it, because it's not going to tell us anything about differential treatment. Then we made sure that the probe-to-target path could connect using either TCP or ICMP. Of all of those, we found that 2,240 probes from all the measurements met the criteria, and then we counted how many never saw a packet come back on the UDP path. There were 82 such probes. These were largely on networks that had marginal connectivity. We didn't see a lot of common properties that these probes would have, but you saw that these were on networks where a lot of other measurements were failing too.
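The probe filtering just described reduces to a few predicates over per-probe summaries; this sketch assumes a made-up record format, with the thresholds taken from the talk (at least nine UDP traceroutes, a target known to be up, and a working TCP or ICMP path).

    # Sketch of the filtering: which probes are candidates for sitting behind
    # UDP-blocking networks? The record format here is hypothetical.
    def udp_blocked_probes(records):
        blocked = []
        for r in records:
            if r["udp_traceroutes"] < 9:
                continue                      # below the median number of attempts
            if not r["target_up"]:
                continue                      # can't attribute failure to the path
            if not (r["tcp_ok"] or r["icmp_ok"]):
                continue                      # probe-to-target path never worked at all
            if r["udp_replies"] == 0:
                blocked.append(r["probe_id"])  # UDP never came back: candidate blocker
        return blocked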

So, let's take this to be a data point and an indication of ground truth. If we say, can we run the Internet over UDP? We need a fallback for these 3.6%. I'll note that this is in line with measurements that Google has presented in the MAP research group in the IETF (measurement and analysis for protocols), where they said that for between 6 and 7% of the paths on which they have tried QUIC, there is some form of failure. So these are on the same order of magnitude of failures that we're looking at.

I immediately said, great, what about MTU? If you are actually going to run the Internet over UDP, and if there is anything that is going to block larger UDP packets as opposed to smaller ones, so that the path MTU is transport specific, then you are going to have a problem here. And then he helped us out by running a measurement to do it, and we can happily report that, no, basically this line is pretty flat until you get into this 1400, 14xx area. It's actually a lot less dependent than ICMP on IPv4. The gap here between the 9,100 probes and the 8,800-something ‑‑ that's the roughly 296 of these probes that block UDP. So the UDP traceroutes to these targets never ended up anywhere.

So, in conclusion, Atlas proved itself useful for estimating this UDP connectivity and the initial RTT, for answering this "can we run the Internet over UDP" question. It's a hack, using traceroute for this ‑‑ of course traceroute is a bit of a hack itself. It turns out basic UDP isn't very broken: of the access networks that we were able to look at from RIPE Atlas, it works on about 29 out of 30 of them. All the vertical structure that you saw in the earlier plots suggests that these are not path-linked but access-network-linked problems, so if you are going to build a protocol that's going to encapsulate something in UDP, it's easy to base the decision whether or not to fall back and stop using UDP on the access network, not on the properties of the path. So, this 3% failure: it's more than you'd like to see if you are saying, okay, let's roll it out, but fallback helps a lot here.

In the tradition from yesterday, I have a bonus slide. I hope this one is less depressing. Why do we care so much about UDP connectivity? Well, we're actually trying to use it to build a generic layer here that allows us to put new transport layers on top of IPv4 and IPv6, on top of the Internet layer. So, think of it as sort of a construction set for transport protocols like QUIC. We have a BoF at IETF 96 in Berlin; that's 17 to 22 July. If this is interesting to you, please come by. One of the things that this enables is in-protocol performance measurement; if you have a time machine, I recommend you go back to yesterday afternoon at 5:30 and see Mirja's talk about this, which was quite interesting. And this topic might be coming, wink wink, nudge nudge, to a RIPE BoF near you. So, thank you very much. Any questions?

AUDIENCE SPEAKER: Hi Brian, completing the full circle here. So, slide 6, where we found that TCP appears more impaired than UDP: I am wondering, is it a function of the destination port that is being used here, 33435? How would it look if you used a well-known port like port 80?

BRIAN TRAMMELL: It looks less bad but more confusing; we weren't able to explain some of the stuff going on in it, and that's why the slide isn't there. One of the things that happens, if we go forward here, is that some of these dark red numbers ‑‑ this goes back to Randy's earlier joke ‑‑ some of these are faster than the speed of light. So the RTT that we get on TCP port 80 indicates that there is something spoofing an ACK on that path. That kind of thing plays havoc with the UDP/TCP connectivity figures.

AUDIENCE SPEAKER: I just want to add a point. I think the reason why we use this port is because that's the only one available at RIPE Atlas, right.

BRIAN TRAMMELL: It was the only one available on UDP ‑‑ it was giving us 33435 ‑‑ so we wanted to have that as the control, yes.

AUDIENCE SPEAKER: We would definitely like to do more measurements with other ports on RIPE Atlas. The measurements you are referring to we did on other test beds to try to confirm those results.

BRIAN TRAMMELL: RIPE Atlas pony request: UDP source port selection for UDP traceroute. And here is Philip telling me why I can't have it.

AUDIENCE SPEAKER: Philip. The problem is that I think you are also measuring a bit of this legacy protocol. And in this legacy protocol, if you do that, you get an ICMP back and usually you only ‑‑ well, you can't get more back, but sometimes you only get a little bit of data back, so you have to encode a lot of stuff in that little bit that you get back. And then we sort of need those ports. It's possible that maybe we can create another mode of traceroute where you say, well, maybe the results will not be as accurate but you get more control over the destination ports.

BRIAN TRAMMELL: I don't actually care which hop it came from, right, because I have my TTL set very high. But that's a good point. Or, run it on IPv6. Okay. Thank you.

AUDIENCE SPEAKER: That's a good follow-up question. A good follow-up question would also be: how much of a connectivity bias do we see over v4 versus v6 ‑‑ is UDP blocked more over v4 than over v6? I think there was a slide ‑‑

BRIAN TRAMMELL: We didn't look into that, and I'm trying to remember why. It's because, in this initial one, in these grid ones, we did not use dual stack as a criterion, because we wanted maximum geographic distribution. We are rerunning this, where we do it all on dual stack hosts ‑‑ we are actually using the tags to figure out that a probe thinks it's on v6 and we do some basic measurements to confirm that. That is in the ‑‑ well, it's the third-to-next thing to do actually.

AUDIENCE SPEAKER: I'd be very interested to help with this.

BRIAN TRAMMELL: Great. Let's talk, thanks.

CHAIR: Great. Thank you. And I just want to remind you all that the time machine is actually the meeting website agenda, where there will be a video of Mirja's talk from yesterday, so... you can just jump in there.

Next up, Vesna, and you are getting ready already. She is going to give us an update on what's going on with RIPE Atlas.

VESNA MANOJLOVIC: Hi. I guess RIPE Atlas doesn't need an introduction. And this seems to be one of the names on Twitter: is this the RIPE Atlas Working Group or the MAT Working Group? Sorry, chairs. It's a pun; there is a smiley over there.

So, this is what I'm going to talk about, and I have 20 minutes, and I hope I will do it in 15 so that there is some time for discussion.

These are the numbers of probes currently around the globe. There are about nine and a half thousand active ones, and what is not shown here is another thousand or more written-off probes, so in total we have distributed about 17,000 probes. And this grey map is actually a video showing the growth of probes over time across the globe. It lasts for a whole minute, and there should be some background music to it, but it was too difficult to put it on the slide, and since we are in Tivoli, I'm going to perform it for you. So, it goes like this... da da da da da da da......

(A somersault, please!)

Going back to the serious numbers: these were the numbers in the activity report for 2015, and this is the difference since the previous time. So, we are covering more and more networks, but the Internet is growing faster than RIPE Atlas, so the percentages are still around 6% for v4 networks and around 10% for v6 networks.

We are covering a few more countries, so 184. And you can see all the other numbers displayed here. The main growth happened with the RIPE Atlas anchors: a lot of people are interested in deploying them, they are all over the globe, and we are cooperating with the other regional registries, who are sponsoring anchors for the members in their regions. That is important because Europe is well covered but the other parts of the globe still need some more attention.

And as I announced at the previous RIPE meeting, this year the support contract for the first 15 pilot anchors that are running on the Dell servers is going to expire, and we want to recommend to these hosts to switch to the new architecture. One of them already happened: that was the INEX anchor in Dublin ‑‑ thanks, Nick ‑‑ and that went smoothly. On this graph you can see when we decommissioned a certain anchor and when we deployed a new one. These are some of the logos of the anchor hosts; we are grateful for their support in deploying these larger servers.

And some more updates about the community. So, as you can see in the presentation, there are a lot of users, not all of them are active, and they prefer ping and traceroute measurements. A lot of activity is also happening on the mailing list. We are trying to look into that, and we are using some open source tools; these are just some pretty screenshots, but the article about the analysis of the mailing list behaviour is coming up soon.

We are also very grateful to our sponsors that support us financially and with their money we can buy more probes. So, thanks a lot.

And a lot of people would like to learn about RIPE Atlas. So, we have volunteers who are mostly also ambassadors, distributing probes and also giving tutorials and workshops. If you are interested in being one of them, please approach me later; we can talk about how you can adjust the material that we have to your local audience.

And for the others, we have webinars online that are happening every few months and there is a video available on YouTube that you can watch later on.

There are also more and more use cases being published, and these are the three latest ones on RIPE Labs, by Daniel, Stefan and others. And of course we had the hackathon. A lot of results from the hackathon were already presented at this RIPE meeting. The winning team is coming up next; they are going to talk about their project. And apart from having the honour of giving a presentation at the MAT ‑‑ I mean Atlas ‑‑ I mean MAT Working Group ‑‑ they also received a prize of a box of waffles, which they were kind enough to share with all of us.

We always post announcements about new features on the RSS feed and on this website, so you can follow it in the future. The main thing that we were busy with was maintaining the infrastructure and doing some tidying up, checking that everything is okay and making smaller improvements. Part of that was a security review. We had two external companies doing the review; one of them we have actually documented already ‑‑ you can find the information on this website ‑‑ and the other one just finished, and we are going to publish more information about that later on. So, if you want to know the details, please look it up there.

For the people who like to do measurements, we have increased the limit, so one measurement can now use up to 1,000 probes. And recently we also had a lot of people reporting problems with the newest probes that we have been shipping. We still don't really understand what the problem is, so we are trying to look into that, and Philip, who has been speaking at the mike here already a few times, just published an article today looking into the lifecycle ‑‑ the life expectancy, even ‑‑ of the probes since the beginning, so over the last five years, trying to find patterns that would help us understand better what we have to do to help these users who are having problems recently. So please be patient a little bit longer, and we apologise for this; of course it's not our fault, but still, we will try to fix it.

And again, an update: since the previous meeting we added 15 country code top level domain zones to DNSMON, so we are monitoring these zones with the special service that is based on RIPE Atlas anchors. For the people who do not qualify, or the people who would like to monitor their second level zone, there is something called DomainMON, which uses similar visualisations, but you have to deploy probes of your choice and you also have to spend your own credits on that. You can find more information in the article, and here is the URL of that service.

We updated a lot of documentation, we moved to the second version of the API, and we restructured the website again. Some people will be happy about that, some people are unhappy; we can't always please everybody. And do keep on giving us feedback on how we can improve.

Another feature that we were expecting to be more popular is HTTP measurements. They can only be performed towards RIPE Atlas anchors ‑‑ maybe that is the secret of why people are deploying so many anchors ‑‑ but we are still waiting for use cases to appear on RIPE Labs or in papers based on the HTTP measurements. They are relatively new, only a few months old, so this is your chance: researchers and students, you can use the existing HTTP measurements towards anchors.

We also implemented some features that are not so technical but are more for the people using Atlas. One is sharing of credits, which has been requested many times: you can have standing orders and distribute the credits within a group of researchers or people in the same organisation, or you can have shared access. And for people who give training courses, that also was a stumbling block, let's say: when you have 30 people and you have to give them credits, it takes a lot of time. So now we have vouchers that you can use to share credits with people participating in workshops, with members of other regional registries, and with people who maybe don't want to deploy a probe yet or where it's difficult to ship a probe to their country. They can get credits using this voucher ‑‑ give it a try, the first shot is for free ‑‑ and then later they can apply for their own probe, deploy it and earn their own credits.

RIPE Forum: we also introduced this new interface for the mailing list, and since then there has been some extra participation from people who are not so used to mailing list communication.

The community has been active in deploying their own monitoring tools and also incorporating our command line tools into various Linux and BSD distributions; we even tried to port them to Windows. It's still in progress, and if you are a developer or maintain packages for other distributions that are not on this list, please approach us; we would like to add these command line tools to as many distributions as possible.

So, what are we going to work on? We are currently busy with the Wi‑Fi measurements in cooperation with eduroam; that's almost done, and they will be deploying these special probes that can do Wi‑Fi in the eduroam network. After that we will start evaluating the feasibility of deploying the probe software in a VM. This is something that people keep on asking for, and we keep saying no, no, no, we are not going to do it, but okay, we will try to do it and then we will report back to you.

And we will work more on open IP map.

So for the DNSMON users, that service is going to improve a little bit, and then there are smaller features like measurements towards multiple targets, and the rest of the list of possible things that we will be working on you can see on the roadmap.

So if we have time for discussion, these are two questions that we have for you. And otherwise, we are going to post these on the mailing lists and then start a discussion there and then bring it up again at the next RIPE meeting.

So, we are keeping all the data from the last five years available online all the time, and this is starting to become an operational issue, let's say. So, we are going to suggest that we, kind of, offer older data in a different way, put it on the FTP server, something like that, and then try to negotiate with the users, the researchers and the operators: what is the frequency of the updates, how long should we keep the data, in which format should we keep it, and so on. And the other thing is, if you are doing measurements that are not public, then we might store those measurements for an even shorter time, or change something about that, in order to encourage everybody to make their measurements public and contribute back to the community that is actually helping to maintain this platform and this service.

So, a very few slides of eye candy about the hosting country. This is the distribution of probes in Denmark. So, if you are living somewhere in the areas that are not covered, please see me later on ‑‑ we have some probes here ‑‑ or talk to some ambassadors who are also walking around here and they can give you a probe. And this is where we would like to see the probes.

So, Emile looked into data about eyeball networks and calculated their market percentage, their market coverage, and then we looked into how many probes we have in each one of those networks. They are sorted by the percentage of market share ‑‑ again, you know, it's not completely scientific ‑‑ and then we looked at how many probes we have. So, in the networks where we have more than three probes, preferably we don't want any more, but in the other ones, especially where they have zero, we would like to have at least one probe. If you are from one of these networks, again, talk to me later.

We have three anchors in Denmark. And they are all in this mythical place called Ballerup. Where is Ballerup? Can somebody later point it out to me and say why everything is in Ballerup? And so, if you are deploying another anchor in Denmark, please put it somewhere else, for diversity's sake.

Emile made other tools that give a per-country look. One of them is called IXP Country Jedi. And this is the view of Denmark. It's quite yellow and it should be more green, so there is some homework to be done there. If you are interested in the methodology of this, there is a Labs article describing it, and the raw data is at this URL.

And finally, this is the distribution of, like, paths between the mesh of these probes in Denmark, and where the paths are going. They are definitely not staying in Denmark. The left-hand side is v4, the right-hand side is v6.

The DNSMON view of Denmark. We have several different ways of looking at things. Most of them are super green and boring, so I was searching until I found one that has some orange and some red, just to make a prettier picture. There are a lot of options that you can play around with in this visualisation, so if you are hosting .dk, this might be interesting for you, or if you are a user in Denmark, you might want to take a look at this.

And you all know how to take part and how to contact RIPE Atlas, so I would rather skip these slides, or just quickly go through them, and then ask you the questions. So this is it.

The bonus slide. It's a tradition. It's Towel Day today. This is my pseudo-towel. So don't panic, and ask questions.

AUDIENCE SPEAKER: Filiz Yilmaz. Thanks for the performance at the beginning, Vesna; a live performance is always better than the recorded stuff. But just before that, your slide showed some abandoned probes, which are rather high in number ‑‑ not only disconnected, I'm just talking about the abandoned red ones. Do you know the underlying reason? And I'm assuming that for that category you didn't even hear back about them, you just don't get any information any more ‑‑ is that correct?

VESNA MANOJLOVIC: So, abandoned are the probes that were connected, we have tried to contact the hosts, and they have been disconnected for more than three months. That's the category of abandoned. And it also includes some of the probes that are lost in transport, but then people let us know. And we didn't actually cover here the written-off ones, the ones that are completely lost. So, if you consider that we have distributed 17,000 probes, this is kind of the cost of shipping probes across the globe.

AUDIENCE SPEAKER: I am curious about what is behind that category. I wonder, is it that it's a black box and they don't know what to do with it, or is it more that it just doesn't work? It would be nice to know, if you could try to dig into it in the future.

VESNA MANOJLOVIC: The article that we published today is the beginning of that, and we will keep on updating that article. Thank you.

RANDY BUSH: I keep looking around for Meredith Whittaker, but if you are looking for someplace to dump a lot of data, it seems that M‑Lab might be an option ‑‑ from all the ads they sell, they can afford to buy some spinning disks for the community.

VESNA MANOJLOVIC: Thank you, I will contact Meredith and say that you recommended that solution.

AUDIENCE SPEAKER: Conor, RIPE NCC, just to reply to Randy. Thank you, Randy, but the main problem is actually not the storage; to be able to serve that data live and do all the queries and everything, we need to expand the platform along with the data. It's possible, and to be honest, even financially it's not an issue at the moment, but looking to the future, we would prefer not to expand in that way, and the main question here is: do you think that, operationally or research-wise, it's required to have all of this data available online, let's say? Because we can always provide it on FTP or as dumps of some kind, but having everything well aggregated, with processing done on the data ‑‑ do we need that or not? That's basically the gist of the question.

RANDY BUSH: I'll reveal that I had two motives. One was to give you a path to make it available online to me forever and easily. And two is, to give an incentive to M‑Lab to make M‑Lab stored data available universally, which would be a win for both sides in my insufficiently humble opinion.

As far as access, you know, the honest truth is, I told a grad student ‑‑ right now, two different projects of mine are like this ‑‑ there was a paper five years ago, it studied this, it came to this conclusion. We want to do a longitudinal study going forward. Please reproduce that experiment, not only to test that it's real, but to calibrate your tool, and then go forward. And so, I just think of those data as always available. I mean, I think the Internet ‑‑ you know, if they are going to keep all our bad stuff and all the embarrassing things you said on Facebook, well, they can keep the good data too.

AUDIENCE SPEAKER: Emile Aben, RIPE NCC. For one of the things that this visualisations that we do work on, which is open IP map, I would like to acknowledge the people who have, the operators who have actually put data into that system, like the crowdsourcing of geolocation, which makes all kinds of things possible. I know some of them are here in the room and I'd like to acknowledge that and we really appreciate that.

VESNA MANOJLOVIC: Can we have a round of applause for these people? Thank you.

AUDIENCE SPEAKER: Hi. So, my first slide was intended to be a joke, but on a serious note: more and more people are using RIPE Atlas, and at some point in time that will become an issue, so how about this: why not form a RIPE Atlas Working Group? Can we do a BoF?

CHAIR: Well, do you really think there is a need for a RIPE Atlas Working Group?

AUDIENCE SPEAKER: Personally, yes. I know that there are more studies using RIPE Atlas, so maybe there are more people who would like to come and present. But this is my personal opinion, so I'm opening it to the mike.

CHAIR: Okay.

AUDIENCE SPEAKER: Shane Kerr. So, I guess we're going to continue our tradition of the next speaker coming at the end here, but I really don't think there's a lot of need for a RIPE Atlas Working Group. I think if the RIPE NCC wants to get together with its constituents, whether those are paying customers or other people, by all means hold some workshops, things like that, but I think we have enough exposure to it and obviously derive huge benefit from it in this Working Group and other areas, but I think it's enough. So...

CHAIR: Thanks Shane.

AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC, one of the inventors of RIPE Atlas. A couple of things. I don't think we need a RIPE Atlas Working Group. But I would definitely echo those who said there might be, you know, some other content that we might do in this Working Group. I think this Working Group has served very well in giving us the feedback that we need. So, when I put my RIPE Atlas user hat on, I'd like to echo what Randy said: there is an absolute value in having long time series data. And last time I checked, the RIPE NCC wasn't cash ‑‑ didn't have a cash problem, and the storage, you know, with new technology, should actually become less expensive as we go forward. So, I think it's best to keep that data at the RIPE NCC, because then it's where it was before; we don't have to transfer it to some other place which might have different jurisdictions, different risks, stuff like that. I think we should keep it at the RIPE NCC and keep it accessible. It's not rocket science to do this.

The other question that was on the slide, which I didn't fully understand, was about private measurements. And I personally think we should slowly but surely get rid of those. I had the impression that, two years ago or so, we had a discussion in this Working Group and basically arrived at the conclusion that the only thing that's private about a private measurement is who made it, you know, who created it ‑‑ so basically the metadata of the measurement ‑‑ and all the results would be just as public as the public measurements are. And I was shocked to learn that something you didn't mention, the realtime streams, which have been expanded to give near‑realtime data in unprecedented quantity, was actually very difficult to implement because we had to filter out the results from private measurements. So, I'd like to propose that we actually task the RIPE NCC with working out a way to change whatever terms and conditions need changing so that all the results that RIPE Atlas produces are public, and, if we can get consensus, even say we totally abolish private measurements. But if we want to keep private measurements, then only the measurement specification, i.e. who defined it, where it went to, how many probes and when and all that kind of stuff, remains private, but the results would be public. If you want to keep your measurements secret, I think RIPE Atlas should not be the tool of choice.

VESNA MANOJLOVIC: I hear you, Daniel. We have had this discussion before. I have two hats: as RIPE NCC staff, of course, my life would be much easier if that weren't the case, but as a community representative, I have heard other people expressing the opposite opinion, and we will have to have this discussion again. We will have a discussion indeed.

RANDY BUSH: I will disagree with Daniel on two points. The first one is, I believe you do have a cash problem, you just can't spell it. And to illustrate: to make research easier, I have a spinning 24 terabyte NFS server that completely shadows Route Views and a bunch of other BGP stuff, sitting right next to a compute machine, and you put the grad student on the compute machine and they can do serious runs against routing data. To go pull it piece by piece from some remote server, just ‑‑ so I really mean cache, with an E.

As far as public versus private, I mean, there's a different way to think of it. You have made an amazingly good tool. Like any seriously novel tool, we get surprised by novel uses that we didn't anticipate. And I think there may be large operators who want to use the tool to monitor their network, and I think that's a useful way to look at things, and I can see them not wanting to have that data public, and I can see you not wanting to store it or have to serve it. Therefore, export and open source the back end, and allow people to just buy 1,000 probes, set up a back end and run their own garbage.

CHAIR: Daniel, can we have this as a final remark, because we are running out of time, and perhaps we could restart this discussion on the mailing list. The reason I'm cutting this down is because we still have hack‑a‑thon winners who have not yet shown us what they did. So, I'd like to get Shane and Desiree on the stage, and then please continue the discussion on the mailing list. Thank you.

SHANE KERR: Hello. Actually, is it possible for me to do, to plug in a laptop here? We don't have to. But we can do slides I guess. It's a bit boring.

DESIREE MILOSHEVIC: Hi everyone, good afternoon. So, we are going to present our little hack‑a‑thon prize and the work we did with RIPE Atlas measurements over the last weekend. Our project is called halo, and as Shane has just said, we also have a live demo, so we can demo that as well.

So we had five teams, and luckily the one with Daniel Quinn, Shane Kerr and myself won this big box of stroopwafels, which we shared with everyone. With our project ‑‑ whilst Shane is connecting ‑‑ we had a few challenges. What we tried to do is look at the probes and anchors that are connected to Atlas, live, and try to identify what would constitute an event: if one single probe was down, whether that would actually be part of a bigger or a smaller event. So we had this challenge of finding out, looking through this API that we have built, where you can do lookups by country as well as by a particular AS number. It was really difficult to see, and that was the biggest challenge: if certain probes go down within one AS network, whether that would actually constitute part of a bigger event, and how we would divvy up these buckets of time. So if probes are only down for, let's say, 15 minutes and then come up, was it something at the edge of the network going down, or was it part of a whole event?

So...

SHANE KERR: I want to be able to see the display. So, I do have slides that you can look and see this stuff that we're going to show you here later, but I figured live demos are always better because what could possibly go wrong?

So you see up here ‑‑ we put everything on GitHub, as is normal, it's how everything works. And the end result looks like this. You have got basically two fields: you type in either an AS, a country or an address prefix, and you can type in a date; if you don't give a date it basically works from the current time going backwards.

So, we use the live stream that is available from RIPE Atlas. I thought this was a really cool and innovative idea that Daniel came up with, but Vesna has since mentioned that almost exactly the same thing was done two hackathons ago; I still think it's really cool. Anyway... so what you get, when you type in the number, is three kinds of widgets. One is a map of the country showing all of the probes that we know about. On the right‑hand side, you have a kind of tool which tries to spot outages, and I'll explain a bit more about that in a second. And at the bottom you have got a list of each time a probe connects or disconnects. So you can look at that thing at the bottom and spot problems, and if you see an outage that we have discovered, you can click on it and it'll show you the probes that are involved with that outage event.
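For illustration, a minimal sketch of grouping that connect/disconnect stream by country or AS might look like the following; the event field names ("probe_id", "country", "asn", "event", "timestamp") and the function name are assumptions for this sketch, not the actual halo code or the real stream schema.

```python
# Illustrative sketch only: group probe connect/disconnect events by
# country or ASN. Field names are assumptions, not the real stream schema.
from collections import defaultdict

def group_events(events, key, value):
    """Collect each probe's connect/disconnect history for one country or ASN."""
    per_probe = defaultdict(list)
    for ev in events:
        if ev.get(key) != value:
            continue  # keep only events matching the requested lookup
        per_probe[ev["probe_id"]].append((ev["timestamp"], ev["event"]))
    for history in per_probe.values():
        history.sort()  # chronological order per probe
    return per_probe

# e.g. per_probe = group_events(stream_events, "country", "TR")
```

Something like this per‑probe history could then feed a map widget, an outage detector and the connect/disconnect list described above.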

So, what we do is we try to come up with a threshold of what we consider an outage. Right now it's hard‑coded; there's no way to enter it into the tool, unfortunately. So I kind of fudged these results a little bit: I went and updated the code to make it 10% for one lookup and 1% for another, and things like that. It's a one‑line change if you want to start looking at that yourself.
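Roughly, that hard‑coded threshold idea could look like the sketch below; the constant, function and variable names are illustrative assumptions rather than the project's actual code.

```python
# Illustrative sketch of the hard-coded outage threshold: if the share of
# probes that disconnected inside a time window exceeds OUTAGE_THRESHOLD,
# flag the window as a possible outage.
OUTAGE_THRESHOLD = 0.10  # the hard-coded value; changing it is a one-line edit

def is_outage(per_probe, all_probe_ids, window_start, window_end):
    down = {
        probe_id
        for probe_id, history in per_probe.items()
        for ts, event in history
        if window_start <= ts <= window_end and event == "disconnect"
    }
    fraction = len(down) / max(len(all_probe_ids), 1)
    return fraction >= OUTAGE_THRESHOLD, fraction, sorted(down)
```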

To give an idea of what it actually looks like and what it can find: I went to Wikipedia and found a list of power outages, thinking a power outage is probably a good way to find network outages, and I found one in Turkey in 2015. The Wikipedia article actually links to a news report about it: biggest power cut in 15 years, investigation underway. We're not going to watch the video. But interestingly, they give the exact time, 10:36 a.m., so we throw this into our tool, country of Turkey, date of March 31, 2015, and you can actually see the outage at 7:36 a.m.; it's 7:36 because that's UTC and Turkey is three hours ahead. We are able to spot the power outage in Turkey, which I thought was really cool.

So that's kind of confirmation that this actually works. Another thing we tried was a separate outage, but we weren't able to use it because it actually took down all the collectors that RIPE Atlas itself was using, which turned out to be not very interesting. We don't have a good way to separate problems with the measurement network from problems with the measured network right now; maybe that is something to work on in the future. For instance, there might be a better stream which could include information about disconnects that mean the RIPE Atlas system itself is having problems.

So, the reason we actually did this, I guess, was because Comcast has this website which causes them problems, the Twitter feed where people look for the hashtag Comcast, and then they have this kind of scary heat map of all the problems. So I thought, well, let's go validate this technique. On the left‑hand side here we have notable outages, and the biggest one that they see is April 10, with 96 reports. It turns out that when I plugged in Comcast, AS 7922, there actually were quite a few probes that were out on that day. It wasn't by any means a major system‑wide outage, but it was a lot. On the other hand, when you look at the graph, you don't see any real noise; you don't see the good clean signal that we did in the case of Turkey, where you see this big spike here when the power outage happened.

So, that's basically it.

DESIREE MILOSHEVIC: Maybe to clarify a bit more about that previous outage: any time somebody tweets the name of the network operator, it would go into that bucket and show that something was wrong, even if they said something good about that operator, so there was a lot of that. The idea was to drill down deeper into the details and see if we could use the Atlas data to see what happened. What we would have looked at, if we had a little bit more time, is really whether something is happening at the edge of the network: if we had a 15‑minute bucket and some individual probes were to go down, and then either stayed down after 15 minutes or came back, whether that is part of a larger network event or whether it's just an individual probe going down. So, this is the fine‑tuning where I think the next hack‑a‑thon group could excel.
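A rough sketch of that 15‑minute bucket idea, again purely illustrative with assumed names and event format rather than anything from the hack‑a‑thon code, might be:

```python
# Illustrative sketch: split time into 15-minute buckets and, per bucket,
# find probes that disconnected and did not reconnect before the bucket
# ended (i.e. candidates for being part of a larger event).
BUCKET = 15 * 60  # bucket length in seconds

def stayed_down_per_bucket(per_probe, start, end):
    outcomes = {}
    t = start
    while t < end:
        stayed_down = set()
        for probe_id, history in per_probe.items():
            downs = [ts for ts, ev in history
                     if t <= ts < t + BUCKET and ev == "disconnect"]
            if not downs:
                continue
            reconnected = any(ev == "connect" and downs[-1] < ts < t + BUCKET
                              for ts, ev in history)
            if not reconnected:
                stayed_down.add(probe_id)  # still down at the end of the bucket
        outcomes[t] = stayed_down
        t += BUCKET
    return outcomes
```

Comparing which probes appear in consecutive buckets would then help separate single‑probe blips at the edge from a wider network event.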

SHANE KERR: And many people have said, why don't you do this other thing? Why don't you take into account this other thing? It was a day and a half of hacking. So the reason we don't do all the other things is we wanted to get something at the end of it that we could use. I think it kind of works. It's buggy. It's on GitHub, so try and use it. Put it in your NOC, it might be useful. If you do make improvements, push them back so other people can check them out too.

CHAIR: All right. Thank you. That was amazing. And I think this brings us to the end. So, unless anyone has any last‑minute things they would like to bring up, I'd like to say thank you for coming here today. Thank you for the good discussion that was starting out, which I had to shut down; maybe next time we should try and make more time for actual discussions like that one. Randy?

RANDY BUSH: I just want to say that RIPE Atlas and the team behind it are awesome.

CHAIR: I totally agree. Thank you very much.