These are unedited transcripts and may contain errors.


MAT Working Group session
15 May 2014
4 p.m.


CHAIR: Welcome everybody. This is the Measurement, Analysis and Tools Working Group; as they say on the aeroplane, if you are not going to measurement and analysis today, you are on the wrong plane. So let's get started. Here is our agenda for today. We have got an interesting mix of some NCC tools-focused stuff and some academic research stuff. I have got one change to this agenda already, which is that we are going to swap Vesna's two talks. Any other changes to the agenda?

Then we'll stick with that. Normal administrative things to get through.

This is the welcome from the Working Group Chairs: welcome. Thanks to our scribe and our stenographers. I believe Andrea will be Jabber scribing, so if he leaps to the mike, don't panic, he is just relaying comments. Standard rules apply for the mike. Are there any objections to approving the minutes from RIPE 67? I believe they were posted to the mailing list some weeks ago. Any objection to approving those minutes? Hearing none.

We will proceed to our first item on the agenda, which I believe is Christian giving an update on an earlier MAT talk, with some results from his master's study.

CHRISTIAN KAUFMANN: Thanks. So, who was, mentally, in the room when I gave the presentation last time? About... a quarter-ish. Good.

So, what I basically did is, apparently because I was bored, I decided to do a master's thesis. And when you do one, you want to do as little work as possible, so you choose something from your daily life or something you are familiar with, because it's easier and faster. So I chose RIPE Atlas. I will quickly talk a little bit about what I did, for the other three quarters who didn't know that part, and then I will actually show you some of the results.

So, going back. The idea was that I wanted to measure the interconnection density, as I called it, between IPv4 and IPv6. So, in a plain English sentence: do we actually have as many IPv6 sessions between the ASNs as we have in IPv4?

We all kind of know the issue. The latency in the different protocols is not necessarily the same. But even if we kind of know that, most of us don't know exactly how different they actually are. There is a certain perception that v4 is probably, most of the time, faster or better for various reasons. But to what extent, actually?

So, one way to measure the density between the various ASNs is to actually ask every ASN: send an e-mail to all the ASNs and ask how many sessions they have, and is it actually the same? I guess that is not really practical and wouldn't really work. Then I thought, well, how about I actually look into the routing table? But as we all know, BGP has best path selection, so you wouldn't see all the various paths; even if you look in a lot of routing tables and a lot of Looking Glasses, because of the best path you would miss all the other paths. What else can I actually do?

And I thought, well, I could actually look at traceroutes and pings and try to kind of reverse-engineer the whole thing and see if, from that, I can get an idea of what the latency difference is; but also, if I actually look at traceroutes, I can look at the IP hops and ASN hops and see what the differences are. To make that halfway useful, I decided to do quite a lot of them and use RIPE Atlas for that.

So, I chose 500 sources, 500 probes, and 500 destinations, which I got from the most popular websites. I chose the ones which are dual-stacked and which are also not anycast or on a CDN, so you basically get a real traceroute and it always goes to the same destination from the 500 sources.

Then we do that for IPv4 and IPv6, because we want to compare them. And we do traceroutes because you are not just getting the RTT; you also get the IP hops and can therefore have a look at the ASN hops.

If you use RIPE Atlas and you do different measurements, then what happens is that it actually gives you a different set of probes every time. But because I wanted to compare the measurements to the 500 destinations, and not always have different sources, I ran an initial test, recorded the IDs of the probes, and then told RIPE Atlas: please use exactly those probes.

Then, with a colleague of mine helping me, we did a little bit of scripting, because you don't want to click the 500 probes and destinations in the GUI and do that manually; that would take quite a while. So we created a Python script which basically talks to the RIPE Atlas API and generates the tests. And then, of course, you download the results and do some data crunching.
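
In outline, such a script might look like the sketch below. This is a minimal illustration against the present-day v2 measurement API rather than the exact script from the thesis; the API key, probe IDs and target are placeholders.

    # Minimal sketch: create paired v4/v6 traceroute measurements from a
    # fixed probe set via the RIPE Atlas REST API (v2; the key, probe IDs
    # and target are placeholders).
    import json
    import urllib.request

    API_KEY = "YOUR-ATLAS-API-KEY"   # placeholder
    PROBE_IDS = "1001,1002,1003"     # IDs recorded during the initial test

    def create_traceroute(target, af):
        body = {
            "definitions": [{
                "type": "traceroute",
                "af": af,                      # 4 or 6
                "target": target,
                "protocol": "ICMP",
                "description": "v%d traceroute to %s" % (af, target),
            }],
            # Pinning "value" to known probe IDs keeps the sources fixed,
            # instead of letting Atlas pick a fresh random set each time.
            "probes": [{
                "type": "probes",
                "value": PROBE_IDS,
                "requested": len(PROBE_IDS.split(",")),
            }],
            "is_oneoff": True,
        }
        req = urllib.request.Request(
            "https://atlas.ripe.net/api/v2/measurements/?key=" + API_KEY,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"})
        return json.load(urllib.request.urlopen(req))   # measurement IDs

    for af in (4, 6):
        print(create_traceroute("www.example.com", af))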

So, with the traceroutes, as I said, you automatically get the RTT and you get the IP hops. We then looked up the IP-to-ASN mapping in the database: we looked at each IP address, saw which ASN it belongs to, and created the AS path for the traceroute, and put all of that into various matrices. So we had one matrix each, for v4 and v6, for the RTT, the IP hops, and the ASN path counts.
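
For illustration, the hop-to-ASN step can be sketched as follows; the RIPEstat network-info endpoint is used here as one possible lookup source, since the talk does not specify which database was queried.

    # Sketch: map traceroute hop IPs to ASNs and collapse consecutive hops
    # in the same AS into an ASN path.
    import json
    import urllib.request

    def asn_for_ip(ip):
        url = ("https://stat.ripe.net/data/network-info/data.json"
               "?resource=" + ip)
        asns = json.load(urllib.request.urlopen(url))["data"]["asns"]
        return asns[0] if asns else None    # unannounced space -> None

    def asn_path(hop_ips):
        path = []
        for ip in hop_ips:
            asn = asn_for_ip(ip)
            if asn is not None and (not path or path[-1] != asn):
                path.append(asn)
        return path

    print(asn_path(["193.0.14.129", "193.0.6.139"]))   # example hop IPs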

So far so good. Now, what did we actually see? My favourite slide. Very colourful, kind of nice, but doesn't say much.

On the X axis you have the ASN hops, and on the other axis you have the density. What I did here is graph all the various traceroutes. All the traceroutes I have are basically graphed here, and whenever they had 2, 3, 4, 5, 6 ASN hops, you have a peak. The red graphs are IPv4. The blue ones are IPv6. So, you see in that graph that you have fewer ASN hops in IPv6 than in IPv4. You also see quite a spike around 0 and 1, which actually goes off the top of the graph. Otherwise it's more or less a nice distribution: most of them are between 2 and 6 ASN hops and then it kind of fades out.

When I looked at the graph, I tried to figure out what that actually means. Regarding the anomalies between hop 0 and 1, I believe this is actually because of IP tunnels, the tunnels you have for IPv6 over IPv4. The packet goes into the tunnel, travels through various ASNs, and then drops out on the other side, and this is, in my opinion, also the reason why the blue is more to the left than to the right side. Besides that, the majority of them are between 2 and 6 hops, which is, I guess, what we would expect if you make a traceroute.

Besides that, as I said, it's a nice graph, but it actually doesn't tell us much, because it is mainly influenced by the IP tunnels and therefore doesn't tell us so much about the density.

Then I thought, okay, let's graph that over the RTT. Take the same numbers and look at the round trip times. Blue, again, is IPv6. Red is IPv4. And as you can see, they have pretty much the same spread. Blue is a little bit stronger in colour; that is just how the graphing tool works. But at the end of the day, for all the probes on the X axis to all the destinations, it's more or less evenly spread: it's mainly in the 70 to 80 millisecond range and then it kind of fades out towards 500.

So far, the RTT actually says: speed-wise, there is not much difference between the two protocols.

Then I thought, okay, let's take some fancy box plots; at least my professor likes those a lot. And what you see here is that, from an RTT perspective, it is actually pretty much the same. The outliers, again, are not really interesting; you see that in IPv6 you have more of them around 300 or 400 milliseconds, but the boxes themselves, which give you the spread, and also the medians, the black line, are pretty much the same. Between the two protocols, for the tests I was running, there was actually just a difference of 7 milliseconds. So IPv6 was just 7 milliseconds slower, or higher latency, than IPv4. On a daily basis, pretty much no big difference.

So, I thought, okay, that is all kind of nice, but what else can I actually get out of this data? And what I looked at were traceroute scenarios, because we still don't know much about the ASN path. So I defined four scenarios. I looked at all the probes and all the results I had where the ASN path and the IP hops in both protocols were actually the same. Because I thought, if they are the same, then this must be really dual stack: the same traceroute going over exactly the same routers, ending up, source and destination wise, on the same path. For that, I actually found just 6% of all my tests. Then I thought, okay, let's assume the ASN paths actually have to be the same, but the IP hops can be different. This would be an example where you make a traceroute and it goes from source to destination through the same providers, but it takes a different path: either because they have a different routing policy internally for v4 and v6, or because they have fewer interconnections between the two networks, which is what we want to measure. They might peer, to take an example, at LINX but not at AMS-IX for both protocols, and then it takes one path instead of the other. So: ASNs the same, IP hops different. For that case, I had 24%.

Then I looked at scenario 3, where both the ASN path and the IP hops are different. In that case, the traceroutes are not just taking a different path through the networks, they are also traversing different ASNs, because, my assumption is, there is either no peering session between these networks and they send it probably over transit, or they have a different peering policy to a certain extent between v4 and v6, and therefore it takes a different path.

That was actually the majority of all the traceroutes and scenarios I could find, with 62%.

To see if all the numbers add up to 100%, I did the last part more for entertainment purposes and asked: what about the ones where the ASN path is different but the IP hops, by some random chance, are actually the same? Not that that actually means anything, but it gave me the missing 8%, and it actually shows us that, as far as this study is true, the likelihood that a path has the same IP hop count but goes through different networks, which is quite strange, is actually higher than the likelihood that it takes the same ASN path and the same IP path: the path which we all want to try to achieve with dual stack and having the same sessions.
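
For concreteness, the four scenarios boil down to a comparison like the following sketch; note that "IP hops" means hop counts, per Geoff Huston's clarification just below.

    # Sketch of the four-way classification, comparing the v4 and v6
    # traceroute for one source/destination pair.
    def classify(v4_asn_path, v4_hop_count, v6_asn_path, v6_hop_count):
        same_asn = v4_asn_path == v6_asn_path
        same_hops = v4_hop_count == v6_hop_count
        if same_asn and same_hops:
            return 1   # same ASN path, same IP hop count   (6%)
        if same_asn:
            return 2   # same ASN path, different IP hops   (24%)
        if not same_hops:
            return 3   # different ASN path and IP hops     (62%)
        return 4       # different ASN path, same hop count (8%)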

AUDIENCE SPEAKER: Before you go on, can I just ask you -- Geoff Huston -- can I ask you one question, because I don't quite understand this. When you say IP hops, I assume you had a v4 traceroute in v4 and a v6 traceroute in v6. When they are "the same", how do you know that that v4 address is the same as that v6 address? What's your "same" metric?

CHRISTIAN KAUFMANN: I don't know if they are -- because I was looking at hops, I actually don't know if they are the same. I was basically counting hops, not checking if they are the same.

GEOFF HUSTON: Hops within an AS.

CHRISTIAN KAUFMANN: Yes.

GEOFF HUSTON: I see; if I had three traceroute hops, 3 TTL points, in the same AS in v4 and in v6, you say that's the same.

CHRISTIAN KAUFMANN: I would not know if they are the same routers.

GEOFF HUSTON: So scenario 4 has me completely confused.

CHRISTIAN KAUFMANN: That was mainly there to see if the numbers add up to 100 and to have the last scenario. But it basically says: the IP hop counts are the same, so I have 5 IP hops on the left in v4 and I have 5 in v6, but they are going through different ASNs. So they are going through different routers.

GEOFF HUSTON: It was just 5 hops against 5 hops?

CHRISTIAN KAUFMANN: Yeah.

GEOFF HUSTON: Okay. Thank you. It was just clarification, I was just confused.

CHRISTIAN KAUFMANN: No. No. Thanks for asking. And there is a little bit of a disclaimer for that slide, which is, as I said before: with tunnels for IPv6 over IPv4, counting hops and comparing them is of course a little bit of a critical question, so there is certainly a certain margin of error in that.

So, after all these little graphs and the random numbers, which are, I don't know, probably 40 or 50% of the master's thesis, but the part which is related to RIPE Atlas, the question is: what is the summary out of all of that? And the summary, and of that I'm relatively sure because the RTTs are not so much affected by tunnels or no tunnels, is that for the paths I was measuring, and there were quite a lot of them, the speed difference between IPv6 and IPv4 is actually not much these days. On the other side, as far as we can believe the interconnection paths between the ASNs and the IP hops, there are still some missing. So we as the peering community, the people who actually configure routers, should probably go back and check that we didn't just stop at the point when we established the first session to another AS, back when our router was first capable of IPv6, with one or two sessions, but that we are actually on par for both protocols. Because I believe that we kind of stopped somewhere in between: we had basic connectivity to a couple of other ASNs, we were all happy, we saw them in the routing table, all fine; and then, when we moved to dual stack or added a new router, we probably didn't do that every time for both protocols.

Thanks. Part of that information is taken a little bit out of context, because this was specifically the part about RIPE Atlas. If you can't sleep, or want to read 82 pages, then send me an e-mail and I will send you the rest of the master's thesis to put it into context. Thanks.


(Applause)

AUDIENCE SPEAKER: Jenna from Google. One clarifying question: you mentioned you excluded anycast and CDNs; how did you do that?

CHRISTIAN KAUFMANN: That's a good question. What I did is, I actually looked them up in hosts and did traceroutes and tried to eliminate them. It was a manual process. It took approximately one and a half weeks in August, a lot of cold beer, not so much fun.

RICHARD BARNES: Thank you, Christian. All right. So our next presentation is from Vesna on RIPE Stat.

VESNA MANOJLOVIC: Hi, I'm Vesna, community builder for measurement tools at the RIPE NCC, and I'm here to give you a traditional RIPE Stat update presentation. Last time we introduced a lot of new features, so in the meantime we focused on exploring the possibilities for utilising all the wealth of features that RIPE Stat gives to different audiences, and this time I would like to present more of the use cases rather than the new features.

Just to judge the audience a little bit: how many of you have already heard of and used RIPE Stat before? Can I see a show of hands? Wow, this is completely impressive. Thank you. And on the other hand, how many of you would need a quick introduction? Wow... okay, not too many; it's good that I kept it to just two slides.

So, RIPE Stat has been developed by the RIPE NCC because we were sitting on a lot of data and we are generating more and more data, doing active measurements and collecting route announcements and so on. So we wanted to have one entry point for all of you to access that information. RIPE Stat is basically a one-stop shop for all the information about Internet number resources, and that consists of the data that the RIPE NCC has, the in-house information about the Internet resources that we gave out: the registrations in the RIPE Database; the Routing Information Service, or RIS, which collects BGP data; RIPE Atlas active measurements; and reverse DNS. On top of that, we are also collaborating with a lot of third parties and collecting their information: first of all from all the other regional registries, then from the routing registries, and finally from geolocation providers and blacklisting providers. How can you access all this? You can search by any possible keyword; you can see them here on the slide and I will be referring to them throughout the presentation, so I won't list them all now.

What do you get when you go to stat.ripe.net? The main interface is a web interface, so you get a search box, and if you enter an IP address, AS number or country name, you get results that are presented in graphical widgets. There is quite a significant number of them, namely 42, and that's too many to show you at the same time. So we only show you four at the beginning, and all the others are grouped into thematic tabs, where we have categorised them; you can also create your own tabs where you select the widgets that you prefer and just put them there. There was a tutorial about all this on Monday morning, which my colleague gave. It was quite popular, there were a lot of people there, and the slides are available. So if you want more details about how to use RIPE Stat, you can find them there.

And in addition to this web interface, there are a few more ways that you can access this data. There is a text service for the people who prefer the command line, the old-fashioned people like me, and then there is a data API for the new kind of programmers, the script kiddies, who can use this data API to create their own programmes and scripts. And there is a mobile app.
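
As a small illustration of that data API: every RIPE Stat widget is backed by a JSON endpoint under stat.ripe.net/data/. The sketch below queries the routing-status endpoint for an AS; the response fields shown follow the public documentation and may evolve over time.

    # Sketch: fetch the routing status of a resource from the RIPE Stat
    # data API; the same pattern works for IPs, prefixes and country codes.
    import json
    import urllib.request

    def routing_status(resource):
        url = ("https://stat.ripe.net/data/routing-status/data.json"
               "?resource=" + resource)
        return json.load(urllib.request.urlopen(url))["data"]

    status = routing_status("AS3333")
    print(status["announced_space"]["v4"]["prefixes"], "v4 prefixes")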

So, since we are in Poland, I wanted to show you some country statistics. The first one is the historical overview of the growth of IP networks in Poland, and you can see here the IPv4 networks, the v6 networks and the number of AS numbers, going back to, I can't see this because it's too small, 2004 at least, I think much further back in the past. You can do this per country, or you can compare multiple countries in the same view. So if you compare Poland with Germany and Ukraine, you can see the difference in the growth of v4 or IPv6 networks and how that developed over time, and you can try to draw some conclusions based on this, or make all kinds of investigations and take a look at events and so on. All these visualisations are very rich, and they require some time, patience and effort to actually explore all the possible answers that you can get from using them.

We are also connecting RIPE Stat and RIPE Atlas more and more, so in RIPE Stat you can see the number of probes; this is from a week ago, when there were 159. Now there are six more probes in Poland already. And I'm sure there will be even more, because we have distributed about 100 already during the RIPE Meeting. So this picture is of course interactive and changing all the time.

And finally, the M-Lab data, which is based on the participation of people who do the M-Lab tests; M-Lab is collecting this, and we have developed a visualisation widget for presenting this data in RIPE Stat. It can be embedded in any other website, so M-Lab is also using it on their own site.

So, what else can you use RIPE Stat for? Well, we have been using it in another part of the RIPE NCC: registration services, together with the local Internet registries, have been using RIPE Stat for the assisted registry checks. So when we try to help you see how your registry is doing, you can use RIPE Stat features for that, specifically the consistency of the routing information and the DNS information.

We wrote about it on RIPE Labs too.

Our researchers have been looking into notable events, and since the last time there was a mistake, or a hijack, in Indonesia, where one operator actually started announcing routes for a lot of other networks, and those kind of disappeared from the routing; you can see a dip here in the graphs. For these kinds of investigations, RIPE Stat is really powerful.

If you want to play with it: BGPlay is back. It used to be a very popular tool in 2009, when we produced a video of the famous YouTube incident, and now it has been rewritten. It's part of RIPE Stat, but you can use it as an individual widget, and again, it gives you a very graphical overview of the paths between AS numbers; you can focus and zoom in, or you can show the historical overview, so it's a really interesting research tool.

And finally, you can also compare multiple widgets and different views of the same resource if you want to make certain decisions. For example, if you want to peer with a certain network, you can take a look at how many prefixes they are announcing, how many neighbours they have, how many neighbours they had historically, what their routing consistency is, and so on.

More about that is also available on RIPE Labs.

So, a feature that is not very well known is that you can use RIPE Stat to access RIPE Atlas results. If a certain IP address or hostname has been used as a target of RIPE Atlas measurements, and you search for that specific IP address or hostname in RIPE Stat, it is going to present to you the results of those RIPE Atlas measurements. This is the famous one that Stephane also gave a presentation about: the Google DNS resolver has been targeted very much by the researchers, and a lot of Atlas measurements are available there, so you can also see that in RIPE Stat.

So, this is my last slide. We are focusing more on data consistency and improving the resilience of the whole system and, as I said, exploring the possibilities that all the features of RIPE Stat are giving to the operators and the researchers. We are cooperating with the other organisations that can use this data. And in the near future, this will be our main focus. However, we are still interested in hosting other data sets. So if you have interesting data that you would like RIPE Atlas -- RIPE Stat users, sorry -- to access through our interfaces, please let us know and approach us: either say it now at the mike, or get back to us later on.

These are our contact details. You know how to find us. So. That's it. And any questions?

RICHARD BARNES: Thank you, Vesna. Yes, questions?

AUDIENCE SPEAKER: Blake, with L33. I just wanted to say thank you for putting BGPlay back in there. We use that; it's really nice for showing customers who say "last week this weird thing happened", and, you know, you just point them to that and they shut up and go away. Thanks.

VESNA MANOJLOVIC: I'm really happy to hear that. Thank you.

AUDIENCE SPEAKER: A feature I'd like exposed through RIPE Stat would be that if I type in an AS number the system will tell me which announcements are either transited or originated by that AS. I have requested this before, but...

VESNA MANOJLOVIC: Thank you. Either we already have it on the roadmap as a requested feature, or we will add it to the roadmap. I will read in the minutes exactly what it is, and I'll get back to you on it and we can talk about the details.

AUDIENCE SPEAKER: Wilfried Woeber. Just a question on the item about plans to renew the RIS collection process. Does this mean that there is new hardware coming up, or is it just a software improvement?

VESNA MANOJLOVIC: Thanks for that question. I see that Kaveh is ready to reply.

AUDIENCE SPEAKER: Yes, we are trying to revisit the model, on both sides. On the collection side, we are looking to come up with cheaper, easier to maintain, new hardware, basically; and on the software side we are trying to improve things so that we get realtime or near-realtime results from the boxes and can expose that to users. We haven't started working on that, but it's in our plans to do it. We will report back to you before the next RIPE Meeting.

RICHARD BARNES: Our next talk is switching back to more of a research track: Vaibhav is going to tell us about some of his experiences using the RIPE Atlas tools to do research.

VAIBHAV BAJPAI: I am a second year PhD student. I am going to talk about some of the lessons that we learned by using the RIPE Atlas platform for measurements. This is work done along with my colleague, Steffi, and both of us are supervised by Professor Schönwälder.

Just to give you a quick background of where we are coming from: we are partners within the Leone project. The goal of this project is to assess the quality of experience of end users through active measurements. There was a talk given by Trevor last year; you can go to the first footnote to learn more about the project. So, essentially, we are developing new metrics for the platform. In this talk I will be talking about RIPE Atlas, but I'll also be comparing it against the SamKnows platform. We started using the RIPE Atlas platform in early 2013, when the API was publicly released, and since then we have been trying to develop a collaboration. This talk is essentially a subset of a larger measurement study that we performed in the past six months, which we submitted to IMC this year. I can't talk about the paper because it is undergoing a review process, but we can still talk about our experiences in using this platform during this journey.

Okay. So, lesson number 1. We had been hosting these RIPE Atlas probes for multiple years, so when designing the experiment we had this preconceived assumption that we had ample credits, millions of credits: we'll be able to provision measurements, that's not an issue, that's not the roadblock. And we started designing the experiment. But when we started provisioning measurements, we couldn't provision any of them. When we looked into the documentation, we found that there are rate limits in place on each user account. There are three of them. The third one is something that you should plan ahead for, because it would make your designed experiment span multiple days. But that need not be the case, because we proposed the study on the Atlas mailing list, and you can actually get these rate limits lifted from your user account; thanks to Vesna for lifting these rate limits on our accounts.

Just a quick caveat: when I'm talking about lessons learned, there might be some good things and there might be some not so good things, so please don't destroy me at the end of the talk. I am also part of the community, so I'm only here to help everybody.

So this was one. Second: last year we participated in a Dagstuhl seminar on how to do proper measurements, and there is a statement that I'm quoting from the paper that was submitted there: a lack of calibration can lead to invalid results. This was also discussed on the Atlas mailing list a while back.

So, essentially, what do I mean by calibration? I mean you need to know some physical parameters around the probe that is doing the measurement. The first parameter: which firmware release is the probe that is doing this measurement currently running? This first plot is actually taken from the website, and if you look at the firmware, the frequency of releases has actually increased; so it is all the more pertinent to know the firmware release, so that if you see an unexpected result in a measurement, you can trace back through the source code to try to figure out why you see something unexpected. RIPE Atlas has been very smart in this: when you provision a measurement, the RIPE Atlas platform tags the firmware version in each measurement result. So this is very nice of the platform, to allow us to trace back to the source code.

Something that is missing is hardware calibration. Essentially, you see that around 8,000 probes are deployed all over the globe, but these are not one set of hardware probes; there are three probe families: v1, v2 and v3. And the issue is that v3 probes are actually more hardware-capable than v1 and v2; they are a completely different family altogether. There are also these anchors, which are coming up very quickly, so at this point there are more than 50 anchors, and a measurement can also be provisioned on an anchor. So you need to be very sure who is measuring in your experiment.

So, to figure this out, we actually asked on the mailing list how we can know which kind of probe is actually doing the measurement, and thanks to Robert, I don't know if I am pronouncing your name correctly, I'm sorry; what he told me is that the probe ID itself can reveal what kind of probe is doing the measurement. And we want to contribute this table to the community, so it would be nice if it goes somewhere onto the web page, so that people can trace back what kind of hardware is running their measurement.

So why did we talk about all of this calibration? The first question was: does the probe hardware revision have an effect on measurement results? We designed a simple experiment. We started finding probes whose first traceroute hop is private; when I say private, it is in the v4 private address space, so an RFC 1918 address; and whose hop number 2 is public, so not an RFC 1918 address. What this experiment does is ensure that the probe is actually wired behind the residential gateway, and it also makes sure that you are not crossing a wireless link. I know of some probes which were connected behind the router, crossing a wireless link and then going through the gateway. This can completely screw up your measurement results, so it would be nice to know which probe hosts are doing this.

This is the design. We provisioned traceroute measurements that lasted for a day, and we started investigating latencies to hop number 1. So hop number 1 should be the home gateway, which is wired. And we compared v2 and v3 probes. Do you see this? I can tell you the gist of the story. These are four plots from four ASNs; on the X axis you see the latency that you experience to the first hop, and on the Y axis the percentile. Red points are v2 probes and green are v3 probes; this is the latency to the first hop, the latency to the home gateway. What you see is that v3 probes are showing latencies of less than one millisecond. This is what we were expecting: it shouldn't take more than a millisecond to get to the home gateway. But for v2 it was around 6 milliseconds, which is surprising. Each data point is a probe, so I'm showing you multiple probes, from four different ASNs only for the sake of space; other probes were also showing similar behaviour. What is the reason for this?
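
A sketch of that probe filter and first-hop extraction, assuming the standard RIPE Atlas traceroute result JSON (a list of hops, each with per-query from/rtt entries); this is an illustration, not the speaker's actual analysis code.

    # Sketch: keep only probes whose hop 1 is RFC 1918 and hop 2 is public,
    # then pull the per-query RTTs to the home gateway (hop 1).
    import ipaddress

    def hop_addrs(hop):
        return {r["from"] for r in hop.get("result", []) if "from" in r}

    def wired_behind_gateway(traceroute):
        hops = traceroute["result"]
        if len(hops) < 2:
            return False
        first = [ipaddress.ip_address(a).is_private
                 for a in hop_addrs(hops[0])]
        second = [ipaddress.ip_address(a).is_private
                  for a in hop_addrs(hops[1])]
        return bool(first) and all(first) and bool(second) and not any(second)

    def first_hop_rtts(traceroute):
        return [r["rtt"] for r in traceroute["result"][0].get("result", [])
                if "rtt" in r]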

So, I'm telling you this story as if we came up with this research question, but we were doing the study and we found this as a side effect. In any case, because we know the firmware, we started looking into what firmware these probes were running, and we started looking into the source code. Just to give you a quick background on how the measurement framework is organised in RIPE Atlas:

All the measurement tools are adaptations of BusyBox utilities, and each measurement has been modified to run in an evented manner using libevent. When you provision a UDM, a user-defined measurement, each test does not spawn a new process; it invokes a separate function. So essentially there is one process handling a single event loop, and whenever a measurement request comes in, a new function call is initiated. Why is it designed this way? We have to look at the history: this circumvents the absence of an MMU, and the design was carried on in v3 probes as well.

So what is the outcome of this? This is a sample of source code taken from traceroute for this firmware release. The first function is actually called when you provision a traceroute, and that function registers a callback; the second function that you see, the callback, is actually timestamping the ICMP responses. So when you send the ICMP query, the response is timestamped in user space. What this means is that if a probe is loaded with multiple measurements, this user-space timestamping will be delayed more. And these delays are more pronounced on probes with weaker hardware, like v1 and v2 probes. This is my suspicion: I suspect that this could be the problem, that v1 and v2 probes might be experiencing load issues, but I can't prove this at this point. I need to do more experiments, but I thought I would share this along the way and get more insights.

Anyway, moving forward. Number 4: when you do a traceroute, for each hop you are actually sending multiple queries, and by default, both in SamKnows and in RIPE Atlas, three queries are sent per hop. So you might wonder: can per-hop averaging of RTTs significantly vary the results? RIPE Atlas is doing the right thing here: when you do a traceroute, each query response is registered in the results and no kind of aggregation is performed, and this is very nice of the platform, because aggregation should be left to the data analysis part and not to the probe itself. Within SamKnows, we were actually aggregating the results of the 3 ICMP queries, averaging them, and these are the results that you see if you average. If you average the responses for each hop, you will see a number of spikes and dips, and this is going to affect your overall result. So if you look at the CDF of the first and second hop using averages, and you try to do a subtraction between them, you will get negative latencies; this is the kind of effect you will have if you do such aggregations of per-hop latencies.

So this is what we learned from RIPE Atlas, and we have actually replaced the implementation within SamKnows; now we expose each query result without aggregation.
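
A toy example of the pitfall: a per-hop mean can even make hop 2 appear closer than hop 1, while the per-query values keep the ordering.

    # One delayed reply inflates the hop-1 mean past hop 2: averaging
    # manufactures a negative hop-to-hop latency; the per-query minimum
    # does not.
    hop1 = [1.0, 1.2, 40.0]   # one reply delayed, e.g. by router CPU
    hop2 = [2.0, 2.1, 2.3]

    avg = lambda xs: sum(xs) / len(xs)
    print(avg(hop2) - avg(hop1))   # about -11.9 ms: a negative "latency"
    print(min(hop2) - min(hop1))   #        +1.0 ms: ordering preserved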

Number 5. This is also some kind of an issue: we know that there are 8,000 probes deployed all around the globe, but if you start looking into the AS distribution of the probes, and you are doing a specific measurement study, like measuring v4 and v6 interconnection densities, where you specifically want to test from a specific AS to a specific AS, you have to look at the AS distribution of the probes, and what you see is that it is heavily tailed. These are the top ASes; when I mention AS rank, the rank is designated by the number of probes which are deployed in that AS. So rank 1, Comcast, has only 170 connected probes. If you look at the top 10 AS ranks, they only contribute 19% of all probes, so if I want to do a specific measurement study, it is very hard to do. So this is a personal request: try to increase the number of deployed probes to make this more evenly distributed, so that it is not so heavily tailed.

This is also something that we learned from the seminar: metadata is data itself. At this point, all we know about the deployed probes is where they are sitting, so behind which ASN they sit, but if we want to do more specific measurements, we need more metadata. Something like: what is the connection speed of the network behind which the probe is connected? What kind of network is this probe connected in: is it sitting in a core network, at an IXP, or within a home network? Right now one has to do a lot of manual work to actually figure this out. And if it is sitting within a home network, what kind of WAN type does the home gateway have: is it connected to DSL, cable, or a fibre-to-the-home network? This is the page that you see when you register the probe: when you receive the probe you are redirected to this page to register it, and there are some questions that RIPE Atlas asks the user. It would be very nice if this information were made available through the API; and not only this data, but also the history of this data, so we can track changes. For example, if tomorrow I change my ISP and you are running a UDM on my probe, you will see entirely different results. How will this change be reflected back in the UDM results, so that the researcher can know about it?

So what did we all learn today? We learned that RIPE Atlas has some rate limits imposed on user accounts; if you want to provision a large measurement, you should propose the study on the Atlas mailing list, and the limits can be lifted. Probe calibration is important: calibration involves knowing the hardware and the firmware capabilities, and this is how we figured out that v1 and v2 probes are experiencing some kind of load issue compared to v3 probes. Proper statistics matter: aggregations like averaging queries per hop may have an effect on your measurement results. The probe distribution currently is very heavily tailed, if you look at the AS probe distribution, and we need to do something about this. And metadata is not only data, it is also changing data, and it would be very nice if this message gets through.

Thank you.

RICHARD BARNES: Any questions? Kaveh.

AUDIENCE SPEAKER: Kaveh Ranjbar, RIPE NCC. First of all I want to thank you for the presentation; it's really useful. I see Robert and Philip are already lined up and I'm sure they have technical answers for the points you have brought up, but I just wanted to mention a high-level architecture design guideline that we normally use, because RIPE Atlas is not only a research platform; as we know, it's also a very important diagnostic tool for operators, and our community consists of both of these parties. And none of them is more important than the other. So we are always trying to find a balance. Many of the things you see are the result of well-known or deliberate design decisions that we have made. For example, that explains the number of firmware updates: there are a lot of things we have to be able to do to provide new services which operators would use for diagnostics, so we have to roll out more firmware updates; but, as you mentioned, we also have a provision there where we put the exact firmware version in the results. So we are trying to keep this balance, to keep the network useful to both researchers and operators. Many of the points, when you look at them, are valid, but an operator would look at them in a completely different way; that's the reason. And we are always trying to find the best balance and come up with something that satisfies both worlds. That was my comment.

AUDIENCE SPEAKER: Robert Kisteleki, RIPE NCC. It would take days if not weeks to discuss all of this, so in the interest of saving time I'm not going to do that; let's talk offline. Metadata: two months ago, maybe three, we released a feature where any Atlas probe host can tag their probes. We have a built-in set of tags, things like DSL, cable, fibre, NAT, no NAT, tunnel, v6, no v6, you name it. Hosts can also create their own tags, which is how we learned that we had missed a couple. So that's fine, because if a tag looks useful for everyone, we enable it and everyone else can see it. That feature is out there, and the next step is going to be to make it available when you're scheduling measurements. So you will be able to say: for this measurement, from this AS, give me home probes; for that measurement, for that prefix, give me well-connected probes; stuff like that. I'm not going to go into more details because I could...

AUDIENCE SPEAKER: Philip Homburg, RIPE NCC. I'd like to thank you for measuring and publishing the first-hop latencies; we measured them at some point, but then we internally never agreed on how to publish them, and then we said, it takes a couple of milliseconds, so what. So I have two technical comments. One is that originally we used existing BusyBox code, but over time, with firmware updates, we have completely rewritten that, so now all of the code is unique to the Atlas project. It can be used by everybody else. The second one is that the version 1 and 2 probes are really, really slow. We have seen that just adding a bit more code has a noticeable effect on the latencies, and what you are actually seeing is that the probe is so slow that it actually takes milliseconds to do this; and if it gets to 6 milliseconds, it's probably when it's doing an ARP request because it doesn't have the address cached.

VAIBHAV BAJPAI: I forgot to mention that among all the probes that are deployed, around 65% are v1 and v2, so this is going to have an impact on measurements.

AUDIENCE SPEAKER: Just a quick remark; it was very interesting. On slide 7 you mentioned a method to detect wireless links, but you could imagine a wireless link in a WDS environment that forwards frames, and it wouldn't appear as an IP hop. So you could have hop 1 private, hop 2 public, but still wireless bridges or wireless links in between. So this could introduce some kind of mistakes.

VAIBHAV BAJPAI: I'm not sure if I follow, but we should talk offline.

RICHARD BARNES: This is perhaps more of a heuristic, to try and make a good guess.

AUDIENCE SPEAKER: Just one quick thing I forgot to mention before. On distribution: we are well aware of this, and we would also really like to get more distribution over a bigger number of ASes. We are trying to solve that issue by, first of all, targeting our members who have an ASN, because we have distributed about 29,000 ASNs within the RIPE region, so, contacting them. And we are also already talking to the other RIRs to come up with models so we can reach their customers, and they can encourage them to host a probe. So that's an issue we really want to tackle, but right now the balance that you see comes out of the current methods that we use for distributing probes.

RICHARD BARNES: Thanks again. Next up we have another research talk, from Zakir, this time looking at a different methodology: not using Atlas, but using the ZMap tool, which does scanning.

ZAKIR DURUMERIC: Good afternoon, my name is Zakir Durumeric, I am currently a PhD student at the University of Michigan. My research focuses on network security: what can we learn about the Internet today? What can we learn from global measurement? How do we actually make hosts and networks more secure going forward?

So, today, this year, it's kind of a golden age to start looking at these Internet-wide scans. Within the last years we have developed tools that let us scan the entire public IPv4 address space in a matter of minutes; from one host we can scan every public IPv4 address in about 40 minutes. So this brings up the question: what can we measure? It's important to think about this, too, because I say we are in this golden age because IPv4 scanning is available and, at the same time, IPv6 scanning is not quite figured out; but most hosts are not on IPv6 only. Most large services are available on both IPv4 and IPv6, so we can really connect to and see most of these public services through this comprehensive scanning technique. Our research is on what we can learn from a global perspective. I'm here today to learn what network operators want to learn. We have a lot of academic questions we want to research; what is interesting to network operators, to system administrators? How can we work together?

So, in August of last year we released a tool called ZMap, which is an open source network scanner that is capable of scanning the full IPv4 address space in about 45 minutes. This requires a full gigabit of bandwidth; you have to be on a large enough network that you can actually perform this type of network scan, but this is available to a lot of large academic and other research institutions at this point in time.

And we have got to the point where this is a command line: this is one line you run in a terminal, and you are able to do a full scan of the v4 address space. In a lot of testing, we actually see about 98% coverage of live hosts with a single probe to each host. And this is at about 97% of the theoretical speed of gigabit Ethernet. I'm not going to go into the details of how we got this to work; there is a research paper out there that goes into a lot of detail about how this was actually possible. But I want to talk about what this allows us to do, and to look at a couple of case studies.
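
For a flavour of what that one line looks like, here is a sketch of such an invocation, wrapped in Python for scripting; the flags follow the public ZMap documentation, and the exclusion file is a placeholder for honouring opt-out requests.

    # Sketch: a full IPv4 scan of TCP port 443 via the zmap CLI.
    import subprocess

    subprocess.run([
        "zmap",
        "-p", "443",                           # target port (HTTPS)
        "-B", "1G",                            # cap send rate at 1 Gbit/s
        "--blacklist-file", "exclusions.conf", # networks that opted out
        "-o", "results.csv",                   # responding hosts
    ], check=True)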

Before I get into this, I want to talk about the ethics of doing these large Internet-wide scans. When we do a scan, we are probing every public IPv4 address, and there is this question: is this traffic really worth the research results that come out of it? There is a chance that this is going to trigger intrusion detection systems, even though our intent is benign. We are not attempting default passwords, we are not attempting to access private information, but this is hard to tell from the outside: when network operators see probes coming in, a lot of times this is the first step in an attack. So what we try to do is show our intent. It's not possible to request permission from everyone on the Internet; there is no standard, like a robots.txt, by which a network can proactively say: I don't want to be scanned, don't scan me. So how do we get around this? What we do is try to reduce our scan impact as much as possible. We try to scan as slowly as possible while still getting the scan results that we need. We try to signal our benign nature: we put up a website saying what we are attempting to pull down and scan for, and we honour all requests to be excluded from future scans. We make it easy to get in contact with us: if you don't want to receive probes, we'll exclude you from future scans. What we found is that a small number of people actually requested to be excluded; a lot of times we will correspond with them. About a tenth of a percent of the public IPv4 address space has requested to be excluded at this point in time.

I want to talk about three different case studies, things we have been working on that I think are exciting.

The first is a paper we published two years ago about widespread weak cryptographic keys. This scan actually started out because we wanted to look at the HTTPS ecosystem and certificates and see what was going on. We did this scan, connected to every HTTPS server on the Internet, and looked at the public keys. If you look right away, you'll see this large discrepancy: we connected to 12.8 million HTTPS hosts, but we only came back with 5.6 million distinct public keys. Immediately you are wondering: from a theoretical perspective, we really expect to see one key on each host. But we know, as system operators, that there are lots of reasons you might share keys. You might be Google: you have thousands of hosts out there, and they don't all have a different certificate or private key. That's okay; it's really not insecure that Google shares the same private key across multiple servers. Similarly, we have large hosting providers that might share keys; they might be large colocation centres where they are managing SSL on behalf of customers. This does happen.

But going a step further, we started to figure out that you can group these very easily: you see all these certificates were in the same AS, they are all in the same geographical location. We start to pull out these blocks, and we find that after pulling out these large providers and corporations, we still find that 5.6% of TLS hosts and almost 10% of SSH hosts are sharing keys in a vulnerable manner. We end up finding there are a lot of problems with collecting enough entropy to generate a unique private key. In this case, your device, your Linksys router, has generated a key, and a different device on the Internet has also generated the same key, and that's not good. But at the same time, if two of you have the same private key, the chances of you finding each other are actually pretty small; that the other person with the Linksys router ever notices this is improbable. But more serious problems can occur if devices can't collect enough entropy. If we step back into the maths a little bit and talk about RSA for a second, it's important to know that your RSA public key is the product of two very large primes, P and Q, and the most efficient way of attacking RSA in the real world is to factor this public key back into P and Q. This is incredibly difficult, and this is why RSA works today. We aren't able to do this for 1,024-bit keys; we have seen it done for smaller keys, but we don't know how to do this for larger keys. That being said, the GCD of two numbers is trivial: it takes about 15 microseconds to take the GCD of two large numbers.

So what would happen if one host had P and Q1 and the other host had P and Q2? Taking the GCD of those two moduli together, we're going to come back essentially with their private keys.

And we do this: we run essentially the GCD over all pairs of RSA keys on the public Internet, and we end up finding more than 2,000 of these primes that are shared. What this lets us do is compute the private keys for almost half a percent of the TLS hosts on the Internet.
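
The underlying arithmetic is simple enough to sketch: if two moduli share a prime, one GCD reveals it and both keys factor instantly. (At Internet scale the published study used an efficient batch GCD rather than naive pairwise computation, but the idea is the same.)

    # Sketch of the shared-prime attack: if N1 = p*q1 and N2 = p*q2,
    # gcd(N1, N2) = p and both moduli factor instantly.
    from math import gcd

    def shared_prime(n1, n2):
        p = gcd(n1, n2)
        if 1 < p < min(n1, n2):          # nontrivial common factor
            return p, n1 // p, n2 // p   # p plus both cofactors
        return None

    # Toy moduli sharing the prime 101:
    print(shared_prime(101 * 103, 101 * 107))   # (101, 103, 107)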

A similar thing exists in DSA; with that, we're able to compute the private keys for approximately 1% of SSH hosts. So what happened?

Well, these compromised keys are generated by embedded devices: video cameras, home routers, VPN devices; the list goes on and on.

And we went through and started to look at why this actually happened. We started with this high-level view, wanting to look at keys on the Internet, and at this point we started to look at how keys are generated. If you look at OpenSSL and /dev/urandom: the documentation says you should use /dev/urandom for everything except generating cryptographic keys, but in the real world /dev/random is so slow that nobody uses it, and OpenSSL on these devices just defaults to /dev/urandom. Keyboard and mouse: these embedded devices don't have keyboards or mice; there's no entropy from you moving your mouse around if there has never been a mouse connected to it. A lot of entropy comes from disk access timing, but we have gotten good at making really consistent flash memory for all of these devices: every Linksys router boots up the same way, and it reads the same data at the same time from the same flash.

Time of boot is another great one, except none of these devices have realtime clocks; they all start at 0. So we have crossed out everything that goes into /dev/urandom. The first problem, essentially, is that these embedded devices lack all the sources of entropy that we expect to be present on a desktop computer.

The second problem is the input pool, the way we collect entropy: we say you can't use any of the randomness until we have collected 192 bits. And the question is, when do you actually have 192 bits available? Is that before you generate your key, or not?

We looked at a typical server boot, and it turns out that the 192-bit entropy threshold is reached about 65 seconds in. Well, OpenSSH seeds its keys at about 5 seconds into the boot process. So we essentially have this hole of about a minute where your /dev/urandom is predictable.

That's the first kind of big use case I wanted to look at. I bring this up because this is something we really would not have found without a global perspective; you wouldn't have seen this if you had gone out and looked at a tenth of a percent of the Internet. You had to have this 10,000-foot view in order to make these connections. The second thing I looked at is the HTTPS ecosystem. This is important because it is an ecosystem we blindly rely on every day. We connect to websites over HTTPS; we connect to Bank of America, and we trust that someone has vouched for Bank of America, that a CA has vouched for them and that we are connecting to the right website. We have given these CA certificates out to dozens and dozens of organisations, who have gone out and sold these to third parties, and we found, by going out and looking at all the certificates on the Internet, that there are over 1,800 CA certificates that can sign for any website on the Internet. So again, something you wouldn't have found otherwise: these organisations don't disclose who they gave certificates out to; it's not within their financial interests to limit their power. So instead we have this kind of unregulated ecosystem, which we can now start to have a perspective on.

We teach in our security classes that these organisations need to follow the principles of least privilege and defence in depth; we're not doing that in the real world.

And we have numbers to show that.

Similarly, we find that NIST puts out standards that say: we know we are going to be able to compute keys of this size soon; NIST says by 2016 we should stop using them altogether. Well, it turns out that, with no oversight, 70% of these keys expire after that.

So, the last piece I want to talk about is: what do we do with all this data? We have gone out and scanned all of these sites, and we have released tools to do this, but at the same time we want to make the data available to everyone, to researchers in particular. Because of that, we have released things like the list of public certificates: you can go out and see what certificates exist, and this is out there both for system operators, to see "has my key shown up on another website", and to allow researchers to ask these types of questions.

The third case I want to look at is Heartbleed, which I hope everyone here in the operator community is familiar with. When this first happened, there was a lot of misinformation spread around: 60% of websites are vulnerable; 10% are vulnerable; numbers were being thrown around by news organisations. This scanning allows us to say: this exact number of websites is vulnerable, this is the list, and here is how we fix them. We can start to look at questions like: how do these websites get patched, and what trends do we see? It turns out that after 48 hours, 11% of servers remained vulnerable. To us that says we need to be developing technology that works better for system operators: if 10% of people are still vulnerable 48 hours later, how do we fix that? Patching plateaued at 4%; 4% of websites that support HTTPS are still vulnerable today, and it's been a month. What do we do about that? We also started to look at things like: do people replace certificates? And we find just astounding things: 15% of sites that replaced their certificates reused their existing vulnerable key, so they went through the effort of replacing their certificate and used the old key again, and we need to talk to these people. We did notifications: we e-mailed every network operator out there that had vulnerable websites and said, these IP addresses on your network are vulnerable. And so the big question to us was: would this actually impact network operations? We found it does: by going out and looking for these vulnerabilities and notifying network operators, we actually see a statistically significant difference in patching. We were a little bit worried about this too, thinking about what responses we were going to get back from network operators, and we were quite surprised, but pleased: out of 59 responses, we only received two negative ones. Most of them came back and said: we thought we did that, and we forgot about that one network appliance, we forgot about this one embedded device that had a website on it; thank you for letting us know, we'll go and patch it. Only in two cases did people say: what are you doing?

So, in conclusion, we're living in this unique period: we have the tools available to us to scan all of IPv4, and IPv6 is not out there yet; it's not to the point where there are devices on IPv6 alone that we want to look at. This puts us in this golden age as researchers to look at these types of questions. ZMap lowers the barrier to entry: before, it took months and hundreds of machines to perform these probes; now we can start to answer these questions in minutes. It's important for things like when Heartbleed comes out: we know people started to scan within 2 hours. This puts us on the same playing field, to start to answer these questions and get these servers fixed.

We explored three applications of high-speed scanning that we have been looking at. Ultimately, we hope that ZMap enables future research, in the academic sphere and in research generally: how can people use this to look at the networks they are working in and their own sphere?

That's what I have. Thank you.

RICHARD BARNES: Thank you very much. Questions?

AUDIENCE SPEAKER: Keith Mitchell. You mentioned in one of your slides that there is no way for sites to indicate that they don't want to be probed. We have actually been operating a don't-probe database for some time now, so people can register with us and say: we don't want researchers to scan our address space. So I just wanted to make you aware of that.

AUDIENCE SPEAKER: Robert Kisteleki, speaking as a security person. This is really good stuff and really scary. 1% of a big number is a big number. So, if the good guys as well as the bad guys can scan the Internet, they will find the same keys that you did. I think we have homework to do. And that also leads to this dilemma that you have: you have the tool, and not publishing the tool doesn't save anyone; publishing the tool doesn't save anyone either. You also have access to this data, and having the data but not giving it to anyone is not that useful; giving it to everyone is scary. But they can discover it themselves. So, not an easy position.

ZAKIR DURUMERIC: This is a hard line we have to draw: what data do we release, and who do we release it to? Especially in the weak keys study, we contacted every manufacturer, we contacted their security teams, and we worked with them to get it patched. We talked to the CERT teams in different countries; we said, these are people we trust, they have contacts in the organisations, let's get this fixed. The very big organisations we contacted directly. But at the same time, we are not in a position to contact every person. We can send e-mails to abuse contacts; that's about all we can do as external researchers, and that's generally what we do. We say: now that we have this, we'll release the details of what was out there in broad numbers, release the problem itself, but we won't release a list of IP addresses of the people who are vulnerable. Instead, we'll talk to people like the CERTs and these large ASes and ISPs and work with them first in order to get things patched.

In the case of large data sets, we are releasing a lot of our data and lists of things like the certificates, but we're not including bits like whether this host is vulnerable to this attack.

AUDIENCE SPEAKER: The dilemma that I had was whether or not to ask you for this data set in order to expose it through our tools. At the moment I'm thinking not is the better choice.

ZAKIR DURUMERIC: That's hard. Again, you might be one of the people in the best position to help out with this, as RIPE. You know the contacts; you know who to actually contact. This is a struggle for us: finding the right people at these organisations to talk to. Sometimes abuse contacts work; 30 percent of them bounce. It doesn't leave us in a particularly good position.

RICHARD BARNES: Out of curiosity, you said there were two negative responses to your Heartbleed e-mails; what did those folks say?

ZAKIR DURUMERIC: It came down to: you did not have permission to scan our networks, what are you doing, get off our IP space. To which we say: we would be happy to exclude you; send us the CIDR prefix for your network and we'll exclude you from all future research scans.

AUDIENCE SPEAKER: This is excellent stuff, I love it, good job. When Heartbleed appeared, we started discussing within our community actually running a scan on .zn networks, and we went to ask the police, well, the law enforcement agencies, whether we could do that, and they said no. So, when Robert says the good guys and the bad guys will find out about this: actually, the bad guys are in the preferable position.

ZAKIR DURUMERIC: Yeah. With Heartbleed we spent a lot of time; the reason we started two days after it appeared was that we were not willing to actually scan for the vulnerability. We were not willing to exploit servers, both on the ethical and the legal level, and people thought in the beginning that the only way to tell whether a server was vulnerable was to exploit the vulnerability. We did a lot of testing, and what we actually found was that, because of other kinds of side effects of the patch itself, we were able to detect whether that version of OpenSSL was vulnerable without exploiting the vulnerability. And that's where we said we can do good here. We would not be willing to exploit the vulnerability, but because there is this kind of side channel information that we can use without really crossing that line, we can help people out.

AUDIENCE SPEAKER: I think the numbers on the responses didn't add up: 59, 51, 2 and 3.

ZAKIR DURUMERIC: This is possible. Numbers change quickly in this space, so I grabbed these quickly; you are absolutely correct. There are three that are not in the negative pile, I can tell you; those we paid a lot of attention to. My guess is they are in the positive pile. The neutral pile is a weird one: people responded to us with things completely unrelated to Heartbleed, and we honestly didn't know what to make of them, because they'd be about the weather or something... yeah.

AUDIENCE SPEAKER: And do you realise you may be a criminal now in some jurisdictions?

ZAKIR DURUMERIC: In some jurisdictions, yes. I'm not a lawyer, but my understanding is that in the United States what we are doing is legal, and we are very careful about this. I mean, we do not check logins, we do not pull private information, we do not exploit vulnerabilities. We look at your public certificate and we look at who issued it. We look at the information we need, but we are not in search of private data.
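
The kind of non-intrusive lookup described here, fetching a server's public certificate and reading its issuer, can be sketched with Python's standard ssl module; the host below is just an example, and this is an illustration of the idea rather than the speaker's actual tooling.

    # Sketch: retrieve a server's public certificate and report its issuer.
    import socket
    import ssl

    def issuer_of(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()  # parsed public certificate
        # getpeercert() returns the issuer as a tuple of RDN tuples
        return dict(rdn[0] for rdn in cert["issuer"])

    print(issuer_of("www.ripe.net"))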

AUDIENCE SPEAKER: Understood. Just a warning. Thanks.

RICHARD BARNES: Thanks again, Zakir.

And our final scheduled talk for the afternoon is again from Vesna, talking about Atlas.

VESNA MANOJLOVIC: Hi, so it's me again, Vesna from the RIPE NCC, community builder for RIPE Atlas.

And I'm very happy that there were so many presentations about the technical details, implementations and uses of RIPE Atlas during this RIPE Meeting, so the only thing left for me is to show you some pretty pictures. I will also invite you to take part and get involved with RIPE Atlas, and I will briefly describe all the ways you can do that. At the end of the slides that I will show there is additional information, with the actual graphs that you can read the numbers off; I will just show the picture slides and the rest you can download. It's already available in the presentation archives.

This is the satellite map of the RIPE Atlas deployment. We have more than 5,500 active probes. RIPE Atlas is a global measurement network; we do have probes active worldwide. However, you can see that the largest concentration is in Europe, which is the RIPE NCC service region, and most of the participants in this effort are members of the RIPE NCC.

We already saw this picture in the previous presentation. Since the last RIPE Meeting, the number of RIPE Atlas anchors has increased; currently there are 56, and several more are almost deployed. At this RIPE Meeting there has been a lot of interest in them; these are the larger boxes, and their picture is coming up here. These are photos sent to us by the users, by the community. They show the little black version 1 and version 2 probes, which we also heard about today in the presentation; this is how it started. Then we moved to the so-called version 3, which is what we are distributing currently; these are the slightly larger white probes. You can still get them today, maybe not today any more because of the social event, but tomorrow definitely: we are distributing them here during the RIPE Meeting, and especially if you are from an exotic location or from a country outside Europe, let's say, we encourage you to take some more probes with you. And finally, there is the photo of the anchor and the duck. The anchors are the larger boxes that are mostly placed in data centres and in racks, so they mostly do not qualify for such interesting creative photos, unless you add a duck.

So, these are the small versions of the growth graphs. As I said, in the other slides that you can download you can see them all; they are also publicly available. Starting from the top, it's the increase in the number of user-defined measurements per type over time in the last three months. Then, covering the period from the beginning of RIPE Atlas, here in the top right corner, is the probe deployment graph: the number of probes that we distributed, the number of currently connected probes, the number of disconnected probes in red, and a smaller number of written-off probes that are lost and were never seen. We are keeping track of this, and based on the feedback from yesterday, we will improve both the tracking of the numbers of these probes and the chasing of the people who promised that they would place them in their networks.

The number of anchors: this graph is slightly different because it kind of jumps and goes in steps, and recently there was quite a large increase. And finally, the number of RIPE Atlas users. We just call them users here, but there is a big distinction between the visitors to our website, the hosts of RIPE Atlas probes, and the RIPE NCC members who can also schedule measurements. That kind of detailed graph is not shown here; we can produce it for the next time.

At the end of March, we reached a very important milestone which we had set as a goal for ourselves: to have 50 RIPE Atlas anchors active by the end of the first quarter of the year. We achieved that, thanks to the community and thanks to the hard work of our team. Lia made this cake; it's a very steampunk representation of the anchor, or a zeppelin that has an anchor, or something. It was all completely edible and we ate it all together. So these anchors are not only pretty, they are serving as the measurement points for the new DNSMON. This is one of the use cases I wanted to briefly present: there were other talks about DNS during this meeting, and this is a view of the visualisation of one of the DNSMON measurements for the operators of TLD registries.

Then, for other kinds of users, there are all kinds of use cases. For researchers busy looking into events, there was a presentation about events in Turkey; Stefan made an article about it, and Emile also wrote about it and made this beautiful colourful graph which you can find on RIPE Labs. If you are operating an Internet Exchange or an Anycast root name server, you can also see the impact of placing root name server instances in a certain country by the drop in latency from the probes going to that root name server; that is the top graph here. And finally, one of the recent new features we have developed is the so-called status checks. These enable operators who are already using tools for automatic monitoring of their networks, such as Icinga and Nagios, to utilise the ping measurements from the RIPE Atlas network: a specially defined data API gives them a status check, which they can customise with the thresholds they are interested in. There are examples provided by the community for the Icinga configuration, and this has turned out to be a very popular service. If you haven't heard about it, there are blog posts around the Internet, also collected on RIPE Labs, about operators' experiences with this service.
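
A minimal sketch of such a status check follows; the URL pattern, the measurement ID and the "global_alert" field name are assumptions based on this description, so treat the RIPE Atlas documentation as the authoritative reference.

    # Hedged sketch of a Nagios/Icinga-style plugin polling the RIPE Atlas
    # status-checks API. MEASUREMENT_ID, the URL pattern and the
    # "global_alert" field are assumptions; check the Atlas docs.
    import sys
    import requests

    MEASUREMENT_ID = 1000000  # hypothetical ping measurement ID
    URL = "https://atlas.ripe.net/api/v1/status-checks/%d/" % MEASUREMENT_ID

    response = requests.get(URL, timeout=10)
    response.raise_for_status()
    status = response.json()

    # Nagios plugin convention: exit 0 for OK, 2 for CRITICAL.
    if status.get("global_alert"):
        print("CRITICAL - Atlas probes report the target as unreachable")
        sys.exit(2)
    print("OK - target reachable from RIPE Atlas probes")
    sys.exit(0)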

So, now that I have tickled your curiosity and you want to jump into the RIPE Atlas community, how can you do that? How can you get involved?

Well, if you want to host a probe at home, you can just apply for one. We have streamlined the application procedure and we are changing the shipping procedure, and once you plug in your probe and register it, we are going to display your name for a week on this community page, with the flag of your country as far as we can guess it; I will not go into the geolocation question, which was also a topic at this RIPE Meeting.

If you are operating a larger network, and you would like to use a RIPE Atlas anchor for your Internet Exchange or data centre, you can apply for one. It's also very easy; we are automating that procedure. These are some of the hosts that have already deployed an anchor, so the technical part is going very well. Sending us the logo is not going that well, so please, if you are sitting here and you still did not send us a logo, let us know: we would like to promote your company on our website and be grateful to you. These are some of the anchor hosts, and these are the other ones; it's not all 56 that are displayed here.

On the other hand, if you are a frequent flyer and you have a lot of friends, you can help us out by distributing these RIPE Atlas probes. Then we are going to call you an ambassador, and we'll ship you five or ten probes to start with, and sometimes we'll include a T-shirt or so. Then we ask you to do a lot of hard work: not only give the probes out, but register who you gave them to, chase those people if they don't register them, and so on. We provide a web interface for this and we are in close contact with all the ambassadors who are helping us increase the number of probes around the world. This is really valuable, especially outside of the RIPE NCC service region, where we don't have very good contacts in the local communities. The ambassadors, who know the local community and have good contacts, are increasing the number of probes in various countries around the world, and almost every country by now has at least one probe. So thank you.

If you are a coder, you can add your code to our community repository on GitHub. And if you have some money to spare, we will be very grateful and you will receive a lot of public recognition: again, we will show your logo on our website, we will mention you in the presentations, and you might even get some extra probes, but then you have to be an ambassador. So thanks to all of our sponsors who are helping us run this project, so that we do not spend only the RIPE NCC members' money on it.

And finally, we have a road map where we describe all the requested features and list all the features that have already been delivered. Those are the greyed-out ones at the bottom of my slide, and most of the focus is actually on the features that we are working on or will soon start working on. I hope you can read this picture; in any case, there is a URL where you can see the most recent version of the road map. So, if you don't see the feature that you are interested in on this road map, please let us know.

The most recent features that we added since the last RIPE Meeting were the IPv6 extension header measurements; I know that there are some people here cheering for it, they asked for it, we did it, so now we are waiting for your experiences in using that. As Robert already mentioned, there is tagging of the probes, so the metadata: you can now say whether your probe is at home or in a data centre, what kind of connectivity you have, or whether there is a duck next to it, or add any other random tag that you want. You can also star certain probes that you would like to pay attention to, let's say you can call them my favourite probes, and then they will appear in the listing in your web interface when you log in. I already mentioned status checks, which enable automatic alarms. The so-called Tokyo ping is also already deployed; there is maybe one person in the room who asked for this, and I hope you're listening and you are happy now.

There was extended support for DNS measurements, allowing some other options. And last week, we enabled the latest-results API and a parsing library in Python. All of this is always described on RIPE Labs and announced on the RIPE Atlas users mailing list.
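
As an illustration, the Python parsing library mentioned here (ripe.atlas.sagan on GitHub) turns a raw ping result into an object; the JSON blob below is a hand-made, abbreviated stand-in for what the results API returns, so the exact fields may need adjusting against the library's documentation.

    # Sketch of parsing one raw ping result with ripe.atlas.sagan.
    # The `raw` dict is a hypothetical, abbreviated example result.
    from ripe.atlas.sagan import PingResult

    raw = {
        "type": "ping", "fw": 4660, "af": 4, "prb_id": 1, "msm_id": 1000000,
        "timestamp": 1400000000, "dst_addr": "193.0.6.139", "proto": "ICMP",
        "sent": 3, "rcvd": 3, "min": 9.1, "max": 12.4, "avg": 10.5,
        "result": [{"rtt": 9.1}, {"rtt": 10.0}, {"rtt": 12.4}],
    }

    parsed = PingResult(raw)
    print(parsed.rtt_average, parsed.packets_sent, parsed.packets_received)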

So, here are the URLs where you can find this information and how you can contact us.

So, I will take questions; just to tell you, there is additional information here which I will not go through.

CHRISTIAN KAUFMANN: Thanks Vesna. Any questions?

AUDIENCE SPEAKER: Jen Linkova. First of all, thanks very much, to you and the whole RIPE Atlas group; particularly I want to thank you for the IPv6 extension header support. I promise we'll see the results soon; some of the people who already saw some of the results have already complained about bugs I found. For those who are considering getting probes shipped: please check your local customs policy. After almost two months I have failed to get another ten probes to Zurich, and I'm going to smuggle them into the country in my backpack.

And just out of curiosity, you showed a slide with the number of measurements; there are two nice spikes on those graphs, one in December and I think one somewhere else. What happened in December? Or did people just have nothing to do around Christmas time, or did I miss something?

VESNA MANOJLOVIC: I think DNSMON.

AUDIENCE SPEAKER: It's actually a combination of both. Some people are messing around, but the spike that actually sticks out is mostly DNSMON. In late December, early January, we started turning on the zone monitoring, and that's a significant amount of measurements. We can handle it, but it still shows up on the graphs.

AUDIENCE SPEAKER: Hi. Can you go to slide number 2, please; I want to see the map. You're missing the southernmost probe, located in the southern tip of Chile down there. My good friend Hugo did a fair amount of work to find a host for the probe down there, and actually he beat me, because in New Zealand we are not that far south. So...

VESNA MANOJLOVIC: I will include the updated screenshot of this map later on.

AUDIENCE SPEAKER: Emile. I want to draw attention to the northernmost probe there. It is on the map.

ROBERT KISTELEKI: While on the topic: if anyone has leads to Antarctica, let us know. Leads that work. We have had two leads so far, and they both failed. I think the worst example was that we actually got to someone who was there, and they said, oh, let us load up the website; they used their satellite connection of like 64 kilobits for that and it took a while, so they said this is really not for us. Which is too bad, because they don't have to load up the website to run the probe.

But anyway... if someone at McMurdo can spare 2 kilobits a second, that would be awesome.

CHRISTIAN KAUFMANN: Any other questions? No. Thanks Vesna.

So, I was told that we have multiple AOBs. One, two, I wasn't told how many. Gert.

AUDIENCE SPEAKER: Emile Aben, RIPE NCC. I want to make people aware that tomorrow in the Closing Plenary we'll have a presentation about geolocation of infrastructure IP addresses, and I want to make people here aware of it because it was an idea we presented in the last MAT Working Group.

Now we actually have running code, so, tomorrow we're going to get some rough consensus on how to actually move forward with this.

So, I have a live demo I could give to people, but I don't want to do it in public because of the demonstration effect. If you are interested in router geolocation and visual traceroute and stuff like that, I'll be in the back there running around; talk to me and I'll happily show you what we have at this moment.

AUDIENCE SPEAKER: Romeo Zwart, RIPE NCC. I would like to take this opportunity to give you a quick update on the state of the TTM project. There have been previous announcements about TTM; I think we have tried to proclaim it dead a couple of times, so this would be a good moment to paraphrase the famous author in saying that the reports of the death of TTM have been greatly exaggerated. Currently, we are in a phase where we again think that we can talk about that. A big reason that TTM is still not dead, despite the previous announcements, is the strong community feedback we received after those announcements. We have taken that into consideration and worked with the people in the community who had great concerns, and we think those concerns no longer exist in the TTM user base.

However, we also still have, at this point in time, a dependency of the DNSMON visualisations and measurements on TTM. As we have seen in previous presentations, we now have, we think, a great replacement for DNSMON that is no longer based on TTM. We are working on getting guidance from the DNS Working Group on the transition and on finalising the time lines for shutting down the old DNSMON system, which will allow us to also shut down TTM. We expect to have full clarity on that before the end of the week. The timeline that we are currently looking at is that TTM can actually be fully shut down by the end of June, so that is an announcement for here. Obviously there will be a formal announcement through the appropriate mailing lists, the MAT Working Group mailing list being one of them.

One final note: many people have contacted us with questions relating to the reuse or continued use of the TTM systems as NTP sources. We have done some preparatory work for that, which we will also share with you, and which will allow current TTM hosts to keep using the systems as NTP clocks, but under their own control. An announcement about that will also be forthcoming. If there are questions, of course, I'm available here this week, as are many other colleagues involved in this, so please let us know.
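
Checking that one of these boxes still serves usable time is straightforward; here is a sketch using the third-party Python package ntplib, with a placeholder host name rather than a real TTM machine.

    # Hedged sketch: query an NTP server and print its offset and stratum.
    # "ntp.example.net" is a placeholder, not an actual TTM host.
    import ntplib

    client = ntplib.NTPClient()
    stats = client.request("ntp.example.net", version=3)
    print("offset to local clock: %.6f s" % stats.offset)
    print("stratum: %d" % stats.stratum)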

CHRISTIAN KAUFMANN: Perfect. Thanks for the update. I thought it was already dead. I guess I missed that part.

AUDIENCE SPEAKER: Chris Buckridge, RIPE NCC. This is somewhat, but not only, related to this Working Group. You have heard a little bit about the RIPE Academic Cooperation Initiative this week; we have had a few presentations in various sessions from academics whom the RIPE NCC has assisted in attending the RIPE Meeting. I just want to note that after this Working Group there is a BoF session where the three RACI attendees you haven't heard elsewhere during the week will give lightning talk presentations on their research. So I'd encourage you all, as the low hanging fruit who are actually still in the room, to hang around a little longer, be an audience, and hear some interesting research. Thanks.

CHRISTIAN KAUFMANN: But we are not the low hanging fruit.


AUDIENCE SPEAKER: Only in the nicest possible way.

CHRISTIAN KAUFMANN: Any other business? It doesn't seem to be the case. Well, then, thanks a lot for joining the MAT Working Group again, and I hope we see each other in London in November.