24th October 2017
At 9 a.m.:
CHRISTIAN KAUFMANN: So good morning. It's like on an airplane: we have a bit of a technical problem and have to reboot something before we start. That is the part that usually freaks me out. In probably three minutes or so we will start, then we can go.
Good morning, so we have rebooted the plane and we can go. Welcome to the MAT Working Group session, it seems that changing it from the afternoon to 9:00 in the morning had a little bit of impact on the audience, I was told we should measure the impact, but then I guess we would have to swap it back and forth multiple times and see what the difference is.
I am Christian Kaufmann, I am chairing the session today for the last time. We have a second co‑chair, Brian, he is our new co‑chair, and we have a scribe, Jabber monitor and stenographer. If you later comment or have questions about a talk and you go to one of the microphones, then please, for the webcast, state your name and affiliation so we know who you are.
We have to approve the minutes for RIPE 74 which you all have read, I assume. So, anyone against approving the minutes of RIPE 74, are we all fine with it? Nobody is against it. That is marvellous. Then the minutes are officially approved, thanks for that.
We have, as usual, different talks today, starting with Brian talking about himself; we compare latency measurements; we look into a new version of UDP, or not. We have updates from the NCC, three of them. And then I will wrap it up in the end. And with this agenda, I hand over to Brian.
BRIAN TRAMMELL: Good morning. It's 12.2, I measured that for you, Christian. 12.2 is the result. Hi, I am Brian Trammell, some of you may know me. I am actually kind of relatively new, but I have been measuring stuff for a very long time and I'd like to actually come and help you guys measure some stuff too. So this is a really interesting Working Group to me; it was sort of the first one that I came and paid a little attention to in the RIPE community, because there is a really interesting intersection between operational reality and sort of research in the space of how do you measure things and what do you want to measure about the Internet. I am an academic, so I like measuring stuff for the sake of measuring it. Why do you measure it? Because we have the tools to measure it and see what it does. And that is a completely different way from why you measure things from an operational perspective; there you are measuring things because we have a problem and we want to know why it's broken. And I found this a really interesting venue for those two worlds to talk to each other, so as co‑chair I hope to help foster that and make it bigger: on the one hand scientific rigour and on the other hand operational relevance, which I guess makes an interesting mix. So that is me, any questions? Hi, I am Brian Trammell, ask me anything. All right. Cool. Nice to meet you all and I am looking forward to co‑chairing the Working Group, thanks.
CHRISTIAN KAUFMANN: Thanks, Brian. Our first speaker is Agustin, and he will talk about latency measurements. Thanks.
AGUSTIN FORMOSO: Thanks, Christian. Good luck, Brian, with your new duties. I thank the MAT Working Group Programme Committee for accepting the talk, and what I am talking about is a pretty simple case of how we can do measurements in our region. RIPE Atlas has been giving better coverage of ASes; it is good coverage in regions like the RIPE NCC region or the ARIN region, but if we look in relative terms at how the LACNIC region is covered, it's not as good as the other ones. So what happens when we want to do measurements at a regional scale and we just happen to have about 4% coverage of our active ASes? How representative are our measurements of our operational reality, of what is really going on in the regional networks?
So, one of the things we had to do in these cases, besides using the Atlas infrastructure, is also using alternative ways of generating similar data sets, and what we are looking for is getting these alternative data sets to be in some way comparable to what Atlas is generating.
So there have been a couple of talks in the RIPE community about doing measurements with alternative measurement platforms. One was at RIPE 73, about a year ago, by Christian, and the other was by Randy Bush, comparing the NLNOG RING platform. I liked especially the second one, but we wanted to do our own comparison: look at our own data sets and how they compare to similar measurements done from the Atlas platform.
Still, we would take RIPE Atlas as our starting point, as our reference, and we want to start measuring things in particular in the ASes which are in the intersection of these groups. So we want to get those ASes that are covered by at least two platforms, launch the same set of measurements from those ASes and just compare them, see how they compare to each other, and try to analyse the possible causes of these differences.
So, we did our first test. It was just launching the measurement on our wireless network, and then plugging in the network adapter and launching it again on a wired network. We did two measurements simultaneously: one was getting the results at the browser level, just fetching all the results we were getting; the other was at the tcpdump level, just looking for the packets that had the HTTP GET and the 200 OK response. You can see in the chart we have the solid lines for wired, the dotted lines for wireless, and we did both in the browser, in blue, and tcpdump, in red. So what we see is simply that getting the measurements done through the browser, the OS and, in particular, the wireless stack introduces an incredible amount of noise, and the only thing that differentiates the dotted and solid lines is the wireless connection. So we think that this might not only affect our measurements but also all the other platforms that are based on wireless probes. This opens our eyes a little bit and gets us to think of a new way of getting the data, to start doing aggressive filtering, and to think about what parameters we are really looking for in the data sets we are collecting; things like using the median, for example, are probably not a viable option any more. So we really have to start paying attention to what we are getting from the data and how we filter it.
So we came to this. It's a common practice to do the P 90 approach; it's very usual to just strip the slowest 10% of your data set. We said: let's go for the wired connections, get our P 90s and take that as our actual network measurement reference, and let's see which percentile of the wireless measurements corresponds with that. It's a little bit of a raw and basic approach, but we are actually stripping out all the noise that comes from the wireless component. And actually you can see that at least the fastest percentiles of the wireless connection correspond pretty well with the wired connection, and that is more or less what we want to do: just make things comparable, try to have the same order of magnitude for the same kinds of measurements.
So, setting that P 90 cut‑off point, we stripped everything that was slower than that and we remained with the fastest measurements. The first thing is that the option of removing everything above the P 22 for wireless is really aggressive, but it's the way we can make our wireless measurements comparable to the ones through wired connections. So we start with that kind of lab‑based rule of thumb about stripping everything above P 22, just to make sure we are not introducing additional delays just because we are on wireless. And then, with the remaining data set, we see there is a little bit of a constant across the whole range of percentiles.
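The matching exercise described above can be sketched roughly like this. This is a minimal illustration, not the speaker's actual code: it assumes plain lists of RTT samples in milliseconds, and the `percentile` helper mimics the usual linear‑interpolation definition.

```python
def percentile(data, p):
    """Linear-interpolation percentile of a list (0 <= p <= 100)."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (k - f) * (s[c] - s[f])

def wireless_cutoff(wired_rtts, wireless_rtts, wired_pct=90):
    """Find which wireless percentile matches the wired P90 reference,
    then strip every wireless sample slower than that cutoff."""
    ref = percentile(wired_rtts, wired_pct)
    # share of wireless samples at or below the wired reference
    share = 100.0 * sum(r <= ref for r in wireless_rtts) / len(wireless_rtts)
    cutoff = percentile(wireless_rtts, share)
    return share, [r for r in wireless_rtts if r <= cutoff]
```

In the lab data described here the matching wireless percentile came out around P 22, which is why everything above P 22 gets stripped.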
So, there is a little bit of a constant over there; you see how the dotted lines are shifted constantly from the solid lines. We don't know if that is specific to the stack we are testing things on; we don't know exactly which factors make the difference here. We'd like to see if we can also additionally subtract that amount and get as close as we can to the wired measurement, but for that we need proper modelling of what is going on and what the components involved are, so we think that is a little bit of future work.
So our initial problem was trying to get our browser measurements comparable with the RIPE Atlas ones. Over there. So after stripping out everything, we see that you get a pretty similar latency profile; at least the latency profile behaves similarly, except for a remaining constant. I don't know where it comes from, but the fact that the curves are pretty similar is much better than the previous scenario, where the curves were of a very different nature, one from another. What is the difference here? Probably the configuration of the specific stack: the browser, the operating system, the networking hardware. There are lots of things that might be affecting this; it might be a constant determined by multiple factors, I don't know. We would probably need a very good test‑bed for that, crossing lots of options with one another, like operating systems and browsers, but up to this point we think we are better off than we were before.
So this is the data set we collected from the browser. This is a set of 1,000 GETs to the anchor I told you about. But that's really unrealistic for how you can do things in a browser. Usually when you do browser‑based measurements, you are more of a kind of opportunistic guy, one who wants to take the most advantage of, let's say, the 10 seconds of browsing a user is doing on a certain web page. If your script gets too long, you would have users leaving your page and not finishing your measurements, and you wouldn't get the most out of it.
So we started asking ourselves what metrics we should get from our 10‑second measurements, which of those measurements are actually more representative of the underlying reality, the latency reality that is really going on in the networks. You can see some of the measurements are pretty well aligned with the initial data set, which is the black line, but some other measurements really have nothing to do with it, so we really want to make sure we are using the right metric out of our data set.
So we got our data set and started taking random experiments; we took 100 random samples out of the thousand‑RTT data set and we just kept iterating, about 1,000 times, just to get a grasp of how relevant the different metrics we can get out of the samples are. So, the things we started to look at are the P 22 of that sample and how it compares to the initial P 22. We also got the P 10, the P 05 and the minimum, and we wanted to look at the probability of having that specific metric equal to or less than our initial P 22; that is, which metric has the best probability of being representative of the initial data set. So we got the results: the minimum was the best suited metric. But to what extent is, for example, the P 05 something useful? Well, about 70% of the time it is, but the remaining 30% lets us know we shouldn't be using even the P 05; we should just be using the minimum if we want to get some useful information out of the data set.
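The resampling experiment just described can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the actual analysis code: it draws 100‑sample subsets from the full 1,000‑RTT set and estimates how often a given metric (minimum, P 05, and so on) lands at or below the full set's P 22 reference.

```python
import random

def metric_hit_rate(full_rtts, metric, samples=100, trials=1000, ref_pct=22):
    """Estimate the probability that `metric` of a random subset is at or
    below the full data set's P22 (the reference for 'representative')."""
    ref = sorted(full_rtts)[int(len(full_rtts) * ref_pct / 100)]
    hits = 0
    for _ in range(trials):
        sample = random.sample(full_rtts, samples)
        if metric(sample) <= ref:
            hits += 1
    return hits / trials
```

Running it with `metric=min` versus, say, a P 05 function shows which summary statistic of a short opportunistic measurement best tracks the full distribution.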
So the other comparison I wanted to do is RIPE Atlas and the virtual platform I told you about. This runs on Windows laptops; there is an agent there and it provides some measurement functionality, such as ICMP ping. So this is a little bit of the daily profile we got: RIPE Atlas, predictably steady in numbers of samples, probes and RTT. The software probe is much more volatile, and the number of samples and the RTT vary a little throughout the day; that is something to consider. So we set up the same experiments on both platforms. And what we found out: RIPE Atlas is the red curve, and the virtual platform also has this kind of strong mode at the beginning and a long tail at the end. Since this is a wireless‑based platform, we thought it might be affected by wireless connections, so we wanted to think of a way to strip out the data, in order to strip out all these uncertainty factors that are actually polluting our data set.
We did that just by detecting the modes in the curve, detecting the main modes and trying to strip out everything that was not that relevant.
So, we detected the peaks; actually, we detected two of the main modes, and we could strip out the long tail, but unfortunately we couldn't detect what is most important for us, which is the low‑RTT mode. There is also the fact that the virtual platform has, let's say, an integer resolution; we don't have float numbers, and that is the main reason why this looks choppy, like a staircase, and doesn't look really continuous. So that is another issue: we did detect some of the modes, but unfortunately we lost what is most important for us.
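A toy version of the mode detection might look like this. This is a simplified sketch (plain histogram plus local‑maximum search), not the speaker's actual peak detector, and the 1 ms bin width is an assumption matching the integer resolution just mentioned.

```python
def find_modes(rtts, bin_ms=1):
    """Histogram the RTTs (integer-ms resolution, like the virtual probes)
    and return bin values that are local maxima: the 'modes' of the curve."""
    counts = {}
    for r in rtts:
        b = int(r // bin_ms)
        counts[b] = counts.get(b, 0) + 1
    modes = []
    for b in sorted(counts):
        left = counts.get(b - 1, 0)
        right = counts.get(b + 1, 0)
        if counts[b] > left and counts[b] >= right:
            modes.append(b * bin_ms)
    return modes
```

Once the main modes are located, everything outside them (e.g. the long tail) can be stripped, which is the filtering step described above.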
So, overall, there is huge noise introduced by wireless, browser‑based measurement. We found a way of getting the healthy data out and stripping out what is unhealthy for us, and that is good, because now it's about comparable in order of magnitude to RIPE Atlas results. And for the virtual probes, we also tried to find the best way to filter the data set, to make it at least comparable to the measurements in RIPE Atlas.
There is a bit of future work, especially modelling what things might be affecting the measurements, and in which way, besides our P 22 cut‑off. Something we have been doing is IQR filtering; we thought it was healthy to strip out outliers, but in this case we would be stripping out things we actually don't want to strip out, like the low RTTs, and actually keeping some things that are probably not worth keeping.
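The pitfall with IQR filtering mentioned above is easy to demonstrate. A minimal sketch of the classic Tukey filter, with invented sample values: on a sample that clusters around a slow mode, the filter can throw away a genuinely fast RTT, which is exactly the healthy signal the speaker wants to keep.

```python
def iqr_filter(rtts, k=1.5):
    """Classic Tukey outlier filter: keep values within
    [Q1 - k*IQR, Q3 + k*IQR]. On noisy RTT samples this can discard
    the *low* RTTs, which are the measurements closest to the network."""
    s = sorted(rtts)
    q1 = s[len(s) // 4]
    q3 = s[(3 * len(s)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [r for r in rtts if lo <= r <= hi]
```

For example, with one fast 20 ms sample among a hundred samples clustered at 100‑101 ms, the 20 ms value falls below Q1 − 1.5·IQR and is dropped, even though it is the best estimate of the true path latency.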
A bit of a final note: Chrome, the browser we used for our local tests, and the platform have no v6 functionality, and I think they don't have plans to implement it, which I think is very bad.
CHRISTIAN KAUFMANN: Thanks a lot. Some quick comments and questions? Three people already lined up.
Kaveh: Thank you very much for the presentation. Can you go to slide 10, so in the meantime ‑‑
AGUSTIN FORMOSO: Which slide?
AUDIENCE SPEAKER: 10. So for that one, from a previous life I think I know what causes it. It's because wireless is collision avoidance, and in CSMA/CA networks this is a tax you pay. When you have full duplex and no one else on your network and no collisions you avoid it, but on wireless you always pay that, so this is normal, having a collision avoidance system on wireless. But my question is: in the browser, is it possible to know if you are on wireless or not?
AGUSTIN FORMOSO: I am not sure.
AUDIENCE SPEAKER: If you deployed this people might be connected by cable and then all ‑‑ you have to adjust ‑‑
AGUSTIN FORMOSO: This data set in particular was just using the lab approach about wired and wireless. In the wild I wouldn't know if users are behind wireless or not. But browser APIs are evolving very fast so I wouldn't be surprised if in the few months we got something like that.
GEOFF HUSTON: APNIC. I am really confused about where you are getting your RTT measurements from, because once you get past the initial handshake and you start to look at the data flow, you start to get into browser noise. Browsers are not simple machines. They are major operating systems in their own right, and the signal that you get from inside the data flow is basically bullshit, because there are so many elements of delay, you are getting a huge amount of imposed noise. We do around 60 to 65 million browser sample points a day. Now, the way we put out RTT is really simple. You look at the initial TCP handshake, and the reason why is that that is a kernel operation, not a user mode operation. All of a sudden, when you do it down in the kernel, when that initial SYN goes out, the SYN‑ACK and the ACK happen at the kernel, and so you bypass all of the browser noise. So I am a little bit confused why you are spending this huge amount of time figuring out that browsers are erratic. Big tick, yes, they are erratic. But I kind of think there is a forest and trees problem going on with this data.
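Geoff's suggestion, timing the kernel‑level TCP handshake instead of in‑browser requests, can be sketched like this. This is an illustrative snippet, not APNIC's actual tooling: `connect()` returns once the SYN‑ACK has arrived and the kernel has sent the final ACK, so the timing largely excludes browser and user‑space noise (it still includes some scheduling jitter).

```python
import socket
import time

def connect_rtt(host, port):
    """Approximate one RTT from the TCP three-way handshake:
    connect() completes when the SYN-ACK arrives, so this timing
    bypasses everything above the kernel. Returns milliseconds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        t0 = time.perf_counter()
        s.connect((host, port))
        return (time.perf_counter() - t0) * 1000.0
    finally:
        s.close()
```

Measuring on the server side with a full packet capture, as Geoff suggests next, is cleaner still, since it removes the client's kernel scheduling from the picture as well.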
GEOFF HUSTON: You are doing a TCP connect to somewhere.
AGUSTIN FORMOSO: Exactly, not controlled by us.
GEOFF HUSTON: Instrument the somewhere. Instrument where you are going to and look at it there with a full packet capture.
AGUSTIN FORMOSO: The main point here is the target end points are not in our control. They are usually speed test servers or ‑‑ web servers we have distributed.
GEOFF HUSTON: So I am suggesting: measure it at the other side and you will get a much cleaner measurement. Thank you.
CHRISTIAN KAUFMANN: Thanks a lot. That is a lot.
The next speaker is Nuno Garcia; he will talk about reliable UDP, almost. Sounds a little bit like a paradox.
NUNO M. GARCIA: Good morning everyone. Are you familiar with, how is it called, the impostor syndrome? It's when you suspect everyone in the room is more knowledgeable than you. I don't have that; I take it as a fact. Thank you for inviting me, and I am very happy to be here. I am going to try to convince you there is a way to create an almost reliable UDP protocol, so let's get this moving. Are you familiar with this? Congestion, loads of traffic. What about this one? So these are some measurements one of my students made; actually the measurements were made by the MAWI project in Japan, so they record 15 minutes of traffic every day on an optical link between Japan and the United States. What you see here is the daily average packet size for IPv4 over ten years. And as you can see, there is a gap. What happened is the link was overloaded and they decided they had to switch links, so they upgraded the connection to a wider link, a 100 megabit per second link. So this looks like a normal chart; actually we can spot several areas here. The first one is of course until mid‑2006, okay, when you see that the average packet size is around 400 bytes. And then they switch to a wider link, and from one month to the other the average packet size jumps from 400 bytes to 700 bytes. So this would be one area, this would be another area, and here you can see that in 2011 the average packet size varies a lot, and then around 2013 the average packet size decreases again, and then it goes up a little bit. Well, let me tell you two things I want to stress out of this chart. The first one is that at first look we had no idea why the average packet size changed because of the change in the capacity of the link. And the only explanation we could get is that this decrease in the average packet size is because you are having a lot of very small TCP packets. And when the link grew, the number of
TCP control packets was not so big, so actually the average increased, because at the application layer, for sure, the users did not change the nature of their regular Internet usage. The second thing I am seeing here is that maybe we can use average packet size as an indicator of whether a link is overloaded or not. So, we have more nice charts. This second one is the number of IPv4 packets. You can see that in the time span of ten years the number increased by an order of magnitude, and at the end of 2013 this link was recording almost 1 billion packets in the 15 minutes recorded per day. What about IPv6? So IPv6, not that bad, just one order of magnitude less, but remarkably showing higher growth than IPv4: it climbed four orders of magnitude in the time span of ten years. And this is the corresponding chart of IPv6 average packet size versus IPv4. The IPv4 packet size is there in the corner, so you can compare the two of them. What is remarkable in this chart is that IPv6 is capable of making much, much better use of the available payload of the IP packets. As you can see, it almost touches the 1,500 bytes many times there, while for IPv4 the 1,500 doesn't even show on the scale.
What about applications? I was mentioning that applications were the same for the users. Well, we have seen in this ten‑year time span more social media, more Cloud, more everything‑as‑a‑service, because of course users want to be as comfortable as possible. So, in fact, in the future we will see more mobile phone applications, mobile operators selling data, which is kind of strange, more web apps. Everything monitoring me: my wrist watch, my car, my TV monitors me, my fridge monitors me, vacuum cleaners map my house, and so on. There is this very nice talk by Kevin Kelly: every device will be a screen. In the future we will see more user‑driven applications, government‑driven, corporation‑driven, machine‑driven, and this will be fuelled by open source hardware and software. So this is what we can expect in the future, unless we do something about it. What can we do? Well, we have a number of problems here. The first one is of course the very small payload that the Internet forces us to have; as you know, 99.99% of the packets in the Internet are smaller than 1,500 bytes, and I am pretty sure that the remaining 0.01% are errors of assessment. Then there is also the problem of TCP control mechanisms and all the others that protocols implement. So we have, as a result of that, very low efficiency, high overhead, high latency. We can expect there will be an increase in the size of the messages, because we will use multimedia more and more. We have to face the transition to IPv6, and as you know, for IPv6 the useful payload is 20 bytes less than the available payload for IPv4, because the header is bigger. And of course we will also see increased mobile traffic, with all the control associated with it. So, I think we could use a new protocol, for machine‑to‑machine communication, for example. I am going to stick with this one.
Also regarding this: UDP is lighter than TCP, so you could try to use UDP; it's a nicer alternative. And we came up with a solution; we call this keyed UDP. So this is what a normal UDP communication looks like: you send a stream of packets from one port to the other, yeah? On the other end of the network there is a machine listening on one port, and you stream all the packets to that port. Of course, since the packets are not numbered, you will not be able to detect at the other side which packets were lost, or if there was a packet that was reversed in order in the middle of the network. So this is our suggestion: instead of sending all the packets to one port, send the packets to a series of ports, as if you were knocking on a series of doors, yeah? At the other end, if there is a packet missing, the other machine will be able to detect that there was a break in the pattern of the receiving ports. And actually this is a very nice idea, except that to some firewalls this may look like a port scan attack. What about this one? You don't send to a series of ports, you send from a series of ports. This is a nice idea too, except that some NAT‑PT machines will not use order‑preserving algorithms, so they may scramble the order of the ports; so we have to do a bit of research on these two issues. Nevertheless, we figured that there are three possible ways we can use this new way of UDP communication between two machines. The first one we called sourced keyed UDP; this is when the key is on the source side. The second one we called destination keyed UDP; this is when the key is on the destination side. And the third one, which is always possible, is source‑destination keyed UDP, when a series of ports communicates with a series of ports at each end of the network.
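The pattern‑break detection just described can be sketched in a few lines. This is a toy illustration, not the proposed protocol itself: no sockets, the port numbers are invented, the key is assumed to be a list of distinct ports cycled by the sender, and packets are assumed to be lost but not reordered.

```python
def port_schedule(key, npackets):
    """Destination port for each packet: cycle through the key."""
    return [key[i % len(key)] for i in range(npackets)]

def count_losses(key, received_ports):
    """Count missing packets by finding breaks in the cyclic port
    pattern. Every received port must appear in the key."""
    losses = 0
    pos = 0  # index into the cyclic key we expect next
    for port in received_ports:
        while key[pos % len(key)] != port:
            losses += 1  # the expected port never showed up
            pos += 1
        pos += 1
    return losses
```

With reordering in play, as in the talk, detection needs the windowed reconstruction described later; this sketch only shows why a break in the port pattern reveals a loss at all.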
Moreover, the ports don't have to be sequential. If you want to complicate things, you don't have to go 7000, 7001; you can repeat ports and make it very complex, and you can foresee dynamic port key changing in order to make it even harder for possible eavesdroppers to follow the communication.
So how does the destination application know what key is going to be used? Well, for sure, it can be hard‑coded in the application; we will say that this is determined keyed UDP. Or, if the application does not know beforehand which set of ports is going to be used, the key can be inferred, and this is what we call discovered keyed UDP. Or, if you set up a new protocol on top of all that, you can say that there is a key definition protocol that both machines use beforehand to establish which set of ports they are going to be using.
And we also think that this keyed UDP is compatible with standard UDP. So this is like a comparison chart here; the worst thing that would happen is that if the destination application is standard UDP and you are using destination keyed UDP, it will only receive every Nth packet. So we would lose a number of packets along the way.
Okay. For a change now, we can also try to think: we can use this with IPv6 addresses, because theoretically it's possible to assign multiple addresses to the same interface, so instead of using port numbers we can use IPv6 addresses that all refer to the same interface, yeah. So this is also interesting to research. So what are the pending issues with keyed UDP? What do we do when the destination application detects that there was a packet loss? Well, we can do nothing, just keep the information for the application itself, and then report it to the upper layers. You can report back and say this line is not very good, so let's try to use a different protocol. You can ask to resend: if the source machine has kept a copy of the sent packets, you can ask it to resend another time. Or you can do data imputation; for example, if you are using audio or video, there are a number of algorithms to fill in the gaps of a missed packet. It will depend a lot on the application you are going to be using. You can increase the complexity of the protocol; you can use time stamping to kind of know if there was a serious loss or an occasional loss. Of course, as I mentioned, there are problems with NAT‑PT machines that are not order‑preserving, you may see some NAT‑PT overload or IPv6 routing table overload, and there are a number of other problems. They need to be further researched and assessed; I have a couple of students working on this and I am inviting you to join us if you want.
So let's try to see what the eventual reconstruction algorithm could look like. So this is like a set of packets I am going to send, okay? And I am going to send packets from port 1 to 6. I am going to move a bit. So this will be the first stream of packets, then the second, the third, fourth and fifth, so I am going to send 30 packets from port 1 to port 6. The letters I have added are just for us to keep track of which packet is which. So, randomly, we decided that packets 2A, 1B, 5B, 5C and 2E would be lost, and we decided that packets 5A, 3B, 1C and 5V would switch. So this was the transmitted set and this was the received set. Of course the received set is smaller, it has five packets less, and there are some packets that are out of order. So what we do in this reconstruction algorithm that we propose is: if N is the length of the key, so N would be six, we take N minus one, that is five, series of packets and sort them out. We do this after the first sequence has arrived, and we do it in a windowed manner. This is the received sequence and this is the sorted sequence. The sorted sequence starts with packet 1, then tries to find a packet 2 here; as it doesn't find packet 2, it fills in with an F, for failure to receive. Then there is a packet 3, 4, 5 and 6, and so on, and it goes on like this. What is the ultimate result? We can see, for example, in this line here ‑‑ let's take this line, this is an easy one, because packet 4 never arrives, you see? This never arrives. So let's try this one. What are the candidates in this line here, what are the candidates for the position of packet 2B? And then we have an election. The candidates are 2B, 2B, failure, 2C, 2C. So 2B wins. So in this position, packet 2B would be the final packet to be received. And I can forward you the paper if you are interested in going through this in more detail.
And so this is what I was describing: for the eighth position, it's going to be packet number 2; the candidates were 2B, 2B, failure, 2C, 2C, I have already mentioned that. For position 9: failure, failure, 3B, 3B, 3C, 3B, 3B, so 3B would be elected. And so on. So this is the final set, reconstructed after we apply this algorithm. You can see that the algorithm was able to identify which packets were missing and, moreover, it was able to put back into the correct place the packets that had been switched. So this looks very nice, yeah? Of course there are worse results: if you raise the number of failures and the number of switches above 20% or 25%, in some cases it will be chaotic and the algorithm will not report a very nice metric. Nevertheless, I think that resilience up to 20% loss is very, very, very good. So this would mean that instead of the standard UDP layer here, we would have an additional, let's say a new kind of layer where, in the downward direction, the application would communicate with the keyed UDP layer directly and all the others below, and in the upward direction, the keyed UDP layer would forward the packets to the stream reconstruction algorithm before this algorithm forwards the packets to the application layer.
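A much simplified sketch of the reconstruction idea follows. This is not the N‑minus‑one‑window election from the talk; it is a single‑pass variant, invented for illustration, that marks losses where the port pattern breaks and undoes adjacent swaps with a one‑packet lookahead. Packets are modelled as `(port, label)` pairs with ports cycling 1..N, and every port is assumed to be in range.

```python
def reconstruct(received, key_len):
    """Single-pass sketch: emit one label per expected slot, 'F' for a
    detected loss; adjacent reorderings are repaired by lookahead."""
    n = key_len
    nxt = lambda p: p % n + 1
    pkts = list(received)
    out = []
    expect = 1  # ports are assumed to cycle 1..n, starting at 1
    i = 0
    while i < len(pkts):
        port, label = pkts[i]
        if port == expect:
            out.append(label)
        elif i + 1 < len(pkts) and pkts[i + 1][0] == expect:
            # break in the pattern, but the expected port is right
            # behind: treat it as a reordering and swap the pair back
            pkts[i], pkts[i + 1] = pkts[i + 1], pkts[i]
            out.append(pkts[i][1])
        else:
            # true break in the pattern: record a loss, re-try this packet
            out.append('F')
            expect = nxt(expect)
            continue
        expect = nxt(expect)
        i += 1
    return out
```

The full algorithm in the talk is more robust because it sorts N minus one overlapping windows and elects each position by plurality, which tolerates longer‑range reorderings than this one‑packet lookahead does.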
So this is all about users, yeah. But we will see more and more data from machines, not just from users, of course, and we wouldn't be able to process it all, because it's machine to machine. We will see multi‑sensor, multi‑device, multi‑format data, and so on. It's going to be complicated. It's going to be a bit scary, I think, but it will probably be worth the effort. So, yeah, questions? Thank you so much, guys.
CHRISTIAN KAUFMANN: Questions, comments? I am very happy that we have a presentation which is not just reporting that something is broken or doesn't work, but has a suggestion for how to fix it. So thanks a lot for that.
AUDIENCE SPEAKER: The idea is great, simple and perfect. I think it has a good future. But I have only one question: do you have any measurements when it comes to performance, like processing at that layer, or complexity, or something like that? And the other question: what kind of application can we apply this thing to? Is it the same as UDP?
NUNO M. GARCIA: We don't know, for the final question. This is an idea that has been around for two years now. But, as you know, it's very hard to find people who are willing to do research in very new areas like this one; thank you so much, I will take your card after the session. So we don't know exactly where and when and to what we can apply this new protocol. I am guessing every time there is a UDP communication you could use this, because even if you don't report to the source application that you are having a very serious loss of packets, you can still keep the metrics. For me, for example, it's very annoying, and I know this is not the only reason why, it's very annoying to make a Facebook call or a Skype call and at the end have this kind of thing: how was it? Was it good, great? Come on, you should know, you are transmitting the packets; the applications should know how the communication went. They don't need me to tell them if it was good or bad. For me it's very annoying. Regarding the first question, about complexity and about processing time, we really should not be concerned about processing time. It's just silicon, or optics in the not‑too‑distant future. Devise the algorithms and the guys in the hardware will find a way to make it work fast.
AUDIENCE SPEAKER: Brian Trammell. No hats and many hats. So, measurement hat. This was a very intriguing presentation. I was trying to figure out what it is you built, and what it is, is essentially an encoding layer that encodes information about loss, reordering and possibly latency into the ports of UDP. It's not a transport; you are going to have to build a transport protocol on top of this, and once you try to put applications on top of it you are going to end up ‑‑ your last sort of open issue slide was ‑‑ there was ‑‑ I forget ‑‑ there wasn't? Did I see an earlier version of this? I looked at these slides earlier and maybe I made up an open issue slide. Here is mine: in order to actually take this encoding and do something with it, so what to do when things go wrong, you are basically going to have to reinvent something like TCP on top of this.
NUNO M. GARCIA: Why?
BRIAN TRAMMELL: I would suggest you go back and look at all of the stuff that the transport community has tried to implement and failed.
NUNO M. GARCIA: We did that.
BRIAN TRAMMELL: You could build ‑‑ you could use this as sort of an underlayer for QUIC somehow; there are all sorts of things you could do with it. One thing that I would suggest you do, measurement hat on, is really, really look at the prevalence of the types of failures that you get because of NAT‑PT: actually try to build either laboratory or end‑to‑end experiments to look at how much of this encoding goes away. It would be interesting, because then you could build encodings on top of the algorithm that you have.
NUNO M. GARCIA: For sure, we have that right on the real network. For sure.
BRIAN TRAMMELL: So I would really look forward to something like that coming back to MAT WG, because part of what this approach does is measure impairment in the network, and that is pretty cool. Thank you.
NUNO M. GARCIA: Don't go away. We have looked at almost every other proposal, reliable UDP protocols and so on, and what you say is true. Most people think UDP is very cool because it doesn't rely on connection set‑up and control and so on. So what they do is implement something like UDP and then put TCP on top of that to do the control.
BRIAN TRAMMELL: That is the wrong way of doing it, because we already have TCP. The partial reliability extension for ‑‑
NUNO M. GARCIA: The background for this research was that in realtime communication we don't need to communicate back.
BRIAN TRAMMELL: Interactive applications.
NUNO M. GARCIA: As long as I know the communication was not so good, like in a batch after one week I can send some statistics back to the provider: this time and this time it didn't go that well. Maybe we could do that. We don't have to add realtime reporting of what went wrong.
BRIAN TRAMMELL: Just to ruin your day, think about how you would put security on top of this and with that I am going to run away. Thanks.
NUNO M. GARCIA: Thank you.
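The mechanism discussed in this exchange, a sequence number carried in the UDP port fields so that a receiver can infer loss and reordering without any return channel, can be illustrated with a toy sketch. This is an illustration of the general idea only, not the scheme from the slides; the port base, the window size and the handling of late packets here are all assumptions:

```python
def encode_port(seq, base=1024, window=60000):
    """Map a monotonically increasing sequence number into the usable
    UDP port range (toy mapping; the real proposal differs)."""
    return base + (seq % window)

def decode_stats(ports, base=1024, window=60000):
    """Infer loss and reordering from the ports observed at the receiver,
    assuming no duplicates and fewer than `window` packets in flight."""
    seqs = [(p - base) % window for p in ports]
    lost = reordered = 0
    seen_max = -1
    for s in seqs:
        if s <= seen_max:
            # Arrived after a higher sequence number: it is late, not lost.
            reordered += 1
            lost -= 1
        else:
            lost += s - seen_max - 1  # gap in the sequence numbers
            seen_max = s
    return {"lost": lost, "reordered": reordered}

# Packet 4 never arrives, packet 2 arrives late:
print(decode_stats([encode_port(s) for s in [0, 1, 3, 2, 5]]))
# → {'lost': 1, 'reordered': 1}
```

Note that, as in the presentation, nothing is signalled back to the sender: the receiver simply keeps the metrics and can report them in a batch later.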
AUDIENCE SPEAKER: Jeff Osborne. Page 27, if you could go back to page 27. If it's a typo, then I can sleep tonight, but otherwise ‑‑ 5 C, when you corrected this, maybe it's one more slide. The result of this correction ‑‑ I am losing my mind ‑‑ when you showed the corrected packet, the 5 C packet disappears in the final result.
NUNO M. GARCIA: Yes. Let me try to find 5 C. Yes. So it's 5 D there, okay. This is one of the problems here; actually, thank you for pointing this out. One of the problems is that you can have the same packet elected twice. 5 D takes the place of 5 C, but 5 C is still present there. So there are two problems here that we did not handle yet; we have to refine this a little bit more. The first one is hidden packets, packets that were received and are never elected for some reason; and the second one is packets that are elected more than once, and this 5 C is one of those problems. Thank you so much for pointing this out. Yeah, you are looking at it. This is nice. So, you had a question and you just sat down. No? Are you sure?
RUEDIGER VOLK: Deutsche Telekom. I just wanted to add half a sentence to Brian's closing remark pointing to security. Actually, he was asking you to put security on top of it. I would ask: do an analysis of what attack vectors become available because of where you are going. I have faint recollections that having sequence numbers be predictable is one attack point. And I also have, for a particular UDP application, a faint recollection that using entropy in the port space actually was an important part of securing stuff. My brain doesn't work fast enough to tell you what the conclusions out of those pointers are. But it looks like, well, okay, there is something that you have to take care of.
NUNO M. GARCIA: For sure. This is a nice departing ground, I guess.
CHRISTIAN KAUFMANN: Good. Thanks a lot.
So, we are fashionably late. For the next speakers, we will run all three talks and take questions and comments at the end, aggregated. And with that, we get updates about RIPE Atlas and what the NCC is doing in that field. Thanks.
CHRISTOPHER AMIN: I am Chris from the NCC. I have been working in the R&D department for about six years, and I just want to give an update on RIPE Stat, RIPE Atlas and some of the other things that we have been working on.
So, starting with RIPE Stat, which is the NCC's tool for information about Internet resources. It's been a period of improvement and consolidation, so there have been a lot of usability improvements on RIPE Stat. The home page and the search page have been improved, the widgets have had usability improvements, and we have increased mobile friendliness.
So, we have been putting extra effort into spreading the word about RIPE Stat, making sure more people use it. It's already a very widely used tool, but we know there are people we are not yet reaching. A couple of examples you can see there; one of the funniest ones was probably Job Snijders BGP and yank cat, you should see that. And we have put a lot of effort into improving scalability, out of necessity, because the tool is very widely used, and you can see the statistics there, something like a 50% increase in the year to date. So that has necessitated more servers and more efficiency, and we implemented zero‑downtime deployment, so when we are deploying there is no noticeable effect on users.
So, moving on to RIPE Atlas. This is the part where we normally show a bunch of statistics about the network and the coverage, and that is what I am doing here, but with the special distinction that we now have 10,000 active probes consistently, and that is a nice round base‑10 number. But it's significant not just because of that, but also because it's the nice round base‑10 number that we declared six years ago or whatever it was: we want RIPE Atlas to have 10,000 probes, and now it does. So there was an article where we kind of used this as a line in the sand to say this is where RIPE Atlas is, but the significant thing now is that the target, or at least the emphasis, has shifted. We have a lot of probes, but what we would like to do is increase the diversity of those probes, so the explicit target is to have 10% of, I say, IPv4 ASNs covered. It's not a specific goal, like we are going to do it by next year or something, but we want to work on the diversity of the network, especially in under‑served or under‑covered regions, so we have more representative coverage.
So, on the probes, the little guys that do most of the work: we are continuing to evaluate candidates for the next version of probes. There are several hardware candidates that we are considering. There is no incredibly pressing need for the next one right now, especially because we had some issues with the version 3 probes to do with USB sticks and so on. Those issues have been largely mitigated; there are still problems, it's still not ideal, but we are doing okay, and this is how we managed to get the probe count up above 10,000. We are considering, for the next version of probes, having more of a diversity of interfaces for connecting back to the command and control system, so actually receiving the measurement instructions and delivering the results, maybe being able to do that over wi‑fi or over 3G or other ways, and that would really allow us to reach places that we are currently not reaching. And of course, when we do something like that, thinking back to Agustin's presentation, we will make sure those are properly tagged in the metadata system, because it's important that you know whether the data you are getting is from wi‑fi or from cell or whatever.
We are evaluating virtual probes. We don't have anything to say right now on that, but we will do shortly. Similar considerations apply, because you will probably want to know if the data you are getting is from virtual probes, but a lot of people would find them useful because they would be able to deploy them in places where they can't plug in a little box. And finally, on the probes: last RIPE meeting we said that we would be, not exactly retiring the version 1 and 2 probes, but putting them into more of a lower‑maintenance state, and that is what we did. The firmware for the small black probes has been frozen; if there are any security requirements we will provide small updates, but new measurement types, new measurement options and so on won't be added to these older probes. They still work and we still get useful results, so if you have them, please keep them plugged in as long as they will tolerate it.
So, on to the RIPE Atlas anchors: we have version 3 anchors out in the world now. There is a Labs article which explains exactly what the hardware is. Importantly, if you are considering hosting one of these, the price is halved compared with the previous generation, so it means even more people can hopefully set one up. And we also have some sponsorship from the RIPE NCC for people to set up anchors. There are not too many of those left, but if you are interested then get in touch with us, because it could be that the NCC could sponsor an anchor somewhere.
And we have several recommended vendors who will be able to provide a fully assembled anchor. We provide full specifications for the parts so you can assemble one yourself, but there are options if you don't want to do that.
So, new features in the platform. One thing we did: the data for RIPE Atlas is available via WebSocket streaming and REST APIs and so on, but we now have daily dumps of all of the measurement results for the past 30 days available, which is very useful if you want to do research on a huge amount of data all at once, and that is segregated according to measurement type. It includes the data for the root zone measurements which we announced last time, which query random and popular DNS labels using probe resolvers, so you can also download those; they are marked on the FTP as well. And we have added more measurement options. These are kind of incremental improvements, things like being able to include the probe ID as a key string in the ping payloads, being able to do TLS measurements specifying a particular host, and DNS macros, so you can insert the probe ID or a random string or other things into DNS labels.
So, there have been some performance improvements on the front end. The measurement creation form and the measurement listing page have been made faster and more usable. The main way that we have done that is by shifting as much as possible to use the APIs, which are the public APIs, and making that work in all of the nice ways. The ambition is that we entirely use the APIs for the web front end; we may not get there exactly, but certainly most of the website uses the APIs that anybody else can use. DomainMON has had the same kind of usability improvements. TraceMon, again, incremental improvements: improved stability, and the geolocation information has been improved. TraceMon is on GitHub, so if any of you want to contribute, we are actively looking for people to help us out with that. And all of the feature improvements, all of the notable ones for RIPE Atlas, are available on the web page.
So, more on the research side of the research and development team. We have several RIPE Labs articles which have been published recently. These two articles are kind of along the same theme, which is using probe connect and disconnect events as a signal. This is probably the only passive part of RIPE Atlas, the active measurement platform: probes come online and off‑line, and you don't necessarily have to care what they are doing when they are there, but the fact that we know they are connected or disconnected is quite a good signal. So Emile has a general article about that, which explains what you can do, and my other colleague Stephen applied this to the extreme weather event in Puerto Rico and showed how you can use it. The tools they used to visualise this have been made open source on GitHub, so other people can use them when there are events, or if you want to demonstrate anything with probe connection events.
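The connect/disconnect signal described above can be turned into a crude outage detector by bucketing disconnect events into time windows and flagging spikes. This is a minimal sketch, not the method from the Labs articles; the window size, threshold and event format are assumptions:

```python
from collections import Counter

def disconnect_spikes(events, window_s=900, threshold=5):
    """events: iterable of (unix_timestamp, kind) pairs, where kind is
    "connect" or "disconnect". Returns the start times of the windows
    in which at least `threshold` probes disconnected."""
    buckets = Counter()
    for ts, kind in events:
        if kind == "disconnect":
            buckets[ts - ts % window_s] += 1
    return sorted(t for t, n in buckets.items() if n >= threshold)

# Six disconnects in the first 15-minute window, one in the second:
events = [(i * 10, "disconnect") for i in range(6)] + [(1000, "disconnect")]
print(disconnect_spikes(events))  # → [0]
```

In practice one would feed this with the probe connection events that the platform exposes, grouped by country or region, which is roughly what the Puerto Rico visualisation does.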
This was a nice little article about using the trace route measurements in order to kind of test web server availability. We don't have HTTP measurements, but if you strip down the options enough, you can basically tweak a trace route measurement into a TCP ping on port 80 and test it that way. And there is an article by Robert on interpreting RIPE Atlas in the context of the Internet of Things and what we can learn from that.
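The stripped‑down‑traceroute trick can be sketched as an Atlas v2 measurement definition. The field names follow the public measurement creation API, but the exact option values here, in particular raising `first_hop` so that intermediate routers are mostly skipped, are an assumption to verify against the Labs article:

```python
import json

# Hypothetical target; substitute the web server you want to test.
definition = {
    "type": "traceroute",
    "af": 4,
    "target": "www.example.org",
    "description": "TCP ping via stripped-down traceroute",
    "protocol": "TCP",
    "port": 80,
    "first_hop": 64,   # start near the TTL ceiling so only the target replies
    "max_hops": 64,
    "packets": 3,
}
payload = {
    "definitions": [definition],
    "probes": [{"requested": 5, "type": "area", "value": "WW"}],
    "is_oneoff": True,
}
print(json.dumps(payload, indent=2))
# POST this to https://atlas.ripe.net/api/v2/measurements/ with your API key.
```

A SYN/ACK (or RST) from the final hop then tells you the port is reachable, which is as close to an HTTP availability check as the platform allows.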
So, other activities: OpenIPMap, I won't say anything because Massimo is on next. Country reports: in the Services Working Group this afternoon, my colleague Christian will be talking about the country reports, so that is country‑level information. And we have developed various training things recently; there was the Eduka event, a webinar‑type thing, that was very useful. As always, get involved: it's a collaborative, community‑based platform, and we want to see you, we want to work with you and we want to see what you can do with RIPE Atlas. And thank you to our sponsors.
CHRISTIAN KAUFMANN: Thanks. Next, Massimo, straight ahead. 24 slides in ten minutes.
MASSIMO CANDELA: Good morning everybody. I am from the Science Division of the RIPE NCC. Today I would like to present to you our effort in terms of IP geolocation. So, the first question is why: we are getting a lot of questions about what the NCC is going to do about IP geolocation. At the same time, our Executive Board, on various occasions, and there is a link that you can read, stated that the NCC is not going to provide any end‑user geolocation, because it's out of our scope. At the same time, for our kind of research, and also for third parties, it is really important to have infrastructure geolocation, to understand what is going on and to make sense of our data. And so we will focus only on infrastructure geolocation. We also need a geographical data format that is unified, that we can use across our tools: same format, and rich, so we can do complex queries, and accurate enough. And third point: IP geolocation is extremely difficult, and we know this; a lot of research has tried, and there is no 100 percent accurate solution. We are also not trying to provide that. What we are trying to provide is a way to, let's say, work together and improve the geolocation with the methods that we already have or that we can create.
So, here we go. The new RIPE NCC geoservice is an API; you can access it, and there is a series of endpoints, and we will see some of them. I am going a bit fast. So, this is the first example: we try geo.ripe.net/locate/<address>/best and we get a geolocation object, we get a city, and you can see here our unified format: city name, country code, and lat/long. If we remove the last part, we get an array of geolocations. The important thing to see is that each of them has a score, a number. So what is the concept of the score? We said multi‑approach geolocation: the idea is that we put together all of these systems developed for finding geolocation, and we call each of them a geolocation engine. They run in parallel. They receive the same IP address and geolocation database as input, they each propose geolocations, and they assign a score to each of them. And at the end there is a reduce function that basically gives a final score. Some of these geolocation engines are not applicable in all cases: the Anycast engine, for example, is only applicable if the IP is anycasted. Some geolocations can produce false positives and so have to be used together with other engines. And some of these engines are instead better when they receive some extra information, like a trace route, and I will show you an example.
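As a small illustration of using the service, here is a sketch of client‑side helpers around the endpoints shown. The base URL comes from the talk, but the exact path layout and the response shape (an array of candidate objects, each carrying a `score`) are assumptions based on the examples on the slides:

```python
BASE = "https://geo.ripe.net/locate"  # endpoint mentioned in the talk

def best_url(address):
    """URL returning the single best geolocation for an address."""
    return f"{BASE}/{address}/best"

def partials_url(address):
    """URL returning each engine's geolocations and scores, i.e. the
    results before the reduce function is applied."""
    return f"{BASE}/{address}/partials"

def top_candidate(candidates):
    """The reduce step in miniature: pick the highest-scoring
    geolocation from an array of candidate objects."""
    return max(candidates, key=lambda c: c["score"])

print(best_url("8.8.8.8"))
# → https://geo.ripe.net/locate/8.8.8.8/best
```

Fetching these URLs with any HTTP client and feeding the candidate array to `top_candidate` mimics what the `/best` endpoint does server‑side.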
So, the idea is that they run in parallel, each of them produces a score, the reduce function produces the final score, and we can integrate whatever research into one of these engines.
So, if we put "partials" at the end of the URL, you will get each engine's geolocations and scores. I compressed it so you see only the header. So this is before the reduce function: you see all the single results. This instead is an example of the Anycast engine, which uses the anycast data set: I am trying to get the location of the Google DNS and I get all the instances available in the data set. If I do a POST where I include a trace route from my machine to the Google DNS, the score for the instance in Germany is boosted, and this is an example of what you can do when you also post additional data.
Active geolocation, that is a controversial and hot topic. Basically, when you ask for an IP that was never searched before, RIPE Atlas starts some ping measurements, so we try to use latency to detect where the IP can be. We use PeeringDB data and BGP data for reducing the search area. We consider only round‑trip times of less than 10 milliseconds; if it's more, we don't try to geolocate it. The real output of this process is a list of geolocations where the value is the score. We try to boost the score for each city based on various factors, such as the facilities in that city, and population, because I would like to remind you that this is geolocation of the infrastructure. And we are working on it, so contributions are welcome.
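The 10‑millisecond cut‑off has a physical justification: light in fibre travels at roughly two thirds of c, about 200 km per millisecond, so a round‑trip time bounds how far away the target can be. A back‑of‑the‑envelope sketch (the fibre propagation speed is an approximation, and real paths add queueing and stretch):

```python
C_FIBER_KM_PER_MS = 200.0  # ~2/3 of the speed of light in vacuum

def max_distance_km(rtt_ms):
    """Upper bound on the distance to the target implied by a
    round-trip time, ignoring queueing, serialisation and path
    stretch (so the true distance is usually much smaller)."""
    return (rtt_ms / 2.0) * C_FIBER_KM_PER_MS

print(max_distance_km(10.0))  # → 1000.0
```

So a 10 ms RTT already allows a radius of about 1,000 km, which is why larger RTTs are of little use for city‑level geolocation, the point Brian raises in the discussion below.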
Crowdsourcing is another API, where you can crowdsource geolocation information about your IPs. And we have, this is an excellent slide, it's important to update your PeeringDB information; I will say only this because I have ‑‑ so, we would like to contribute more with research, introduce new geolocation engines and integrate others like reverse DNS, define and publish some KPIs, we already collect metadata for this, and define some user rating policy and incentives for the crowdsourcing. I am sorry if I am rushing. So, geo.ripe.net; no questions, next presentation. I hope you can give me some minutes more. So: crowdsourcing infrastructure geolocation with OpenIPMap. This is not an API any more; you can display trace routes on a map. This starts with why. At RIPE 74 I presented this tool called TraceMon that gives you a topological view, so multiple trace routes on a graph, and you can explore it; each node of the graph is an IP address and you have additional information on it, like autonomous system, reverse DNS and so on. But sometimes you really need a map: when you want to understand the distance between the various IP addresses, you want to give sense to round‑trip times and see relationships between countries. Another thing is crowdsourcing data: we said that we have an API for crowdsourcing data, and that can be done better with a map, it's natural. Imagine this task: you watch a trace route on a map, one of the IP addresses looks like it is badly geolocated, and you crowdsource the correct information to us. There was, some time ago, a prototype called OpenIPMap done by my colleague Emile Aben, and there is also an article, and you can see the image here; it got a lot of interest, and we wanted to have a production version of it. And we also have the API that I mentioned before, with better infrastructure geolocation from the various possible engines. We put everything together, and we have the new OpenIPMap.
We kept the name and the basic functionality, and we have enriched the functionality and improved the UI. Before I forget to say this, because I usually receive e‑mails about it, here is where you can find it: atlas.ripe.net/measurements, the list of measurements; you click on trace routes, because we are talking about trace routes, and click on one, for example a trace route to wikipedia.org, and there is a tab with all the tools, OpenIPMap being one of them. So, let's go with it. I open this wikipedia.org trace route measurement, the page loads, I close this search box, which I don't need, and I click on the icon that zooms to where the trace routes are. And you see that the various IPs are geolocated on the map, and each trace route is a line in a different colour. If you hover over them you may find something that looks weird: it says 5 hops in that segment. This is because, while we are running this, there are probably some hops that are not yet geolocated, and as soon as they are ready they will appear on the map. So, Geneva; and if I click on that trace route I can get more information in that panel. And you have each IP, for each hop, with the various geolocations and this kind of green pie chart that gives you information about how confident the API is that the geolocation is correct. And what you can do, in addition to clicking on everything, is, see, on the left side there is all this information about the trace route, you have all the hops and the autonomous system numbers and so on. What we do here is click on Geneva to validate the geolocation; we confirm it, so we basically crowdsource a plus one on that information; we do it also for Frankfurt. This goes into our database. Now, what I think is weird in this trace route is a place close to Rome; they are famous for their bread, but I don't think for Internet infrastructure, so it looks weird to me that from Rome it goes there.
Just to test, I click on that, and I discovered that actually there was an error in the geolocation, because the reverse DNS says Rome. So I crowdsource Rome, I submit this, and it goes to our crowdsourcing system.
So, when something is newly geolocated it gets updated in realtime on the map, and when we don't have the city we geolocate at the country level; you see this kind of green effect on a country. Just to finish: OpenIPMap is based on TraceMon, so all the research that we have done on TraceMon is available here. We would like to promote this as an enriched and open model for trace routes, and I hope more collaborations with us will come. Future work:
We would like to have an OpenIPMap search, where you can put an IP address in and we give you information about where the address is and show you the trace routes passing through that IP address, instead of starting from a specific measurement.
AUDIENCE SPEAKER: Yes, please.
MASSIMO CANDELA: Anycast support; we need some time to implement it in the UI. A generic trace route input, where you can essentially paste the shell output of a trace route and we parse it and you can see it on the map. We would like to improve the crowdsourcing score mechanism; for now it's a bit simple, and here too we maybe need to do some research, so contributions are welcome. Some KPIs for service evaluation, we are collecting metadata for this. And reverse DNS geolocation: the check we do by clicking on the reverse DNS can be done automatically, and we would like to integrate and work on that too. I am sorry that I rushed; I hope you got something. Questions?
CHRISTIAN KAUFMANN: Do we have quick questions or comments? I mean the coffee break started so you have chosen to stay here so you are not in a real rush, I guess.
AUDIENCE SPEAKER: From the university. I ran the trace route interactive maps, two slides back; I tried both of them. The OpenIPMap didn't work properly for me.
MASSIMO CANDELA: It was deployed three days ago, so probably you used the ‑‑ earlier version.
AUDIENCE SPEAKER: The alpha one. Even with TraceMon, sometimes I see the probes and then I don't see any of the connectivity.
MASSIMO CANDELA: This is because in TraceMon we take into consideration the interval, and sometimes this can create some issues. So, if the trace routes you get are too old compared to the instant you want to look at, they are considered disconnected, because we consider the interval of the measurement. So probably that is the case; otherwise we can check together.
AUDIENCE SPEAKER: That is what happened, because I did my measurements and came back after a week to analyse them.
MASSIMO CANDELA: Then it's showing only the probes: we don't have recent trace routes in the interval, and this is the way we have emulated the disconnection, because basically we would not know if it's the same route or not. That is what we are trying to do. But we can look at it together later if you...
AUDIENCE SPEAKER: Thank you.
BRIAN TRAMMELL: Sorry for making you rush, really cool stuff. So, I am really interested in your on demand RTT thing, because ‑‑
MASSIMO CANDELA: I knew it.
BRIAN TRAMMELL: We should talk about that off‑line. It's interesting you chose 10 milliseconds.
CHRISTIAN KAUFMANN: On the mailing list and other people can see it as well.
BRIAN TRAMMELL: We will do that. Let's start that conversation here, because everybody has decided that they are not going to the coffee break. So, it's interesting that you chose ten milliseconds as the threshold there. Was that random, it seems about 1,000 kilometres, or did you actually look into the data and figure out that it was useless beyond that? Because I have a paper under submission where we say it's useless beyond that.
MASSIMO CANDELA: Thank you for the contribution. I took into consideration your work while I was doing this.
BRIAN TRAMMELL: Oh okay.
MASSIMO CANDELA: There were various reasons. From our experiments we were thinking that beyond 10 milliseconds it is not useful any more, and we were even trying to shrink it more. But we were also playing a different kind of game: you were trying to discover if it's a privacy issue, so you were implicitly targeting end users. This is infrastructure, so actually for us it's really much easier to detect with the address ‑‑
BRIAN TRAMMELL: And you are getting useful results with the 10 milliseconds?
MASSIMO CANDELA: We are getting useful results, and I repeat, the idea is two things: you have the score mechanism, which is actually the real added value, so we have various ways of adding score to the cities, for example facilities ‑‑ various ways.
BRIAN TRAMMELL: That turnkey, was that on‑demand geolocated, or was it some other algorithm that decided that there was infrastructure?
MASSIMO CANDELA: I think that was on demand geolocated. After this, we tune it even more.
BRIAN TRAMMELL: Let's start and thread on the list about how that works ‑‑ let's go talk off‑line and start a thread.
CHRISTIAN KAUFMANN: Last question, last comment?
AUDIENCE SPEAKER: Wolfgang. Have you considered using DNS location records?
MASSIMO CANDELA: Yes. Actually, yes, we considered that at the initial stage. I know there is research on those records, and they say that this record is often not up to date. But anyway, it's something that we would like to add in the near future: add that, and the reverse DNS, which is another one that is probably more accurate when available.
CHRISTIAN KAUFMANN: Perfect. Thanks a lot. Sorry for the rush.
So, and with that, I am going to say a 30‑second thank you. As you have seen on the mailing list, I am stepping down after seven‑and‑a‑half years and 15 meetings, after we basically chartered the MAT Working Group. You have elected Brian as the replacement, so from now on you have Brian and Nena, whom you already know. And I want to say thanks to a couple of people, in no particular order, just because they helped a lot. We are the people standing on stage being the shiny ones, but most of the work is done in the background, so I want to thank the NCC for their help and their support in organising the meeting and helping us here. I want to thank Daniel for coming up with RIPE Atlas, and/or curse him for that, I am not sure. I want to thank Kaveh, who helped us, and Robert, whom I haven't seen, who helped us with the explanations, putting together slides for the NCC updates and supporting us over the years; and also Amanda and Suzanne for the minutes, which you all read and like, and for publishing them so the people who haven't been at the meeting can see them as well. With that, I stop and ask you if you have any AOB?
DANIEL KARRENBERG: I think I have to say, personally, and I think I speak for a couple of people in the room, that the MAT Working Group will not be the same without you, and I would like to give you a big round of applause for the work you have done.
BRIAN TRAMMELL: For those of you who are hoping to go over to Address Policy, they got through their agenda, so it's just Connect, yes, in the main room. Whatever your preferences there were.
CHRISTIAN KAUFMANN: With that I officially close it and see you all at RIPE 76.