Plenary session
23rd October 2017
2 p.m.:

JAN ZORZ: Let's wait for a couple more minutes so people can sit down and then we start the session. We should start, I think. Welcome to this afternoon session after lunch. I hope you had a good lunch and you are not too sleepy. Can you please close the door. Thank you. So, we continue with the programme, and our next speaker is our friend from Nokia, Greg Hankins, and he will talk about recent BGP innovations for operational challenges. So welcome, the stage is yours, thank you.

GREG HANKINS: Hello everyone. Good afternoon. Thanks for having me, I appreciate being here in Dubai. I am going to talk about some recent innovations for operational challenges that have gone through the IETF and are making their way into vendor implementations.

So recently, we have seen increased participation by operators in the IETF. This is really good, because for a while it was really just the IVTF, as some people like to call it, the vendor task force, and there was maybe not so much participation by operators. Recently, especially in IDR, that is the Inter-Domain Routing Working Group, and GROW, which is the Global Routing Operations Working Group, we have seen a number of operators come and participate and bring forward some new ideas, and as a result we have had a number of RFCs that have been published and several more that are in the pipeline. It's really good to see operators and implementers working together in these Working Groups to deliver these new ideas. We are going to go through some of the new innovations that are coming, and I just want to point out, it's never too late to join either of these mailing lists. These are probably the most interesting for you as network operators, but obviously the IETF has a tonne of others if you are interested in those.

Here is the agenda slide, and this is also slightly the problem slide, because I have 49 slides in about 30 minutes, so we are not going to be able to cover all these topics, but I will just touch on some of the highlights and some of the most important ones that we haven't talked about before.

Each of these sections in the talk is actually designed to be modular, so by skipping around we don't lose anything, and you will still have the slides in case you want to go and look at them in the future.

I am just going to skip through this first one on security; this is basically just a new well-known community that you can use to signal destination-based blackholing. It's very wonderful. The next one is important, so I want to spend some time on this one.

So, if you have a default configuration like this, what does this do? Think about it real quick. By default, there are no policies applied. This is the default configuration on many vendors, and in fact on most vendors, or a lot of them, the peer actually starts to come up once you type the remote AS. So it's actually really bad, because you could be in the middle of configuring policies or something and it's already starting to send and receive routes, because the peer is actually up. What this does is actually leak routes between ASes, and this is probably not what you want.

RFC 8212 basically provides functionality so that, by default, if there is no BGP policy, then it rejects routes on import and export. So it's a very secure mechanism to prevent typos and all sorts of other misconfigurations when there is no policy and you don't want to announce all the routes in the first place anyway.

Here is how it looks as a virtual implementation. This is not how it would look when you configure it; it's configured like this behind the scenes in the BGP implementation, in that there is now an implicit deny for the import and export policies. So this is basically how it would be implemented in the router, not something that you have to actually configure. There was a lot of argument about this in IDR; in fact it was one of the most discussed e-mail threads, I think, in the history of IDR. There are many hundreds of messages, and many of you in the room participated. Because it changes the base BGP spec there was a lot of opposition and concern about changing defaults and breaking things, a lot of concern about customers not reading release notes and not knowing that the defaults had changed, and all sorts of other, maybe, red herrings about changing defaults. In the end, everyone thought that better BGP security and better BGP defaults were more important than the concerns about changing things, so it went through and we have this lovely RFC. What this gives us now is consistency across platforms and vendors. There are some that already implement some form of this, and there are some vendors that implement no form of this, and if you are just looking at a configuration you have no idea what each is doing. This gives us explicit configuration, where you know whether something is doing it or not. It makes it easier because you don't have to guess what an implementation is doing. Most of all, as I mentioned, it protects the default-free zone; the security of one router really affects the security of all of us, and we have seen this recently. I think there were route leaks just as recently as yesterday or the day before, weren't there?
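The implicit deny behaviour can be sketched in a few lines of Python. This is purely an illustrative model of the RFC 8212 semantics, not real router code; the `Policy` class and function names are my own invention.

```python
# Illustrative model of RFC 8212 behaviour: with no explicitly configured
# policy, every route is rejected on both import and export.

class Policy:
    """A trivial stand-in for a configured routing policy."""
    def __init__(self, prefix, accept):
        self.prefix = prefix      # prefix this policy matches on
        self.accept = accept      # True = accept route, False = reject

    def matches(self, route):
        return route.startswith(self.prefix)

def apply_policy(route, policies):
    """Return True if the route is accepted, False otherwise."""
    for policy in policies:
        if policy.matches(route):
            return policy.accept
    # RFC 8212: implicit deny when no configured policy matches
    return False

# With no policies at all, nothing is imported or exported.
print(apply_policy("203.0.113.0/24", []))                      # rejected
print(apply_policy("203.0.113.0/24", [Policy("203.0.113.", True)]))  # accepted
```

The point is simply that the fall-through case changes: pre-8212 behaviour was the equivalent of `return True` at the bottom, which is what leaks routes between ASes.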

What this means to you and to implementers is that, because this RFC modifies the base BGP specification, implementations are now no longer compliant with the specification. There are a lot of vendors that need to do some work, and particularly because this is all just changing defaults, it will take some time for these implementations to catch up and make this the new default, but over time it will happen. There is a list there at the bottom of the page that we are maintaining in order to help you keep track.

What this means to you, and what you should start to do: if you are not implementing routing policies with secure defaults, then you should probably start doing that now anyway. You are probably doing it wrong if you just have open defaults and you are sending and receiving everything, and it's probably not what you want. So now is a good time to start. Keep an eye out for when the implementations actually change the default behaviour, so please read the release notes and documentation, just for me, as a vendor, to you; it really is important. We have so many customers that don't read them and then are surprised by changes that we make. If you follow these steps then you will be prepared in advance for when your vendor does eventually implement this default.

I am going to skip to this one, for large communities. They are basically like RFC 1997 communities, but larger. They have support for 32-bit ASNs, so that is a big attraction for them. We have given a number of presentations around the world, and there is a site here,; it has almost everything that you can think of that you would need to know in terms of background. All the presentations are there, and there are even configuration examples and things like that. So I will just skip that and mention that we expect next year to be the year of large BGP communities, because the major router vendors will have implementation support, so you should look at this now to be prepared for next year.
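The "larger" in large communities is concrete: per RFC 8092, each one is three unsigned 32-bit values (a Global Administrator ASN plus two local data fields), 12 bytes on the wire, written as ASN:function:parameter. A minimal encoding sketch, using only the wire format from the RFC:

```python
import struct

# RFC 8092 Large BGP Community: three unsigned 32-bit values, 12 bytes
# on the wire, so a 32-bit ASN fits natively in the first field -- the
# big attraction over 16-bit RFC 1997 communities.

def encode_large_community(asn, data1, data2):
    """Pack a large community into its 12-byte wire form (big-endian)."""
    return struct.pack(">III", asn, data1, data2)

def decode_large_community(blob):
    """Render the 12-byte wire form in the canonical ASN:data1:data2 text."""
    asn, data1, data2 = struct.unpack(">III", blob)
    return f"{asn}:{data1}:{data2}"

wire = encode_large_community(4200000000, 1, 2)  # a 32-bit private-use ASN
print(len(wire))                                 # 12
print(decode_large_community(wire))              # 4200000000:1:2
```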

On to the next one. I think we are kind of short on time, so I will skip this one. Basically, this is a mechanism for you to add a text message when shutting down your BGP session. The common problem that we have is that a peer goes down and you don't know why, and they didn't read the maintenance notification, and there is a whole conversation on the tech list about ticket numbers and things like that. So the solution, basically, is to add a free-form message to the shutdown, so you can send a message to your peers saying this is what is going on, the peer is going to be shut down, and this will actually be seen in their syslog or logging mechanisms. And here are a couple of examples. The one I want to spend some time on, because it's probably more interesting and we haven't talked about it a lot, is performance, and there are two types of maintenance mechanisms that I want to run through. The first is voluntary shutdown, where you take action to do some maintenance: you are going to shut down your peer, reboot your routers or something like that. Before you do this maintenance you can take some action to minimise the impact of the traffic failure. So you can use BGP shutdown communication, that is the message sending that I talked about, and you can also use graceful shutdown, which I will talk about in a minute. The other type of maintenance is involuntary shutdown. This is where you have a network and someone else is doing maintenance on a component that you connect to. This could be a lower layer network or an IXP, it could be some sort of Ethernet circuit; it really doesn't matter, some network that you are connected to that is beyond your administrative control. What happens in this case is that the BGP sessions only go down after the hold timer expires, and you could probably blackhole traffic during this time. So this is where your network provider uses something called BGP session culling.
The voluntary shutdown: this is where you are taking some action to do some pre-emptive work in order to drain traffic before you do your maintenance. There is a new community that is defined, called graceful shutdown. This basically says: hey, this is your best path, start considering it to be the worst path, because it's going to go away in a few minutes; now is a good time to select a better path and use that for traffic forwarding. Here is how it looks as a practical example. And this is a simplified example; there are more details in the backup slides and also a link to a couple of papers on why this is useful. So the basic problem, as I mentioned: you are going to do some maintenance, and the minute you turn off that BGP session the router on the other side has to reconverge. That can take some time; it's probably not a long time, in the order of seconds, hopefully, but the basic idea is, if we have a mechanism to eliminate this kind of micro-blackholing, because it's a short blackhole, if we have the technology and mechanism to mitigate this kind of blackholing, why wouldn't we want to do that? If we can do something in advance to inform your peers that you are going to do maintenance, why not do that? It's also useful if you just want to steer traffic away from your router. So, for example, you may not be shutting down your BGP peer, but in this cloud perhaps you are going to do some maintenance on the capacity that goes to that router; you can use this mechanism as a general way to route traffic away from coming into your network.

The way that it works is that you send this community, called graceful shutdown. It's sent by you before you initiate the maintenance, so you send it five or ten minutes or fifteen minutes, however long you feel comfortable with, in advance to your peers, and they recognise this community, set the local preference to zero and start selecting an alternate path. It's a very similar concept to setting overload to route traffic away from your router; this is the same thing over BGP. When the BGP session goes down, there is really no traffic, because alternate paths have been installed, and there is no micro-blackholing.

It's very easy to implement. On the receive side, all you have to do is write a very simple policy that matches this well-known community and sets the local preference to zero; pretty simple. Then on the operational side, on the side that is doing maintenance, you have to update your routing policy to send the shutdown community some period of time in advance, again it could be five, ten, fifteen minutes, in order to drain traffic. When enough traffic has stopped, you do the maintenance, you do whatever you want for however long you want, and then before you bring up the sessions you remove the shutdown community, and then off you go.
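The receive-side policy is simple enough to sketch in Python. This is a toy model, not a vendor configuration; the well-known community value 65535:0 is GRACEFUL_SHUTDOWN as defined in RFC 8326, and the route representation here is invented for illustration.

```python
# Model of the receive-side GRACEFUL_SHUTDOWN policy (RFC 8326): if the
# well-known community 65535:0 is attached, drop local preference to 0
# so an alternate path wins best-path selection before the session goes
# down for maintenance.

GRACEFUL_SHUTDOWN = (65535, 0)  # well-known community, 0xFFFF0000 on the wire

def apply_graceful_shutdown(route):
    """route: dict with 'communities' (set of (asn, value)) and 'local_pref'."""
    if GRACEFUL_SHUTDOWN in route["communities"]:
        route["local_pref"] = 0  # make the best path the worst path
    return route

draining = {"communities": {(65535, 0)}, "local_pref": 100}
normal   = {"communities": {(64511, 42)}, "local_pref": 100}
print(apply_graceful_shutdown(draining)["local_pref"])  # 0
print(apply_graceful_shutdown(normal)["local_pref"])    # 100
```

In an actual router configuration this is the two-line match-and-set policy mentioned in the talk.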

Here is a very simple example; on every vendor on the planet it's about two lines, so it's not complicated at all to implement. And actually, here is a lot of good news: especially the NLNOG and Swedish operator forums have been doing work to evangelise and adopt this new community, so here is a list of networks that support this. You can send the shutdown community into these networks and they will honour it. So, a great job by these two operator groups for helping adoption.

So that is the voluntary shutdown, that is where you are doing some maintenance and shutting down your peer.

The opposite side of that is the involuntary shutdown, where you are connected over some sort of Layer 2 network, and it doesn't matter what kind of network it is: it could be just a simple switch, it could be a point-to-point circuit that spans the country, it could be a circuit that spans an international region or something; it doesn't matter. The point is that the link on each side will probably stay up even though the part in the middle is down. So in this case, the things in green, the text in green like "BGP is up" and "the link is up", is bad, because the link in the middle is actually down and these indications are no longer valid about the state of the network. The link stays up and therefore BGP stays up, probably until the hold timer expires, which is around 180 seconds. You don't want to blackhole traffic for three minutes, probably. The solution is that the network provider applies an ACL to block control plane traffic. This is how it works in more detail. A Layer 4 ACL only blocks control plane traffic, so what this does is effectively help to shut down the BGP sessions in advance of the maintenance, and "in advance" is key. You do this, again like the voluntary shutdown, some period of time before you are going to do maintenance, so five or ten or fifteen minutes, because the hold timer doesn't expire the moment the BGP sessions are blocked; it expires when the timer runs out. So even though the BGP sessions are not communicating with each other, the routers continue to forward the traffic over the network, which is still up; you haven't actually started the maintenance, just applied these ACLs. Finally the hold timers expire, the routers choose different paths, and the lower layer network can start maintenance. After the maintenance is done they remove the ACLs, all the peers come up and traffic flows normally.
Here are a couple of usage guidelines. It's very important to only apply these to directly connected IP addresses; you want to allow BGP multi-hop and the data plane traffic, so you don't want to block all IP traffic, and you want to do this for v4 and v6. Then, as I said before, you start the maintenance when the data plane traffic has stopped or dropped significantly. It possibly may never go to zero on an IXP; there is always someone who is statically routing traffic to someone else, so you may not see a complete drop, but a significant drop in the traffic. And then, as I said before, you do your maintenance, remove the ACLs and off you go; everything comes back as normal.
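The culling guidelines boil down to a narrow match rule, which can be sketched as a packet check. This is an illustrative model only: real deployments use router or switch ACLs, and the addresses below are documentation prefixes standing in for an IXP peering LAN.

```python
import ipaddress

# Sketch of a Layer 4 "BGP session culling" ACL decision: drop only
# TCP/179 between directly connected peer addresses, for both v4 and v6,
# leaving data plane traffic and multihop BGP untouched.

BGP_PORT = 179
DIRECTLY_CONNECTED = {ipaddress.ip_address("192.0.2.1"),
                      ipaddress.ip_address("192.0.2.2"),
                      ipaddress.ip_address("2001:db8::1"),
                      ipaddress.ip_address("2001:db8::2")}

def culled(src, dst, proto, dport):
    """True if the packet should be dropped during the maintenance window."""
    if proto != "tcp" or dport != BGP_PORT:
        return False  # data plane and non-BGP control traffic always pass
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    # Only sessions between directly connected addresses are culled;
    # multihop BGP to addresses beyond the shared LAN is left alone.
    return src in DIRECTLY_CONNECTED and dst in DIRECTLY_CONNECTED

print(culled("192.0.2.1", "192.0.2.2", "tcp", 179))     # culled
print(culled("192.0.2.1", "198.51.100.9", "tcp", 179))  # multihop: passes
print(culled("192.0.2.1", "192.0.2.2", "udp", 53))      # data plane: passes
```

(A production ACL would also match on source port 179 for the other direction of the session; that detail is elided here for brevity.)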

So here is the call to action, and this is where it's important for you all to get involved. I mean, obviously you can get involved in the IETF, as I mentioned before, but if you don't want to, here are some things that you can do.

What is available now, and what is partially available or coming soon. You can use graceful shutdown today, you can use session culling today, and the blackhole community, that is the one that I skipped at the beginning; you can use that today. There are no implementation changes required from router vendors; that is just policy. Things that are coming are the large communities, the shutdown communication and the secure defaults. Those will be available over time. I have some charts that I skipped that show availability for the different solutions, and in general the open source implementations are far ahead of the traditional router implementations. We as router vendors move slower, so it's been a great help that they have been able to act as reference implementations; it helps to have a reference implementation, or several in the case of large communities, to help get things through the IETF.

So ask your vendors, ask them to support these RFCs now. Even if it's on the roadmap, and again, from me to you, as a vendor to operators: even if it's on the roadmap, ask again. Because if one customer asks and it gets on the roadmap, then that is one data point, but the way most vendors work, or the way the ones that I have worked for work, the more customers ask for something, the higher priority it becomes. So if five or ten or fifteen or twenty customers ask for something, then that is a good indication to us that this is really important, rather than one customer asking for it and everyone else assuming that it's on the roadmap because this one person asked for it.

And it's also important to put this in writing. So don't just tell your account team you'd like to see this; put it in RFPs, because we pay attention to them. There is legal compliance that we have to go through, so if you put it in an RFP then we know you are very serious about it. And of course, vote with your wallet. You need to make it clear to your vendors that these features are important and that your RFPs or your next purchase depend on these features being there. Don't let them get off and wave their hands and say it's coming, with a smile.

So for your peers, your transit, and this also means transit or transport providers, and also IXPs: ask them to support these kinds of traffic engineering features. It's very useful for you to be able to signal traffic engineering through your transit providers, so ask them to support graceful shutdown; it makes a lot of sense. Ask your IXPs to use session culling during maintenance; a lot of IXPs already do this today, and if your IXP doesn't, encourage them to do it. And the same as with the vendors, put this in writing, in your RFPs, and make sure it's in writing when you send them to your vendors, because then they really will pay attention to it.

Lastly, in your network, this is a good time to take a look at your routing policies. Maybe you have always wanted to redo them; this could be a good time, especially with large communities coming. You may want to take advantage of some of these new traffic engineering techniques, provide some of these techniques to your customers, and optimise your operational procedures, so this is a great time to do it. You should of course document and publish your communities; everyone wants to see this, it's not secret information, so you should document this and openly publish it. And also, add coordination to your maintenance procedures. There is no reason to just go and turn off a BGP peer when you could use some sort of communication to help your peer understand why you are shutting this thing down. And again, follow the BCPs so that you reduce these micro-blackholes.

Lastly, this is an interesting slide. There is a lot of credit: over 100 people have participated in the IDR mailing list discussions on these RFCs, many of whom I see in the audience today, so lots of operators again. As I said before, it's great to see so many operators participating and being very vocal in the IETF, and not letting the vendors have their way and do what they think is best. Only you know your network; as vendors, we can only guess what you are doing, so it's great to have that kind of direct feedback. So thanks again for all your support. And lastly, I wrote this presentation with my co-author Job Snijders, so we would love to take questions, or if you would like these slides, we are really just trying to educate the community on this technology, so we are glad to give you these slides for whatever local presentation you have. That is it. I guess I kind of ended a little bit early by skipping through some things. But if you have any questions or comments?

PETER HESSLER: With Hostserver. I have been involved in some of these standardisation processes, and when using this on my network to do maintenance, and when my neighbours use it on my network, I have noticed that yes, these micro-blackholes have gone away. I no longer get alarms and alerts when an IXP decides to do maintenance. Previously, I would shut down the IXP session an hour or two before their maintenance started and then just not turn it on until the next morning, when I came back into the office and could manually check it. Now that they are actually using the involuntary culling, I can just ignore it and not care, and my network is very happy with the result of this. So thank you for doing this. And all the other networks, please do it yourselves as well.

GREG HANKINS: Peter is being modest; he is one of the authors of OpenBGPD and is on this slide, and a big reason a number of these things made it into a reference implementation. So thanks, Peter.

SANDER STEFFANN: Thank you, this is a lot of stuff that makes operational sense, and actually some of these issues, like the large communities, are things that have been raised here at RIPE meetings. So thanks to you and Job for doing all the hard work on this; it's really appreciated.

GREG HANKINS: Thank you.

JAN ZORZ: We have a couple more minutes if there are other questions. Okay. Please give a round of applause, thank you.

All right. Let's go to the DNS world now. We have Babak Farrokhi, and he will talk about the curious case of broken DNS responses.

BABAK FARROKHI: Can you hear me? Good afternoon, I am Babak Farrokhi and I am here today to talk about my personal experience with an anomaly that I discovered with DNS responses in my network. It happened in mid-2016, but recently I double-checked and it was still there. I am always a little bit sceptical of my upstream provider, and when something happens in my network I always have to dig deep to figure out what is going on. So this was one of those cases. It actually started when I noticed that some of my outgoing e-mails were failing; they simply could not find the destination server, because the MX lookups were kind of broken. And it was only happening to a certain set of domain names. So I started diagnosing this with my standard toolset, like dig and host and things like that, and basically I started to observe weird things. First of all, I queried and asked for a normal domain name, and I got a normal reply. Then I queried for one of those problematic domains and I started getting weird responses, and then, trying to look up the MX record, I noticed that the response was totally broken for some reason. This is what dig had to report.

So, going a little bit deeper into the packet payload, I captured my DNS traffic, and it looked kind of fine at first glance, but if you look closer you see some strange difference in the time between the request and the response. In the first case, I am getting a response within 128 milliseconds, pretty normal for my network, because Google public DNS is at least 120 milliseconds away from me. But in the second case, I am getting a response within 28 milliseconds, which is not good.

Actually, I didn't have the correct set of tools at the time to do the measurement, play with this and figure out what was going on. So I put together a simple script, called it dnsping, and it was basically a DNS client sending a DNS query to a resolver or authoritative server and measuring the response time, and I wanted the user experience to be similar to the legacy ping. Here you can see a sample: I am pinging Google public DNS, asking for an A record, and receiving the reply within 200 milliseconds or so. This is normal. So I started using this new tool to figure out what was going on in my network. Here I have the output from legacy ping, so I am pinging the Google public DNS, and as you can see, I am receiving a reply to my ICMP pings within 124 milliseconds. Again, pretty normal for my network, because I know the Google public resolver is actually more than 100 milliseconds away from me. And then, using dnsping to do the same measurement over UDP, actually sending the DNS request and expecting a reply, I started getting a reply within a very short period of time. Not good, again.

So, I repeated this test with different domain names. For most I was getting a reply with correct timing, more than 100 milliseconds; with certain ones it was less than 20 milliseconds. So, I thought, perhaps this is not the Google public resolver; maybe a rogue name server somewhere in my network is intercepting the whole DNS traffic and responding to me. But where is this rogue name server located? I didn't know, so I really hoped I had something like traceroute to figure out where it was located. I couldn't use the legacy traceroute, because legacy traceroute does ICMP or UDP, depending on the implementation, and the thing here in my network was basically rerouting my traffic based on the DNS request payload, so my traceroute implementation had to actually send DNS queries as probes, doing the TTL trick to figure out the journey the packets take to get to the name server or resolver.

So, I put together another part of the script, called dnstraceroute, and started probing and testing to figure out where this name server was located. So, on the left you can see I am doing a dnstraceroute, querying a normal domain and asking Google public DNS. Here is the path my packet is taking to get to the resolver. It looks pretty normal: a couple of hops, it actually leaves my network, then a jungle of routers in my upstream, and then at the 11th hop it leaves my upstream network. You can see from the latency that it is going through Europe and entering the Google facility through DE-CIX. It looks pretty normal, because I compared it with legacy traceroute and I got the same result, so this is actually the path that my DNS traffic should take. But on the right side, you can see I am querying the same resolver with a different payload, asking for the A record of one of the problematic domains, and now my packet is taking a detour. You see, from the upstream network, at hops 7 and 8 I am getting no ICMP reply, and then all of a sudden some host claiming to be replies to me within 20 milliseconds: too good to be true, and it is not the actual Google DNS server. So, how can I figure out who this guy is, and how can I find out the actual IP address? It's not Google, because I know we don't have an instance of the Google resolver within our country, but who is this guy who is sinkholing my traffic? So, first, back to the case of broken MX responses. Looking at the tcpdump output I figured out something was badly wrong: I was getting PTR responses in answer to MX queries. So, someone coded some basic DNS resolver and made a big mistake by hard coding something, returning PTR in response to my MX queries. And the reply was actually broken because the RDLENGTH stated that I should expect 3 bytes, but there were 4 bytes in the payload.
As you can see in the previous slide, the number 3, coloured red here, is the RDLENGTH, and after that come 4 bytes. This is why dig was complaining that the reply is broken. So maybe it is an alternative implementation; someone actually messed up the code. Using this bug I could figure out which domain names were being handled differently. So I got a list of the top 10,000 domains from an open project that publishes a list of the 10,000 most queried domains, and ran it through my scripts to figure out for which domains I was getting this broken DNS response, and I found that 139 domains from that list were getting this broken response. So it was kind of selective blackholing based on my payload.
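The RDLENGTH bug the scripts keyed on is easy to model. In a DNS resource record, a 2-byte RDLENGTH field announces how many RDATA bytes follow; a sketch of the consistency check (my own illustration, not the speaker's code):

```python
import struct

# Sketch of the RDLENGTH sanity check that makes dig call an answer
# malformed: the RR says how many RDATA bytes follow, and if the actual
# payload length disagrees, the response is broken.

def rdata_consistent(rr_tail):
    """rr_tail: bytes starting at RDLENGTH, i.e. 2-byte length + RDATA."""
    if len(rr_tail) < 2:
        return False
    (rdlength,) = struct.unpack(">H", rr_tail[:2])
    return len(rr_tail) - 2 == rdlength

# RDLENGTH claims 3 bytes but 4 follow, like the broken responses shown:
print(rdata_consistent(b"\x00\x03" + b"\xde\xad\xbe\xef"))  # False
# RDLENGTH matching the payload is fine:
print(rdata_consistent(b"\x00\x04" + b"\xde\xad\xbe\xef"))  # True
```

Because the rogue server produced this mismatch consistently, the bug itself became a fingerprint for which domains were being intercepted.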

So, back to my journey. Who is this guy, and how can I figure this out? Someone pointed out that I could use a service on test‑ to figure out the internal facing IP address of my resolver. It works like this: I send a query to my resolver, asking it for maxmind.test‑ I am over-simplifying for the sake of this example, but this resolver sends a request on my behalf to the actual authoritative server, which is configured so that it sees the source IP address of the actual query coming from the resolver, and then returns in a TXT record the IP address, geolocation data, AS number, whatever it knows about this IP address. So if I am querying Google public DNS, I would always expect my query to originate from an IP address belonging to Google. In this case we can see it is coming from AS 15169, which is Google, so it's pretty normal. But using this trick I could figure out the public facing IP address of the rogue name server. So this time, I turned to RIPE Atlas and created a measurement using 500 probes around the world, to see if I was the only one seeing this happening on my network, or whether this is a regular practice, with other network operators doing this DNS sinkholing for some reason. Here is the result:

From 500 probes, 484 replied. I did this maxmind.test‑ query and collected the results. In 475 cases I got a correct response: the source IP address of the resolver belonged to Google. But in nine cases I saw different IP addresses from different operators. And the strange thing about this was that it was happening around the world. I mean, it was not only certain countries; I have seen this across Europe, in North America, the Middle East and Asia, happening almost everywhere, and for 2% of those probes the DNS traffic was actually being sinkholed. So I repeated this very same test with the very same set of probes, this time over TCP. And I noticed that there were fewer, maybe a statistically insignificant difference, but the number of broken responses kind of dropped, from which I concluded that whoever is doing this DNS sinkholing maybe doesn't know that DNS also works over TCP, so they are just rerouting all the UDP port 53 traffic.

I repeated the same measurement over and over again with different sets of probes, and I almost always got the very same numbers. So, why would anyone want to blackhole, I mean sinkhole, your DNS traffic? Why would an operator do this? There are two main reasons. The good reason is that they are doing it for your own good. Most of the VPN service providers do this; they don't want to expose your real IP address, so they always redirect DNS to their own resolvers. If you run this dnstraceroute thing when you are connected to one of these privacy-related VPN services, you see this happening. Or maybe your operator is doing this to filter out traffic going to C&Cs, to stop malware from spreading, whatever. But there are also some bad motivations behind this. Many do this to block your access to certain services, or to redirect your traffic: if you type a domain name that is not registered, they take you somewhere, to some sales pitch, whatever.

What are the countermeasures? As a user, how can you prevent this from happening? Because perhaps this is also happening to you when you are at your home or your office, or on hotel wi-fi, whatever. Maybe this is happening to you as well. Maybe you should force your local resolver to use TCP, and good luck with that, and hope that your upstream provider doesn't know that DNS also runs over TCP. Maybe use DNSCrypt; it is a very interesting project, they have this encryption for DNS, and they have nice clients for various platforms, Linux, Mac, whatever, so end users at home can easily download and install it and make sure their DNS traffic is travelling safely. But in the past couple of years, I guess, the IETF has published multiple RFCs, and two of them are DNS over TLS and DNS over DTLS, which is really interesting: basically you use the same good old DNS, but it is encrypted. What I like about this new standard is that the people behind it have already set up a few servers and encourage people to send traffic to those resolvers, and they have also developed a few clients, so you can configure and install them on your PC or Mac or Linux to encrypt your DNS traffic. I also found out that Unbound and Knot Resolver and many of these great resolvers have already implemented this; they already support DNS over TLS, so it's just a matter of a few configuration lines and your DNS traffic is encrypted. DNSSEC won't help you much in this regard, because it's not really that well adopted. I did a quick measurement over the 100 top domain names and only two of them were signed; nobody else actually had DNSSEC records in their zone. And also, in the case of DNSSEC, if your operator is sinkholing your DNS traffic to a resolver that does not validate DNS responses, it's broken, so it won't help, unless you do the DNSSEC validation on your end point, on your laptop or PC, and normal users won't do that.
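DNS over TLS (RFC 7858) reuses the DNS-over-TCP message framing, a 2-byte big-endian length prefix, carried inside a TLS session on port 853. A sketch of the framing plus an illustrative connection outline; the resolver address would be whatever DoT server you trust, and this is not a hardened client:

```python
import socket
import ssl
import struct

# DNS over TLS (RFC 7858): same wire messages as plain DNS, but each one
# is prefixed with a 2-byte big-endian length and sent inside TLS on
# port 853, so an on-path middlebox can no longer read or rewrite it.

def frame(message):
    """Add the 2-byte length prefix used by DNS over TCP/TLS."""
    return struct.pack(">H", len(message)) + message

def unframe(blob):
    """Strip the length prefix and return the DNS message."""
    (length,) = struct.unpack(">H", blob[:2])
    return blob[2:2 + length]

def query_over_tls(query, server, port=853):
    """Illustrative outline: send one framed query over a TLS session."""
    ctx = ssl.create_default_context()
    with socket.create_connection((server, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=server) as tls:
            tls.sendall(frame(query))
            return unframe(tls.recv(4096))

print(frame(b"abc"))           # b'\x00\x03abc'
print(unframe(frame(b"abc")))  # b'abc'
```

The framing functions are the only DNS-specific piece; the rest is the standard library's TLS client, which also gives you certificate validation of the resolver for free.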

To conclude: Do not trust public resolvers. They are free for a reason. Whatever reason ‑‑ but there is a reason, and I don't want to expose all my DNS traffic to them. DNS contains very important information about whatever you are doing: your browsing habits, whether your anti‑virus is updating itself, your OS updating itself ‑‑ it all shows up in DNS traffic. So anyone watching your DNS traffic for a couple of hours can figure out what you are doing, what tools you are using, what your OS is, what your software is. So, don't use public resolvers, and encrypt your DNS traffic as much as you can. And according to RFC 7258, pervasive monitoring of your traffic is considered an attack, so we should take it more seriously.

A final word about the tools that I used here: dnsping and dnstraceroute and the other tools are on GitHub, open source, so feel free to hack them, open pull requests and provide your feedback.

And I really want to see these tools implemented maybe in RIPE Atlas, so instead of only plain old traceroute, maybe we can get a protocol‑based traceroute. That would be nice, because we could see many interesting things going on in different networks.

And that was all. If there is any questions, I would be glad to take them.

JAN ZORZ: Thank you.


All right. Thank you for queuing up at the same mic so my job is easier. We have 13 minutes for questions. So if anyone else wants to ask a question, not a problem.

JIM REID: Thank you very much. DNS guy from Scotland. Fascinating presentation, very interesting to see what happens in other parts of the world with DNS traffic. I have quite a few observations, so I will go back to the back of the line and come back. You noticed that you were seeing a difference in the round‑trip times for certain queries to the Google 8.8.8.8 servers. That would obviously indicate there is some kind of man‑in‑the‑middle thing going on, or something is intercepting that traffic, and queries for potentially sensitive or difficult domain names are filtered down the alternate path. I was wondering if you could do anything with any sort of fingerprinting tool to try and identify the broken DNS implementation that was sending back these mangled answers with the strange byte counts and stuff like that. Did you have any way of indicating what is going on? Because quite often there are things like response policy zones being implemented in DNS servers to give conditional answers based on source addresses, so you might be able to identify the software involved. Were you able to do that?

BABAK FARROKHI: Great question. I tried, but it looked like home‑brew software, not one of these well‑known resolvers. I am sure it was not Unbound or Knot Resolver or whatever; it wouldn't return PTR, so it was something that someone hacked together somewhere, maybe a C program or something.

PETER KOCH: Thanks very much for the presentation and for digging into it and for building the tools ‑‑ just to follow up on this, great stuff. I have one question. I didn't really understand the connection between the trace that you showed and then the second measurement. Because, as Jim just said and you confirmed, it looked like a man in the middle there, and it also looked like the man in the middle was triggered by the payload: there was a man in the middle for the queried name, and somehow the man in the middle was circumvented or let other things pass through. With the measurements you did, at least from what I saw, you didn't have the trigger in the packets that were going to the MaxMind servers. Do you have an idea how that correlates, given the queries originate from the same systems? Is the diagnosis of the second measurement in line with the symptoms of the first?

BABAK FARROKHI: Great question. This is why I couldn't figure out the public IP address of that rogue DNS server on my network: it was only triggered by a certain list of domain names, and MaxMind was not one of them. This is why I turned to RIPE Atlas, but I am still trying to find a method to figure it out; maybe I will set up some service like this on the Internet, I am not sure, and try to figure it out some other way. But because this domain was not on that list of domains, I couldn't find out.


AUDIENCE SPEAKER: Filippe from Netassist. We had a similar situation in our network, and the victim was actually a domestic client using a CPE. There was an attack on the client: the DNS resolver parameters on the CPE itself were changed so that it sent queries to the attacker's DNS server, and all the responses from that DNS server pointed back to a single IP, which was used to attack the customer's private bank data and, as we suspect, steal money. I don't know what the conclusion of this situation is, but what I found disturbing is that this hits domestic end clients, not just professionals working on these issues. So I suggest that the tools you use just need some work to make it possible for ordinary people to find out when someone has tampered with their DNS data. It's a very important thing; without DNS we just cannot trust anything on the Internet. Thank you very much again.

BABAK FARROKHI: Thank you for your contribution, I will definitely put it on the roadmap. That was a great comment, thank you.

BENNO OVEREINDER: NLnet Labs. First, some self‑promotion: any questions about the DNS privacy project or study, just tap me on the shoulder. Question: you say encrypt your DNS traffic, and don't rely on the public DNS services but on your own ISP. But if you use the resolver in your ISP and you are encrypting your traffic to it, can you rely on your ISP's resolver to resolve correctly, or do you have another scenario to get around this problem?

BABAK FARROKHI: Well, actually, personally I always have trust issues, and this is why I have multiple virtual servers around the world running Unbound ‑‑ thank you, NLnet Labs ‑‑ and I am encrypting and doing a kind of round‑robin forwarding of my root zone to these resolvers, one in Europe and one in North America. And they do the resolving for me.

AUDIENCE SPEAKER: From an education network. I have a question. Actually, this raised a lot of speculation with our users, because we have seen the same problem before, but on the client side, not the server side. We trust our ISP; they are doing a lot of things in the network, but we are trusting them. But as an extra precaution, if we install another DNS resolver in our network, it will not give us more security compared to having our ISP's resolver or the Google resolver. So what is the ultimate solution for this problem? That is my question.

BABAK FARROKHI: Maybe Jim is the best person to answer the question, but I guess end‑to‑end validation: maybe some day we will see everyone doing DNSSEC, with DNSSEC validation done on the first mile, on your host, and then such a thing wouldn't happen, hopefully. But in this situation, you are encrypting your DNS traffic, you are sending it somewhere, and you have to trust that somewhere, that endpoint. That wouldn't be your ISP, perhaps; maybe something you have configured yourself, under your own administration.

JAN ZORZ: It's Jim and then scribe and then Kaveh, and I am closing the queue now because we have just four minutes.

JIM REID: Another computer guy from Scotland; you are listening to quite a lot of them at this meeting. You were saying before that you were using 8.8.8.8 for resolver service. Did you see the same pattern of behaviour if you were using a locally configured resolver looking up these same names? Was this man in the middle paying specific attention to the 8.8.8.8 traffic, or did it not care if the queries were coming from some other source address?

BABAK FARROKHI: Great question. Actually, yes, it was not only happening to 8.8.8.8, it was happening to all DNS traffic on my network, whether I was directly querying 8.8.8.8 or I had a local resolver next to my laptop doing the resolving. Whatever UDP port 53 traffic was going over my network, the same thing was happening to it.

JIM REID: Okay. One last thing, as an observation rather than a comment. On one of your slides you said there is no such thing as a free lunch, and I think the whole point about the Google service and these other public resolving services is that there is a large amount of data being gathered, kept and stored by these companies, and that can be used to personally identify you and your Internet traffic. That goes back to what Sarah was saying before lunchtime about access to that data, and on what terms it is going to be made available when the authorities come looking. Part of this trade‑off is: if you really don't want to be beholden to these organisations, you run your own validating resolving service, and you run it on each of your own devices, because you can't necessarily secure the path from your edge device to the resolving server you are using. So that is what I do; my laptop always has an instance of Unbound running on it.

JAN ZORZ: Scribe.

AUDIENCE SPEAKER: Romeo, two comments from the chat room, both from Jaap. One is the explicit comment that Unbound also supports DNSCrypt, and the other one is that, in the context of privacy, DANE may be a good thing to look at.

BABAK FARROKHI: I am a fan of Unbound, and whenever I see, you know, new RFCs published on security, I see it has already implemented them, so it's a highly recommended piece of software and I personally use it. And DNSCrypt is supported, DNS over TLS as well; DNS over DTLS is the only one it does not support. And maybe ‑‑ there was a second part to that question, maybe ‑‑

AUDIENCE SPEAKER: There was a second comment that DANE is used.


AUDIENCE SPEAKER: Let me read the comment: you also want to look at DANE if you talk about privacy.

BABAK FARROKHI: Exactly. Yes, but I didn't cover it in my presentation but definitely, everyone is aware of DANE.

AUDIENCE SPEAKER: Kaveh from RIPE NCC. I had a comment about integrating this kind of measurement into RIPE Atlas. We will look into that, but just so you know, there is a public repository for contributions to RIPE Atlas tools. We published an article about TCP ping measurements, and within a week we actually had tools submitted to the repository from the community, which many people are now using to do TCP ping, basically. So I would encourage you, or anyone in the room, to contribute, and that makes it much easier for us to integrate things into the tools much faster. Thank you.

JAN ZORZ: I will insert myself into the closed queue. I did some experiments with DNS over TLS and it's easy and it basically works, and also with DANE, so you might want to have a look at that. But thank you for this great presentation. A round of applause.

And now we have Geoff. Please, come on stage. So Geoff will talk about the death of transit and beyond. That is quite a title, isn't it? Thank you very much.

GEOFF HUSTON: Thank you very much for that, and good afternoon, it's good to be here. As humans we are kind of creatures of habit, aren't we? And you often see what you expect to see, even though what you are seeing changes, because it's what you expect to see. And certainly with the Internet, if you see it every day, sometimes you don't actually notice the subtle changes that are occurring. You don't notice because they are slow but insidious. And what I want to talk about today is one of those big changes that is happening on the Internet, that has taken, I think, all of us a bit by surprise. When I used to go to NANOGs five years ago, even RIPE meetings, there was a huge amount of discussion about transit and peering and interrelationships and BGP as the tool that glues the Internet together; huge numbers of talks on it. And what I noticed is that those talks and those discussions have actually tailed off. We don't talk about that much any more. So what do we talk about? We talk about top of rack, we talk about north‑south and east‑west, and we talk about data centre engineering, and that is absorbing a huge amount of our time. SDN is not a tool to move packets around the wide area network, but it's a damn fine tool in the data centre, and that is what we talk about. So we have actually changed the topic of what we do from the large‑scale Internet to something entirely different. And that started to make me wonder what exactly is going on. So, in contrast to the last talk, which I thought was really cool, this is not a cool technology talk, it's just not in that kind of space. I am not going to talk about particular networks or services or any particular technology; this is not a talk about anything like that. I wanted to elevate this to talk about the architecture and the evolution of the Internet itself and what is happening to this Internet, slowly and subtly. And then talk about what it actually means. 
Because as far as I can see, the death of transit is not just an observation about the industry structure, it's an observation about public policies for the Internet itself, and this whole area of the discourse of public communications and what it means for us. So let's roll this back a little bit to understand what we are building on. This is the telephone network at the turn of the last century ‑‑ don't you love that photo ‑‑ the AT&T of the 1900s. It was once predicted, by the way, I think it was about 1925, that if the American thirst for telephony continued, by 1930 two‑thirds of the American population would be doing that job right there, while the other third would be making the calls to keep those people active. What a job, hey? Our heritage is indeed the telephone network, and oddly enough it looks a lot like what the Internet is. You couldn't ring up the network, you could talk through the network. Because the network's job was to connect handsets to handsets, the network itself wasn't something you could reach, contact or talk to, once we had fired all those operators. It was intentionally quite transparent. And what they built was truly remarkable ‑‑ I am not sure we could build it today ‑‑ realtime virtual circuits spanning the globe. This is an astonishing achievement. And the architecture was incredibly network‑centric: the edge devices were simple speakers and microphones and pulse generators, incredibly simple stuff. But that was the only bit that really changed, right? Because when computer networks came ‑‑ has anyone ever gone to that computer museum that has one of these? This is the first of these Interface Message Processors; the power supply is bigger than all the rest. The box itself is bigger than me, and it is an astonishing piece of military‑grade engineering, huge amounts of metal and almost no silicon. These computer networks were built in the image of telephony because we knew no different, we knew no different. 
So when we built a computer network, it really looked like a telephone network, except for computers. The original concept was there: this invisible substrate allowed computers to initiate or receive calls. So I didn't call the network, I called you, and I did it with TCP: I open up a connection, we have a conversation, and then I close it down. I can't call up the network, or I am not meant to, because that is a denial‑of‑service attack. And literally, whether the network was a piece of Ethernet cable or a large‑scale set of active switching elements made no difference at all to the functionality here. The network was simply this amorphous blob that carried other people's packets. The thing, I think, that distinguished the Internet from a whole bunch of other networking protocols at the time was this concept of end‑to‑end design: what the Internet did, which other protocols didn't, was remove all functionality from the middle of the network. So in the Internet at the time ‑‑ and this is no longer true ‑‑ every packet truly was an adventure. Packets could literally be dropped, reordered; anything was possible, and it was up to the TCP systems at either end to figure out what was going on and to make sense of it. And the other thing this had, which was kind of unique at the time, was a single network‑wide addressing plan that spanned the entire world, even though the network didn't. I don't know if some of you remember, but even in the '80s you could write to the NIC and get your own unique IP address prefix even though you weren't connected to the Internet. Whereas in DECnet at the time, everyone was Area 1, Node 1. And that whole idea of unique addressing was actually kind of unique to the Internet. And out of that also came a routing model which was unique. So these were the elements of that end‑to‑end design that kind of made the network work. 
So the network itself was really basic; the true intelligence, and the sort of mind‑bending warp that made this work, was actually TCP. Because no matter what the network did, TCP would cure all evils, and even today, when you stuff the network full of middleware crap, what makes it all work is actually TCP, routing around most of those middleware evils. So the result, you know, was truly revolutionary. Because what it did was convert a highly valued and valuable activity, operating the network ‑‑ which in the 1980s employed, gee, 4 million, 5 million people; the telephone companies were the biggest enterprise of every single country; it was a valuable role. What did we do? We made it worthless. We stripped out all functionality and all value. You might as well do it with sewage. Because the network doesn't do anything any more: all those complex functions, all the valued functions ‑‑ stability control, jitter, loss, even synchronous traffic like voice ‑‑ the network plays absolutely no role. Quality of service in the network is a piece of mythology that only the gullible end up buying. This is how we do it. Networks have no value.

Now, there was another change that happened, and it actually happened as an outcome of what you heard this morning about the whole issue of peering and interconnect. In the telephone world, there was a carefully crafted set of agreements that started in the 1930s with radio telephony and really started to take hold in the '50s as folk became more affluent and started dialling around the world: the idea of shared costs and balanced accounting. Every telephone network was an equal of all the others. If one of my subscribers made a call to one of your subscribers, I got all the money and I paid you some of the money to compensate for your costs. This was fine. Everyone was equal. On the Internet we never, ever had that, because we could never figure out how balanced financial accounting settlements worked. I send you a packet ‑‑ it's just a packet. I make a TCP connection ‑‑ so what? How do we figure out who pays whom? Well, like I said, it's the jungle out there. You pay me if I can convince you to pay me, I will pay you if I can't convince you to pay me, and if the two of us really didn't find any mutual value in connecting, we connect anyway and call it SKA, peering, and that is the way we built it. So we started differentiating networks and we ended up with this model: the folk who absorbed all the money were the transit networks at the top, and we know who they were. They started off being UUNET and Verizon, Level 3; there are a bunch of these folk that are truly peers of only themselves, and for them, everybody else is a customer. Down at the other end are the access networks: all they can do is pay, and so they pay their regional aggregators, who pay the transits. And there was a real role for peering at the time, because if you didn't want to pay, you could peer. So you go to a peering point and get rid of as much traffic as you can for zero cost and only pay for the residual. So the whole idea in this ecosystem was to climb upwards. You were known by your peers. 
And if you managed to peer only at the bottom level, you were stuffed. If you could climb up out of that and peer with the Tier 1s, you were a Tier 1, because you are known by your peers. And we created an art form out of this; many meetings were had and much time was spent, and it was fine while we were all ISPs, but then content came along. And content was a conundrum. The first thing content did was break up the network into clients and servers. Access networks were clients. Clients weren't reachable directly by other clients, because the job of a client wasn't to talk to clients. This whole idea that I really worry about traffic going from client to client has no equivalent in the network of today. People don't send people packets. People talk to servers. Servers talk to people. All of our communications these days are brokered by services and servers. So the role of the network isn't the telephone network any more. Clients don't talk to clients. The role of the network is to carry clients to services. And the assumption is that there are billions ‑‑ billions ‑‑ more clients than services. You all know this; this is your job.

And so the only residual question out there is: there are folk who provide services, the content folk, and there are folk who have customers, clients ‑‑ the access networks. So who pays whom? The only reason why we have a network is because there are valued services. If we didn't have services, there wouldn't be any customers. So obviously the access networks should pay the content folk for their content, because otherwise they would have no money anyway, right? Well, that is the way the content folk saw it: the access providers owed them a living and they should pay. From the access networks' point of view it was, kind of, hang on, I spent billions of dollars building this wonderful, shiny, brand new network; why should I give Google a free ride, and Amazon? Why shouldn't they pay like everybody else? Because there is no end‑to‑end financial settlement here, both of us should pay for our part of the access. Content is just another customer, according to the network folk; they should pay too.

You know, the carriage folk should have actually said: that is okay, we will pay you. Because the way this got resolved has been disastrous for networks and carriage. The wrong answer happened. Because by saying no, we forced the content folk to get inventive, and instead of having a relationship with a small number of access providers, we forced the content folk to have a relationship with billions of end users. We saw in today's or yesterday's presentation: how many customers does Facebook have? 2 point something billion. How many employees do they have? 17,000. Wow! That is what is happening. The content folk have achieved scale like we'd never thought possible. They have been able to go over the top and resolve this fight by saying: okay, we won't get money from you, we will have relationships directly with end users and with advertisers, and we will marry those two up and create massive amounts of money. So far, so good. But then comes the issue of what is the role of the network and where is the content. Because it used to be that the network carried the users to the content. This is another Facebook slide ‑‑ I watch these presentations, I think they are cool; this came from NANOG 68 ‑‑ and they have obviously done their work. Facebook, back in whatever it was, 2011, spanned the globe. Why? Because they had data centres on both the east coast and the west coast, so obviously this was global. And as someone pointed out, you notice the red bits all over Africa, South America and Asia: they are a long way away, and physics is what it is. Service was shit when you weren't on either the east coast or the west coast. So not everyone was enjoying the same service from Facebook. So what did Facebook do? They had a lot of money, so they did what everyone else did: content distribution. So instead of relying on the network to bring the user to the content, the content started chasing the user. And we have been doing that ever since. 
And these days the biggest thing that has happened on the Internet is that phenomenal rise of the content distribution network, where all of what you do happens within a few miles of where you are. Because all of that stuff is cached almost under your nose. So the real challenge is replicating data, so that these high‑delay network paths are replaced with updating local caches and swinging the content straight down the access networks. Everyone is happy. The content is in your face, it is amazingly fast, Netflix works for everybody. So in that respect: happy customers, happy services, happy content folk. And so all of a sudden there is this role reversal: the networks aren't there to carry the users to the servers, the content networks take the content to the user. But there is one real big difference: those content distribution networks are not overlays on existing public access networks; they are private. You don't even know what protocol they run. It doesn't matter to you. So inside that network it's not a public function any more, it's a private function. And that has some profound implications. If you look at presentations from Teleglobe, who is building submarine cables? It's not the old telcos, it's not the new ISPs; neither of them are building new cables. It's Google. It's Facebook, it's Amazon. And what is actually happening is that those few cables that are being built are self‑funded by content people, because content dominates everything. And that massive growth in content dwarfs any other public carriage role, and it's private, not public.

So, all of a sudden the network splits into clients and servers, but all the clients do is go that last mile, because the services come to them. And all the services now sit in content distribution network bunkers. Why? Because we are so good at DDoS, we are phenomenally good at DDoS, that outside the bunker the Internet now glows in the dark. The DDoS is so bad that if you put up a single web server on your machine, wherever you want it to be, and I take exception to it, you are dead. For any value of you and me. This stuff is now toxically bad. So unless you are in the bunker, your content is worthless and inaccessible. So we don't reach out to that content; the content and its bunkers are now close to us. So here is the new architecture. And the new architecture is private CDN feeds, and they now have CDN service cones. Clients don't talk to clients; clients don't even reach out from their service cone any more. This is not a global network. This is not a global network. Because users don't send packets to users. Everything comes down from CDNs. It's a private network, so what is the universal service obligation? Do governments insist that I have a right to have a Google cache next door? What does that mean in this new world? What are transit service providers doing, because there is nothing of value left for them to do? Because once these CDN caches sit right near the end user ‑‑ even inside: Netflix now has a 2 RU model that sits inside the edge access network ‑‑ the entire wide area network is irrelevant. And if the way we are going sort of has that breath of inevitability, I am really wondering about peering and exchange points, because all they are is the interface between content and users, and quite frankly, intermediaries always have a marginal life.

But we are in an addressing world ‑‑ this is RIPE, this is the address coordinating body ‑‑ but hang on a second, we don't need a global network any more. So if we don't have a global network, why do we need global addressing? Why do we need the DNS? What is this whole BGP stuff? Because quite frankly, if all I need to do is get to my local Akamai cache and my local Cloudflare, my local Amazon, everything else is so marginal it's economically valueless. It's not just the death of transit; it's treating the network as television, all content, and the distinction between public and private is gone: it is now corporately owned, we have privatised the entirety of the Internet. So now we are in a different kind of world, a world where the carriage network, the public network, the old telephony network, has been subsumed into private enterprise. Who regulates that? It's private. The FCC, your current communications regulators, have no say. This is all about commerce, not carriage. Universal service obligations, net neutrality, rights of access, markets: they are all concepts that have no meaning in today's Internet. No meaning whatsoever. How big are these folk? This is market capitalisation as of a few weeks ago, the top ten largest companies by share value. You can read as well as I can. I am amazed that Berkshire Hathaway, Exxon Mobil and Johnson & Johnson still exist on that list. I am amazed. So content really, really is king. This is not carriage any more; content is king. They are not service providers, they are not ISPs, not even telcos; AT&T isn't on that list. These valuable relationships with the consumer now dominate our world. And there are only a few of them. Only a few. They are very, very dominant. 
From Cloudflare in August: "If you are not there already, if you are going to put content on the Internet, you are going to have to use a company with a giant network like Cloudflare, Microsoft, Amazon or Alibaba, who would largely determine what can and cannot be put online."

Now, isn't that an amazing admission? Because with a very small number of players like that, this is no longer competition. The dominant incumbents are setting their own terms of engagement with each other and with the sector, with you and me. We have been here before. We have been here before. I don't know about any other part of the world, but in America they teach the kids about the Gilded Age, and that applied from the 1870s to the 1890s, after the first war that was ever fought with industrial‑age machinery ‑‑ the first war that, instead of being a cottage industry, became the application of our new industrial processes. And after the war, we started to apply the same industrial processes to our world, and we created US Steel ‑‑ 500,000 people, the largest steel company on the planet ‑‑ Standard Oil ‑‑ Exxon Mobil ‑‑ AT&T, still here today. Westinghouse ‑‑ he started with brakes and went on to electricity, and they went on to make atomic bombs, because you can. General Electric, and of course JP Morgan, with us today. There was something about them: they were rich. They were so rich they bought governments. They certainly bought the US Senate and Congress. They set their own rules, and they moved so quickly ‑‑ a small number of players that completely dominated that space and time. What is different? Tell me what is different? Because I can't spot the difference. A small number of players are so dominant and so rich that they effectively dictate the terms and conditions for everybody else, including governments. And the only curbs are the ones they impose upon themselves, not the ones we are capable of imposing on them. Who are they? The same list. So all of a sudden I think we are actually in the Internet's own gilded age. These folk have moved so fast and have amassed so many resources and so dominated the space that there is nothing left for anyone else. They are not accountable any more. There is no broad common good being spoken for in this area. 
And you kind of wonder: is that what we started back 30 years ago? Is that the Internet we were dreaming of? Because what is left for them? And I love this quote ‑‑ wherever you are ‑‑ yes, what are they buying? They are buying what they bought in 1890: 100 years of future. Because now it's no longer about market dominance, they have that. It's no longer about squeezing out the competition, that is over. It's now about longevity. What they are now buying is their own assured future. And that is going to be an interesting buy.

So, this isn't a conversation about the incremental changes in technology; it's gone well beyond that. It's about how you and I can strike a balance with a private sector that has been incredibly energetic, that has done its KPIs 20 times over; what they have achieved is massive, overarching control of the digital world. Isn't there an argument about equity of access? Isn't there an argument that we would all like to benefit from this phenomenal digital revolution? How do we express that and sustain our own role in between these corporate giants, and how do we tackle that? I don't know. And I have either got five minutes left, or just a few years left in my professional life. It won't be my problem, but it will be yours, because when they buy 100 years of future ‑‑ a bit like America through the 1920s, '30s and '40s, when trying to disassemble the gilded age absorbed all of their attention and effort ‑‑ I suspect we are in for a similar future. Thank you.


JAN ZORZ: Thank you very much, Geoff.
We have four minutes. Kaveh.

AUDIENCE SPEAKER: Kaveh, RIPE NCC. I think you should also put a line under Berkshire Hathaway because ‑‑ Apple, IBM, and they have more ‑‑ Liberty Global, they have 10%, so they have a lot of interest in ‑‑

GEOFF HUSTON: We are quibbling over who is in the inner club now are we? We recognise it's an inner club, okay.

BRIAN NISBET: HEAnet. I suppose my question is, as operators, as ‑‑ you know, members of the NCC or as members of the RIPE community, what do we do? I mean, is this a "we need to do something", or are we far too late? Are there steps we can take for our own businesses or whatever else? I mean, that is one ‑‑

GEOFF HUSTON: In the 1990s, when the telephone industry was being smacked around the head by the Internet, the first thing that was being said inside the telephone companies was: for God's sake, stop telling yourselves lies. Get the glasses off and look at the world as it is, not as you want it to be, and then figure out whether you have got a future or not. But step one is be honest about the world, be real. And I think we are not real about the world any more. I really wonder why we are pumping for v6, when I am not sure the world, that world, wants a global network any more. And it's kind of wow, that is a really mixed signal. And I am not sure I understand this any more than anyone else. But we have got to stop kidding ourselves that what happened in the '80s is what is happening today. It's not. And after that, I will leave it to the HEAnets, the academics, the researchers and practitioners to figure out what we all do. I suspect there is one thing inside all of this that has changed, and in that one thing there is a vague glimmer of hope: the telephone network knew better than the users. Here is a phone, it's black, you wanted it, you pay. In theory, as consumers, we have a lot of choice now. We are driving technology. It's what we want that drives Google, Amazon, Microsoft. And what I suppose we are finding right now is that we are making pretty poor choices as consumers, and I wish we made better ones, but maybe that is where the leverage lies. I have no real answers about this. But the observation is, the world is changing really quickly and the network is changing really quickly, and I am not sure what the technologies we are going to need for this network are going to be.

AUDIENCE SPEAKER: Lee Howard. You said Facebook has 2. ‑‑ 2 billion customers.

GEOFF HUSTON: There was a slide somewhere.

LEE HOWARD: I think it has a few tens of thousands of customers ‑‑ and a few billion subscribers.

GEOFF HUSTON: You are splitting hairs, I am being generous, let's call them customers rather than victims.

LEE HOWARD: We are on the same page then. So your previous point was: why are we even talking about IPv6 when these companies don't even seem to want global addressing any more, or a network any more? Except it's interesting that these companies ‑‑ Facebook, Google and Apple ‑‑ have done a lot for IPv6. But I am with you, I don't know exactly ‑‑ I will give them credit for that.

GEOFF HUSTON: I saw a presentation that internally within Facebook there is a lot of v6. And that is a great thing. But it's kind of like saying internally within Amazon there is a lot of AppleTalk. Sort of, but it's internal ‑‑ what difference does it make? So, yeah, I am kind of perplexed and confused too.

AUDIENCE SPEAKER: I am glad that you mentioned net neutrality as one of the concerns; that has been occurring to me for a while. If a content provider can buy servers inside an access provider's network, how is that different than buying precedence bits? You are buying latency, counted in milliseconds. It seems to me that the obvious ‑‑ when I respond to each of the questions that you asked, it is: I don't understand the difference between a client and a server. I understand hosts, and these are systems that have a common protocol between them. And I don't know exactly what an end‑point is in this world any more, to be fair, but I understand that hosts communicate with other hosts, and it sounds like the technical response to this is to say we need a globally distributed, probably open‑source system where hosts provide data to each other. I am not necessarily going all the way to academic research into name‑based networking or whatever the current acronym for that is; it seems to me that with existing protocols we can get information off of other nearby hosts, but there is more architecture to be done.

JAN ZORZ: We are running into the coffee break so please be brief.

SALAM YAMOUT: From Lebanon. I am even more puzzled, Geoff, by your presentation, especially as I come from a region of the world where we don't even have the content players coming to us, so we are watching all of this with puzzlement. We are on the, I guess, disadvantaged side of the access, and our governments think they are going to save us by regulating the whole thing. So, what do you think about that, and maybe can you include a few slides on it next time?

GEOFF HUSTON: You should look at Tim Stronge's presentation to NANOG two weeks ago, where he pointed out where the private CDN networks were being built ‑‑ Akamai's, Google's, Microsoft's and Amazon's. Not here. Not where you are. Singapore, yes; Tokyo, yes; Western Europe, yes; North America, yes; everyone else ‑‑ well, you are not economically viable. And, you know, that is the only answer I have, as disturbing as that might be.

BENEDIKT STOCKEBRAND: There is a German philosopher who said history keeps repeating itself, or: the only thing we learn from history is that we don't learn from history. There was a time when I was younger and a bit more beautiful, something like 30 years ago, when some subversive elements decided to do something on the telephone networks we were not supposed to, because the telephone companies wouldn't want us to do that, and we basically started to build something like the Internet. It was mostly a reaction to the fact that we weren't satisfied with what the telephony companies offered us, the kind of services, so we built something on top of it. And I really wouldn't be surprised if at some point the big players we have these days find themselves in a similar situation, where people actually do things outside the features and services they have come up with. I guess it's only a matter of time until this happens, and call me an optimist, but I think that is eventually going to happen, and it will be just about as quick as the mid‑'90s, when it took two or three years for the Internet to come up to speed for the general public.

GEOFF HUSTON: I don't know about you, but the telephone companies became big in the 1920s. Theodore Vail built AT&T by 1918; all the rest of the telephone companies were built and in place by 1920. It took 80 years to pull that crap apart. 80 years of suppressing technology, 80 years of monopoly, 80 years of this abuse before it finally got too much. And no doubt we will all get sick of the incumbents, but not this year, not this generation, certainly not this decade. You will need to be patient.

AUDIENCE SPEAKER: The '90s were a short time actually so I am still optimistic.

KURTIS LINDQVIST: A little bit to that last point. There is an element of our industry that believes we are unique. We are not that unique; as a matter of fact, what you describe is an industry maturing, and it has happened in other industries. It tells you a little bit about the old incumbents, who were ten times the size of Facebook, that they didn't really see this coming although the writing was on the wall; this is maturing, and we all know that. I am going to leave that there and they can judge that themselves. The interesting part is whether we can learn something from other industries about where we are going and what is going to happen in our evolution. And to your point about what is the future ‑‑ do we need a network any more because we have these five different content providers? They need a coherent access network, because what they don't want to do is build access out towards the end users, because they struggle to get economies of scale. And we can ask ‑‑ again looking at other industries ‑‑ whether margins will get thinner for them as well. I mean, that's essentially where we're going.

GEOFF HUSTON: When the current attitude is that money is no object, it's difficult to overspend.

KURTIS LINDQVIST: Money tends not to be an object forever.

GEOFF HUSTON: I know. Right now they seem to think they can do whatever they want, all the way down into access.

KURTIS LINDQVIST: Welcome to the incumbents.

JAN ZORZ: The queues were closed.

AUDIENCE SPEAKER: Geoff, 80% of Americans did not become telephone switchboard operators. Economic problems create the impetus for economic solutions, so long as you don't invite government in to fix the problem first, crystallise the situation, and set it in amber.

JAN ZORZ: That was a long two sentences. Thank you very much.


Well, now we have a coffee break and we will be back in 20 minutes or so.

(Coffee break).