Plenary session
23 October 2017
4 p.m.

CHAIR: Hello everybody. Welcome back. Can I ask the candidates to come up here, the ones that are in the room. So, as you know, we have the nominations for the Programme Committee, and we have two seats up, so we need to vote. We have three candidates, so it's kind of easy. Right here on the main website, you have the candidates. If you just go to the main RIPE 75 website, you can see the biographies and you can also vote. We have three candidates, but only one is available right now to talk about themselves. So, please go ahead.

JELTE JANSEN: Hi, my name is Jelte Jansen. I work for SIDN, the top level domain registry for .nl. I do research there and I have already been on the Programme Committee for the past two years and I'd like to continue to do so. It's not a one‑person job, but I can say, if you liked any part of the programme in the past four meetings, that was me, and if there was any part that you did not like, that was the other guys. Thank you, please vote for me.


CHAIR: All right. So we have two other candidates, but, in fairness, since they are not here, we'll just leave it up to you: those who know them, vote for them if you believe they are good.

So we'll just jump into our talks. Our first presenter is William.

WILLIAM SYLVESTER: Good afternoon. So, this is a follow‑on presentation to the update that we gave at the last meeting, so we'll jump right in. For those of you who aren't familiar, we created a task force at the meeting in Madrid in answer to the IANA transition, where we really wanted to look at our ability to gauge how we were doing from an accountability perspective in regards to the RIPE community. Currently, we have about 16 members, four of which are from the RIPE NCC, acting mostly in a secretariat role helping us out. We are grateful for the support they have provided us, so thank you again for all of the support, RIPE NCC. There is a link that we have up here. If you would like to take a look at our page, we're going to be posting some additional details and some other material, which includes some of the work that we have done to date, including some of our reviews of documentation, structures and processes.

For all of the members of the Task Force, if you are in the room, would you stand up just so everybody can see who you are. So, if anybody has any comments or questions about accountability, please reach out to any of the Task Force members so that the Task Force can bring that feedback back in and reconstitute that.

So, from ‑‑ what is the Task Force doing and what are we trying to do? We are reviewing these structures, these processes and what does that really mean? We're really looking to see if something is described. So how do we select a Chair? How do we select, you know, the RIPE Chair? How do we select chairs within working groups? Is that a standard process, is it well‑documented, is it something we can understand? Is the documentation that's available for everyone, is that in line with what we do today or is that something that's from 20 years ago that doesn't really make sense? Our primary concern is actually not to create processes or create bureaucracy but instead to understand how, you know, is there an area of concern and, if so, we want to raise it up to the community and give the community an opportunity to resolve that.

So from a next‑steps perspective, where we stand in the life cycle of the Task Force: we have done a lot of analysis and we have reviewed a lot of the documentation and these structures. We met face‑to‑face several times and we have dug into a lot of different intricate details of our community. But what we're looking for is actually input and feedback from the community. So, we have sent some notes out to the RIPE list, and I know there has been a lot of discussion ongoing there, which I appreciate; it's been helpful for the Task Force to take that and review it. One of the things we noticed is that we'll get references back to documentation or a process that was documented 20 years ago. And, for us, digging through the RIPE archives is a very laborious task, so if you know of something or see something, we'd really encourage you to reach out to the Task Force and help us better dig in and understand. We really need help understanding and finding some of these old decisions or documents or even discussions.

We're really looking to present more in‑depth findings at the Marseilles meeting, and we're looking to provide a draft document that will become a final report that we're looking to get ratified by the community. In the interim, we're looking for any feedback on the work that we have done to date. And ultimately, at the end, we're looking for the community to take our findings and decide what to do with them. So, once again, we're not looking to create or change anything in that regard, but mostly just to report back to the community and enable the community to take whatever action it sees fit.

So there is a big question of why does this really matter? And, you know, the big thing is, if you are new in the community ‑‑ and let's define what new is: if you weren't here five years ago, 10 years ago, even sometimes 15 years ago, you are probably new to the community ‑‑ and even if you have been engaged in the community for a long time, it helps to explain the processes. I know that when I engaged with the community, there were a lot of things that people seemed to know, but nobody could point to a document; there was no place where you could go and read everything and understand everything. And so it really helps with that engagement and getting newcomers on board. It also makes our community more accessible, and we pride ourselves on being open; you know, we're not a conference, we're a meeting. Within those meetings we have Working Groups and the Working Groups collaborate to find consensus. But, most importantly, while it maintains that transparency, it also preserves what we do as a community over time: how we act and what we choose to do, what we find is acceptable behaviour and ultimately how we conduct the business that is our community. For the guys who have been around, the 12 guys in the original room, the guys that have been core to the foundation of this community, this helps provide clarity. It also ensures that we're working towards the same purpose, and avoids rehashing those old discussions by enabling us to all agree that what's down on paper is what we all agree to and how we conduct ourselves on a daily or meeting‑by‑meeting basis.

And so, you know, to give an update on where we are today. We have already mapped out the documentation, the structures and the processes. This includes things like the RIPE documents. Digging through those documents we found documents that were old, some of them were obsoleted and nobody understood why they were obsoleted because they still seemed to be what we are doing today but they were marked obsolete because they are old. Other times we have mailing lists that have decisions, discussions, backgrounds, feedback. Obviously the Plenary is a huge source of where we conduct a lot of our business for the community at large. Then, of course the work within each Working Group, that includes the discussions in the Working Groups, the policies, the requests made of RIPE NCC and some of the other details of how we conduct ourselves in our breakout sessions that mostly happen at our meetings, and then ultimately how these structures interact. What does it mean to have RIPE NCC versus what is the RIPE community? How does that combine with Working Group versus the Plenary, the chairs and the composition of how we have Chair selection and Chair rotation and annual selection in that regard.

So, we have put a lot of work into this. We have created a grid where we reviewed all of the documentation. We also started to review some of the community values. All in all, what we found is that, overall, accountability for the group is good. We have identified some areas where we're still trying to find additional information, stuff that we're also seeking feedback on from the community. But ultimately, through capturing our community values and trying to publish some of those, it gives us an area for agreement and a solid foundation to continue this community that we have come to know and love.

Ultimately, we're looking to identify all the areas where we lack clarity. So, you know, we want to reach out to the community and say, hey, if you don't understand something, if there is a question and you ask someone and they said "I don't know, I don't know where to find it", or they sent you to someone and that person said "I don't know", we would love to know that, because we really would love to bring clarity and either find those answers or at least document that we don't have those answers, so the community can decide whether it's important enough to document.

Ultimately, we're looking for feedback. And where we stand today on that feedback, we want to know where things are confusing. At the same time, I know in the mailing list discussion a lot of people referenced documentation. We would encourage parties to tell us where they found that or where it's located so that we can go and chase it. And hopefully it's not something that happened more than ten years ago so that we have to dig through ten years of archives, but we're happy to do that if we need to. If there are things that you are looking out for as a community member that you want to find out, we'd also love to know those things that you'd be interested in or that you have challenges with. And ultimately, you know, any potential issues you see with the community.

So, with that, we'd like to open up the floor to any questions. If anybody has any direct feedback to some of the questions that we have posed here or further discussion from the mailing list.

AUDIENCE SPEAKER: Hi. Hans Petter Holen, RIPE Chair. I want to thank you for doing excellent work and I really appreciate all the work that the Working Group has put into ‑‑ sorry, the Task Force, I'm not quite on top of myself today ‑‑ the Task Force has put into getting this documented and driving it forward. I really like the approach that you are taking: you are now identifying the gaps, and you want to work on getting references to what's written down, and then we can have a look at whether we need further improvements. I really appreciate that. Thanks very much.

AUDIENCE SPEAKER: Daniel Karrenberg, RIPE NCC, speaking for myself.
What Hans Petter said. I really appreciate people wanting to put the effort into actually documenting the stuff that we're doing, looking ‑‑ taking a step back. I'm still sort of from this presentation I'm not clear what your plan is for the output of this. Is it going to be one document, two documents? What's your plan going forward?

WILLIAM SYLVESTER: So, right now, we're working on collecting all of the information and putting that into a format to report back. We're looking to create a draft report that we'd like to present in Marseilles, in addition to digging into more of the details of what we have found, probably with a longer, more in‑depth presentation. But our ultimate deliverable is a report, and then we'll have the community provide us feedback on that report before finalising it.

DANIEL KARRENBERG: Okay. Can you go back a slide? What I see there is, on the first bullet point and all the sub‑points is sort of documenting procedure, right?


DANIEL KARRENBERG: What I see on the second bullet point is something totally different.


DANIEL KARRENBERG: So I wonder whether that belongs in the same document?


DANIEL KARRENBERG: Because I think it's ‑‑ you can look at it in two ways. You can basically say let's document the procedures, being very mechanical about it, or you can say, hey, what's the underpinnings of this all and one of your questions to the mailing list was, you know, is there a higher value than particular interests? And that is ‑‑ goes more to values. And I would think, you know, you could think ‑‑ look at it in a really academic way and say maybe we should first talk about the values because that informs the discussion about the procedure if we want to evolve the procedure. If you want to go and say, okay, well, we'll just, at the moment we just look at doing a stock‑taking exercise, then anything goes. But as soon as you sort of start like we want to make changes, or clarify things and so on, maybe it's a good idea to do values first and then procedure, just from the top of my head.

WILLIAM SYLVESTER: We actually started off with values where we dug into a lot of values first, and then used that to sort of inform some of the other ways that we reviewed some of these processes and structures. So...

AUDIENCE SPEAKER: Peter Koch, DENIC, member of the Task Force but of course speaking for myself. Daniel, I appreciate your suggestions, as always, but I am a bit concerned about the level of detail that we are getting into here, like, in terms of is that one document or two, how many pages and which paragraph goes where. We have had a bit of a slow start due to a certain amount of, say, misunderstandings, and I would really, really like to get to the point where we get the feedback, we get the information from the people that we are trying to do this for, which is the community at large, on issues that people don't understand or would like to have explained. Because the basic thing is that you learn things from reading, you learn them better from writing, but you learn them best from explaining them to somebody else, and this is part of the exercise that we are trying to achieve here.

Now, to the point of your message in a mathematical sense, yes, you would start from the axioms and then do the corollaries and everything else from there. That is not precluded, of course there is value in capturing the values and documenting, but let's do the work and then nail down how we document it. And what is there.

Part of the values, or part of documenting the values, of course, is capturing this unwritten history, or the unwritten rules, and what we're not trying to do ‑‑ what we're not going to do ‑‑ is put them in a definitive format ourselves. The point is that the gaps that we have been talking about, that William has mentioned, are, okay, so we go around and ask how have we been doing this or that, and sometimes the response is "it's always been that way", or "that decision has been made by few owe bar [phonetic]", and you don't find those things documented anywhere, or there might be consensus that it has worked well so far. And a response to that would be writing down these observations exactly, but not in an authoritative manner, to borrow from DNS terminology. Does that address the issue? Thank you very much.

AUDIENCE SPEAKER: Salam Yamout, RIPE NCC Board, but speaking for myself. I also wanted to congratulate you; what you are doing is very important, what we're doing is very important. And not only for the documentation reasons said by Peter and my colleague; it's because of how we look to the outside. You know, we are consumed by ‑‑ we take what we're doing for granted, yet we impact the Internet and all the users of the Internet somehow. And some people are watching us, so it's very important that we have our house in order to show to the outside. To understand for ourselves, and to show to the outside how we look, because somebody is watching, right.

CHAIR: Thank you very much, William.


And next up we have Kyle Spencer who will be talking to us about IX‑F and the African IXP association.

KYLE SPENCER: This is a rare case where my slide ratio is actually appropriate. So my presentation is going to be on the African IXP association and I'll also talk briefly about IX‑F. This is generally a talk to try to raise awareness about what's going on in Africa. There is not a lot of really good data out there and, as the African IXP association, we have been trying to gather some of that data and it's good to sort of present it to groups like this because, increasingly, there is focus on our region and people are trying to extend their networks etc., so trying to paint a bit of a picture about what's going on on our continent.
I am a director of the Uganda Internet Exchange Point. I have been in Uganda for about ten years now. I am also the co‑coordinator of the African IXP Association, and by way of that, I'm on the board of the Internet Exchange Federation, along with Bijal, who is also here from Euro‑IX, and I do other stuff. I will mention that one of the projects I'm working on in partnership with Liquid Telecom is DPS. If anybody is interested in that, find me on the side and I'll be happy to chat.

So, the African IXP Association was launched in 2012 at the African Peering and Interconnection Forum. We have been a member of the IX‑F since about 2014. There are approximately 40 Internet Exchanges in 30 countries in Africa. The first was JINX in 1996 and the newest is Senegal, which I think launched within the last month or so; Togo, Djibouti and Madagascar also launched quite recently. We have two meetings a year, including the African Peering and Interconnection Forum, which was last held in Abidjan. We host a private mailing list for IXP operators, our members, to try to encourage discussion. We research and publish data about our region, some of which I'll show you today, and we host a cool website with useful resources and information about our region. We do what we can.

So, the IX‑F is a group of regional Internet Exchange Point associations: the African IXP Association, Euro‑IX, LAC‑IX, AP‑IX, etc. We focus on coordination, policy, advocacy and research, and one of the research projects we're currently working on is called the IXP Database. We're trying to create a central authoritative database of information related to IXPs that integrates and complements other sources that are already out there, like PeeringDB, as well as taking input directly from IXPs through APIs and data exported from tools like IXP Manager. If anybody is interested in learning more about that or wants to contribute to the project, also let me know.

So, back to Africa. This slide shows the growth over the years in terms of the number of exchange points, and there is correlation, sometimes, with some major projects that were initiated by the Internet Society, like the Axis Project. AfPIF, you can see, was launched in 2010. That's had a big impact on the industry. The little lines at the bottom: the green lines are of course IXPs coming up; red going down ‑‑ for whatever reason, IXPs do fail, as I think you all know here; this has been going on in Europe for a while as well. This is an interesting graph. We were very proud of the 235 gig number in 2016, but the data that we have collected so far in 2017 shows that the traffic being carried by IXPs in Africa is climbing a vertical wall. We have over 411 gigs going through exchanges now in Africa. Lots of that, of course, is centred in markets like South Africa, but nonetheless there's been significant growth across the region. The oldest data we have was from 2008, when Michuki Mwangi did some research in our region. We had 160 megs going through exchanges back then. In ten years there's been 256,000 percent growth or something; it's quite massive.

This is a chart that shows the number of networks connected to exchanges in aggregate. That's not to say there are 866 networks in the region; it's to say that all the exchanges reported the number of networks connected to them, and this is the aggregate figure. Again, Michuki Mwangi captured some data back in 2008; the total was 136, so significant growth there as well. And the cool thing that's happening within that is that there is a lot of diversity now in the networks that connect to the exchange points in Africa. Back in 2008 all the networks that were in exchanges were ISPs, a few MNOs and an occasional government network, but today we have got a broad mixture: enterprise networks, content delivery networks ‑‑ the green bar, they are coming in fast ‑‑ MNOs have joined the party, they have recognised the value of exchanges, they are there all the time, etc. And as a result, we have seen a trend in the governance models of exchange points in Africa. So this chart shows exchange points over the years and what governance models they chose. The blue bar at the bottom is the ISP association model, which was initially popular, because back then ISPs were the only guys operating major networks, so it made sense for them to get together and run IXPs. But as network diversity increased, that governance model made less and less sense because it's not really representative. In some cases ISP association IXPs would exclude non‑ISP networks from joining the party, and the industry naturally recognised this, and you can see the red bar ‑‑ private not‑for‑profit exchanges ‑‑ which have really won out in terms of popularity in our region. There are also some government exchanges in Africa; we have a lot of governments that like to have a lot of control over what goes on in their country, and so you see that too. But you are also seeing, in the blue at the end, IXPs that had ISP association governance models actually transition away from that to private not‑for‑profit. They did this themselves because they recognised that it had hindered their growth.

This slide talks a bit about facility neutrality. It was a really mixed bag in 2008 and it still is today, but we're largely seeing exchanges gravitating towards neutral facilities, which of course makes sense. But this wasn't always common knowledge in our region. Your mileage may vary of course, they may not be in neutral markets, you may be forced to use a particular infrastructure provider to get into the building or whatever, and we just don't have enough data to really drill down into that. And don't have the resources to capture it but perhaps in the future.

The facilities are getting better generally. I don't have great historical data to look at this over time, but I can tell you that this was not the case only a few years ago. Almost everybody has air conditioning these days, almost everybody has power backup, you know, cable trays, video surveillance, etc. But again, your mileage may vary: the power backup system may be there but the battery isn't good. The power infrastructure in Africa is not great, so these systems take a beating. There are still outages; reliability is not perfect.

There are an increasing number of IXPs in Africa that employ full‑time staff and pay them, and this was not always the case, because a lot of IXPs in Africa started under volunteers, and that legacy lingers today. There are still a lot of part‑time, volunteer‑driven exchanges, although the trend is going in the paid full‑time direction out of necessity: IXPs are growing, they can't sustain themselves solely on volunteers, etc., and some realities are kicking in as we develop.

Most IXPs in Africa now, with rare exceptions, require a formal contract to join. It used to be quite informal: you'd e‑mail the volunteer and say "can I turn up", and you'd find a way to get into the room and connect. Now you have to have a contract. Many IXPs charge fees now, because they are increasingly becoming sustainable. The ones who do not recognise the need to; the ones in the light category wish to or plan to. The ones in grey on the right, who have no intention to ever charge fees, are Internet Exchanges that are fully sponsored by the data centre ‑‑ the NAPAfricas of the world. They don't need to because that's not their model. Once you are inside the room, you sign the contract, you pay the fees. These days you are generally allowed to peer with whoever you want. That wasn't always the case. Back in 2008, referring to the Michuki Mwangi survey, it was mostly mandatory multilateral peering; that's what you see in the orange on the left. Today, the model that's won out is the multilateral model, where networks can peer with any other network as they wish and there is a route server available to make it easier for everybody to join the collective fun. We still do have some MMLP IXPs, as well as some layer 3 IXPs, which tend to be mandatory multilateral just because of the complexities of doing anything more complicated.

You also see an increasing number of value‑added services at African IXPs. In 2008, there were only three root DNS nodes accounted for at IXPs. Now we have at least 26 that we're aware of. 90% have at least some kind of value‑added service, whether it's a ccTLD server, a looking glass, out‑of‑band access, whatever; this is rising as well.

It's not all sunshine and rainbows. Some IXPs still lack effective governance models, some are still in poor‑quality and non‑neutral facilities, some still have power and reliability issues, etc. But we're growing fast; you can see that in the numbers. This may not be perfect data, it's best effort, and a lot of it is self‑reported, but there is definitely a trend and the graphs are up and to the right. So, if you are interested in extending your network into Africa, and you are interested in engaging with Internet service providers for that reason or any other, get in touch with me and I'll make sure you get pointed in the right direction.

Thank you.



CHAIR: Daniel?

AUDIENCE SPEAKER: Daniel Karrenberg. Thank you for that presentation. That was very interesting. The thing I was wondering is, your slide, when you said the trend was away from associations towards not‑for‑profit private organisations, and you only mentioned sort of briefly what the reasoning for that was. Can you go a little bit more into that to help me understand why people are doing this?

KYLE SPENCER: Yes, this slide talks about the increasing diversity of the type of networks that are connected to exchanges in Africa. On the left you have 2008 where most or all of the participants in exchanges were ISPs. On the right you have, today, where not only do you have ISPs and mobile network operators, where you have banks, content companies, you have other enterprise services, government whatever. So, if your governance model is based on the old days when it was only ISPs and you don't allow all these other networks that you have today a seat at the table, your IXP becomes unrepresentative as an association‑based model, because it excludes an increasing percentage of its members.


KYLE SPENCER: And in some cases again, as I mentioned earlier in the talk, some ISP association‑driven IXPs actively exclude networks that are not ISPs because they think that's to their benefit, even if it's not.

DANIEL KARRENBERG: I can see that. But what prevents all these other people to join the association, or really the underlying question is how do you in this private not for profit model safeguard the trust and specifically the neutrality of the thing?

KYLE SPENCER: What I would say is that varies depending on exchange, right, but what you are seeing here is that exchanges are increasingly built from the ground up to accommodate this diversity and ensure that all the members are taken into account in the governance model.


AUDIENCE SPEAKER: Brian Nisbet, HEAnet. This has been a really interesting presentation, and this question comes from a position of ignorance, so I make no assumptions in asking it. One of the things that I have heard, or partially heard, about African IXPs was that there were two problems. One was that they were being put in place, in some cases, without the community behind them. And the other kind of issue I had heard of around them was that a lot of unnecessarily expensive equipment was being put in place to build the IXP when something smaller could have done maybe a better job, or indeed left money elsewhere for other development. So, I suppose, are these problems that you are seeing, or are things improving, or am I completely wrong and they were never a problem in the first place?

KYLE SPENCER: No, you are correct. We often see very expensive exchanges get set up, and that commonly occurs when there is a large grant available to a government, for example, to set up an IXP, or a government puts together a budget to do it as part of a national project. I won't really go into the rationale for why that occurs, but it does occur, and it still occurs.

BRIAN NISBET: And the exchanges being put in place without there being a community to support them?

KYLE SPENCER: The complication for a long time in Africa has been awareness of the value of IXPs, and so you get a small number of people who are very active in the community who get one started up with whoever is willing to participate. Often that doesn't involve some of the incumbents and other big players, and there is also a lack of experience and guidance, especially as you go back in years, so things sometimes get set up badly. But the other thing that we stress when we're talking within our own meetings is that there is no single correct governance model. You have to do what works best in your context, and in some cases ‑‑ I mean, you see that dark red, for example, is a for‑profit IXP. We're starting to see the emergence of IXPs run simply as a business, and they don't necessarily have to cater to a community governance model, and that may work for them, I don't know. So it's not always necessarily the case that the community model is the best model, but it's one that's definitely won out, and people are making an attempt to do that.

CHAIR: Any other questions? Thank you.


All right, so in this portion we're going to be doing lightning talks. Lightning talks are very brief talks of up to ten minutes, so every speaker gets ten minutes. That includes the time it takes them to get up on the stage, and it also includes the Q&A. So if a presenter takes the full ten minutes, that basically means there are no questions. So you blame them, not us, just to be clear on that. Our first presenter, Roy, from ICANN, please go ahead.

ROY ARENDS: I work in the Office of the CTO, and since I am here I decided to submit a small presentation about the root key rollover, and it was accepted; thank you to the Programme Committee.

When you do DNSSEC validation, you have the trust anchor for the root configured. This key doesn't live forever, and it needs to be updated once in a while. And you can do this either automatically or manually. But whatever you do, we at ICANN can't see what you have done. We don't know if you have the new key configured or not. So we knew this, and for that reason we have had an effort to roll the trust anchor for the root. We started in 2014 with a design team. We published plans, we blogged about it, we talked about it. I remember standing on a similar stage at the RIPE meeting in Madrid talking about this, and following that plan we introduced a new key on July the 11th of this year. The plan was to stop using the old key on October 11th, and this is what we call rolling over the key for the root.
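[As a side note, not part of the talk: resolvers and operators usually refer to these root keys by their key tags, 19036 for the old KSK and 20326 for the new one. Below is a minimal, illustrative sketch of the key tag calculation from RFC 4034, Appendix B, which derives a tag from raw DNSKEY RDATA; the example input is made‑up data, not a real root key.]

```python
def key_tag(rdata: bytes) -> int:
    """Compute the DNSKEY key tag (RFC 4034, Appendix B).

    `rdata` is the raw DNSKEY RDATA (flags, protocol, algorithm,
    public key). This works for all algorithms except the obsolete
    algorithm 1, which uses a different scheme.
    """
    acc = 0
    for i, byte in enumerate(rdata):
        # Treat the RDATA as a sequence of big-endian 16-bit words.
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF  # fold the carry back in
    return acc & 0xFFFF


# Example with synthetic RDATA (not a real root key):
print(key_tag(bytes([0x01, 0x02, 0x03, 0x04])))  # 1030
```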

So far so good. There is an RFC, a proposed standard, RFC 8145, that details a method to basically submit your configured trust anchors to the root servers. Duane clearly saw a change 30 days after the introduction of the new key, so I blatantly copied Duane's slide, and you can see this flip basically around August the 10th. The red that you see here are validators, or resolvers, or instances sending information to the root servers that have only the old trust anchor configured, and the green that you see here are implementations that have both trust anchors configured.

Now, you can clearly see here the flip, red goes to green around October ‑‑ sorry, August the 11th. I'm so used to saying October the 11th. Apologies for that. What we also did see ‑‑ sorry, we both expected that the signal, the red signal that you see there, down there, that's about 7 percent, we both expected that to go down between August the 10th and October the 11th. But as you can see, that really didn't happen.

So, we did the same exercise with the B, D, F and L root servers. We looked at the entire month of September and we saw about 12,000 instances, or addresses, speaking to us, sending the signal with either the new trust anchor and the old trust anchor combined, or the old trust anchor only, and 500 of those 12,000 only sent the old trust anchor. That's about 4.2%. So, we had no idea why that was. Why there was a 4.2% number. We could guess, or we could basically say this is a misconfiguration, or people haven't listened, or they don't do validation ‑‑ as you can see, we had no idea why this happened. This was about three weeks before we planned to roll the root key, and we decided to look at those 500 addresses, literally try to contact the owner of those addresses, for whatever that means, and see if there is a misconfiguration, maybe there is a bug in the software that they are using. In that time, we have already identified basically two different implementations that had issues. I think bugs is a great word ‑‑ sorry, is a bad word to use here, but oversights, if you will, and we hope that they get updated sooner rather than later. But meanwhile, we have decided to postpone the key roll that we had planned for October 11th. We currently do not know when we are going to roll the key. We do know that it's always on the 11th of the first month of a new quarter. So it might be January 11th. It may be April the 11th. But we really want to investigate those 500 addresses.

Thank you, that's it. I kept this deliberately short because I assume there are a lot of people having questions about rolling the key, or the delay to roll the key. Thank you.


AUDIENCE SPEAKER: Peter Koch, DENIC. The team involved in designing the rollover and doing the communications and so on and so forth has very much advocated in favour of RFC 5011. Now, there could be a certain background noise, some group of people who don't believe in 5011 or have reasons not to apply it in a way. Did you have a chance to look into this particular issue, where there could be some deliberate non‑application of 5011, and what people thought there, like, in talking to the resolver operators?

ROY ARENDS: I don't want to jump the gun about the results. We only have a few things uncovered so far that I can tell you about. Managed keys versus trusted keys, that's a configuration statement in BIND, where trusted keys is the old way of having still ‑‑ not still, of having static trust anchors configured, and managed keys is the way to do it with RFC 5011, with automated trust anchors. That's one. Another one is, for instance, BIND, when it has trust anchors configured but decides not to do validation, will still emit those trust anchors, right, so that's basically a false signal, in my view. Then there is, for instance, Unbound: naturally, 5011 leads to a 30‑day hold‑down period, so if you start within that window of 30 days before the key roll, Unbound won't have the new trust anchor yet. Then there are various things like write permissions. So you start up a name server, you check if you have write permissions, you have, then you switch user, and eventually you get a new key and you can't write to that file. All of these things currently exist. So ‑‑ and this is just the few that we have seen, and we expect a few more. So I hope that answers your question.
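For readers who don't live in named.conf, the two BIND statements Roy contrasts look roughly like this (the key material is elided; the real root key data is much longer):

```
// named.conf -- the old style: a static trust anchor that BIND never updates
trusted-keys {
    "." 257 3 8 "AwEAAa...";               // key data elided
};

// The RFC 5011 style: BIND tracks the anchor automatically and persists
// its state across restarts (newer BIND also spells this "trust-anchors")
managed-keys {
    "." initial-key 257 3 8 "AwEAAa...";   // key data elided
};
```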

PETER KOCH: Yes, mostly, thanks, but that was all incidental or accidental. I was asking after deliberate action, that is, people insisting on or planning to manually roll and therefore not sending the signals at that point in time.

ROY ARENDS: I have only one piece of evidence of that, one instance ‑‑ and this is a while back ‑‑ one ISP that has, in etc/resolv.conf, a first IP address that did KSK‑2010 and a second IP address that did KSK‑2017. You might laugh; this actually works. Do not do this. When the 2010 resolver doesn't validate, it returns SERVFAIL, your implementation will automatically go to the next resolver, and 2017 then works. So, I have got one piece of evidence of that.
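The configuration Roy describes would look something like this (the addresses are documentation examples, not the ISP's real resolvers):

```
# /etc/resolv.conf on the ISP's clients -- works today, breaks after the roll
nameserver 192.0.2.1   # resolver still pinned to KSK-2010: returns SERVFAIL
nameserver 192.0.2.2   # resolver on KSK-2017: the stub retries here, succeeds
```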

CHAIR: Just time keeping, we have about a minute and a half left.

AUDIENCE SPEAKER: My name is Shane Kerr. I am required to tell you that I work for Oracle but I'm not speaking for Oracle. So, we have Unbound running in Docker images which we deploy, and those keys don't get updated. We noticed it at the end of September, and we would have made the deadline, but now we're a bit more relaxed. So that's at least some of these numbers.

ROY ARENDS: Yes, that's another issue we have seen as well.

AUDIENCE SPEAKER: Jim Reid, computer programmer. I'm going to ask a very unfair question, and I think I know what your answer is going to be, but I'm going to ask it anyway. What is ICANN going to do if there are resolver operators who can't or won't switch to the new key signing key? At some point, are those guys going to prevent a rollover to the new key forever, or at some point are we going to bite the bullet and risk something breaking?

ROY ARENDS: We will let you know when we have the answer to that.

JIM REID: You disappoint me. I thought you were going to say that decision is somewhere above my pay grade.

ROY ARENDS: That decision is somewhere above my pay grade.

AUDIENCE SPEAKER: Stefan from DENIC. As an operator, I was pretty disappointed you did not do the rollover at that time. From my point of view, DNSSEC is currently in a state where we can risk that it's crashing, and this would maybe cause some serious amount of awareness in the companies. If you start to deploy software in Docker images which doesn't update, and so on, then I don't care about those guys; if they want to build nasty things, they can do ‑‑ I'm just looking for an opinion. From my point of view it would be okay to roll over in the next window, and to take the risk. Is anyone against taking the risk?

ROY ARENDS: I want to address your opinion. That would be a fair opinion if there were no such things as software bugs. If people configure the key manually and forget about it, that is their responsibility, absolutely right. But if there is a case ‑‑ and we didn't know, still don't know, right ‑‑ if there is a case where there are software bugs that the operator doesn't know about, or the vendors don't know about, or ICANN doesn't know about, or the community doesn't know about, we need to know that first, and that's why we decided to err on the side of caution, right. We completely designed this, right, in order to be able to roll back or to delay, so we were completely prepared to do this, and I think we took the right decision on delaying. Thank you.

CHAIR: Okay. Thank you very much.


For our next lightning talk we have Alexander Azimov talking about measurement as a key for transparency.

ALEXANDER AZIMOV: Hi. Today I am going to talk to you about transparency of the ISP market. And let's start with the next statement: at the very moment there is no transparency in the ISP market. If you imagine a newcomer who wants to have his first contract with an upstream provider, to get his first link of IP transit, he is doomed to compare these wonderful pictures, and there is no other way before he learns how the market is built, and so on. No other choices.

This is the ISP market. But new services are emerging, we are moving to the Cloud, there are predictions that the Cloud will rule the world. But maybe clouds are different? And of course they are not. Clouds are totally the same: again, pictures and marketing materials.

So, the only option is to move from mistake to mistake, spend time, spend money and maybe finally succeed.

So maybe what we need is some new tool that will create our own pictures, that will show the features that we are looking for. And we don't even need a test bed, because, thank God, we have RIPE Atlas. We decided to create a new tool that works on top of the RIPE Atlas API. It will create latency maps and lookup maps, and maybe provide some other options.
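As a sketch of the kind of aggregation behind such latency maps: per-probe latencies are summarised and then coloured onto a map. The field names below follow the general shape of RIPE Atlas ping results, but the sample data and the rounding choice are invented for illustration.

```python
# Aggregate ping results into a per-probe latency figure (ms), the raw
# material for a latency map.

from statistics import median

def latency_per_probe(results):
    """Map probe ID -> median of the per-result average RTTs, in ms."""
    by_probe = {}
    for r in results:
        if r.get("avg", -1) > 0:              # failed pings are reported as -1
            by_probe.setdefault(r["prb_id"], []).append(r["avg"])
    return {pid: round(median(v), 1) for pid, v in by_probe.items()}

sample = [
    {"prb_id": 101, "avg": 24.1},
    {"prb_id": 101, "avg": 26.3},
    {"prb_id": 202, "avg": 181.0},
    {"prb_id": 202, "avg": -1},               # timed-out measurement, skipped
]
print(latency_per_probe(sample))  # -> {101: 25.2, 202: 181.0}
```

A map renderer would then colour probe 101's location green and probe 202's red, which is exactly the contrast the talk's slides show.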

And as a use case, we decided to compare DDoS mitigation providers. Today, the competition between the mitigation providers commonly results in a rise in the number of points of presence, so let's see how the number of points of presence translates into latency. Just in case: in all measurements we used the home page of the company.

The first was a company that states on its website that it has more than 100 points of presence. What can I say? The latency map looks really cool.

The second one has nearly 40 points of presence. As you can see, the latency map has some changes: there are regions that become red. So, let's move on. Now, ten POPs. And the map is mostly red. And another provider with ten points. As you can see, the number of points is similar, the geographical diversity is similar, but the latency map is totally different. Let's take a look in another mode, which provides a country view.

What can we see? The number of points of presence does matter. If there are no points of presence in a region, it will result in increased delays. But the existence of points of presence in a region does not automatically affect quality of service, so let's compare these providers from a different angle. Let's take a look at the output of DNS. As you can see, in the upper right and left corners there is different DNS output for different probes. The provider in the upper right corner uses geoDNS. Why is this important? If we're speaking about the goal, the goal is to localise traffic. If you need to have reasonable latencies on a global scale, you don't have a lot of options. You can try to use pure BGP Anycast, but it is hard, or you can use geoDNS, which is much easier to construct. The problem is that, in the case of geoDNS, an attacker can easily ignore the DNS output. As a result, it can concentrate a DNS attack in one region, and the result can be devastating. What can we say as a summary?

The number of points of presence is important, but it is not a silver bullet. There can be a similar number of POPs that give really different latency maps. There are other features that are also important if we are speaking about quality of service. And this was all ‑‑ or this would all be ‑‑ impossible without RIPE Atlas. I'd like to thank the RIPE Atlas team for the opportunity to measure and compare different providers. And this was just a use case, because this tool is yours to use. You can check the latency of your service, evaluate the quality of your upstream providers, ask RIPE for additional credits, do whatever you want. It's working; install it and use it. Thank you for listening.


CHAIR: Any questions? We have time.

AUDIENCE SPEAKER: Salomon. I have one question. This is the second presentation. You are hosting it on GitHub, and it's only the source code. What about the real measurement tool, is it hosted on a website, or...?

ALEXANDER AZIMOV: Using this tool from GitHub, you can create these beautiful pictures on your own for any service in the world. At least you need to ‑‑

AUDIENCE SPEAKER: I need to compile it on my PC in this case.

ALEXANDER AZIMOV: Yes, you need RIPE Atlas credits, that's all you need.

CHAIR: Any other questions? I didn't mean to scare you about the ten minutes. We still have three.

AUDIENCE SPEAKER: Kaveh. Not a question, I just want to thank you for contributing to RIPE Atlas and I encourage everyone in the room to do the same.


CHAIR: Brian, it's your time now.

BRIAN TRAMMELL: Hi, I am Brian Trammell and I am going to take my whole ten minutes because this is all about me asking you questions.

Question number one: who here has not heard of QUIC? So, QUIC is a UDP‑encapsulated protocol being standardised by the IETF. The idea here is running HTTP/2 on top of a protocol that isn't TCP, that is encapsulated in UDP; it looks like UDP to the network. It has been rolled out by Google since 2014. At this point it represents about 7% of traffic on the Internet and 35% of traffic crossing Google's border. The design space for this is deployability ‑‑ this is why they did it on top of UDP; the idea was, we have other transport protocols, but putting protocol numbers other than 6 or 17 in the header doesn't work ‑‑ evolvability, low latency, and security. For security, it has TLS 1.3 built in. The initial focus is to support HTTP/2. It's been called TCP 2 in certain areas. Don't do that, it really annoys the people who work on it. It's designed to be a new general‑purpose transport layer protocol.

So, everybody saw the Ruru talk yesterday morning, this was a really really cool basic TCP passive latency measurement tool. This was possible because TCP continuously radiates a whole bunch of information about loss and latency to passive observers on the path. You can look at the sequence and acknowledgment numbers and figure out what RTT and loss are.

QUIC doesn't do this right now, which means that if you have an increasing amount of QUIC traffic on your network, you can't use techniques like Ruru to do passive measurement. Is this a problem? Do you use these things to do troubleshooting on your network, and will it cause a problem if you can't do that? Because we are looking for input into the standardisation process for the design of the protocol.

So, back to TCP school for a moment. This is an animated version of Richard's slides from yesterday: how do you do passive RTT measurement? The sender sends a packet with a sequence number N, and acknowledgment N plus 1 comes back. The time between those two ‑‑ the time that the second one arrives and the time the first one was sent ‑‑ that's the RTT. It can then send the next packet, N plus M, the sequence number beyond that first one in, sort of, the window. If you are a passive observer on the path, you can look at the time between seeing N and seeing N plus 1 in the ACK, and you can figure out what the forward component of the RTT is. It's hard to look at the reverse component of the RTT if you have an asymmetric stream, so you can use the TCP timestamp option, which about 30% of the content hosts support. You subtract the timestamps on the other side, you get the reverse component, and you get something that looks like RTT plus application delay.
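A minimal sketch of the sequence/ACK matching Brian describes, run on a hand-made two-segment trace (the packet tuple format here is invented for illustration):

```python
# Passive TCP RTT estimation from one observation point: match each
# outgoing data segment to the first ACK that covers it.
# Packets are tuples of (timestamp, direction, seq, ack, payload_len).

def passive_rtts(packets):
    outstanding = {}                       # expected ack number -> send time
    samples = []
    for ts, direction, seq, ack, length in packets:
        if direction == "out" and length > 0:
            # an ACK of seq + length acknowledges this segment
            outstanding.setdefault(seq + length, ts)
        elif direction == "in" and ack in outstanding:
            samples.append(round(ts - outstanding.pop(ack), 3))
    return samples

trace = [
    (0.000, "out", 1000, 0, 100),   # data segment; expects ack 1100
    (0.050, "in",  0, 1100, 0),     # ACK arrives 50 ms later
    (0.060, "out", 1100, 0, 200),
    (0.115, "in",  0, 1300, 0),
]
print(passive_rtts(trace))  # -> [0.05, 0.055]
```

From a vantage point next to the sender, these samples are the full RTT; mid-path, they are only the forward component, which is the asymmetry problem the talk mentions.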

Looking at the QUIC packet header: we have a packet number; we don't have sequence numbers, acknowledgment numbers, any of this stuff. Those all go away. How do we measure this? It goes kind of dark.

Why do we do this? A minimal wire image is an explicit design goal of the protocol. The defence against ossification is: encrypt it all. If it doesn't need to be on the wire in a way that the network can see it, we are not going to put it on the wire in a way the network can see it. It goes behind the encryption veil. There are two reasons to do this. One, every bit we put on the wire in a way that people understand is a bit that we won't be able to change in the future. One of the reasons that QUIC is being built is that TCP is no longer evolvable, because there are so many middleboxes in the Internet that think they understand how TCP works and are helping, and in some ways they help in ways that make it impossible to change it. And two, every bit we print on the wire is a bit that may be used against us in the future; we have learned that from non‑constant‑time crypto attacks. The idea is, if we don't need to put it on the network, and if we don't have a good overwhelming reason to do it, we're not going to do it.

It's not that bad. It turns out you can do at least what Ruru does. I talked to Richard after his talk; I said, okay, are you doing this based on running measurements, like looking at the sequence numbers and the acknowledgment numbers and matching them, or are you just looking at the handshake? He said, we're just looking at the handshake now because we haven't figured out how to do the other thing. You can use heuristics on QUIC as currently defined in the handshake. Okay, I see a client initial packet, there is something in that little type header; you are pretty much guaranteed there is only one packet coming back from the receiver and then only one packet going back to the sender once that happens. You can make assumptions there, unless you are doing zero‑RTT resumption. QUIC also has a mode where, if a client and server have already talked to each other, they can do zero‑RTT resumption, which means that the sender can send a full flight of ten packets, so you can't actually figure out which one of the sender packets is the response to that handshake, which makes this break. There might be heuristics that you can use, but these are not currently in the design requirements of the protocol, so we might break them.

We had this discussion in the QUIC Working Group in the IETF: if we actually do want to be able to do passive measurement, then we should explicitly design for it. Can we do it in a way that has minimal impact on the wire image? We could define a handshake that is guaranteed recognisable in the network, and we can publish the heuristic for that and say: if you want to know what the passive RTT is at the handshake, measure it by doing this. You can also do it with a single bit in the header, giving multiple samples per flow. If you are really interested in the details of this, there is a pull request on the QUIC transport draft. The way this works is that every packet has one of two colours; we can call them whatever colours you want. You start off sending a red packet, and when the receiver sees its first packet and it's a red packet, it sends back a red packet. You do this at the sender side until you see your first red packet come back, and then you switch your colour to blue, and then you send blue packets until you see the first blue packet come back, and then you switch back to red, and so on. The nice thing about this is that you basically end up with an edge‑triggered signal in the network that changes colour once per RTT. So you get one RTT sample per RTT, which has the really nice property that you can actually look at the RTT and the congestion window evolution through the lifetime of the flow.
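A toy, discrete-time sketch of that two-colour scheme (the tick granularity and one-packet-per-tick rate are invented for illustration): each side just reflects or flips the last colour it heard, and that alone makes the bit toggle once per round trip, so an on-path observer recovers the RTT from the spacing of the colour changes.

```python
# Simulate the latency "spin bit": the server echoes the last bit it saw,
# the client sends back the opposite of the last bit it heard, and an
# observer next to the client records when the client->server bit flips.

def spin_edges(rtt_ticks, duration_ticks):
    delay = rtt_ticks // 2                 # one-way delay, in ticks
    c2s, s2c = [], []                      # in-flight packets: (arrival, bit)
    server_bit, client_bit = 0, 0
    edges, prev = [], 0
    for t in range(duration_ticks):
        while c2s and c2s[0][0] <= t:
            server_bit = c2s.pop(0)[1]     # server remembers the last bit seen
        while s2c and s2c[0][0] <= t:
            client_bit = 1 - s2c.pop(0)[1] # client flips what it hears back
        c2s.append((t + delay, client_bit))
        s2c.append((t + delay, server_bit))
        if client_bit != prev:
            edges.append(t)                # observer sees a colour change
        prev = client_bit
    return edges

# one packet per tick, RTT of 8 ticks, run for 40 ticks
edges = spin_edges(8, 40)
print([b - a for a, b in zip(edges, edges[1:])])  # -> [8, 8, 8, 8]
```

The edge spacing equals the RTT, which is the "one sample per RTT" property the talk describes; tracking it over time shows the RTT evolving with the congestion window.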

So, the question that we have for the operations community is this. We talked about this at the IETF QUIC Working Group meeting in Prague, and there was pushback on it. It's like, well, if I know the RTT, then maybe I can figure out where you are. Well, I can also look at the source IP address, and so on. It turns out that geolocation via RTT is not that much of a risk; if you want to talk to me about that, I will bore you in the hallway forever about this, I have done a lot of work on it. And then there is the second question, which is utility. I mean, are we going to go to the effort, are we going to burn one bit per packet, when nobody in the operations community cares? I'm asking the community. A show of hands: people who presently use passive RTT measurement on your network? That's depressing. People who plan to use passive RTT, people who looked at Ruru and said, wow, that's really neat? Okay, less depressing. Still not many. People who have no idea what I'm talking about? That's even more depressing.

All right, so, people who actually plan to use this but don't want to out themselves in a public forum? If so, come talk to me, send mail to me, I am findable. Also, within the Working Group we have a work item, the manageability draft, draft‑ietf‑quic‑manageability, which is all about how operators can do things with QUIC traffic on their network. Please have a look at that. We're going to drop a new version of that hopefully tomorrow, and if you have suggestions as to what should be in that document, let me know. So, with that, I think I have a little bit of time for questions.

CHAIR: Yes, we have just a few minutes for questions. Does anyone have any questions for Brian?

AUDIENCE SPEAKER: Lee Howard, Retevia. I have not been following the QUIC work since I figured out I lost the first battle at the BoF and I was never going to get my way on anything they did. Was there any discussion ‑‑ I haven't read the manageability draft ‑‑ is there any discussion of any other bits exposed to operators, other than the RTT bit?

BRIAN TRAMMELL: At this point, no. There is discussion within the Working Group on packet loss. So if we go back over here to the header ‑‑ there is the header. There has actually been discussion about encrypting the packet number as well, which turns out to be really, really, really uneconomical in terms of the amount of power you have to run the servers on; it's new hydroelectric plants' worth of power, so that's probably not going to happen. So the packet number is looking now like it's going to be sequential, so you can do the stupid downstream loss thing where you count the packet numbers. Did you have other things that you might want?
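The "count the packet numbers" idea is simple enough to sketch: with sequential packet numbers, the gaps an observer sees downstream bound the loss upstream of the observation point (reordering will make this overcount; the numbers below are made up for illustration):

```python
# Estimate downstream loss from observed sequential packet numbers: any
# number missing from the observed range was lost upstream of the observer.

def estimate_loss(packet_numbers):
    """Fraction of the expected packet-number range that was never seen."""
    expected = max(packet_numbers) - min(packet_numbers) + 1
    return (expected - len(set(packet_numbers))) / expected

print(estimate_loss([1, 2, 3, 5, 6, 8]))  # 2 of 8 expected missing -> 0.25
```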


BRIAN TRAMMELL: Do you want to talk about them publicly?

AUDIENCE SPEAKER: Regrets, I have had a few. One of them is that I never deployed ECN, and that would have been a really cool way to detect that there is congestion and figure out ways to respond to it.

BRIAN TRAMMELL: QUIC will be ECN‑mandatory. On re‑exposure: the TCP flags are private. There has been some discussion about doing something like ConEx, which would re‑expose that information to the path. Basically, we had this discussion in Prague, and we had an hour‑and‑a‑half of going back and forth in the Working Group meeting about how terrible it was to expose anything to the path ‑‑ people being able to look at your RTT and do nasty things to you. You can see what side of that argument I'm on. So the thing that came out of that is that we formed a ten‑person design team to talk about this one bit, the latency spin bit. And the idea there was to try and discuss these metrics one at a time, so that we could basically come to some conclusion about what's going on. One of the things that's happened within that design team is that some of the operators who are participating have said, okay, latency is great, but what about loss? So that will probably come back into the discussion. It turns out that if you have really high‑rate flows with high‑resolution RTT information, you can infer loss.

AUDIENCE SPEAKER: I have other ideas but I'll let somebody else talk.

AUDIENCE SPEAKER: I have a very small question. When you design a protocol, you need to look at the cost and benefit of that protocol, and the consequences of having TCP over UDP, which will create a lot of traffic, CPU load and loss in the network. And I couldn't understand ‑‑ I know there are some reasons you go over UDP, but it's not economical, having this TCP over UDP. How do you justify this?

BRIAN TRAMMELL: Actually, if you have access to measurements or other evidence that there is differential treatment of TCP and UDP at large scale, it would be interesting to see that. We have done as many measurements as we can, and we can't actually see a differential cost for that. There is a differential cost with respect to rejecting bad traffic: it's easier to reject bad TCP traffic than UDP traffic, with respect to noticing that this is probably a bad packet, because of the state machine that's exposed in TCP and is not exposed in QUIC at this point. We have actually talked about whether or not that should be exposed. You can possibly do slightly higher‑cost things with the handshake and tracking the handshake. But yeah, that is an open point in the design space, and I will take that comment back to the Working Group.

AUDIENCE SPEAKER: Lee Howard again. So, the reason I brought up ECN is that the extension I have been looking for for years is this: when I do capacity planning, I only know that I have a link that is running low on capacity. I don't know how much more capacity I need. So, I think it would be great if there was some way that content providers and operators, or that hosts and nodes, could communicate to each other how much latent demand there is. So, for instance, if you were providing me a video source, that's great, I love that video source that you are providing me, but you may decide, based on either packet drops or an increase in latency or your heuristic, that there isn't the capacity, therefore you might ratchet it down to a nice 320p codec. So, in aggregate then, when doing capacity planning, the network guys might say there are only a few more, you know, megabits per second needed, or gigabits per second, or 20 gigs or whatever, when, in fact, if there was enough capacity, Netflix or YouTube, or whoever, would provide 10 or 100 times as much throughput, and there is no way for me to know how much latent demand has been suppressed.

BRIAN TRAMMELL: That's a really ‑‑ we should have that discussion offline. I will say that ECN as it's presently defined and deployed ‑‑ which is not very much, but increasing; that's a different talk that I think I gave a couple of RIPE meetings ago ‑‑ does not give you high enough resolution information to do that. There is a proposal before TSVWG called Accurate ECN. That is looking like it's going forward, and it's also looking like QUIC will probably do something that looks like Accurate ECN. So that will be inside the control loop of the transport protocol. Re‑exposing that information is another question, but one that is certainly under discussion, or certainly open for discussion, in the Working Group. I know that at the Seattle interim they decided to form another team to go off and discuss ECN; I don't know if they are looking at exposure, but that's probably where they are going to go with it.

CHAIR: Okay. Thank you very much.


So, a few of the usual announcements. Please rate the talks. Please rate them highly if you like them, rate them low if you don't like them. We always like ratings. Also, the voting for the Programme Committee elections is open, so please go and vote for the people you would like on the Programme Committee. We have two workshops this evening, both of them starting at 18:00. First, Tim Armstrong will be doing a workshop on open network switches here in the main room, and then next door in the side room there will be a workshop on IPv6 in the enterprise. We also have the newcomers' welcome reception, upstairs on level 6. You will need your badge. If you have a first‑time attendee sticker, then we strongly encourage you to join.

And then also, there is a networking event happening tonight between 2100 and 0100. There will be buses that will pick us up in front of the Conrad here starting at 20:45. So, thank you very much.