Archives

Closing Plenary
Thursday, 26 October 2017
11 a.m.

BRIAN NISBET: Hello. Welcome all to the Closing Plenary session of RIPE 75. I feel sure it should only be Tuesday at this point in time. And yet here we find ourselves on Thursday morning. This is also the first time I have actually been on this stage all week. The lights are very bright as people keep on telling me.

Anyway, I am Brian, and Benno and I will be chairing this Closing Plenary session until we hand over to Hans Petter, who gets to close the meeting.

The first thing we want to go through is the results of the PC election. Thank you to all of you who put yourselves forward, and thank you to all of you who voted. We do try to get new people, new voices into the PC ‑‑ and keep that alive, I suppose ‑‑ to keep providing the content for you all.

So, there were two slots up in the PC, and the two people who were elected were Khalid Samara and Dmitry Kohmanyuk. So congratulations to both of them.

We will be publishing the details of the election on the website as we always do. And certainly the PC would like to thank Jelte Jansen for the last two years of excellent contribution, and obviously he was not re‑elected, but thank you very much.

So, to move on to the last full presentation of the week ‑‑ we have a full presentation, a couple of lightning talks, the NCC technical report and all sorts of other exciting things for you this morning. The first talk is from Fredrik Korsback of NORDUNet and SUNET.

FREDRIK KORSBACK: Interesting clicker thing. Hello.

This is my first time at a RIPE meeting, apparently. I am usually out and about giving a lot of talks at conferences, so it's quite surprising that this is the first time I have been here.

How hard could it be to build a new network? It's actually not that hard.

First, me. People might know me as hugge from IRC or e‑mail discussions, whatever. 30 years old just recently. I live in Stockholm, have been working in network architecture for quite some time, attending all the meetings except this one, apparently.

And I am here to talk a little bit about ‑‑ well, let's talk about what Sweden is: we have a king with a fancy hat. We are the best team in the world at ice hockey. We have been digging fibre for quite some time. Everyone knows IKEA. And we have a lot of snow. That is the short version of us.

So, we're up here, quite far north, especially from this type of vantage point down in Dubai. Being this far north could sometimes present both opportunities and some challenges, we'll come to that later.

I was actually surprised when I did this size comparison; I didn't know that Saudi Arabia was such a big country, but the things you learn. So, before we go into the actual presentation: this is actually a really, really long presentation, because the PC asked me, can you do like a two‑hour presentation ‑‑ there were a lot of people that couldn't come. I was like, sure, no problem; suddenly when I got here it was half an hour. So this is going to be the very super duper lightweight version of it. And if we look at the OSI model, we'll stay in layer 1, some layer 2 and some layer 3 as well. This is going to differ a little from the other presentations we heard this week; there is going to be a lot of physical talk here.

So, SUNET, people may or may not know, is many things. First, it's a network; we call it SUNET C, where the C comes from the word coherent. We have an identity federation called SWAMID. We have hundreds of OTT services today ‑‑ essentially everything that people want to buy these days; they want to outsource everything, so we kind of have to do it. We are one of the oldest RIPE LIRs; I heard some rumour on the street that we might be number 5 ‑‑ unfortunately the Danish might be number 4, so that's a bit awkward. We're also one of the prime owners and customers of a network called NORDUNet, which is typically the network people know me from; I do the whole peering part, so at peering conferences and NANOG I represent NORDUNet. But today I'm here to speak about SUNET.

We are a non‑profit funded by the Research Council of Sweden, and we are one of the real OGs in IP networks. And also one thing that might differ a little bit from other research networks ‑‑ maybe not so much in Europe, but in the rest of the world ‑‑ is that we consider that every single packet can be research traffic. We are not the ones who can decide if there is research traffic on Netflix or not. Or BitTorrent ‑‑ could be, could be not; we don't really distinguish between those kinds of things.

So, our users are the universities and university colleges, as you would assume with an NREN. Dormitories and ISPs serving dormitories ‑‑ I know this has been going out of style a little bit in Europe, but we still have a few of those. And of course other governmental organisations: colleges, libraries, museums, those sorts of things, research institutions. We also have MOUs with the national archives, and we handle all the supercomputers that do calculations for CERN and those kinds of things. We are also happy to provide Internet access and those kinds of things for people that are either non‑profit or just have a really good idea, which could be useful.

So, the back story of this presentation is that we needed to build a new network. And this was coming from quite a long time ago; a few much smarter people than me figured out it would be cool to build a new network and not only keep improving the old network all the time.

So, they kind of made it all add up on paper so that everything would end at the same time: we had 15‑year‑old fibre IRUs ending in December 2016, we had a DWDM tender, which was also really old, set to end at almost the same time, and the IP packet equipment was ending at the same time. So we had a good opportunity to make something new and good.

Everything was paid off, written off; it had served its duty, and everyone was very happy. And we also needed to improve the old design. The old design was not very good: it was a dual star with Stockholm as origin, so everything was depending on Stockholm, and that was because, back in the days when it was built, routers were extremely expensive. We built that network when 10 gig was completely new, and it was one of the first networks that actually had a real 10 gig backbone. Back then routers were extremely expensive, so we could only afford two, so we bought two really big ones; and optical was cheap, so we could do long optical spans all over the place. This, of course, has a lot of drawbacks, as one can imagine: Sweden is a long country, so having IGP links running over 1,000 km causes a lot of problems.

Anyway, the first thing you need when you build a new network is a fibre network. We kind of plotted out what we wanted to do. We didn't want a network that was depending on Stockholm; we wanted one that could live without Stockholm. We also wanted to have triple redundancy into most regions, because we had a few double outages ‑‑ and we actually still have double outages ‑‑ and we sort of wanted more fibre in the ground. We have about 20 cities in Sweden where we consider we need a proper POP, and if there is a customer outside these metro regions, we just buy the capacity; we don't need our own network there.

And also, in Sweden we have been digging fibre for quite some time ‑‑ it's heavily subsidised by the government, so there are a lot of different entities pulling fibre. One of the requirements we had is that we only wanted one contract partner, because we are not that many people ‑‑ we are less than 10 people in the operational organisation ‑‑ so we can't keep track of 150 different contracts. So we had one contract, but there are about 150 subcontractors under that contract, so that was also very important for us.

And we tried to look a little bit into the future: what do we need from this fibre network? The first thing we noticed ‑‑ what we really wanted to fix ‑‑ is that we had a lot of unplanned outages, like double outages that were because, oh, these two diverse fibres were in the same trench, oops; that happened all the time for us before. So we required that all backbone fibres must come with KMZ data, which is a file format you can put into Google Earth, and it plots out exactly where the fibre goes. This means the vendors couldn't cheat as much. It turns out they do cheat anyway, placing the fibres on different sides of the same road, but that's another type of problem. At least we have a fair chance of knowing where the fibres are; if we know there is a storm coming, we know which fibres are in the path of the storm, or hurricane or something.

We also, of course, need access fibre from these city POPs, over diverse paths from the city networks, to the universities at least. And it was impossible to get KMZ data from these city networks; they just said, we will not provide you with map data. First because they don't have it, and also because they think there is some kind of business intelligence in where the fibre is going. Actually, just last week we had problems with an ordered diverse fibre path in a city. It turns out they are not diverse. In this case, these are two separate fibre strands, probably separated by 2 millimetres, so I guess we will see them in court.

So that's ‑‑ I'm not going to say the easy part, but it cannot only be fibre; you need the fibre to fulfil certain qualities. When we designed this network, we designed it for 1 terabit. That was the lowest we could go on a single wavelength ‑‑ although we are, of course, using 100 gig and 200 gig now. We put the contract on the fibre at ten years, so we need to make sure that in ten years we will not have to redo the network again. So every calculation we did was with 1 terabit in mind.

So, one of the first things ‑‑ actually the whole thing: people always talk about attenuation, but how many dB you lose on these patches is not that important in this case; reflection is our biggest enemy here, because to get up to the OSNR we want ‑‑ and that's essentially the only thing that matters over 400 gig ‑‑ you need really low reflection. Getting low reflection means you can run a thing called Raman amplifiers, which are like magic black boxes. For the sake of argument, let's say they are little magic amplifiers which amplify in the reverse direction, and they are very, very sensitive to reflection, because reflection is like shining a flashlight in your eyes; if we could get that down to a minimum, that would be great. So we had a requirement of minus 42 dB reflection, which is hard to achieve. We also required at most 0.25 dB per kilometre of attenuation. This is also a completely new thing ‑‑ no one had ever asked for this before ‑‑ and the water peak is a known defect in fibre manufactured, I don't know, in the early '90s, which makes Raman amplifiers not very effective. To be able to see these kinds of things you need a special instrument which not many people have, so we bought it for them and they had to go out and measure all the fibre.
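
By way of a worked example, here is a minimal sketch in Python (with invented span numbers; this is not SUNET's tooling) of checking a measured span against those two limits: at most 0.25 dB per kilometre of loss, and reflection below minus 42 dB at every connector.

```python
# A minimal sketch (invented numbers, not SUNET's tooling) of checking a
# measured fibre span against the two tender limits mentioned above.
ATTENUATION_LIMIT_DB_PER_KM = 0.25  # maximum allowed fibre loss per km
REFLECTION_LIMIT_DB = -42.0         # maximum allowed connector reflection

def span_ok(length_km, measured_loss_db, connector_reflections_db):
    """True if one span meets both the attenuation and reflection limits."""
    loss_ok = measured_loss_db <= length_km * ATTENUATION_LIMIT_DB_PER_KM
    # Reflection is a negative dB figure: -50 dB reflects less than -42 dB.
    reflection_ok = all(r <= REFLECTION_LIMIT_DB
                        for r in connector_reflections_db)
    return loss_ok and reflection_ok

# An 80 km span measured at 19.5 dB total loss with two clean connectors:
print(span_ok(80, 19.5, [-48.2, -44.7]))  # True: 19.5 <= 20.0, both < -42
```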

So, we also required short pulse and long pulse measurements on the fibre ‑‑ this is so we can see macro and micro bends in the very first kilometres of fibre ‑‑ and we required all connectors to pass the standard test IEC 61300, which has built‑in pass/fail requirements. To actually get this through and not have the vendors cheat us, we required a picture of every single connector, and that's about 9,500 pictures, so that's a lot of pictures that the field engineers had to take and send to us.

And we also kind of required equal spacing between the sites.

So, this is a typical OTDR measurement ‑‑ nothing special there, it looks pretty okay, nothing wrong. The same fibre with a short pulse looks like this. What we are actually seeing here is four splices, and we are also seeing a lot of reflection; the height of these peaks is how much reflection we see. So, this is a case where we will probably not be able to start a Raman amplifier, and that is not good, because our optical budget assumes we can actually use these amplifiers to the full extent.

This is also not very common, but I think that anyone who wants to build a network that can do more than 100 gig per wave will definitely need to be able to do this. So it could be good to train your engineers, or whoever is doing this for you, to always do both.

There was also another thing we got going, and that was IP over DWDM. In a regular network you typically have a router with an ethernet interface and a DWDM box which then converts it into whatever signal you need ‑‑ nowadays it's OTN, so it can be wrapped. I mean, why do we need that box then? There were a few vendors that wanted to do this with us; now all of them do ‑‑ everyone that has equipment in this space. The point is essentially to take away a lot of unnecessary equipment.

It might not always work in every business case, but it's really good from a philosophical point of view to do it like this. If the main customer of your network is yourself, it makes a lot of sense.

So, we have a bunch of equipment ‑‑ not very interesting, I'll go through this quickly. We have a whole bunch of big and small routers; in our case it's Juniper. Gridless means we can run wavelengths of any size, so we can safely deploy 1 terabit wavelengths whenever they are available.

So, the optical design. This is essentially Sweden as it looks: we have 31 drop ROADMs in the cities and 4 pure ones. We can drop any wavelength, in any size, from any direction, from anywhere. And since we do run these DWDM transponders, they can retune themselves to get onto a new path.

So, what we typically do is build two networks on top of one network. We have one network, which is currently the one we run, and we have a full fallback plan as well, so if a fibre goes down we have a secondary path for those transponders to go on; they just need to retune the wavelength. Since it's gridless, we don't need to send engineers to reconnect anything to get capacity up again.

So, the Ramans ‑‑ please check them out on Wikipedia. Quite complicated, quite exciting technology. As I said, everyone that wants to build networks like this is going to need to buy a lot of these, maybe at every single span you own, so that's going to be a lot of training for field engineers to actually be able to use them properly ‑‑ because if you paid for them, you kind of want them to fully function.

If a fibre connector looks like the one on the left, it will not work; if it looks like the one on the right, it might work, depending on how every connector after that looks. When we built the network, I think every single connector looked like the one on the left, so every single connector had to be re‑polished and a new picture sent back to us before we approved it. Also, we bought the optical network as one thing, which means that we wouldn't pay unless we accepted the whole network as a whole. So it was either fix them all or don't get paid.

So, router design ‑‑ I'm going to jump into this picture instead. We also, of course, wanted to save money, as everyone does. We didn't want to do the classical design with two P routers per city and the CPEs connected to them; we are just doing one core router per POP, and we use DWDM to get to the other cities. So we could cut the number of core routers in half, which is quite good. And you also get geographical redundancy with this, because you are actually connected to core routers in two separate cities; if something takes out one city, you might be okay anyway.



People are sleepy from yesterday, including me, and coffee hasn't really kicked in yet, so let's finish this in a fun way.

So this is a typical site. A lot of the fibre is coming from a power grid company, and a data centre for them is one of these containers in a power grid substation, so most of our POPs outside the big cities look like this; the inner city ones look like this. We don't really need these bunkers any more ‑‑ or at least someone thought we didn't need the bunkers any more ‑‑ so we just built data centres in the bunkers; maybe we'll build new bunkers for these computers.

Everything of course has a backup generator; these are the typical ones that sit outside in the countryside, and of course on the bigger sites we have bigger ones. On these power grid sites, we had to make a few adjustments. These are quite narrow sites, and in our case we bought Juniper MX960s for them, which need a lot of headroom in the back to be able to get the power supplies out, because you pull the power supplies out from the back. So you first need a router‑sized space in the front for all the cabling and one in the back to get the power supplies out. We didn't really think about that, so we had to make something smart. What we did is put the racks and rails on the floor and put them on sliding bearings, so that we didn't have to place the racks in the middle of the site ‑‑ because then no one else could get into the site, which was not acceptable. We are using these kinds of cable chains for the power and fibre to be able to slide the rack back and forth. And the racks were actually quite good, because you can slide these racks with just your fingers; fully loaded, two guys can slide them back and forth, no problem. So that was a solution to a very complicated problem, when it turned out we had way too big routers for 30 of our sites. So, that was nice, that we could get this done.

And also, we are very short on people; we try not to have too many unnecessary people. So, the solution to not having a lot of people around to configure routers was to buy these console servers, which use the 4G network for access. With the mobile ISP we made an agreement for 40 or so SIM cards that have their own separate VPN, so they are not getting public Internet, just a VPN. Then they route it back to us over a dedicated 100 megabit connection in our office. And it's 4G, so it's actually pretty quick, no problem. On most of these sites we can upload and download at about 45 megabits, and Junos has grown to about 4 gigs nowadays, so that's not very fun to upload over X25. That's why we also have ethernet to the boxes, which is very handy, and it also means that we could order every single box unconfigured, straight from the factory, just put it on the site, and we take it from Stockholm.

And also, of course, if we lose a site, these will typically still have access if we're lucky; hopefully a neighbouring tower will have 4G coverage to this box. So we have external antennas on the sites as well.

The management of our network: waves, for example, are set up using GMPLS inside the ADVA network. The ADVA network does not yet support the Juniper interfaces; we hopefully will be there soon. Of course, I mean, we use NETCONF to support everything, and we are at a level where about 90% of the configuration is owned by the orchestration tool, not by a human. We want to get to this level on the DWDM side as well; right now that is a lot of manual provisioning.
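
To illustrate what orchestration-owned configuration can look like, here is a minimal sketch (the hostname, credentials and interface description are invented; this is not SUNET's actual tool) using the Python ncclient library to push a change to a Junos router over NETCONF.

```python
# A minimal sketch (hostname, credentials and the interface description
# are invented; this is not SUNET's actual orchestration tool) of pushing
# a change to a Junos router over NETCONF using the ncclient library.
from ncclient import manager

CONFIG = """
<config>
  <configuration>
    <interfaces>
      <interface>
        <name>et-0/0/0</name>
        <description>CORE: owned by orchestration, do not edit by hand</description>
      </interface>
    </interfaces>
  </configuration>
</config>
"""

with manager.connect(host="router1.example.net", port=830,
                     username="automation", password="secret",
                     hostkey_verify=False,
                     device_params={"name": "junos"}) as m:
    m.edit_config(target="candidate", config=CONFIG)  # stage the change
    m.commit()                                        # activate it
```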

I also promised ‑‑ I always do war stories; I kind of see these as giving back to the community. What did we actually have problems with?

So, on the fibre side, the good thing ‑‑ since we spent, you know, maybe 2,000 man hours getting the actual fibre network good from the beginning ‑‑ was that we have been able to test 100 gig coherent optics at about 4,000 kilometre spans, which is quite far. We did about 2,300 kilometres on 200 gig, and we had a very brief laboratory test with 400 gig as well, at about 600 km. It's a test bed system to try these things out. We had zero errors, which was very good, because the last vendor we had, had a few bugs. And of course, we think that this is the best production optical network in the world, but we still think it's pretty crappy, because we know there are a lot of small adjustments we can do to make it even better.

The bad things on the fibre side: we had at least three Raman amplifiers that were destroyed because of dirty fibres, and these run at maybe €12,000 each, so it's quite unnecessary ‑‑ someone didn't clean the connector, and the grease and residue kind of welded the connectors together. That's not good. We had about 94 fault tickets open on the 132 backbone spans, which is essentially all the fibre. It could all be fixed with cleaning, new patches and resplicing; we also had a few near‑death experiences. And the optical vendors are not at the very forefront of developing new functions; hopefully that will get better if a lot of people complain to them.

So, this was when I actually arrived at a site after it was built. This is 48 volt DC power sitting completely naked in a rack, and my fingers were 3 centimetres from touching these panels, which would have been very bad. So I carefully backed out of the site, reported the whole thing, and they had to come fix it. Because there is live power on this, you don't want to touch it.

On the other part of the war stories: the DWDM MICs, which were completely new for us and everyone ‑‑ we had zero issues. Optical performance on the gear we bought has typically been above spec. It helps to have a good optical network, but they have been performing really well in terms of optical quality, which was really good.

We are running logical systems ‑‑ I didn't have time to show this, but we run logical systems on every router, so we can overlay different types of ISPs on top of our routers ‑‑ and we have had zero tickets on that, which surprises the vendor as much as us. And the 4G has had 99.4% uptime, which has been really good and helped in the provisioning process.

On the bad side: the MPC3, which is the motherboard for the DWDM MIC ‑‑ we had about 10% DOA, and we don't know why. Neither does Juniper. We had to run Junos 15.1F, which has not been treating us very well: we had 26 PRs that were priority 1, and priority 1 means customer affecting. And we had about 174 JTAC cases within a year, excluding RMAs, on different types of things ‑‑ for example, the text is backwards when you do show IP route extensive. I was like, what?

We also had to disable, for example, BGP PIC and FlowSpec to get a stable network, which is also not very good for us. We also had some congestion problems when driving around inspecting these sites. This was me driving around further up north trying to get through, but these reindeer, they don't want to move; even if you hit them with the car, they don't move. We had some extra delay on that one. I'm out of time, so I can't show more slides. This is like an hour‑long presentation; in reality there are 100 slides. But I'll be around here all day and also a bit tomorrow if anyone has questions; if you want to build a new network, come talk to me. You can find my blog there at the bottom. And I think we have at least one minute for a question if someone has one.

BRIAN NISBET: So, thank you very much. We're rolling out a very similar network at the moment, and the software bugs in Junos 15.1 are still there, you didn't fix them all, we were hoping you'd take all that pain.

FREDRIK KORSBACK: We thought that as well. Let's be at the forefront of this.

BRIAN NISBET: That forefront is still going.

DANIEL KARRENBERG: Thank you very much. This is the kind of presentation I'd like to see at RIPE meetings. It reminds me of the good old times.

(Applause)

So, you mentioned that you have this 4G out‑of‑band access stuff and you actually have even an uptime figure for it. Can you maybe spend 30 seconds telling us how that's implemented, how that's automated, how is it ‑‑

FREDRIK KORSBACK: Sure. So, this is just a little small box from a vendor called Open Gear. They have two SIM card slots, so you can have a redundant 4G connection to them as well; we use a single one. And how this was implemented ‑‑ because we wondered how to make this actually secure. We spoke with the mobile vendor, and since IoT is apparently the new next thing ‑‑ I thought it was the old thing, but apparently it's the next new thing, right ‑‑ they actually have this already, because every gas station, every gas station truck, every Tesla refilling station has a SIM card in it. So the product was already there: you get a VPN on top of your SIM, you get a private network, or you can choose to delegate your own IP addresses over this network. So everything comes in over a private connection ‑‑ but we don't trust the mobile network, of course; I mean, they are mobile networks, they can do whatever they want ‑‑ so all these boxes then start OpenVPN on top of that to one of our servers. And the automation here is not that complicated; because these are Linux devices, they can be graphed with whatever you typically graph any server or device with. They are also routers, so we can route to every single ethernet management port, and they act as terminal servers.
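
As a minimal sketch of that "graph them like any server" point (the hostnames and device count are invented), polling such out-of-band boxes over the VPN could look like this:

```python
# A minimal sketch (hostnames invented) of polling the out-of-band console
# servers across the VPN like any other Linux device, e.g. to derive an
# availability figure such as the 99.4% mentioned earlier.
import subprocess

CONSOLE_SERVERS = ["oob-site%02d.mgmt.example.net" % i for i in range(1, 41)]

def reachable(host):
    """Send one ICMP echo with a 2-second timeout; True if it answers."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

up = sum(reachable(h) for h in CONSOLE_SERVERS)
print("%d/%d out-of-band boxes reachable (%.1f%%)"
      % (up, len(CONSOLE_SERVERS), 100.0 * up / len(CONSOLE_SERVERS)))
```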

So what we did when we deployed the network is that, before the router arrived, this was typically already installed. Of course, we sent it to the site vendor, because when the site vendor went there and made sure the UPS was there, that the cable was hanging from the roof and everything, they also got this; it is a very small device, so they just put it on top of the rack and made sure it called home. We can populate it with new firmware and new Junos, so it was already there when the router arrived ‑‑ because the guys that installed the routers were just regular electricians; they didn't know about the router, does it go back or front or whatever, they were just simple electricians. But it was very easy for us to take it from there.

BRIAN NISBET: Any other questions? Thank you very much for your talk, four times faster than expected.

So, in your programme you will have the obliquely named Lightning Talk 1. It's not the Secret Working Group; that may or may not come later. It is, in fact, a presentation from AMS‑IX on the route servers ‑‑ sorry, an AMS‑IX introspective.

ARIS LAMBRIANIDIS: So, he made a nice and detailed presentation; he made a deep dive. This is a very high‑level overview, and I apologise for that; I just had to draft it basically in the early morning hours of Monday. So, this is basically what I have come up with.

So, hi everybody. If you say my surname out loud in front of a mirror three times backwards, bad things will happen. With this talk I wanted to communicate the challenges of introducing features on the route servers ‑‑ both on a technical and on a non‑technical level, and I would arguably say it's been more on the non‑technical side of things ‑‑ what the current status of those things is, and what our experience driving this forward was, in the hopes that it might be useful information to anyone dealing with similar challenges.

I will do a quick recap of all the steps that brought us here in the first place. But I do have to be clear that, again, this is by no means any sort of deep dive. It's not a comprehensive list, and the process is not being described definitively. This is the AMS‑IX experience; others may have other experiences. And it's a partial overview.

So, why deploy a route server in the first place? This is pretty basic: the idea is to facilitate massive multilateral peering for customers. So far so good; technical issues are mostly what's happening at this point. After we do that ‑‑ I think I skipped a slide there ‑‑ after we deploy the route server, we also have to consider policies for peers. There are certain ways you go about that. Most of these, I think, are in wide use, but in any case they are also described in the RFC. We went with number 1 and number 2, basically. It mostly works, so that's fine for us. But after that, we also had to consider filtering things, because the more you try to aggregate prefixes into a single stream, obviously you are going to accumulate cruft along the way.

The problem is that you can rely on BCPs and RFCs, and that's fine ‑‑ again, it's mostly on the technical side here, and we went ahead and did that initially. But then the next problem is that this isn't enough, obviously. Security is a process: adversaries are obviously going to catch up, and you have to go to the next step.

For the prefixes going through: people, for example, don't really like it if they see their prefix being advertised from another ASN. The answer there is basically to filter using either IRR or RPKI data. At this level, though, this really stops being a technical problem, at least in our case. There is much less unanimity between IXPs, but also internally, as to whether this makes sense at all, whether it's applicable at the IXP level or not, and whether it's proportional to the risk it might introduce.
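
For readers unfamiliar with the mechanics, here is a minimal sketch (with an invented ROA table) of RFC 6811-style route origin validation, the check behind the RPKI filtering being discussed:

```python
# A minimal sketch (ROA table invented) of RFC 6811-style route origin
# validation: a route is valid if a covering ROA authorises its origin AS
# and prefix length, invalid if covered but unauthorised, and unknown
# (NotFound) if no ROA covers it at all.
from ipaddress import ip_network

ROAS = [  # (prefix, max_length, authorised origin ASN)
    (ip_network("192.0.2.0/24"), 24, 64496),
    (ip_network("198.51.100.0/22"), 24, 64497),
]

def origin_validate(prefix, origin_asn):
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            if origin_asn == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(origin_validate("198.51.100.0/23", 64497))  # valid
print(origin_validate("192.0.2.0/24", 64511))     # invalid: wrong origin
print(origin_validate("203.0.113.0/24", 64496))   # unknown: no covering ROA
```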

So, as with any good share of technical problems transposing themselves to the non‑technical side of things, this is not an easy fix. At the very least, it dawned on us that we could bypass the issue: we could just introduce another set of route servers doing the nifty stuff ‑‑ yeah, that's four different ways of filtering. We resorted to providing basically tagging of that information by default, so no information was harmed in any way; just tag it appropriately. And we call that set the Falcon route servers.

Some of us did not actually believe that much that the uptake would be there. Fortunately, it was. But still, we were not in the clear. People were asking us: okay, this works for us, why not just backport it to the original route servers, now dubbed legacy? Again, we were faced with the same challenges there on the political side of things. This time around, we felt more confident, as the technology was deployed, to ask for customer feedback. We got it. Customers mostly voiced their opinion and argued not only to backport those features but also to apply IRR and RPKI as a default filtering mechanism; that was a discussion ongoing on the tech mailing list, I think in early August this year.

And that's what we did last Friday ‑‑ hence the scarcity of details. We were concerned that certain legitimate traffic would be dropped along the way; I'm sure some of it was, but it definitely wasn't visible in the total traffic graphs, although the number of prefixes was reduced to about 41% of the original.

So, I want to show some basic information about this migration. With regards to the legacy route servers, the interesting question would be: how many peers would switch the default mechanism back from IRR and RPKI filtering to simple tagging? That is, to not filter any prefixes, which is similar to the previous situation with the Falcon route servers. So far about 75 IPv4 peers did that, and 56 IPv6 peers have done it. So, I think that's good; that's about 8 to 10% of the customer base. On the Falcon peers, the situation is mixed, and it appears that the customers trust IRR more than they trust RPKI, which is interesting, I think.

When it comes to the prefixes themselves on the legacy route servers, most of them are RPKI unknowns, as expected, I guess, although the more interesting point here is that there is a good amount of RPKI invalids as well. I'm not sure if this is due to misconfiguration or hijacking most of the time, but based on some earlier data that we had about a year ago, it was mostly misconfiguration.

On the IRR side of things it's a more balanced situation, but peers definitely could do better in maintaining their IRR objects.

On the Falcon route servers that's not the case, which kind of makes sense, since those peers were more aware of having proper IRR objects, and even RPKI has a better uptake.

So, in conclusion, it seems that the adoption of the additional filtering methods is coming along nicely: no discernible traffic loss, and, like I said, about 8 to 10% of peers opted out of the default filtering scheme, which I think is a good thing. Whether this trend will continue in the future, of course, remains to be seen. But my personal assumption is that, since there is no business case for traffic loss or anything like that, the filtering mechanisms will more or less stay the same.

That's it. At this point, I finish my presentation and I would like to open the floor for any questions.

BENNO OVEREINDER: Thank you Aris.

AUDIENCE SPEAKER: Erik Bais. Aris, thanks for the update. As a member of AMS‑IX, it was interesting to see how the process went. I was shocked by the drop of prefixes from 165,000 to 68,000, without noticeable traffic loss on the platform itself. We as a member have seen some peers where more specifics were dropped due to incorrect ROAs: the more specifics were denied by the route server, and that traffic now came in on the more specifics via transit, while the actual covering prefix did have a valid ROA. So that one actually came over the route server as valid, but the more specifics still came over transit, so the traffic came over transit as well. We fixed that with that particular peer. I would like to urge AMS‑IX to start e‑mailing or notifying customers if they have invalids, because then they will actually become aware why things may not work as they intended; most of them are unaware. So, having a dashboard there would really, really help. Overall, kudos, I really like it.

ARIS LAMBRIANIDIS: Thank you, Erik, for the comments. As usual, they have been very helpful. Maybe we are smart people or stupid people, but if we don't get feedback like that, we can't understand customers' needs. We're not ISPs, we're an IXP, so the more feedback you guys provide, the better. I want to say that this is on the to‑do list, definitely. It's a feature request, so I'm expecting it's going to happen in two or three months' time ‑‑ personal observation, maybe that's not the case, but I'm hoping.

And, yeah, like you said, maybe some traffic was momentarily lost, but on the aggregated traffic graphs that we saw, comparing the Friday this happened to the previous Friday, we didn't observe anything. So that was good.

BENNO OVEREINDER: Thank you. We have to wrap up. Thank you again, Aris, for introducing this. Thank you.

(Applause)

So, the next speaker is Christian from Akamai; he will give an overview of the new innovations, or the new changes, in the core of the Akamai network.

CHRISTIAN KAUFMANN: Hi. I apparently work for Akamai, the friendly neighbourhood CDN. I gave the same presentation, with different jokes, I promise, two weeks ago at NANOG. Did anyone see it? Okay. There is 10, 20% recycling.

Okay, let's go. At least the jokes are new and I actually promise to point them out for people who don't get them, because German jokes are not the easiest, so I'll make a highlight when you have to laugh.

I guess you know Akamai, right. We have a highly distributed platform so we are a CDN, we are distributed in 130 countries, thousands of clusters all around the world.

What is important, and what we told all the people all the time, is that the different clusters are not connected. They are islands. Every cluster, which is basically a bunch of servers with a router in front of it, was an island in itself; they were connected to the Internet and served the eyeballs ‑‑ the purpose of the whole exercise ‑‑ but they didn't really talk to each other or weren't connected with a network or whatever.

They are usually built for a particular purpose, so they come in different sizes; there is no particular standard. If I build one on an IX it looks different from a net cluster, and so on.

We, of course, need to fill the caches, and for that we use the Internet. So in a way we always said the Internet is our backbone.

Over time, we have seen a shift in traffic. What happened is that the clusters started to talk more and more to each other. We call that midgress: you basically have either a cache hierarchy or other reasons the clusters have to talk to each other, and that amount of traffic was growing over time. Now, as you can imagine, if you pay transit, you pay transit twice, because the two clusters talk to each other; and in worst case scenarios we actually had cases where we paid transit five times for a bit that was delivered, because it bounced between various clusters ‑‑ for good reasons, but nevertheless.

Also, when you use the Internet, you have the Internet as the lowest common denominator, so whatever problem the Internet has on a particular day ‑‑ tromboning, filtering, depeering between big networks ‑‑ this all affects you, and you can't do much about it because you are using it.

Also, when you want to have an influence on the latency or the MTU, IPv6, all of that, you had to take what the ISP gave you at a certain time. And we had cases, which are easily understandable, where you actually had two different clusters talking to each other ‑‑ they were even in the same data centre, in the next rack ‑‑ but they used different transit providers which didn't peer in the same metro, probably not even in the same country, so the traffic trombones all around the place.

So what did we do to solve that? Finally, we did what everyone else did ‑‑ and Geoff figured that out and gave presentations about it: we built a backbone ourselves. For a long time people asked, why didn't you do that earlier? Why didn't you build the network and connect everything together? And the answer was relatively easy. In comparison to the people who did it before us ‑‑ Facebook, Microsoft, Google and so on ‑‑ they have a more centralised structure than we have. They have what I would call a computing platform, a Cloud, where a lot of data is stored, and then they distribute it to their CDN, their edge, however they do that. In our case, things were more equal, right: the customer for which we deliver is our origin, is where our data comes from, and that was way less centralised. So, connecting a very distributed platform, especially if the parts don't talk to each other, didn't really make much sense.

So, as the midgress traffic grew and as we had use for it, we finally made a business case fly, where we started to build and connect a certain number of cities together. And the idea is that, where it makes technical sense and is commercially viable, we will put more and more traffic on it and actually use it.

This is what we did. What we are not doing is selling IP transit, layer 3 VPNs, layer 2, whatever; the idea is not to compete with our partners or peers. It's not to become a tier 1. Now the joke comes: we also will not sell voice minutes or fractional SD1s or whatever is fancy in a particular region. So this is for our internal purpose, pretty much as the others before.

If you send the traffic, as I said, multiple times over transit ‑‑ at least twice, up to five times ‑‑ and now you transport it yourself and you have enough of it, then I guess it is easy to see that you start saving money. So this is also, to a certain point, a cost saving exercise. But it also gives you performance gains, because, as I mentioned before, the traffic no longer trombones between different networks which might be having a good or a bad day; now you have it under your control. Which, I guess, is usually one of the big reasons why people build their own network.

There are a couple of benefits in the future, not just for us but actually for all the people who will peer with that network, or the ones who have clusters in their network. Before, the cache fill for the clusters you had in your own network ‑‑ if you have an eyeball network ‑‑ was coming via transit. In the future, to a certain degree, we can now send the cache fill over peering, if you peer with us, to the cluster. It still makes a lot of sense to have the cluster, so that part hasn't changed, but now we can actually do the cache fill over this and make your life easier and certainly cheaper.

For the people who peered with us in the past, usually we had multiple sessions, right. If it was more the inbound path to us, you actually had to transport the traffic to the particular peering site, as usual. Now, as we have a network and we can peer in various locations, we can actually pick up the traffic closer to the source, closer to you, and transport it ourselves over that network.

This is how it looks. Technically speaking, we actually have three backbones, if you want to call it like that. Right now they are in three different regions; you also see basically where we had demand for it. We have the big nine metro areas in the US and the four big ones in Europe; in Asia, currently the state is actually just a link between two cities. All of these links are 100 gig links, so we didn't even bother starting with anything smaller. We looked for routers, especially for the future, which have a good 100 gig density, so right now you find basically mainly Juniper, and a little bit of Cisco, in there. And as time goes on, the idea is to make the network bigger. So, as I said, we started ‑‑ and this part is actually finished ‑‑ with the midgress traffic; it's connected, up and running, and serves traffic between our sites. In 2018 the idea is to add the origin fetches, which is basically the ingress, the cache fills, and peering to that network as well. And the moment we have enough traffic between Europe and the US, we will add connectivity between them, but also add more cities. So I guess this is something where you see Phase 1 now, and it will become bigger over time.

And with that, I think there is time for one or two minutes for questions.

BENNO OVEREINDER: Thank you Christian. Any questions?

AUDIENCE SPEAKER: Salem, from Lebanon. Thanks CK. I am going to ask the same question I asked Geoff, and I have talked to other people: what about economically non‑viable regions that it doesn't make sense to extend your network to? Can we hope to get a coalition of, like, the Akamais, Google, Facebook, to try to extend their networks to these regions on a not‑for‑profit basis, like corporate social responsibility, for development purposes?

CHRISTIAN KAUFMANN: I think we need to differentiate between two fundamentally different things. One part is how we serve traffic to the eyeballs, to the end users. This is how you, when you sit in a developing country or, well, in general, get the traffic. This you get from a cluster; that could be at an Internet exchange, all that kind of stuff. That part hasn't changed. To be honest, we go to many countries; we are in 130 countries. Africa is a good example, we are in 25 of them. So we actually, I believe, do our part there to distribute the clusters so that the end user gets the traffic as close as possible. The backbone is more an internal initiative, right; it has the performance and the cost savings part. So, if it doesn't make sense to go to a particular country because there are no cost savings, then it doesn't really make sense to go there. But there is actually no disadvantage for an emerging region if we're not going to them, because we still serve them locally from the cluster; we just don't transport it ourselves to that region, or we don't have the cost benefit, but the performance and the reach are the same. So, this does not affect our end user delivery.

BENNO OVEREINDER: Thank you.

AUDIENCE SPEAKER: Jim Reid. I'm on the IX Scotland steering committee. We have had some success in Scotland with actually getting a bunch of content delivery networks to come to the Internet Exchange ‑‑ the Scottish Internet Exchange is actually limited in traffic, because pretty much everything in the UK is clustered around the London Internet Exchange. So we have had some success with that, and with the content delivery networks collaborating with IX Scotland to also encourage a number of eyeball networks to come, and I'd be happy to have a quiet offline conversation about that.

(Applause)

BENNO OVEREINDER: So, the final presentation for the Plenary is from Sjoerd Oostdijck, the RIPE NCC technical report.

SJOERD OOSTDIJCK: So. Good morning everybody. I think we can keep this quite short because it's been a very smooth meeting from a technical perspective.

We have only done a couple of new things this meeting. One of the things is we have replaced our big Dell tower servers with a couple of new servers, which are the tiny little things below the monitor over there ‑‑ there are two of them, actually ‑‑ and they run all the virtual machines that we need for the meeting.

And the other new thing, which I hope everybody has used ‑‑ and I'll give you some statistics on it later ‑‑ is the RIPE networking app, which of course everybody has on their phone and is going to keep there until the next meeting, right?

The third thing is that the ladies from stenography used to have, like, a magic box which Ronan made, and nobody really knew what it did ‑‑

STENOGRAPHER: Neither did we.

SJOERD OOSTDIJCK: The one thing it did require was a direct cable from wherever they are sitting to the ops room, but that's now been digitised and everything just goes over the regular network. So that saves us running a really long cable every time. I don't know how we did it, but going from here down to the ops room ‑‑ if you have seen it this week, it's quite far ‑‑ somehow it always worked.


STENOGRAPHER: R.I.P. Ronan's box.

SJOERD OOSTDIJCK: The biggest issue, David and I spent a couple of days in our inventory room figuring out harmonised customs codes for customs to get the things through customs in Europe. And for those of you that don't know, it like codes that start at the very beginning, you go ‑‑ it goes something like meet, nuclear products, metal products and then you drill down all the way down to something like an H trended VGA cable without, with or without plugs on the end, and it's a code that long and you have to figure it out for every single thing that we shipped. So that took a while.

Then the next thing is, of course, DU, who provided us with excellent IPv6 ‑‑ at least as far as I know, because we haven't heard any complaints ‑‑ but I think it was the first time for them giving IPv6 to an end customer. They had it in their core, but it took them a while and they really came through. I think that was a first for them, so that's always nice.

What wasn't so nice is this firewall thing on the hotel network ‑‑ maybe some things even worked better on the meeting network ‑‑ but the only thing I seemed to be able to get to work was Skype, and WhatsApp voice was one‑way, and it depended on who was calling whom; sometimes it worked and sometimes it didn't.

And the last issue we had was unplugging for vacuum cleaners, because I think we lost power about four or five times this week ‑‑ always at a time when nobody seemed to notice, so that was all right ‑‑ but the cleaners got to the power distribution, yanked one plug out, plugged their vacuum in, and a desk was offline. So ‑‑

(But at least it was clean)

Now, the part with the graphs. For the upcoming meetings, we're going to try and always provide similar graphs so you can go back in time and compare them. I didn't have time for this meeting, so they are more or less the same statistics, but the graphs might look different if you go back in time.

This is the data that we moved. It's actually not that much ‑‑ 60 Mbit‑ish ‑‑ so you guys should download more or something. I don't know. Bigger encryption on your tunnels.

These are the DHCP leases ‑‑ not that many; we have a hell of a lot more in the pool. But one thing we did manage to fix this meeting, nobody knows how, is that normally there is a bump over there, which is not there now. I don't know why. But, you know, I think it's just gone forever now, right?

Viewers for the web‑stream seem to have been fewer this time; I would have expected more. But perhaps everybody that wanted to see the meeting came to the meeting, I don't know. Last time we hit up to 100; only 80 now.

Like I said before, the RIPE Networking app: you can see we have had 250 people register on the app. What I did hear from the web team was that we only got one response on the survey. So, if you guys wouldn't mind, go to the survey and leave your feedback, and we'll make it better for next time.

Of course, it wouldn't have been possible without the tech team. We had extra help from Saloume this time around, who helped us with the setup. She was a great help. And of course the usual team, who you have all seen before ‑‑ you can't miss them.

So, that's that. Any questions?

(Applause)

BENNO OVEREINDER: Thank you very much Sjoerd. And the whole team of course.

(Applause)

Thank you. So, with this presentation, the technical report of the NCC, we close the Plenary programme.

So, I hand over the mic and stage to Hans Petter to close the meeting as Chair of the RIPE community.

HANS PETTER HOLEN: Thank you very much, Benno, and thank you very much for an excellent programme this whole week.

So, we have had 483 attendees checked in this time. That's really good. Some of you are still here, I can see, not exhausted after yesterday's event. Some look a bit tired, I can see.

It's not the biggest meeting yet, but we are still bigger than we were last time we were here. I trust that this trend isn't going to last; I hope that we will be able to turn the trend around.

Looking at who is here, it's actually 36% newcomers, which I think is really, really good. Part of the reason for going around the service region is that we attract new people to the meeting as well.

We will have a survey, and this will be the shortest meeting survey ever, so I trust that all of you will actually answer it. There is a prize to win: Amazon gift vouchers or a Go Fund Me voucher. The Meeting Team really wants feedback, so please fill in this survey.

So, looking at who attends the meetings. We see that in addition to our service region, we have visitors from basically all over the world. What I think was interesting to see this time was to zoom in on the Middle East region and see that we have quite some visitors, participants, from Iran, Saudi Arabia, of course the Emirates, Oman and so on. So I think it's been really successful to come here and engage with the local community in this region. So with that respect, I think this location was successful.

(Applause)

Types of organisations. Not everybody here is commercial. Slightly less than half of you. We still have a good part from education, from government, from RIRs, and others.

I have been asked a couple of times in the corridors here: what's happening with the Chair replacement procedure? So, maybe somebody wants me to leave, I'm not quite sure. There was a discussion on the mailing list up until the last RIPE meeting, and I have spent some time with Mirjam to actually look through the suggestions and the draft proposals that were discussed. There are basically three different camps: somebody wants public elections, somebody wants me to appoint the successor, and somebody wants to have a NomCom‑like structure. So I will, with the help of Mirjam, write up these proposals and circulate them on the list again after this meeting, and then maybe we can figure out which direction we're going to go; and once we have the direction, we can sort out the details. So that's the next step on that.

Nigel has already announced the appointment of a new participant to the NRO NC; that's an appointment by the NRO Executive Board. We have three representatives on the NRO NC acting as the ASO AC ‑‑ just to confuse you with as many abbreviations as possible. Herve Clement is appointed to replace Wilfried Woeber, who has served there since the beginning, since ICANN was founded in '97, '98, '99, sometime ‑‑ so there is young blood, and he will serve there together with Nurani and Filiz who, as you know, are elected by the community. So congratulations, Herve.

So, when we started the meeting, this was the selection of Working Group Chairs that had prepared the Working Groups and chaired them through this meeting. There have been some changes. First of all, Brian Trammell has replaced Christian Kaufmann ‑‑ I think Brian left directly from the social yesterday. In the Routing Working Group, Ignas Bagdonas has replaced Joao Damas; this is where it got really confusing for me, because Joao turned up at the Working Group Chairs lunch as the new co‑chair of the DNS Working Group, replacing Jaap. In the Database Working Group, David Hilario has stepped down, so they are looking for a new co‑chair; if you are interested in co‑chairing that Working Group ‑‑ they still have two chairs, so it's not an immediate problem ‑‑ you can volunteer yourself there. And I'll come back to the IoT Working Group a couple of slides later, because we don't have an IoT Working Group yet, but we will very soon.

The Working Group Chairs also updated the document describing the Working Group Chair role. It's document RIPE‑542, I think from 2011, and there have been quite some changes since that time.

The PC has been introduced, so the Working Group Chairs don't have to chair Plenary sessions any more, and the document was updated to reflect the world as it is right now. It also makes it clear that the final approval of charters and so on is with the RIPE Chair, so there is a clear‑cut decision at the end of these procedures. It's a really short document, one page or one‑and‑a‑half pages, so if you are interested, please go and read it. There was consensus among the Working Group Chairs, and then it was approved by me as the RIPE Chair, so we have a new document that will be published shortly.

IoT Working Group. There was an IoT session earlier this week ‑‑ great attendance, great presentations. We have had a BoF in the past, and we had a meeting in Manchester. There seems to be great interest in the community. Jim has volunteered as Chair‑elect to set up this Working Group. There was consensus in the room that we want to go ahead and set up a Working Group, so the final thing remaining is the formal approval. I have seen no objections on the mailing list, and I see no people running towards the microphones, so, therefore...

(Applause)

We now have a new IoT Working Group with the following charter. So thank you very much to Marco, who is now running to the microphone to say something.

MARCO HOGEWONING: I would like to thank Jim Reid, especially, and a few other people in the community who helped do this and take it over. Leading the IoT efforts within the NCC, I am very much looking forward to working with Jim and the Working Group and continuing this effort. The list is too long, but I also want to give a shout out to Anna Wilson, who stepped up in Budapest and presented the proto‑charter that allowed for this session to take place. Thank you all, and good work with the Working Group.

HANS PETTER HOLEN: Thank you very much for that. Then, it's time to thank the RIPE Programme Committee. So, if the Programme Committee could come up on stage ‑‑ as usual, we have a small gift for you. As you all know, Benno is the Chair of the Programme Committee, and Leslie is the vice‑chair; she is unfortunately not here for this meeting. Khalid is the representative from the local host ‑‑ he actually happens to be the Chair of the MENOG PC as well ‑‑ and Osama is the MENOG representative. And then you see we have representatives from ENOG and SEE as well; the MENOG, ENOG and SEE representatives are appointed by their respective bodies according to their own procedures, and the others are appointed by this community. So, please thank them all for the great programme they put together for this meeting.

(Applause)

STENOGRAPHER: What's in the bag?

HANS PETTER HOLEN: So, I should say thank you to the outgoing PC members, Jelte, Alex and Mike, and welcome our two new members that were announced earlier today, Khalid and Dmitry. So give them a round of applause.

(Applause)

So, next one. This one is for Elise Gerich. I have also heard that Axel is here and wants to say a few words.

So, Elise, it's been a pleasure seeing you coming to the RIPE meeting. I have been...

(Applause and cheers)

So this is a time when words are maybe not necessary. Elise has been part of this community for a long, long time; I'm not sure how many years. She has recently been Vice‑President of ICANN and President of the PTI, and she managed IANA through a difficult transition, when everybody wanted to change IANA into something completely different, or keep it the same; everybody had different opinions on this. And you have always been calm, shown steady leadership and seen this through. I know that Axel is also dying to say a few words.

AXEL PAWLIK: I tried to look up when you joined ICANN and I can barely remember; it seems a long time ago, but it was only seven years. I remember I was standing there when you were announced, and I said, oh my God, it's you, I am very, very glad; and now it's you going, and I'm very, very sad. But you'll be around somehow.

ELISE GERICH: The RIPE community has always had this Secret Working Group. I did one rogue contribution this morning, but I made two, so I saved the second one for you all.

There once was a young Michigan Miss, who thought the Internet was just bliss.
And her pleasure in life was networking with RIPE.
And this she will sorely miss.

Thank you very much.

(Applause)

HANS PETTER HOLEN: So. This has been a long meeting in a row of many, many meetings. For the last ten years, we have had the pleasure of having some magic added to these meetings by our stenographers. They have learned I don't know how many acronyms. I think RIPE NCC staff have used their big data engines to figure out that it's past 6 million. So, a big hand to Mary, Aoife, and Anna from Doyle Court Reporters.

STENOGRAPHER: Thanks.

I look very busy. I'm probably cursing in my head.

HANS PETTER HOLEN: I will not ask you to come up on stage, because then you couldn't keep transcribing what we're saying. But there is a small token of appreciation for you afterwards. Please don't run away.

STENOGRAPHER: Thank you. It's probably a dictionary.

HANS PETTER HOLEN: We did something completely new this time. Next meeting, I think I'll ask for a screen at the back showing what the stenographers are writing, so we can see it. This happened to me at EuroDIG as well, when there was a Twitter feed behind me, which was even more interesting.

For the first time, we have had a Women in Tech lunch, because there have been discussions in the community on how we increase participation and increase diversity, not only for women, but in general. One of the first things that was proposed to us was to do a Women in Tech lunch, and it was supported by the Internet Society and Akamai, so thanks to those. It was a lunch open to everyone, and we had two presentations from young women in the region, Zeina and Maya, who shared their experiences. There was discussion afterwards, with other members of the community, both male and female, sharing their experiences of being new in this community and of working in this community over time. And every seat was filled; we even had to bring in more chairs. So this was a subject that was engaging, and we hope to continue this for RIPE 76 as well.

The RIPE Diversity Task Force ‑‑ well, it's not a task force yet, but it's proposed to be one. There have been a BoF and a couple of sessions and discussions on this. There was a presentation in the Plenary, there was a meeting of the group and there was a discussion meeting for the group, so it has really gathered a lot of input. So there is a proposal now to set up a task force for this with the following charter.

I have seen no strong objections to this and this is the charter that they have reached consensus on. So, with that, I declare consensus and we have a new Task Force working on diversity for the coming meetings.

(Applause)

Now, they actually decided not to have a Chair, so they will work as a collective. That will be an interesting new construction, and we look forward to that.

And the Task Force also welcomes new members, so if you are interested in this topic, please e‑mail the group and volunteer yourself to contribute.

So, prizes! This is what you have been waiting for, right!

Every time, we draw from the first registrations, and the winners have to be present in the Plenary to get the prize; this is why I have a long list here. So, Martina ‑‑ you are ready with the prizes, yes?

So, the first newcomer that registered: Kolarik Michal. You are the first prize winner. Welcome to the community.

And then we have the very first meeting registration, and it says Hans Petter Holen ‑‑ oh, that's me. I am automatically entered into the system, so I don't count; I don't get the prize. The second one is Paul Thornton.

(Applause)

You must have been sitting waiting since the last meeting to register.

Second ‑‑ or actually number 3, which is the second real registration ‑‑ Jordi Palet Martinez. If you are not here, you lose the prize, sorry about that.

So then I go on to the next one: Sergei Myasoedov.

(Applause)

So, this is all to encourage you to register as early as possible, so that the meeting staff can understand the logistics, and also to keep you here until the very last Plenary so that you can collect your prizes.

Then of course, this meeting would not have happened if it hadn't been for our local host. So, if I can welcome on stage Abdulrahman Almarzouqi from our local host, the TRA.

(Applause)

So thank you very much for being a really supportive local host and making sure that this meeting was really smooth. Thank you very much.

And then of course it always helps with sponsors, so thanks to DE‑CIX, VeriSign, HillCo, AMS‑IX, Twitch, NetFlix, DU, and Edge Connect. So thank you very much.

(Applause)

Now, as all of you know, I am from Norway ‑‑ or at least you know now ‑‑ and a very special day in Norway is the 17th of May, when we have a national holiday with celebrations. Sometimes, when you are grown up and your kids are big, you don't have an excuse to do the celebrations any more, and there is nothing to do on the 17th of May. So therefore, the next RIPE meeting will be between the 14th and 18th of May, and since that's not necessarily a pleasant time in Norway, we are going to France, to Marseilles. I hope to see you all in Marseilles between the 14th and 18th of May next year.

And that brings me to the end of my presentation. And now I see some people coming up here. I'm not sure what's happening.

STENOGRAPHER: Bye‑bye, going off line now.



HANS PETTER HOLEN: With those words from the Secret Working Group, I thank you very much for coming here, all of you; the meeting would definitely not have been the same without all of you participants. Meeting closed. Have a safe trip home, and see you all in Marseilles.



LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.