DNS Working Group

25th October 2017

At 2 p.m.:

SHANE KERR: Hello everyone. Shall we get started here? I became a Dutch citizen very recently, and as part of the exam, which has the sort of questions they give people who move to Holland, one of the things they make very clear is: we start on time. So as part 1 of my new citizenship, I am going to make sure we follow the rules. This is the DNS Working Group, RIPE 75. If you were looking for any other Working Group, you are in the right place anyway, stay here, we are going to have a good time talking about DNS. So, I didn't hear any bell but I see people coming in.

Welcome, glad you could join us. I would like to thank the scribe, Fergal and the RFC monitor, these are RIPE NCC support staff who do a really good job of helping us. And of course our ‑‑ I don't see a screen with our realtime text but I assume that will happen in a second. Thank you.

So, I will go quickly through the agenda, and if there are any points to add or remove, people can tell me, but I think it's pretty well worked out.

We have got a quick session about the Working Group Chairs. We have a quick review of the outstanding Working Group actions. We have got a number of presentations, and then finally just any other business that people want to discuss. Is there anything that anyone wants to add to the agenda? Or anything that we think we need to discuss? Going once, going twice. Okay. Great.

So the final bullet on our administrative section is the approval of the minutes from the previous RIPE meeting. I sent a mail out about that a couple of days ago; the RIPE NCC had those published months ago, I just never sent a mail, so I apologise.

Has anyone any comments or updates to those minutes? All right. We will consider those approved and I will be sure that we get those sent to the mailing list sooner this time, for this meeting.

Our next order of business is the Working Group Chairs. We have three co‑chairs for the DNS Working Group, and what happened is we had the same chairs for many, many years, and then all three of the co-chairs decided to step down, one per year, and bring in new Chairs. And so now the last of our long‑standing Working Group Chairs has stepped down. And that is Jaap Akkerhuis. He has been involved for a long, long time. He couldn't be here this week; he is going to be at the ICANN meeting next week, just down the road. He will be in Marseille, so you can buy him a glass of wine there ‑‑ I was going to say a beer, but it will be Marseille. I looked at the history to see when he started, and the DNS Working Group in its current form rose out of the ashes of previous DNS Working Group activity in the RIPE community, and he was there from the time this Working Group was formed. And he had been involved with DNR, domain name registration stuff, the history of which seems to have disappeared from the Internet. So we are losing history; he has been around longer than we have records. He sent me an e‑mail off‑line describing some more details about how this stuff happened, but I won't go into it, it's not super important for this meeting. But if you want to talk to me about it later I can go through the details. I would like to thank Jaap for his many years of service and support for DNS and for the RIPE community.


And of course, one person exits, one person comes in. So we put out a call for volunteers to be a new co‑chair of the DNS Working Group a few months ago, and João has agreed to become a chair; he has received a lot of support on the list. I look forward to working with him. I think he is going to do a great job. I told him he has to say something but he refused, so we are establishing the power relationship well. Basically, as soon as this meeting is over and we wrap it up, he is going to be acting as co‑chair of the DNS Working Group. Thank you, João.


Okay. Our next item is that we do a regular review of our outstanding Working Group actions, these are maintained on a list which is published on the RIPE website, it's a long‑standing action from 54.1, and there was actually some progress on this on the list, I don't know. Peter, do you want to talk about this? Peter is one of the holders of this torch.

PETER KOCH: So yes, for the pleasure of the Working Group, yet another retired Working Group Chair is popping up at the microphone, so it's Groundhog Day. To multiple extents this document has been lingering around, and Carsten and I have been discussing things. A draft went to the mailing list and there were some responses and some discussion, and as all good engineers, many of the participants tried to dive into a bit of bike-shedding, discussing numbers and TTLs. There was feedback, but there hasn't been any progress in terms of new versions, because from my/our perspective the discussion kind of ceased and was a bit inconclusive. There were some voices raised for fewer numbers or lower values. But one part of the reasoning behind the previous values chosen in the document ‑‑ and I can turn my back to the Chair.

SHANE KERR: You can come up here.

PETER KOCH: That was a deliberate decision, not to be on stage for all these rules. The previous set of numbers was chosen to balance between the interests of, say, the party maintaining the zone, the party maintaining the secondaries, and of course the parties consuming the whole thing, which is the population behind the resolvers. And that balance wasn't really reflected in this previous discussion or the concerns raised. Also, there were interesting statements made like, well, you know, "notify doesn't work". We would love to see a bit more elaboration on this and a bit more text. So what we probably are going to do, after consulting with Shane, is revamp that discussion with some concrete questions to respond to ‑‑ some of the questions are already implicit in the structure of the document, like "do we like the number or don't we" ‑‑ but giving a bit more rationale for why things should be changed, and putting that question and that rationale forward, also balancing between the different parts of the DNS ecosystem, which, as I said, wasn't really represented in the discussion. So, if you have anything to contribute, raise it: please send mail to the list, maybe following up on previous messages, or talk to me in the hallway, or ‑‑ since Carsten can't be here this time ‑‑ to Carsten and me by e‑mail, but preferably on the list. And after we give the electorate, I was going to say, after giving the DNS Working Group members some rest after the meeting, we will start getting to the questions. Is that okay, Mr. Chairman?
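The values under discussion here are the timer fields of a zone's SOA record. As a reminder of what those fields are, a zone file entry looks like the following; the numbers shown are arbitrary placeholders, not the document's recommendations:

``` IN SOA (
                2017102501  ; serial
                86400       ; refresh: how often secondaries poll for a new serial
                7200        ; retry: re-poll interval after a failed refresh
                3600000     ; expire: when secondaries stop serving the zone
                3600 )      ; minimum / negative-caching TTL
```

The balance Peter describes follows from these fields: refresh and retry set the load on, and freshness seen by, the secondaries, while expire protects the population behind the resolvers from very stale data.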

SHANE KERR: That sounds very good, actually.

PETER KOCH: Thank you so much.

SHANE KERR: I appreciate the work that you do on this and I look forward to progressing and I know we didn't get a lot of feedback on the list, so this time as you said we will have a specific list of questions and as Chairs we will be more attentive to poking people and saying speak now.

PETER KOCH: Yes. Just to make this clear, everybody dropped the ball but we want to take this up again and ‑‑

SHANE KERR: Yes. Thank you. Any other questions or comments about the revision of this document? Which ‑‑ I don't know if I said what it is ‑‑ is the document on recommended SOA values, which include things like retry timers and refresh timers for zones. Okay. Great. So, now we are in the presentation phase of this Working Group session. Our first presentation is by Anand from the RIPE NCC.

ANAND BUDDHDEV: Good afternoon. My name is Anand Buddhdev and I work at the RIPE NCC in the GII department which operates the DNS services of the RIPE NCC. And this afternoon I will give you an update about the stuff that we have been doing in the past six months since the last RIPE meeting.

First of all, I would like to talk about K‑root. We have been quite busy growing K‑root as we have been doing over the past couple of years. The growth is steady and since the last RIPE meeting we have five new instances, and we have also managed to get K‑root instances deployed in some interesting places like Greenland and Latin America, which from K‑root's perspective have been under‑served areas. So, our growth continues, we have more applications in the pipeline. We are still taking applications and so if anyone wants to, you are free to apply.

I would also like to mention that we have been busy adding more capacity to our K‑root core sites. We have five of them in Amsterdam, Tokyo, Miami, London and Frankfurt. And at some of these sites we have upgraded our capacity to 10 gigabits per second and this process will continue and we hope to add the same capacity to all our core sites.

I'd like to talk a little bit about K‑root in the Gulf region. We have added several nodes here since we started the expansion. We have one node in Beirut, in Lebanon; in Tehran we have three K‑root instances operated by three different parties; we have two instances in Kuwait City; and we have one instance in Doha. So, the region is not as well covered as we would like, but it's already doing much better than it was previously.

This is a graph showing the query rates at all the seven instances that I mentioned a short while ago. The average query rate is about 3,500 queries per second, which represents about 6% of the total query rate that we receive at K‑root. So that is quite significant, and a lot of this DNS activity is at our K‑root nodes in Tehran, which is not surprising because it's a large country with a large number of Internet users. So this is also reflected by these statistics.

We have another DNS cluster that we run, which we call our authoritative DNS, and this is separate from the K‑root that we operate. In this authoritative DNS cluster we provide services for our own zones. We also host the Reverse‑DNS zones for the address space that is allocated to the RIPE NCC, we secondary the Reverse‑DNS zones of the other RIRs, we provide secondary DNS service for ccTLDs, and we have a few miscellaneous zones in there as well.

So, this authoritative DNS cluster is kind of small compared to K‑root. We only have three sites at the moment: Amsterdam, London and Stockholm. But since earlier this year we have been talking about the idea of expanding this cluster in the same way that we are expanding K‑root, whereby we deploy single-server instances in a host network. This idea is still experimental, because we don't know how it will pan out and what the effects will be. We are at the moment liaising with one company in Austria, and we are going to do an experiment by deploying one such single-server instance; depending on the outcome of that, we may or may not open applications for more instances of this service to be hosted by people who might want to. One of the advantages of anycasting this further is that we also bring our other DNS services, besides K‑root, closer to operators and DNS caches that would benefit from lower latency.

So, this authoritative DNS cluster also provides secondary DNS service for some ccTLDs. For those who may not be aware, the RIPE NCC has traditionally provided secondary DNS services for ccTLDs for a long time, and the idea always was to encourage small and developing ccTLDs, but we had no formal criteria for evaluating which ccTLDs qualified and which ones didn't. There is a RIPE document, RIPE 663, which has these criteria, and we have had this for some time and are still following its guidance. So since RIPE 74 in Budapest we have been working on evaluating all the ccTLDs and talking to them, and those ccTLDs that did not qualify under the criteria of this document we have asked to withdraw the service from the RIPE NCC servers, and they have done so. So, 44 of these ccTLDs have withdrawn their service; there is one ccTLD that has asked for a little bit more time, so they are still getting service from us, but hopefully they will also be able to find alternatives and move away. And the remaining 32 that we have on our servers still continue to qualify, because they are smaller, developing ccTLDs, and we will continue to provide service to them according to RIPE 663.

Another development, which I mentioned at RIPE 74, is that we switched away from DNSCheck to something called Zonemaster for the pre-delegation checks of the Reverse‑DNS delegations that our members request from us. DNSCheck was old software developed by the Swedish registry; it did all the pre-delegation checks, but it was no longer being maintained. Its replacement is Zonemaster, which is collaborative work between the Swedish registry and the French registry. After some development, some requests from us, and some packaging and stuff, we were able to integrate it into the RIPE NCC's Reverse‑DNS provisioning system, and now any domain object that is submitted to the RIPE Database for creation or update will first be evaluated by Zonemaster; only if all the Zonemaster checks pass do we allow delegation of the Reverse‑DNS zone. We switched to it in June 2017 and it has been running very well since then. On average, it tests about 500 reverse zones per day, and the results are all logged into a database and are available through a GUI. This is available at, so you can try it out for yourselves. There are two ways you can request checks, a pre-delegation check or a post-delegation check, and the GUI offers both of these modes. The other thing that we have done recently is that our RIPE Stat tool is now also integrated with Zonemaster: if you type an address prefix into RIPE Stat, you can look at the results for the Reverse‑DNS zones of that prefix in the DNS widget, which will query the Zonemaster database and show you the results of the last check performed against them. The RIPE Stat interface doesn't yet support pre-delegation checks, but this is on our list of things to do, and once that is done we plan to switch the Zonemaster GUI completely to RIPE Stat, so that there is only one entry point for all our users to request pre-delegation checks.
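The gate described here ‑‑ a domain object is submitted, Zonemaster runs its checks, and the reverse zone is only delegated when everything passes ‑‑ can be sketched as follows. This is a minimal illustration, not the RIPE NCC's actual code; the function name and the message set are made up.

```python
# A minimal sketch of a pre-delegation gate: delegation is refused as
# soon as any Zonemaster check reports at the ERROR level.
# Severity levels roughly as Zonemaster reports them.
LEVELS = {"INFO": 0, "NOTICE": 1, "WARNING": 2, "ERROR": 3}

def allow_delegation(results):
    """results: (level, message) tuples from a zone check run.
    Delegation is allowed only if nothing reached ERROR level."""
    return all(LEVELS[level] < LEVELS["ERROR"] for level, _ in results)

checks = [
    ("INFO", "name servers reachable over UDP and TCP"),
    ("WARNING", "SOA refresh lower than recommended"),
]
print(allow_delegation(checks))   # True: warnings alone do not block
checks.append(("ERROR", "no name server is authoritative for the zone"))
print(allow_delegation(checks))   # False: delegation refused
```

The point of the sketch is the policy, not the plumbing: warnings and notices are recorded and shown in the GUI, but only errors block the delegation.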

Here is an example of the RIPE Stat DNS widget. It's showing the results for one of the RIPE NCC's own reverse zones,, and in the box there are a number of results being displayed with their levels from Zonemaster, whether info, warning or notice. If there were any actual problems, they would be flagged in red, so you could identify problems quite easily. And at the bottom there is a little button saying "start test", which will make Zonemaster check this zone again and report any problems if there are any, or come up green if everything is fixed.

The other thing that we have been discussing and thinking about at the RIPE NCC is DNSSEC. The RIPE NCC has been doing DNSSEC for a long time, since 2005, actually, and we have been signing our zones one way or another. In the beginning, we signed with a bunch of scripts developed by Olaf Kolkman; it was all manual work and we had to roll the keys by hand and stuff like that. Later, in 2009, we began looking at doing some automation there, and back then we had very few choices; the DNSSEC landscape at the time didn't offer us many tools or much automation. At that time there was a product from a company called Secure64 that offered DNSSEC signing in a very simple appliance, so we went for that, and we have been signing our zones with it since then. But we think this is a good moment to explore the field again, because our hardware is up for replacement, it's quite old, and we feel that there are many good alternatives available, so it would be worth exploring them before we decide on one solution. There is OpenDNSSEC, BIND, PowerDNS, Knot DNS; all these fine products are able to do DNSSEC signing, they can work with HSMs and things like that. So we are exploring all these options and thinking about them, and we have published an article on RIPE Labs, the link to which you can find on this slide. The article talks about our DNSSEC options in a bit more depth; I would encourage you to look at it. If you have any comments or questions, please let us know or leave comments on the article, and we will take all your feedback and input into consideration when considering our options.

The other thing that we have been looking at is something called ENTRADA. ENTRADA is an open source software solution developed by SIDN for the processing and indexing of DNS packet captures. This software runs on top of a framework called Hadoop, originally developed at Yahoo for distributed storage and processing of large data sets. Hadoop consists of many different components, including HDFS, where you store data, and there are various other bits of software that run on top of Hadoop, one of which is Impala, which can be queried with an SQL-like syntax. All of this forms the application suite for ENTRADA.

So, we are looking at ENTRADA because at the RIPE NCC we are already using Hadoop quite extensively. All the data from our RIPE Atlas project is fetched and stored inside our Hadoop cluster, where it is processed, and the results are made available through various APIs, including the RIPE Atlas website and the streaming APIs ‑‑ everything. We also have the RIPE NCC's RIS project, where we collect BGP routes; this is stored in Hadoop and post-processed, and it powers large parts of our APIs and interfaces, including RIPE Stat. So we have quite a lot of experience with Hadoop, and we think that ENTRADA is a nice addition to this family of software that we are running.

So on all our DNS servers ‑‑ K‑root, authoritative DNS, AS112 ‑‑ we do continuous packet captures of all the queries, but at the moment we rotate them after five days, so we don't have much long-term storage. What we hope to do is upload all these PCAPs, process them with ENTRADA, and make the processed results available through the ENTRADA interface. Initially we will do this internally, and then we will think and talk about how we might even open this up to the community, if we feel that there is a good use case and a good, secure way of exposing this data without giving away private information, for example. Work has started on a test set‑up, and we hope to report more at the next meeting.

And with that, that was my last slide. Thank you for listening. If you have any questions, you can always e‑mail me, you can follow me on Twitter, but I have to warn you you might see lots of photos of scrabble boards. But feel free to approach me as well. Thank you.



JOAO DAMAS: On slide 13 you talk about how you are reviewing your systems going forward. I am really happy to see you considering the option of not using HSMs, because it seems to me, if you look at the whole system, they have become little more than cover-your-ass devices, with some problems attached to them; usually you get into vendor lock-in. So it would be good if, once you go through this process, you could make available some information about the steps you went through and what the decision was on that item.

ANAND BUDDHDEV: We will be happy to do that.

BENNO OVEREINDER: Not a question; I just want to say in public that we are very grateful ‑‑ and I think I also speak for the other open source developers, ISC, cz.nic ‑‑ it might not be very visible to the other members here in the room, but you are doing a great job in testing and evaluating software and giving us very valuable feedback. So a big thank you.

ANAND BUDDHDEV: Thank you, much appreciated.

AUDIENCE SPEAKER: Stefan from DENIC. I'm mostly a networking guy, so I like your 10G upgrades, but in my experience, being able to answer DNS queries on a 10G line means, going from 1G to 10G, a worst case of from around one million packets per second up to 10 or 11 million packets per second. So upgrading to 10G gives you some headroom, at least on the network side, but what did you do on the server compute side? How did you scale that?

ANAND BUDDHDEV: That is a very, very good question. Our purpose behind 10G is not to turn into an even bigger cannon and keep answering. One of the things we want to do, of course, is to be able to absorb large swarms or floods of queries that come in, but one of the things we are doing in parallel to this is exploring better and better ways of filtering queries that we consider to be attack queries, for example, and being able to process them and drop them on the floor. My colleague Colin, who is right here, has been very, very busy exploring various approaches, looking at things like the DPDK toolkit and employing it as close to the networking gear as possible, so we can do more and better filtering without necessarily having to answer all these queries and being a big cannon.

AUDIENCE SPEAKER: I would really love to hear more about these details.

ANAND BUDDHDEV: We would be very happy to talk more about them when we have done some of this work in that area.


AUDIENCE SPEAKER: Christian, DENIC as well. You told us that you are considering the approach of maybe not using HSMs for DNSSEC. We considered changing our HSMs, but in the end we didn't do it, because key storage was a little bit complicated. Do you have some thoughts about how to solve this?

ANAND BUDDHDEV: At the moment we haven't done too much exploration; we are still in the early phases of this. So, you know, using SoftHSM, or keys on disk on encrypted partitions, stuff like that, is on our list to explore. What I didn't mention earlier, but is also in the article that we have written, is that we are keeping an eye on this project called CrypTech, which is an open source, openly designed HSM that hopefully prevents this vendor lock-in. CrypTech is still in very early stages of development; I can't say that it's in any way usable right now, but we are very interested in it, and should we go in that direction, we'd love to talk some more about that as well with our community.

AUDIENCE SPEAKER: I think there was a presentation about CrypTech at Copenhagen?

ANAND BUDDHDEV: I seem to remember there was a presentation about it.

SHANE KERR: I have one question as well. You mentioned that you had actually stopped serving a lot of ccTLDs which I think is interesting. Does this mean that the 30‑some you have now, you have a good contact relation with them and have spoken to them?

ANAND BUDDHDEV: This is one of the more time-consuming parts of providing secondary DNS service: maintaining contacts at the ccTLDs. I wouldn't say we have solved this problem; it's an ongoing challenge, let's say. But this process of evaluating them, talking to all the ccTLDs, and getting them to answer some of our questionnaires has improved our contacts with all of them, and in this process we were helped along by organisations like ICANN and AFRINIC and just general contacts in the community. So things are much better now. And the one thing that we are doing with the ccTLDs that still qualify is signing an MoU with them, to formalise things a bit better, and hopefully that will allow us to have better contact with them. Also, according to RIPE 663 we are supposed to be re-evaluating the ccTLDs on a periodic basis to see whether they still qualify, so we hope that this continuous evaluation will keep the contacts updated.

SHANE KERR: Great. Thank you.

ROMEO: There was one detail that I would like to add there. One of the points in RIPE 663 was advice to the RIPE NCC to also re-evaluate those ccTLDs that we have under contract on a yearly basis.

Considering the impact that the whole process has had on the RIPE NCC team, we will not do this on a yearly basis; we are planning to do it in a three-yearly cycle instead. From our perspective, that gives the best trade-off between operational effectiveness, load on the team, and responding to organisational changes in these ccTLDs that require a change in the contract or a change in the operational service.

SHANE KERR: Thanks for the information. Personally I think that is reasonable, I think it meets the spirit of the policy. But if anyone disagrees we can discuss it on the list or talk to RIPE NCC directly. Thank you again.


Our next presentation is by Jaromir, and this came up in comments on the blog post that Anand wrote about all the RIPE NCC DNS stuff, so it will be interesting to see where the discussion goes today. Thank you.

JAROMIR TALIR: Good afternoon as well. I work for cz.nic, and I am going to talk to you about the recent DNSSEC automation that we did and the tools that we developed.

So I will briefly describe what this is about ‑‑ these CDNSKEY records ‑‑ and the motivation we had to implement the feature. Since this technology has two parts, a signer and a processor, I will talk about the stuff implemented in the Knot DNS authoritative signer part and then about the registry part, which is our open source registry named FRED. I will show some statistics from the last months and some plans for what we will hopefully do in the future.

So, what is it about?

Quite some time ago, in 2014, a new RFC was created at the IETF, RFC 7344, about the possibility of automating KSK roll-over as well. Key roll-over has been sort of automatic for a long time, but with KSK roll-over there was always the issue that there had to be a sort of manual step to upload the DS records to the parent zone ‑‑ for us, actually, to the registry. The authors of this RFC created two new DNS records, CDS and CDNSKEY, by which the DNS operator of the domain signals that it wants to upload a new DNSKEY to the parent zone. In that RFC they mentioned that if the domain is DNSSEC signed, it's perfectly secure, so why not do that. It is expected that the parent zone operator will sort of scan the child zones for these particular DNS records, but it left open other possibilities for how to get those records directly. This year, in March, the new RFC 8078 described the missing pieces of this process, which are the introduction of a new key and the deletion of the key if somebody decides to switch off DNSSEC for some reason. So these two RFCs are approved right now, and there is also one related draft in the Working Group right now, which changes the approach a little bit from the pull model, where somebody scans the zone file, to a push model, because it talks about a RESTful API that will help with immediately signalling the changes by calling some REST API.
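As an illustration of what this signalling looks like on the wire: a signed child zone (here, hypothetically) publishes CDNSKEY and/or CDS records mirroring the key it wants the parent to trust; the key and digest data below are shortened placeholders. RFC 8078 additionally defines special null-valued records that ask the parent to delete the DS and turn DNSSEC off:

```
; in the signed child zone: "please publish this key/DS in the parent"   IN CDNSKEY 257 3 13 mdsswUyr3DPW132mOi8V9xESWE8jTo0d...   IN CDS     20545 13 2 401781b934e392de492ec77ae2e15d70...

; RFC 8078 "delete" signal: "please remove the DS, switching DNSSEC off"   IN CDNSKEY 0 3 0 AA==   IN CDS     0 0 0 00
```

Because these records sit in the child zone and are signed with the child's current key, the parent (or registry) can validate them before acting, which is the "perfectly secure" property mentioned above.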

So, our motivation for implementing this in .cz: we currently have about 51% of domains signed, but we also saw that there is a small but real percentage of domains that are signed but whose DS records never make it into the registry. So we see that there is demand for DNSSEC signed domains, but somehow people struggle to get the DS records up to the registry. We think there is a need to break those barriers and implement this chain of processing of DS records from the domain to the registry, and by implementing all these things we think that maybe we could bring DNS back to the good old days when it was set up and forget, so that DNS operators may just switch on DNSSEC on their zone and everything above is automatically processed. The other motivation was that our open source registry, FRED, is used by ten other countries around the world, and if they upgrade to the most recent versions and decide to configure this feature as well, then we may boost adoption globally too.

Last but not least, all new technologies need first implementations to prove that they are viable, so we believe that our implementation may foster some discussion about whether we are doing this the right way or the wrong way or whatever.

What is the current implementation status of those RFCs? On the signing side, OpenDNSSEC doesn't have this feature; I talked to Benno and they plan to implement it early next year. Both PowerDNS and BIND have implemented part of this feature, but it is still sort of manual: you need to call some script from cron to be able to use it. I talked to the new guy responsible for BIND ‑‑ I forgot the name, Andrei ‑‑ and it will not be in 9.12, but maybe in later versions they will focus on implementing this as well. So the only open source signer that currently supports this feature is Knot DNS. It has since version 2.5, but there were some bug fixes in 2.6, so it's better to use the most recent version. On the registry side, our open source registry, FRED, has full support for this since the most recent version. I am not aware of any other open source registry software that supports it. But I should not forget that there is a service ‑‑ not software, but a service ‑‑ that quite promotes this approach, which is Cloudflare; and as you will see in the statistics for our zone, it's mainly Cloudflare domains that are using this technology.

So, back to Knot DNS and how it's implemented there. Automatic DNSSEC signing has been part of Knot DNS for some time, but the new version implemented KSK roll-over with a submission mechanism, and there is an option to do that KSK submission via CDS and CDNSKEY records. There are periodic checks for DS existence in the parent, at the preconfigured name servers that you have to put in the configuration. You can put all the different name servers of the parent zone in the configuration, but maybe it's better to also add some DNSSEC validating resolvers that will provide proof of existence as well; the DS is only considered visible when all these name servers respond correctly.

This is a configuration example; it's quite simple. To an existing configuration you can just add the KSK lifetime and potentially the submission section with the list of name servers that you want to query.
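The slide itself is not reproduced in this transcript, but a knot.conf fragment in that spirit would look roughly like the following. This is a sketch: the zone name, remote address and identifiers are made up, and the option names should be checked against the Knot DNS 2.5+ documentation for your version:

```
remote:
  - id: parent_ns
    address:            # a parent-zone name server to poll (placeholder)

submission:
  - id: parent_check
    parent: [parent_ns]            # where to look for the new DS
    check-interval: 10m

policy:
  - id: auto_rollover
    ksk-lifetime: 365d             # triggers the automatic KSK roll-over
    ksk-submission: parent_check   # gate the roll-over on DS visibility

zone:
  - domain:
    dnssec-signing: on
    dnssec-policy: auto_rollover
```

With this in place, Knot publishes the CDS/CDNSKEY records itself and only retires the old KSK once the submission check has seen the new DS at all the configured servers.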

Some other features that we support: there is the possibility to use single-type signing, which is called CSK, where you don't have the KSK/ZSK split but have everything in one key. Knot DNS also provides the possibility of a shared key, so this shared key scenario is supported too. Since the most recent version, Knot DNS also supports algorithm roll-over, with all its specifics compared to a normal roll-over. Currently, if you want to do DNSSEC deletion via those dedicated records, you must do it manually: you have to switch automation off, put those keys there and sign the zone manually, and then it is possible to use that as well.

So, this was about the signer part; as for the registry part, we call this feature in the registry automated key management. We have the concept of a keyset ‑‑ people are not uploading DS records but DNSKEY records to the registry ‑‑ and the keyset is a collection of those DNSKEY records that can be linked to a domain. We are taking responsibility for managing these keys for the domains that publish a CDNSKEY. A scan cycle takes roughly three hours. And we behave differently for three different sets of domains: new domains or domains without any keyset, domains that already have an automated keyset, and domains with a regular legacy keyset created by a registrar. For the first group, we scan only the authoritative name servers from the registry database, and we do it via TCP queries. When we see a CDNSKEY, we keep scanning the domain for seven more days, and if the results are always the same, we create a new keyset with the material from the CDNSKEY, link it to the domain, notify the registrant via e‑mail and notify the registrar by the EPP poll mechanism. For the domains with an automated keyset it's quite a bit easier, because we scan the CDNSKEY via a secure channel, so we just take it and update the content of the keyset, or remove the keyset if it is that specific delete record. And for the domains with a legacy keyset it's a combination of these two approaches: if we find a CDNSKEY, we create a new keyset with the same material, link it to the domain, and again notify all contacts and registrars.
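The seven-day acceptance rule for the first group can be sketched as follows. This is a simplification for illustration, not FRED's actual logic, and all names here are made up:

```python
from datetime import date, timedelta

# Sketch of the stability rule described above: once a CDNSKEY appears,
# keep scanning; only turn it into a keyset if every scan over the
# following seven days saw identical key material.

def accept_cdnskey(scans):
    """scans: (date, cdnskey_material) pairs, oldest first.
    Return the key material to turn into a keyset, or None."""
    if not scans:
        return None
    first_seen, material = scans[0]
    if any(m != material for _, m in scans):
        return None  # material changed during the window: not stable
    if scans[-1][0] - first_seen < timedelta(days=7):
        return None  # seen consistently, but not yet for seven days
    return material

start = date(2017, 9, 1)
week = [(start + timedelta(days=i), "257 3 13 mdsswUyr...") for i in range(8)]
print(accept_cdnskey(week))      # stable for seven days: accepted
print(accept_cdnskey(week[:3]))  # None: not stable for long enough yet
```

The waiting period is what protects the registry against a briefly mispublished or hijacked CDNSKEY on an as-yet-unsecured domain; for domains that already have an automated keyset, the DNSSEC-validated channel makes this delay unnecessary.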

That is, roughly, the architecture of the solution in the registry. At the top there is a command-line tool called CDNSKEY scanner, which is invoked by the second-level tool, FRED AKM. The CDNSKEY scanner is implemented in C++ with a DNS library; it takes domains on its input, does the scan via DNS, and puts the results on its output, so it can also be used remotely. The second tool, FRED AKM, is like the heart of the whole procedure. It is invoked once a day and implements all the processing logic: it stores the data in an SQL database, calls the scanner, gets the results and processes the responses according to what is specified.

And at the bottom there is a registry-specific layer that provides all the data about the domains and provides the interface for notifying registrants and all these registry-specific things. I would say the first two parts are written in a way that they could potentially be used by any other registry, if they wish, with a small modification: they could provide their own CORBA server — we are using CORBA for the communication between back-end components — so anybody can implement the CORBA server and some interfaces in the FRED client, to be able to put another registry back end under the first two tools that I mentioned.

So, some statistics. Currently, we have more than 600 domains under this new management. It's not much. I would say maybe 90% of those domains are at Cloudflare, and the rest is some experimenting by people who are running Knot DNS themselves. You can see that when we started to run this project in June, we immediately got about 160 domains, and the second peak that you see in September is when we added the third group I mentioned — the already existing domains with existing keysets. So there was the second peak. And the third small peak is the part of our own domains, managed in our R&D department, that changed to this kind of management. The red bars are the roll‑overs, so there are not many roll‑overs either.

What are our plans? We continuously discuss opt-out possibilities for this feature. At the beginning we decided, maybe wrongly, to keep the functionality of our registry lock such that if a domain is locked by the owner so that nobody can make changes, we accept that they don't want changes and we don't change anything. But it looks like there are something like 50 domains that are probably blocked from this process and the people are not even aware of that, so we will probably change this decision and make these changes even when the registry lock is put on the domain. We have also discussed with our registrars whether this is a problem for them. Currently none of our registrars want to implement this feature themselves, but in case there were such a request, we could implement an opt-out for specific registrars and keep the domains of that registrar untouched — but that is still a plan and still under discussion.

Currently, the architecture of the solution provides the possibility of scanning from multiple locations; the RFC says that is good — it is suggested to do it from more than one. Currently we are doing it from one location, but we are in the phase of testing the possibility of having more locations in geographically and topologically different places, which will provide better trust in the results of the scans. We have also found out that the notifications we are sending to domain owners about what is going on are maybe not the best: sometimes they respond to us and do not understand what we are doing, so we will have to play a little bit with the templates and the text to explain what is going on. As I mentioned at the beginning, we would also like to implement the push model based on the draft currently at the IETF. And last but not least, we need to do some marketing for this new technology, because it is not widely used right now; so we talk at different sessions of our registrar group and at our own conferences, and hopefully we will see some progress soon. We are also talking to Cloudflare, because they have many more domains in .cz whose users have not decided to click the button to enable DNSSEC. We would like to do some marketing towards them to enable DNSSEC on their domains, because by doing that they don't need to do anything else — everything else will be done automatically.

So that's everything for me. I hope I will be able to answer your questions.

SHANE KERR: Thank you.

AUDIENCE SPEAKER: Dimitry. I have been watching FRED development for years. Congratulations — it sounds complicated, what can possibly go wrong? I want to get this idea right. You send e‑mails to the technical contact, so you are communicating with the staff of the domain owner. If these people are confused, don't know what to do, or don't read their e‑mail — what can possibly go wrong? It's still rolled over; there is no action for them to take. By the way, the CDNSKEY trigger happened, you replaced the DS set, right? It's notify-only?

JAROMIR TALIR: It's not exactly the case that the technical contact is always the real one. Sometimes people fill in, for example in the case of Cloudflare, the domain's name servers and the owner's contact in the registry, because that is how they do it. So even the domain owners get the e‑mails that should go to the technical contacts, and there are some features in the registry that automatically send to the domain owner when there is some change in the domain. So we will have to play with this a little bit — what e‑mail to send to the technical contacts and what to the domain owner — to make it less confusing. They don't complain; they are just confused.

AUDIENCE SPEAKER: I think they are just deleting it and saying "this is DNSSEC". They don't need to take any action, right? What is the maximum delay for the replacement of the keyset?

JAROMIR TALIR: If this is just a replacement, we are doing that once a day, so they have to wait at least a day if they do that. We will implement that draft, which hopefully will do it immediately; that would be much better.

AUDIENCE SPEAKER: Like a day of delay ‑‑ I mean, things can go terribly wrong if something goes wrong, and I am not even talking about ‑‑ cool, thanks.

AUDIENCE SPEAKER: This is Marcos from DENIC. Thank you for the presentation. I fully share your vision of setting up the DNS and letting it work forever like in the good old days, and I love the attitude of being the first doing this and proving that it works or doesn't. That is excellent, thank you very much for that. That being said, let me see if I got this right: you are bootstrapping DNSSEC information, in some cases, from the DNS, and that is data for which you cannot actually validate a chain of trust. I am a bit concerned about that. Can you share your thoughts on why you are doing this?

JAROMIR TALIR: Well, you cannot trust it by way of validating the DNS; that is why we implemented those three things, to be at least somewhat sure. The first is the TCP connection to the authoritative name servers — all authoritative name servers that are in the registry. The second is that we notify the technical contacts. And the third thing is that we keep scanning for seven days. So we think that the combination of these three approaches should be, even according to the RFC, enough proof that this is the intention of the real owner of the domain.

AUDIENCE SPEAKER: Okay. But we know notifications sometimes are not read. Just another question: did you keep track, in the registry, of the way those data came to you — did they reach you through a trusted channel, or did you get them from untrusted data? Did you keep track of that, to separate the good from the potentially good?

JAROMIR TALIR: I am not sure if I caught the question that ‑‑

AUDIENCE SPEAKER: Whether the DNSSEC material came from a trusted channel like EPP, or whether you got it from the unvalidated DNS. So you are able to separate them.

JAROMIR TALIR: Yes, we are logging that ‑‑ doing it via EPP is still possible. They are alternatives, and we know which one was set up which way.

AUDIENCE SPEAKER: RIPE NCC. I have a remote question from Arson Stastic from ACL net; he is asking: are the CDNSKEY scanner and FRED AKM also open source?

JAROMIR TALIR: Yes, they are on the website, you can download them.

SHANE KERR: Thank you. Very interesting stuff, I appreciate your contribution.


Next up we have Benno from NLnet Labs.

BENNO OVEREINDER: Thank you, welcome. Living on the edge. First, I want to explain what the edge is — what I consider the edge. This is the typical ecosystem, how we consider our DNS system: we have stubs, we have resolvers and we have authoritative name servers. In terms of complexity, the stub is very simple; the heavy lifting is done by the resolvers and the authoritative name servers. The authoritatives are high performance and give you the answer straight away, and because their function is moderate, they are not super complex like the resolvers. However, if you look at the network — the habitat they live in — the authoritative name servers, and again I generalise here, live in a data centre, at least if we consider the large name servers: simple end-to-endness, maybe you can come up with a better name for this — it's a transparent, good network, it does what it should do. The recursive resolvers are somewhere in the middle: sometimes in the ISP, again with good connections, fairly simple; sometimes running in my home router, where it can be NATed or in another environment, and it's a little bit less transparent there. The stubs are really at the edge; they can be everywhere — a captive portal in a hotel, here at the RIPE meeting — so there are very many different circumstances they have to deal with. So for end-to-endness it's quite complex.

I want to tell you something about living on the edge from a user's perspective before I start telling you about all the RFCs and standards we are working on. The thing is, I want to talk about security from the ground up — how DNS can help us build up security and secure connections with the rest of the world. This is the typical example I want to use here: a customer web portal. These are the interactions; for time's sake I will skip this, you will recognise it. So, what can possibly go wrong? We know of DNS spoofing, and everybody knows more or less about the too-many-CAs problem: sometimes a CA is compromised — I didn't track them all recently, but one famous one was in the Netherlands ‑‑ I am not sure, but anyway. And TLS clients — HTTP browsers or whatever — cannot make the distinction between a proper certificate and one from a compromised certificate authority. So they have the problem of too many certificate authorities.

We have solved this. Well: for DNS spoofing we can use DNSSEC, of course, and for too many CAs we have CA pinning, HSTS — and, with DNSSEC in place, DANE. So, problem solved — can we go further? Now my story starts.

Well, not completely. If we rely on DNS and DNS-based security, and you want to bootstrap your security from DNSSEC data — key material — you really have to rely on your resolver. And there are tricks to divert your traffic away from your resolver: you think you are connected to your resolver, but with routing tricks there are ways to get in between. And then it can answer whatever it wants and send you, again, to a wrong website.

So, there is something still to be solved here. We can counter these kinds of resolver hijacks by doing DNSSEC validation closer to the user — the closer you do validation, the better; it is perfect if it is done by the application, or by the stub resolver on the machine you are working with. That works fine: the stub does all the different queries, DNSSEC-aware, through the resolver. The resolver must also be DNSSEC-aware — I come back to that later — but it doesn't have to do the validation itself, as long as you get the key material for the validation through this resolver. Alternatively, you can do DNS over TLS, so the stub is directly connected via a DNS-over-TLS connection to a validating recursive resolver. Two solutions to the problem. Well, TLS sessions can also be hijacked — there have been many examples presented already; I won't go into the details — so you want an authenticated and not just any TLS session. So for DNS over TLS we have to do the TLSA lookup, we need some DANE validation — is this the correct privacy-enabled resolver I am talking to? — and then set up this DNS-over-TLS connection. It's a little bit of a chicken-and-egg problem, but it works; this is a way to get it working. Alternatively — and this is really neat, I think — we can get all the TLSA and DNSSEC material in one shot with the DNSSEC authentication chain extension for TLS. This allows us to do the validation, the authentication, of the other TLS end point, and then we can set up our authenticated TLS channel to our validating recursive resolver. So this prevents the hijack. Of course, for the first step you need a resolver, but if it's the correct one you get to the end point; if it's a rogue one, well, you cannot validate the end point, and then you have a denial of service — but you are not tricked into other business; it's noticeable that you are being ‑‑
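The DANE check mentioned here — verifying the resolver's certificate against a TLSA record before trusting the DNS-over-TLS channel — can be sketched for one parameter combination. This is a minimal illustration of the matching step only (certificate usage 3 DANE-EE, selector 0 full certificate, matching type 1 SHA-256); a real validator handles all TLSA parameter combinations and obtains the TLSA record via a DNSSEC-validated lookup for `_853._tcp.<resolver-name>`:

```python
# Sketch of the DANE matching step for a DNS-over-TLS resolver:
# compare the certificate presented in the TLS handshake against the
# data field of a TLSA record.  Only usage 3 / selector 0 / matching
# type 1 is shown here.
import hashlib

def tlsa_matches(cert_der: bytes, usage: int, selector: int,
                 mtype: int, tlsa_data: bytes) -> bool:
    if (usage, selector, mtype) != (3, 0, 1):
        raise NotImplementedError("only DANE-EE / full cert / SHA-256 here")
    # matching type 1: SHA-256 over the full DER certificate
    return hashlib.sha256(cert_der).digest() == tlsa_data
```

If the digest does not match, the stub should treat the resolver as unauthenticated and refuse the session rather than fall back silently.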

These are the privacy requirements, capabilities and standards; I will skip them, they are for you to read at home. Good. Are we there yet? We have apparently solved this. No. There are some consequences if we are a stub living on the edge — we are not there yet. There are some DNSSEC road blocks here: as a stub resolver you are behind a recursive resolver, and there are many broken resolvers and middleboxes which do not pass the DNSSEC key material, so you can iterate what you want and ask what you want, but you don't get the DNSSEC key material. Bad luck for you. We thought we had bridged all the gaps and could go on, but no, we need to think about how we get around this. One thing is that a stub can do a full-recursion fallback: if it finds out it cannot get the key material from its upstream, from the resolver, it falls back to full recursion and does all the queries itself. That works, and it is one of the things we have implemented. Alternatively, of course, we could also use DNS over TLS to get out, but there we have the bootstrap problem, because you need this resolver to get there. But again, the DNSSEC authentication chain extension can help: if you get there, you can authenticate your channel and use this validating recursive resolver through the crappy network.

Good. Another thing we have to deal with: DNS64 — DNS in an IPv6-only environment. Jen gave a presentation here at RIPE and wrote a blog about it: there are real problems doing DNSSEC validation in an IPv6-only network, because behind a DNS64 a validator only gets synthesized AAAA records, while it is the A record that is signed. You can solve this at the stub, not at an intermediate: the stub — people mention IPv6 prefix discovery — can notice that there is a DNS64 server in between and can, how do you say that, figure out, discover the original IPv4 address, query the data itself and do the validation of the A records itself. The other way is to implement DNS64 in the stub itself, so we don't need DNS64 externally: the stub does the synthesis and the validation itself. This is all ongoing work. And we have to talk about one thing here, if we haven't already: the KSK root roll‑over — more road blocks. A DNSSEC-validating stub must do RFC 5011, but, well, sometimes the computer stops working. If the stub is running in a kind of daemon mode — Stubby, we will come to that later — it can do some checking, but most of the time, as an end point, you move away to another network, or go down and up again. How do we do that? We can do some in-band 5011 tracking with the DNSSEC chain extension with your validating recursive resolver; this can help us keep up with the latest KSK. Very neat.
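The DNS64 workaround described above — a stub recovering the original IPv4 address from a synthesized AAAA so it can validate the signed A record itself — can be sketched for the simplest case. Only the well-known /96 NAT64 prefix of RFC 6052 is handled here; a real stub would first discover the network's actual prefix:

```python
# Sketch of the stub-side DNS64 fix: extract the embedded IPv4 address
# from a synthesized AAAA record, so the signed A record can be fetched
# and validated directly.  Handles only the well-known 64:ff9b::/96
# prefix; network-specific prefixes need discovery first.
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def extract_ipv4(synth_aaaa: str) -> str:
    addr = ipaddress.IPv6Address(synth_aaaa)
    if addr not in WELL_KNOWN_PREFIX:
        raise ValueError("not a well-known-prefix NAT64 address")
    # for a /96 prefix the IPv4 address sits in the low 32 bits
    return str(ipaddress.IPv4Address(int(addr) & 0xFFFFFFFF))
```

With the IPv4 address recovered, the stub queries and validates the A record, then re-synthesizes the AAAA locally if the application needs it.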

It's different for the KSK root roll‑over in the stub library. The stub library for DANE is an application, so it runs with user privileges; it has no system config, so it has to run with a zero-config setting. So it has to discover the current key — it has to bootstrap its DNSSEC capability. This has been described in RFC 7958, and it's a kind of in-band trust anchor functionality; we have implemented that. This is one way: the stub will fetch the key material — the root key, signed — from IANA. The software comes with, I think it's called, a CMS signature key, a long-standing key which is valid until twenty-thirty-something, so we can verify the data we received from IANA: we can verify the signature over the key they give us, and we can bootstrap our DNS trust anchor.
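The RFC 7958 bootstrap amounts to fetching IANA's trust anchor file (root-anchors.xml), verifying its detached signature, and turning it into DS-record form. The parsing step can be sketched with the standard library; the sample data uses the published KSK-2017 values, and a real client must of course verify the CMS/detached signature over the file before trusting it — that step is omitted here:

```python
# Sketch of the RFC 7958 trust anchor bootstrap: parse the IANA trust
# anchor XML into DS-style strings.  Signature verification over the
# fetched file is deliberately omitted from this sketch.
import xml.etree.ElementTree as ET

SAMPLE = """<TrustAnchor id="example" source="http://data.iana.org/root-anchors/root-anchors.xml">
  <Zone>.</Zone>
  <KeyDigest id="Klajeyz" validFrom="2017-02-02T00:00:00+00:00">
    <KeyTag>20326</KeyTag>
    <Algorithm>8</Algorithm>
    <DigestType>2</DigestType>
    <Digest>E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D</Digest>
  </KeyDigest>
</TrustAnchor>"""

def trust_anchor_ds(xml_text: str):
    """Return trust anchors as 'zone keytag alg digesttype digest' strings."""
    root = ET.fromstring(xml_text)
    zone = root.findtext("Zone")
    return [
        "%s %s %s %s %s" % (
            zone, kd.findtext("KeyTag"), kd.findtext("Algorithm"),
            kd.findtext("DigestType"), kd.findtext("Digest"))
        for kd in root.findall("KeyDigest")
    ]
```

The resulting DS strings seed the stub's trust anchor store, after which normal RFC 5011 tracking (or the in-band chain extension) keeps it current.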

Good. Again, a number of RFCs cover this work, and I think we close most of the ‑‑ that is up to you. Some final thoughts.

I presented a little bit about the hazardous, difficult environment that the ecosystem's stub lives in, and the things we have to do to close a number of gaps — from a security perspective, but also across different environments: IPv4, IPv6, IPv6-only networks. We are closing the gaps with ongoing work; there are a lot of RFCs and drafts in progress, and most of the work presented has been implemented in getdns and its stub resolver, the friendly helper Stubby, which guides you through the minefield of road blocks. And of course I want to mention again this great DNSSEC authentication chain extension draft, which will solve all your problems — well, most of them, or many of them at least. Thank you for your attention.


JOAO DAMAS: Glad you put in those things about the KSK roll‑over. As you will notice, today is October 25th, so the 11th came and went and we are still stuck, right?


JOAO DAMAS: It's good that you kind of approached this as: well, there is a problem here, plaster over it, paint it; another one, plaster it — and you close all the gaps. But this is, on the other hand, also a very IETFy talk, in the sense of: here is the whole theory, now it's someone else's problem. And I think it's time for the IETF to take some responsibility for the stuff it produces, instead of just sending people into another mess down the road, because we have been there several times and we are in a big one right now. So: goals, awesome. Theoretical work to reach those goals, also awesome. But you have to start taking a different approach when you produce work like this — perhaps add some clauses so that these things don't become active until there is enough deployment of the other requirements. Because this business of pushing RFC 5011 and hoping for the best didn't work with the resolvers, and the resolvers are a sort of contained number of devices on the Internet, mostly operated by professional services. If you just let this loose on the end hosts, I don't see how this makes the world a better place. So perhaps give a little bit of consideration to what the deployment scenario is: how much support from the information providers, and how much support on the end hosts — the devices that are going to be living with this — is necessary before activating the time bomb.

JEN LINKOVA: IPv6 Working Group chair. Thanks for mentioning DNS64. I am still surprised that we kind of try to solve on the client what seems to be a problem on the server side. Instead of solving the problem by deploying v6 — making the service v6-enabled, so all these issues go away — we come up with more or less reasonable workarounds. So thanks a lot for saying this to DNS people, because apparently they are not the same people that deploy IPv6; now they could at least go to the right people and tell them: we need this strange thing called IPv6, because otherwise we cannot make DNS work ‑‑

BENNO OVEREINDER: One question for you then. You would rather not see this kind of support in the library because ‑‑


BENNO OVEREINDER: It takes the need away to move to proper IPv6?

JEN LINKOVA: We definitely need it, because IPv6-only networks and NAT64 networks are already on the ground — mobile providers, for example. My phone will give me a v6-only network; I have not seen it yet, but I am curious what is going to happen with my validating resolver when I put my laptop on a v6-only network. So we need this, but I believe it's a workaround, and the proper, strategic solution should be to just go and make services v6-enabled.


JIM REID: From Scotland. I think it's great that you and your colleagues are doing this kind of work and trying to encourage more uptake of DNSSEC validation; more power to you, and I am glad to see this kind of work going forward. However, I am wondering if this is really going to help things long-term. This goes back to what João was saying a few minutes ago, because in my opinion we are entering uncertain and choppy waters for DNS, especially with what is happening at the IETF. Some stuff looks like it's coming to an end; things without much uptake don't have much of a future, possibly — says he, being provocative. We have DNS over HTTPS coming along, and we might be doing DNS over QUIC. If we do those kinds of things — securing the last mile with some of these other technologies, using some flavour of TLS 1.3 to secure the connection between the edge resolver and the resolving server — then, having secured that communication path, we are pushing validation, in those circumstances, in my opinion, away from the edge device and onto some resolving server, or quite possibly a web browser. So is there going to be a great deal of future for validating stub resolvers?

BENNO OVEREINDER: That is a good question. Currently we hear some different signals from industry. From industry there is quite some interest in Stubby, more specifically. So there is a need: they want a secure channel from their stub resolver to the resolver. They really want to contain things.

JIM REID: It would be great if you could get that into an iPhone or Android app — great if we could get it there.

BENNO OVEREINDER: Well, Android — we know the people from Android, from Google, and from Apple; they are all interested in the DNS-over-TLS work, and we are discussing things with them and sharing ideas. But apart from the Android and iOS communities, there are also large companies from industry interested in DNS over TLS and Stubby. Indeed ‑‑ but DNS over HTTPS is something serious to consider.

SHANE KERR: Thank you, Benno.

So in some ways I am quite a bad Chair for the DNS Working Group because I find the conversation really interesting and I am reluctant to stop it, so we are a couple of minutes behind schedule, but I want to just go through and this is the last presentation and it's also going to be interesting, let's go.

SANDOCHE BALAKRICHENAN: I work with AFNIC. Why should DNS be the naming service for the IoT? For the current Internet, DNS is the naming service — by which I mean the lookup between, basically, a domain name and an IP address, as we have seen on the Internet. So, for the next 20 minutes, what I am trying to argue is that DNS could be the naming service for the IoT, and I will try to provide some examples to support my argument.

So, some terminology. My name is Sandoche; it did not need a set of rules to be given, and it need not be unique. Why? Because, for example, where I work in France we have a number of Matthieus. If there are three Matthieus in my office and I talk about one of them, the listener will understand from the context which one I mean, and if he doesn't know which of the three we are talking about, he will ask. So we can identify which Matthieu we are talking about. But machines are dumb: they need a unique identifier. That is why, on the Internet, we have identifiers which are based on naming conventions — a set of rules — and we need some facility to make sure these identifiers are unique.

Setting the stage for the Internet of Things, let's see what happens on the Internet. Start with the naming conventions: there is a set of rules. I will concentrate only on domain names, and how domain names should be constructed is specified by the IETF — there are RFCs for that. Next, let's look at the provisioning path, where we have ICANN and its rules, and then ccTLD registries like .fr (my company), Verisign, and so on; and out of this we have the different domains. This is a set-up designed by the DNS people to make sure the domain names are unique. Also, one of the advantages is that if I want a domain name under .fr, I don't need to talk to ICANN; I can talk to some reseller somewhere down the chain, so it's easy for end users to become part of the DNS database — that is why it has scaled for such a long time. Now let's go to the resolution part at the top. Here we have two applications: one is a web service and the other is mail. We have identifiers which follow the domain name conventions — a URL and a mail address — and each of them uses the same naming service, the DNS, for different applications. So that is what we have on the Internet.

Let's go to the Internet of Things. Look at the entity of interest: any object is a thing. I could be a thing, the mic could be a thing, whatever could be a thing. So how does this thing identify itself and send information across a network? For that we need carrier devices — that is the right-hand path here — and the carrier devices are bar codes, RFID, NFC, anything like that. Each has an identifier which is based on a set of rules, naming conventions, and they need a naming service.

Let's go to the application at the top. What is the application here? The farmer needs to know when the cow is ready for breeding. The cow has a ‑‑ 24-hour period, so the farmer needs to know in advance so that he can prepare for breeding. This is a real application in Japan. It is said that during the ovulation period the cattle are quite active and walk quite a lot, so that information is sent to the cloud, and we have a naming service which maps, let's say, the parameter and the farmer's mail ID, so the information is transmitted to the farmer. When we look at the resolution part, we have a naming service which is private to this company, so it need not be unique all over the globe; but the issue is that if tomorrow the farmer wants to use a new service or a new technology, he has to reconfigure his entire system. That is the issue here. So this is the example of a naming service under the scope of a company.

But now, if you want to go as large as possible, there are a number of companies and a number of alliances. In the IoT domain we have seen hundreds of them, grouped around applications or technologies and things like that. So, for example, the farmer who was using company X earlier can go from company X to Y; this gives him more room, but it is limited to the scope of the alliance. Let's see an example of an alliance. This is the consumer industry alliance called GS1 — this is where the Walmarts and the Carrefours are; they are part of these alliances. If you want to be part of this consumer industry, you go to them and ask for an identifier, something like what you do for domain names. The identifier is called an Electronic Product Code. It is hierarchical: if you are a company in France, your number starts with the country prefix, between 300 and 379; then you have a company code, and within that you can divide into a product ID and a serial number — something like what we have seen in the DNS.

So what happens in the consumer industry without the DNS? On the left side is the identifier, which can be a bar code or ‑‑ and they use a database called GEPIR. There are two applications: one is extended packaging — if you scan with your mobile phone, it gives you more information than what you see on the label of the product — and the other is track-and-trace, used by industries to see the complete lifecycle of the product. What are they using? A flat database called GEPIR: everyone who wants information there has to contact the GEPIR people and have that information added to the database — unlike the DNS.

We have seen two examples, one under the scope of a company, the other under the scope of alliances. Both of them are what we might call intranets of things rather than an Internet of Things. It's not like the Internet we have, where I register my domain name and can use whatever protocols are available and be part of any service without having to bother about it. Here we do have to bother, so what we need is an Internet of Things.

So let's see the same example of the consumer industry, this time using the DNS. With pressure from European governments, the industry started looking at open databases; they tried different technologies and finally decided to use something related to the DNS, called ONS, the Object Name Service. This is a standard under the GS1 organisation, and my company has been part of the alliance working on this standard.

How is it done? An EPC, an Electronic Product Code, is converted into a domain name, and after that anybody can use the DNS to find information about the product. But the sad thing is that, as of now, there are only one or two deployments — one managed by Verisign and another managed by Orange, I think — and it's not used by people; it is there in name only. What we understood from being part of these alliances is that the industries have been using their existing infrastructure for a long time, 30 or 40 years, and they don't want to move to a new technology. Or they are afraid when you say the DNS is open, and there are a lot of fallacies going around in these alliances, so they don't want to move to the DNS. That is the sad part.
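The EPC-to-domain-name conversion mentioned here can be sketched following the general ONS 1.0 scheme: the serial number is dropped and the remaining EPC fields are reversed under a fixed suffix. The suffix and exact field handling below are illustrative; the GS1 ONS standard has the normative rules:

```python
# Sketch of EPC-to-ONS-name conversion, following the general ONS 1.0
# scheme.  The suffix is illustrative; consult the GS1 ONS standard for
# the normative encoding.
ONS_SUFFIX = "onsepc.com"

def epc_to_ons_name(epc_urn: str) -> str:
    # e.g. "urn:epc:id:sgtin:0614141.112345.400"
    parts = epc_urn.split(":")
    if parts[:3] != ["urn", "epc", "id"]:
        raise ValueError("not an EPC identity URN")
    scheme = parts[3]                    # e.g. "sgtin"
    fields = parts[4].split(".")[:-1]    # drop the serial number
    fields.reverse()                     # least-specific field last
    return ".".join(fields + [scheme, "id", ONS_SUFFIX])
```

The resulting name is then queried for NAPTR records pointing at services that hold information about the product class.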

Now, you might think: these industries are well structured, they are already established, and it is very difficult for them to move to a new system. So we wanted to look elsewhere as well, and here we are working with LPWAN, low-power wide-area networks, where you use narrowband links to send information. You may have heard about a company called Sigfox in France; there are millions of subscribers, but everything from the device to the cloud is controlled by them, so no other company can use it — if you are with Sigfox you cannot move to any other. That is where LoRa comes into play: there is the LoRa Alliance, with about 500-plus companies, and this is where we are going to see an example. On the left side is the radio path, and on the right you have the IP path.

So the information from the device is sent to the gateway, which converts it and sends it on over the Internet. If you look at a detailed view of the network, let's say there are three devices, two of them on network A and one on network B. They broadcast their information, so all gateways in range receive it, whether they belong to network A or network B. Each gateway checks whether the message is for its network, and if it is, forwards it to the network server, that is, the operator. That is how it happens as of now. So, what is the issue? The issue is that if a device tries to join via a foreign network, or if it is roaming, you have a problem: you don't know which operator it belongs to. To solve this, they propose using the DNS: if you don't know who your operator is, you ask the DNS. And as of now there is a LoRaWAN specification which uses the DNS for this.
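The DNS lookup used for join and roaming can be sketched as below. This follows the general style of the LoRaWAN Backend Interfaces specification, where the JoinEUI (an EUI-64 identifying the join server) is encoded as reversed, dot-separated hex digits under a well-known suffix, much like `ip6.arpa`; the suffix used here is illustrative and the exact encoding should be checked against the spec.

```python
def joineui_to_dns_name(joineui: str,
                        suffix: str = "joineuis.lora-alliance.org") -> str:
    """Sketch: encode a LoRaWAN JoinEUI (EUI-64) for DNS resolution.

    The 16 hex digits are reversed and dot-separated (ip6.arpa style),
    then the well-known suffix is appended. A network server would
    query this name to locate the device's join server.
    """
    digits = joineui.lower().replace("-", "").replace(":", "")
    if len(digits) != 16:
        raise ValueError("JoinEUI must be an EUI-64 (16 hex digits)")
    return ".".join(reversed(digits)) + "." + suffix

print(joineui_to_dns_name("00-00-5E-10-00-00-00-01"))
# -> 1.0.0.0.0.0.0.0.0.1.e.5.0.0.0.0.joineuis.lora-alliance.org
```

A roaming network server that receives a join request from an unknown device can resolve such a name to discover the responsible operator, which is exactly the problem described above.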

Now, we have seen two examples, the ONS and the LoRa Alliance, where for the Internet of Things they plan to use the DNS. There are other proposals, like clean-slate Internet designs, or suggestions that everything should use IPv6 addresses as identifiers; there are lots of suggestions. But what we see is that there are some requirements a naming service must meet to be useful for everybody. One, it should be scalable, and the DNS has proven scalable; as of now there are around 2 billion names in the DNS. Another is that existing identifiers should keep working, and new identifiers that come along should be able to use the same naming service. In the DNS we can have flat identifiers as well as hierarchical ones. So if we look at all these different points, the one naming service that is feasible as of now for the Internet of Things could be the DNS.
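The point that the DNS can carry both flat and hierarchical identifiers can be shown with a minimal sketch. The zone `things.example` and both helper functions are hypothetical, purely for illustration: a flat identifier (a hash or serial) becomes a single label, while a hierarchical identifier maps naturally onto nested labels.

```python
def flat_to_dns(identifier: str, zone: str = "things.example") -> str:
    # A flat identifier (e.g. a hash or serial number) becomes one label.
    return identifier.lower() + "." + zone

def hierarchical_to_dns(path: list, zone: str = "things.example") -> str:
    # Hierarchical components map onto nested labels, most specific first.
    return ".".join(reversed([p.lower() for p in path])) + "." + zone

print(flat_to_dns("3f2a9c"))                           # 3f2a9c.things.example
print(hierarchical_to_dns(["acme", "sensors", "42"]))  # 42.sensors.acme.things.example
```

Either form resolves with an ordinary DNS query, which is why the speaker argues one naming service can serve both kinds of identifier systems.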

So, if you look at the different standards organisations working on object naming: domain names are under the IETF and use the DNS as the name service; the electronic product code, under the GS1 alliance, uses ONS, which uses the DNS; the object identifier uses ORS, the object identifier resolution system, which uses the DNS; and for the digital object identifier, at the initial level it is the DNS. So most of these standardisation organisations are using a variant of the DNS to resolve their objects. That is why we think that, as of now, the only feasible option for the Internet of Things is to use the DNS as a naming service.


SHANE KERR: Well, thank you very much. Speaking on behalf of the DNS community, I think we can all agree the DNS should be the name service for everything. Have you given this presentation in other venues?


SHANE KERR: What was the reception? I am curious.

SANDOCHE BALAKRICHENAN: It was on a conference call, and they were sceptical, because most of them were from the telecom industry.

SHANE KERR: Right. The reason I ask is because in my discussions with people about Internet of Things naming, it seems like many industries, telecom operators, telecom manufacturers and governments, are very eager to invent an alternative naming system with whatever properties they consider important: control, proprietary vendor lock-in, something new to get papers published. So I am not surprised.

AUDIENCE SPEAKER: There is a few people standing at the line.

JIM REID: DNS guy. Shane's pretty much hit it on the head there. There are a lot of competing interests here; there is almost a land grab going on for alternative naming schemes that could potentially be used for the new IoT landscape unfolding in front of us. A lot of this is going on behind closed doors in different industry fora, and although we would like to think the DNS should be the anchor for all this stuff, that is not necessarily going to be the case, except for whichever scheme gets its own way or gets critical mass behind it. For example, the DOA system and other schemes are being advocated by various IoT vendors too. Where we might think the DNS might well be the answer, I would be loath to see somebody else take that place. We have tried to do this before and it hasn't worked: we had ENUM, which was a hierarchical numbering scheme; we put it into the DNS and it died a death. There is a possibility we might see the same thing happen again with RFID tags or IMEI numbers, MAC identifiers and all that kind of stuff. We have to be aware that some of these other schemes may have advantages that cannot be represented in the DNS right now, but the requirements are unclear and fuzzy, and people are trying to reinvent the wheel because they think they can then get control over some name space or naming scheme and therefore sell lots and lots of identifiers and make money fast. We have to be aware of that, and I am not sure how we are going to tackle it as an industry in general, or how we can try to encourage people not to go down these alternative paths.

PETER KOCH: I am sorry, I can't disagree with Jim here. I think we shouldn't fall victim to the idea that this is a different Internet. The IoT is the Internet; there are just a lot more things on it. And because the DNS is the naming system for the Internet, it will also be the naming system for the Internet with these many more things. If there are new applications, new classes of objects attached to the Internet, and so on and so forth, there should be requirements, ideas phrased, before we could reasonably say that the DNS is not going to fit in there. So people should demonstrate what their actual needs are, beyond the fantasies behind the promotion of the DOA system, which are about everything, like content control, except naming. That said, the examples you presented are slightly different: you are taking existing or newly defined identifier systems, talking about the abstract notion of the mapping, because the DNS is doing the mapping function, and suggesting that all these number-based or identifier-based systems can be mapped into the DNS. That is also a claim that doesn't necessarily hold from the start, because there are some issues in there, and they might also be completely proprietary and operated somewhere else; I think Jim already alluded to that point. So my core message is: it's the same thing, don't let yourself be dragged away from being on the Internet. There is no need to rebuild this whole thing, although some of today's approaches already look like that. Thank you.

AUDIENCE SPEAKER: Yeltsin, I was a DNS guy, and of course when all you have is a hammer, everything looks like a nail. I have been doing a lot of IETF things lately, and everything is indeed a nail; unfortunately the nails are usually pointy end up, which is a bit of an issue. So I actually agree with Jim. However, I think what he said at the end there, that somebody might use this to make some proprietary system and a lot of money, is exactly a reason to start pushing this hard everywhere. So I do like it in that sense. When I saw your title: there are two ways you can use the DNS in the IoT, and in this particular example it is mostly as a registry.


AUDIENCE SPEAKER: So that has nothing to do with how things are deployed or used in real life, so I miss that connection in this particular example. So how are you going to ‑‑ how are they going to force people to actually keep this updated, keep this alive, especially if they use public name space for that? Did you have any thoughts about that?

SANDOCHE BALAKRICHENAN: Sorry, which of the examples that I provided ‑‑

AUDIENCE SPEAKER: So you transformed one of the registration ‑‑ one of the new ideas into ‑‑


AUDIENCE SPEAKER: Yes in public DNS in the company's zone.

SANDOCHE BALAKRICHENAN: Yeah. As of now, for example, if you look at GS1 and the LoRa Alliance, it's lora-alliance.org. It is under an existing domain name.

SPEAKER: We have ‑‑ he is now free of duties and obligations; it was always his dream to change this mess, to try to put all things in one basket. I just want to remind you, and maybe you can correct me: at the beginning of the 2000s, cz.nic tried to put RFID in the DNS ‑‑ what happened, you can estimate. Identifying logistics and so on. So it's not the first time; it is a comparable example of something that could now be put into the Internet of Things, but it didn't work. It was an interesting case we could go back to. Thank you.

SHANE KERR: Thank you again for this presentation.


And thank you everyone for staying late. Apologies for being a little over schedule, but I hope it was worth it and we will hopefully see you all at RIPE 76 in Marseilles.