DNS Working Group

26 May, 2016

At 4 p.m.:

JIM REID: Good afternoon everybody. This is the second session of the DNS Working Group. Just to remind everybody as before, when you do say anything at the mic, please state your name and affiliation, because the session is being webcast and the people following on the Internet will want to know who is saying what. We have got one or two minor little tweaks to the agenda, the running order; we are going to slip in an extra couple of minor items from Shane Kerr, one to do with the YETI project and a panel session. So the first item on the agenda is Victoria Risk, who is going to tell us all the wonderful stuff that is going to be in BIND 9.11.

VICKY RISK: So, I am Vicky Risk from ISC, my e‑mail is vicky at ‑‑ I have been coming to RIPE meetings for the past couple of years, and in the time I have been coming here we haven't given a BIND update, so I think it's about time.

So these are the things I am going to talk about: a quick update on what we have been doing the last two years, that is one slide; then the 9.11 features; then I will brag about our new performance lab and thank some of the folks in the community who have been helping us out.

2014 is when we released our last major BIND release; that was 9.10, at the beginning of 2014, and shortly after that, in the spring of 2014, is when we decided to cancel the BIND 10 project. After we discontinued the BIND 10 project we laid off a lot of staff, and so we did spend some of 2014 regrouping; this was quite a wrenching decision and a change in the organisation. Starting in the summer of 2014 we started to hear reports of the pseudo-random subdomain resolver DDoS problems that were causing resolvers to run out of resources, and so we spent a fair amount of our resources in 2014 and through 2015 working on different mitigations for that problem. Then in the summer of 2015, the American Fuzzy Lop (AFL) fuzz-testing tool acquired a persistent mode that made it useful for fuzzing DNS, and we started a project inside ISC to apply the AFL fuzz tool ourselves. We also started working on the features for 9.11 and we added substantially to our system tests. So here we are, halfway through 2016, and we are getting ready to release 9.11; we have a new performance lab which has helped us to make a few performance improvements, and we are also working on a sponsored project to develop the EDNS Client Subnet feature for the resolver.

Really the only purpose of this chart is so you can see it has been about two and a half years since our last major release; that is a regular cadence for us. We have been supporting two trains at once; once we come out with 9.11 we will be supporting three trains simultaneously. Also, to point out, if anybody here is running 9.8, it's time to move: that entire train is end of life.

This is a backdrop for talking about 9.11. I just want to mention that most of our resources are spent on maintenance. Just in 2015 we did four regular maintenance releases, 12 security patches, two experimental releases for the resolver DDoS mitigation feature, five releases of our supported preview version for subscribers, and we resolved 486 issues, not all bugs, but those of you who are developers will recognise that for a team of this size that is an incredible amount of maintenance.

These are the lovely mugs of the core BIND team. Many of you know Evan and Mark; Mukund moved over from ‑‑ we do have three new folks on the team in 2015. On the top right, that is Witold; he has been a valuable contributor right off the bat. He's the one that implemented our AFL fuzz testing. The bottom row is not really the core developers; this is more the support team, if I can characterise them that way. Stephen Morris on the bottom left is the director of software engineering and he is here sitting in the back; probably a lot of you know him. We have two QA engineers, we added one in 2015; those are in the middle. And on the far right is Ray, who is not technically a BIND developer, he is doing DNS research at ISC, but he has been very helpful to the team and in particular he developed the performance lab that I am going to be talking about.

So people always ask how we decide what to put in the software. The decision about what to put in 9.11 really was driven by the fact that we had spent two years putting a lot of our spare time into things for resolver operators, and I felt we hadn't done much lately for authoritative operators. Also, I was hearing a lot of provisioning feature requests from operators and also from the OpenStack project. These requests varied quite a bit; people were looking for different kinds of templates, there was a range of different feature requests, but there was clearly some kind of a problem. The other thing was that we were hearing a lot of complaints about provisioning performance. People would tell us they had an SLA that when a change was made in a zone, or when a new zone was added, the users would sit there refreshing their browser constantly until they saw it show up in the DNS, and they had to get it in there within a matter of minutes. Deleting a zone using rndc delzone can take five times as long as adding one, and some operators had told us they were batching up all their zone deletions for times of low traffic, and this also seemed unnecessary and embarrassing. Folks that have a very large implementation and might have tiers of servers sending out overlapping notifies also complained about congestion, just that the whole provisioning system was throwing out a lot of notifies, and generally they wanted us to make it less chatty.

ISC has a very small consulting business, you may not be aware of this; we really have two kinds of people that call us and ask for consulting. The first are people who have decided to implement DNSSEC and don't know how to do it, and we send somebody out there who effectively does it for them and teaches them how to maintain it. The second is when the script that has been running for the past six or seven or eight years provisioning their slaves stops working, and the guy who wrote it has left the company, and a lot of bad things can happen; we have had customers who have had their internal zones start appearing on the Internet, and other negative consequences. So I became concerned that by forcing users to have these scripts, which a lot of them didn't want to maintain or weren't able to, we were causing a problem.

So, what were the requirements: We wanted a standardised provisioning mechanism that didn't require users to maintain scripts in order to update the slaves. We also wanted faster zone deletion from the new zone file. We wanted faster updates in general, and we wanted to limit the number of notifies, or at least provide a configurable option for that. Also, I was frankly a little bit jealous of the PowerDNS guys for having a nice database option. Obviously, I am aware that many ISPs have got a front end into their database; it would be neat to be able to provision straight from the database, and that is what the OpenStack project was using. And that is a feature that we had in BIND 10 that people were sorry to see go when the BIND 10 project was cancelled. What we came up with is what we are calling catalogue zones, based on a proposal Paul Vixie made back in approximately 2004 that he called metazones: effectively, we just create a new zone on the master that contains a list of all the zones, the catalogue, for the slaves. And updates to this zone are propagated to the slaves the same way that changes to zone contents are propagated.

So today, if you are adding a new zone, you add the zone to the master and then you have to add the zone to each one of the slaves in turn. That is what these scripts would do. If you have a lot of slaves, obviously this can go on for a while. With catalogue zones, you add the zone to the catalogue zone on the master and then you are done. The same thing works for deletion: you delete it from the catalogue zone on the master and it is deleted from all of the slaves.

As you can perhaps see, on the master it is just a regular zone, nothing special about it. On the slave, there is a new special thing called a catalogue zone and you just configure which zone on the master you want that to point to. This works with views, it works with DNSSEC, and it works even if you have a fairly large multi-tiered system.
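A rough sketch of what that configuration might look like follows. The zone name, address and file paths here are illustrative, and the exact syntax may differ in the shipped 9.11 alpha, so check the release documentation:

```
# Master: the catalogue is an ordinary zone, transferred like any other.
zone "catalog.example" {
    type master;
    file "catalog.example.db";
    allow-transfer { slaves; };
};

# Slave: fetch the catalogue itself, and tell named to treat it as a
# catalogue of member zones to add and remove automatically.
options {
    catalog-zones {
        zone "catalog.example" default-masters { 192.0.2.1; };
    };
};
zone "catalog.example" {
    type slave;
    masters { 192.0.2.1; };
    file "catalog.example.db";
};
```

Adding or removing a member record in catalog.example on the master then propagates to every slave via normal zone transfer, which is the mechanism described above.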

You can't put RPZ zones in catalogue zones, and catalogue zones can't be nested inside other catalogue zones. We are supporting the basic zone options at the moment, but just from what we have seen so far in testing it looks like this is going to be a very useful feature. We haven't yet released it in the 9.11 alpha that is going on now; it's going to be out in the third alpha in about a week, and I hope you will give it a try. I am sure there will be requests for enhancements and changes as we go forward, but I think this could make a big difference. We did publish this as an IETF draft; the current status is expired, but we have not abandoned it at all, and we plan on updating it before the next IETF. Some of our users have been saying they would like to see some of the other DNS vendors implement this, and I hope that happens.

So I am going to talk now about the new database back end. When I learned that BIND had so many different provisioning mechanisms I remember wondering why we needed so many, until I found out that some of them were not really usable by a large operator. DLZ in particular basically retrieves the zones from a database where they are stored in text format, and they have to be compiled into wire format before they are served; this is slow, and if you have a lot of queries it's too slow. With a huge contribution from the Red Hat team, who had implemented ‑‑ for their FreeIPA project, we have a contribution of a really nice, fast database back end API. This stores the zones in the same compiled format that BIND uses, so it is just as fast as native files; as far as you can tell there is no difference. It doesn't have the limitation of DLZ: it can serve signed zones. At the moment the only back end is the one from Red Hat, the ‑‑ they are updating this summer, but I am very hopeful: there are some great possibilities for other databases out there, and I am hopeful that someone with expertise in one of those would like to work with us on an alternative database back end. It should be obvious that if you have your zones in a database you can make use of standard database tools, multi-master database replication, if you want to distribute your zones that way.
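For illustration only, loading a DynDB driver in named.conf looks roughly like this; the library path and the driver parameters below are hypothetical, and the only shipping driver today is Red Hat's LDAP back end, so treat this as a sketch of the shape of the configuration rather than working syntax:

```
# Hypothetical DynDB configuration: named loads the driver as a shared
# library and passes it a driver-specific parameter block; the driver
# then populates zones directly into BIND's in-memory database.
dyndb "example-ldap" "/usr/lib64/bind/ldap.so" {
    uri "ldap://ldap.example.com";
    base "cn=dns, dc=example,dc=com";
};
```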

Stephen was telling me recently, when we were talking about rndc, that if you search for rndc it comes up as a security vulnerability. It is our remote management API, and along with the folks complaining about provisioning performance I also heard from people who were using rndc as the machine interface for their programmes and who found some limitations in it as an API. In particular, when they would issue provisioning commands and they failed, sometimes their application could not tell why it failed: was it a connection problem, were they trying to modify a zone that didn't exist? So we have added some modifications to rndc to make it a better, more complete API for provisioning; there are a few more commands than the ones I have listed here, and we have also added a read-only mode so that you can allow people access to rndc to view the zones without giving them permission to change them.
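As a sketch of the kind of provisioning commands being discussed (the zone name is illustrative, and the exact set of commands and flags in 9.11 may differ from this, so check the shipped documentation):

```
# Add a zone at runtime (the server must have "allow-new-zones yes;")
rndc addzone example.com '{ type master; file "example.com.db"; };'

# Show the configuration of a zone that was added at runtime
rndc showzone example.com

# Delete it again; -clean also removes the zone's files
rndc delzone -clean example.com

# The read-only mode mentioned above is set on the control channel,
# along these lines in named.conf:
#   controls { inet 127.0.0.1 allow { localhost; }
#              keys { "ro-key"; } read-only yes; };
```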

There is a long list of new features in 9.11. I am not going to talk about all of these. The box on the upper left, those are the provisioning features that we already talked about. The next thing, in a box on the right, are some features to enable DNSSEC. In 2015 we wrote a new DNSSEC guide; it turned into practically a book, and we came up with a short quick-start guide, but we really felt that automation was the key. So, we have added a DNSSEC key manager utility. That is a Python script; you set it up to run, probably daily, in a cron job or something like that. It reads a file that contains all of your policy definitions, and then it checks the current zone configuration and makes any changes that are necessary to ensure that your zones conform to your policy. So, for instance, if it checks the policy and sees that you need some new keys created in order to prepare for an upcoming key rollover, it will create those new keys. If the policy changes, all the applicable keys are corrected. If you have an elaborate set-up you can define policy classes, so you can have some zones with higher security, and you can set algorithm policies, such as the default key size for a given algorithm. So, for instance, if you wanted to change the key size for a given algorithm, you could do it once in the policy manager and this script would take care of updating all of your keys. I wanted to specifically thank Sebastian Castro, from .nz, who came to the hackathon at the IETF and worked with Evan on this. So if people have any questions about this they can ask Sebastian.
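To illustrate the idea, a policy file for such a key manager might look something like the sketch below. The directive names and values are illustrative guesses at the shape of such a policy, not the utility's documented syntax, so consult the 9.11 documentation before use:

```
# Hypothetical DNSSEC policy definition read by the key manager.
policy standard {
    algorithm rsasha256;        # default algorithm for new keys
    key-size zsk 1024;          # per-algorithm default key sizes
    key-size ksk 2048;
    roll-period zsk 6mo;        # schedule ZSK rollovers twice a year
    pre-publish zsk 4w;         # publish new ZSKs four weeks early
};

zone example.com {
    policy standard;            # zones reference a policy class
};
```

Changing, say, the ZSK key size once in the policy would then cause the daily cron run to generate and schedule replacement keys for every zone using that policy, which is the behaviour described above.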

Following some pointed suggestions from our friends at Comcast, we also added an IPv6 bias. We made two changes. The first was a change to the glue, a change we made back in 9.9.9 and 9.10.4 and subsequent releases, because we considered it really a bug fix: if you are querying a BIND server over an IPv4 connection we will prefer an A record in the glue, but over IPv6 we will prefer a AAAA record. We hadn't been doing this previously and we considered that it was probably a bug. The new change that we are introducing now is a positive bias in our smoothed round-trip time algorithm for selecting which server we send you to. We have added a 50 millisecond advantage, or head start, for the IPv6 address, assuming that both are available. This is of course configurable, and you can configure it down to nothing if you like. I know this is a fairly high value; we debated what to set the value at, but if you want to move people to IPv6 you need a big push.
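As an illustration of the idea, the head start can be modelled as subtracting the bias from an IPv6 address's smoothed RTT before comparing candidates. This is a simplified sketch, not ISC's actual implementation; the class and function names are made up:

```python
from dataclasses import dataclass

# Sketch of biased smoothed-RTT server selection: IPv6 addresses get a
# configurable head start (the default described in the talk is 50 ms),
# so that a v6 address up to 50 ms slower than a v4 one still wins.
V6_BIAS_MS = 50.0

@dataclass
class ServerAddress:
    address: str
    is_ipv6: bool
    srtt_ms: float  # smoothed round-trip time estimate

def effective_srtt(s: ServerAddress, bias_ms: float = V6_BIAS_MS) -> float:
    # Subtracting the bias makes an IPv6 address look bias_ms faster.
    return s.srtt_ms - bias_ms if s.is_ipv6 else s.srtt_ms

def pick_server(candidates: list[ServerAddress]) -> ServerAddress:
    return min(candidates, key=effective_srtt)

a = ServerAddress("192.0.2.1", False, 20.0)
aaaa = ServerAddress("2001:db8::1", True, 60.0)
print(pick_server([a, aaaa]).address)  # 2001:db8::1
```

Configuring the bias "down to nothing", as Vicky mentions, corresponds to passing a bias of zero, at which point the raw SRTT comparison wins and the faster v4 address would be chosen.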

As I mentioned, there are a lot of other features that we have added. There is dnstap logging; I think most people are familiar with dnstap. In our initial alpha we didn't have any way to rotate the logs if they were in native BIND format; we will be fixing that by 9.11. We have a few performance improvements, fairly specific to what your configuration is. We have an authoritative-side implementation of EDNS Client Subnet. This has been in our open Git since last fall; we haven't had a lot of feedback on it, and we would love to get some. Many of you are familiar with the work Mark Andrews has been doing on EDNS compatibility; the tools that he has built are based on some extensions to dig in 9.11, new features for querying for EDNS options and extensions. We have had the source identity token in BIND since 9.10; we have updated it to the standard, and now if you have a cookie you will not be subject to response rate limiting. We also added a feature based on a request from a few people that teach BIND to new users: they said frequently people start BIND, and then start BIND again, and they have trouble managing it because they don't realise they have three or four servers running.

I am getting my flag, so I have got to hurry up. This is the schedule. We just released the second alpha yesterday. We will be having a third alpha on the 1st of June; that should be feature complete.

This has nothing to do particularly with 9.11, but I wanted to give a pitch for a new iOS app, a port of dig to the Apple operating system, that Ray Bellis did; that's coming up in the Apple store. He is soliciting beta testers right now; if you e-mail him, he will be happy to add you to the beta.

I will give a couple of quick pictures of our performance lab. This is something we have in-house as a development tool to schedule performance tests. You can select authoritative or recursive mode, choose different compile and command-line options, and select different zone configurations.
We run it continuously. This is showing a week; each of those green lines is at least 30 runs of the test, and you can see this one particular test runs quite a few times a day, which allows us to look for statistical significance, because there is a fair amount of variation. These are the tests that we run on a regular, ongoing basis; I took this picture this morning. Every once in a while we do see a significant change. This is a performance improvement we saw based on a change that we made. And so, the reason that we have this, the real opportunity here, is that we can monitor for regressions. When we came out with 9.10 we found we had a performance regression that we didn't know about until people told us; from now on we should always be able to monitor for that. We can try individual bug fixes and look at config options and see how those impact the performance.

I just want to mention that we have made an increased effort to review and accept patches. We have gotten a lot of help from Tony at Red Hat, Sebastian helped us, we benefited greatly from the AFL project, Robert Edmonds has contributed the dnstap utility to the community, and DynDB is something we have to thank Red Hat for. Finally, financial contributions: as Geoff mentioned in the Open Source Working Group, we have only about 100 financial supporters around the world, and looking at the attendee list for this RIPE meeting, at least a fifth of them are actually here at this meeting, so I just want to thank you folks for supporting us.

That's it.

JIM REID: Thank you. Are there any questions?

SHANE KERR: There is not actually a requirement that I ‑‑ Shane Kerr from Beijing Internet Institute. A question about the DynDB stuff: does it also support incremental zone transfers and dynamic DNS and things like that?

VICKY RISK: It should. I have to tell you I am not sure that we have tried it yet; I did ask that, and I am told it should work. If you are going to use a database back end, presumably you would be using it because you wanted to use the database for propagating the changes.

SHANE KERR: If you had another organisation providing secondary service, for example. And related to that kind of thing, the catalogue zones: they are not nestable, but can you have multiple catalogue zones?

VICKY RISK: Yes, you can have multiple ones in views, you can have ‑‑

SHANE KERR: I would never work with views.

ANAND BUDDHDEV: I have one comment. I notice that 9.11 has support for outputting RSSAC002 stats. There is an effort underway to review the document for RSSAC002 stats, so some of these stats may change in the near future, and if you are not already aware of this effort you may want to follow it to make sure that BIND is compliant with the new version.

VICKY RISK: We can talk off‑line. I put that in there just for you because I know there is a limited audience for that. Thank you.

AUDIENCE SPEAKER: Sara Dickinson. A quick question on the privacy aspect. Do you happen to know if the implementation of EDNS Client Subnet supports, or recognises, the option that a client can set a source prefix length of 0 in the request, so that its subnet isn't exposed upstream?

VICKY RISK: You are talking about, this would be relevant to, the resolver-side implementation, which we are just starting to work on. That isn't done yet. I didn't know that any clients did set that; it would be great if they did, and certainly I would support us putting that in. What I am talking about here is just the authoritative-side implementation.

SARA DICKINSON: What release will the resolver implementation be in?

VICKY RISK: That is actually some sponsored development; it's a big project, as you very well know, and it is not going to be ready in time for 9.11, so, assuming all goes well, maybe 9.12. We are, as you know, concerned about the performance impacts on BIND, and in any case it's too big a feature to drop in at this point.

STEPHEN MORRIS: The client sending a 0, that is one of the requirements, amazingly. And the resolver side is much too big to fit in 9.11, much too late now, so the earliest public release will be 9.12.

JIM REID: Any more questions? OK. Thank you very much, Vicky.

And having asked a question about the DNS privacy aspects, Sara Dickinson is next up, to talk about a request for public resolvers to support DNS privacy options.

SARA DICKINSON: Thank you. So I am here today to talk about the DNS privacy project, and this talk was designed to coordinate with a thread that you have probably seen started on the Working Group mailing list, so we just wanted to bring the issue to the room here today, in a timely manner with that thread. Before I start I just want to say that the overall DNS privacy project has been the result of the effort of many, many people, too many to name. I did just want to call out Allison Mankin, who was instrumental in getting this particular initiative about research deployments of a DNS privacy service out. Unfortunately she couldn't be here to present herself today, so I will aspire to channel Allison as I talk.

This is just a quick background slide on all the activities there have been around privacy, a lot of it mentioned already. DPRIVE has three RFCs, and there is still ongoing work on other drafts there. Very importantly, the DNS over TLS specification is now an RFC. So we have standards, and there is a lot of work in progress on that side of things. There is also a lot of work in progress on implementing the array of features that are needed in name servers to support this in practice. A huge amount of work has gone into getdns; we are currently in the process of daemonising that, so you will be able to use it on your laptop to connect to a TLS-enabled server, and that should be available shortly. There is also a lot of work on unbound, which needs a few performance features to make it production-ready. I've also had some interesting discussions about how big a step it would be to use various load balancers, in terms of putting them in front of the name server to provide the service. But as of today, we don't have an operational service that can offer a DNS privacy server.
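As background to what these implementations have to do on the wire: DNS over TLS (RFC 7858) runs on port 853 and reuses the standard DNS-over-TCP framing, a two-octet length prefix before each message. A minimal sketch of building such a framed query follows; the function names are my own, and this stops short of the actual TLS connection:

```python
import struct

# Sketch: frame a DNS query for transport over TLS (RFC 7858).
# DNS over TLS reuses DNS-over-TCP framing: a two-octet message
# length in network byte order, then the DNS message itself.

def encode_qname(name: str) -> bytes:
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # root label terminates the name

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    # Header: id, flags (RD set), 1 question, 0 answer/authority/additional
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack("!HH", qtype, 1)  # class IN
    return header + question

def frame_for_tls(message: bytes) -> bytes:
    return struct.pack("!H", len(message)) + message

query = build_query("example.com")
framed = frame_for_tls(query)
# The framed bytes would then be written to a TLS-wrapped TCP socket
# connected to the resolver on port 853.
print(len(framed) == len(query) + 2)  # True
```

Keeping that TLS connection open and pipelining many framed queries over it is exactly where the performance work in unbound and the load-balancer discussions above come in.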

So, there have been a whole range of discussions with various organisations, and in particular this week has been really, really fruitful in bringing people together and actually talking about the realities of doing this, so I think it's been a great forum for that. We have talked about providing all sorts of services; I particularly want to call out OARC, who have generously agreed to provide a small-scale research TLS-enabled service, and I will be talking to Gerry about that. We are also talking about the whole spectrum of how this can be deployed, from simply putting a TLS server up, to an end-to-end service that would do fully authenticated, QNAME-minimised resolution from resolver to authoritative, and looking at the impact of that and what it means to run it.

So what we are doing is talking about actually running privacy services, but at this stage I want to be very clear about what we are requesting from the community, and that is that we are looking for experimental research offers, to learn what it means to run this in practice. So we are talking about pilots with very, very clear fixed end dates, not services that anybody should expect to depend on for any length of time, and these would have very fixed goals and limits around them. What we would like to get from this is, first of all, providing a service to the community and seeing how big that community is, gaining the operational experience of doing this, but also having platforms to research other privacy mechanisms alongside that.

So these questions are out on the mailing list already, but we thought it would be useful to bring them to the room to get a sense of the feeling on them. The two questions that we are looking at are: are there any RIPE members who would be interested in also running a privacy service, because, with a few organisations having expressed interest, we now need to think about how we coordinate this; and, second, we have had exploratory discussions about the RIPE NCC themselves running this in a very short fixed-term pilot, but obviously that would require member support. So I think this is a reasonable time to turn it over and see if there are any comments today.

JIM REID: Any comments?

AUDIENCE SPEAKER: Lars from Netnod. For the RIPE NCC part of it, I would be happy to see the RIPE NCC participating in any experiments for this. As for an ongoing service, I don't see this as being within the RIPE NCC's remit; this I would prefer to have handled by other people. We always try to stack things on top of the RIPE NCC to do more and more; actually, I think we should go the other way and do less, until they just hand out IP addresses.

The next thing is a plea: would you please implement this first over IPv6, and when you have it working over IPv6, then back-port to IPv4 if necessary. This is something I try to put into everyone's mind: when we develop things nowadays we should develop them for IPv6 first and back-port. That is the mindset we should have. That is just a small plea.

SARA DICKINSON: Heard and noted, thank you.

AUDIENCE SPEAKER: Thomas. I run a DNS service called UncensoredDNS, which I gave a talk about at RIPE 65. I have been experimenting with DNSCrypt and various things, and I have ended up using open ‑‑ I would absolutely love to be the open resolver which tests this out. I am already speaking to Ralph from the unbound project about using unbound for it, and we have a few things we need to settle about amplification problems and such, because I have some patches for BIND which prevent amplification and I don't have this for unbound. But provided that we get that sorted, it would be ideal, because the service is centred around privacy and censorship and so on.

SARA DICKINSON: Yes, I think there are a few implementation details that we need to sort out, and at the moment the implementation coverage in the various name servers is a bit patchy: some have features you need, but not all. I would like to see this as a driver for getting these features across the name servers, and for using a variety of name servers to run some of these different trials.

ONDREJ SURY: What about, instead of pushing this to the RIPE NCC, we make this a collaborative effort and get a /24 for IPv4 and a /whatever for IPv6, and each trusted organisation would run its own DNS over TLS resolver at its own premises and announce those prefixes? That way we wouldn't burden the RIPE NCC with running another service, and, for example, cz.nic would be happy to run such a service in shared PI space.

SARA DICKINSON: Perfect, and Kaveh might have some comments on that.

KAVEH: RIPE NCC. Yes, so I was talking with Allison and she came up with the idea. Back then, what I told her was that we would love to discuss that with the community, and if the community wants it, we would fully support the implementation. But from my point of view there are two important factors in running such an experiment for us. I want to make sure that at the end of the experiment we have full operational documentation on how to operate such a service, so that, for example, if Deutsche Telekom or Comcast or whoever wants to run a resolver with privacy extensions, they can do that. That is one of the goals, and I am sure there are a lot of things to consider, including running it in anycast. So that is one of the outcomes we would want if the RIPE NCC runs it. The second one is, basically, if we consider running such a service, because I saw the DNS Working Group is also supporting the idea and it doesn't matter where it's being hosted, the second requirement for us is to make sure that the whole privacy shebang at the end will be covered: not only TLS but QNAME minimisation and other things, we would love to see them as well, and end-to-end, because, for example, we could also study the effects of this on the root and everything. So I think if the RIPE NCC wants to get involved, it should definitely be a close-ended project with proper staging, and we want it to basically produce requirements and real operational feedback, and to make sure that the whole of privacy is covered.

SARA DICKINSON: In some of the discussions we have had, there are lots of elements to this, and what we could do is have a phased approach: look at the different aspects, learn some lessons, and move on and expand the project as we go along. I think that would work well.

KAVEH: I also think the suggestion from cz.nic is possible. With different participants, we can arrange all of that; there are policies for experimental address ranges, so everything is in place.


GEOFF HUSTON: APNIC. This is more of an exploratory question, and maybe it's IETF business, but I am kind of interested in the distinction between DNS over TLS heading towards recursive servers that query in the open, versus what you are working on in the getdns project, where you are dragging it back towards the users and eliminating the recursive from the entire picture. And I kind of wonder, in the long or even the medium-term future of the DNS and privacy, whether public recursives are a part of the picture or kind of in the way. From a privacy argument it's sort of: who am I sharing my fate and my secrets with? I can use Google's resolver and it's a secret between me and Google, which I am sure they both appreciate, and so do I. Could you comment on that kind of tension between the intermediary versus using exactly the same technology and piloting bringing it back to me as a host?

SARA DICKINSON: One of the things we want to explore with this is how this changes the role of, and the requirements on, the operators of recursives, and there are potentially lots of legal implications there, potentially different per country. So there is all that. Also, getdns will work in both stub and recursive mode, and one of the target use cases for recursive mode is really DNSSEC roadblock avoidance. So if we have services that let you use TLS from the stub to the resolver, and you trust your resolver, you can do that. I think we are on a long road here from the starting point, which is that everything is clear text in DNS today, to maybe we should all be running our own DNS ‑‑ talking far future, and we are not going to get there overnight. But I guess we are taking baby steps, in a way just discovering which steps along that road we want to take.

GEOFF HUSTON: Considering that the DNS is the most wildly abused system for surveillance, censorship and almost every other thing that folks see as a cheap and easy way to implement such policies, you kind of wonder if baby steps are enough; that is the thought in my head.

SARA DICKINSON: The IETF is involved; how fast do you want to go?

GEOFF HUSTON: That was to you.

SHANE KERR: I was happily enjoying the conversation, and Geoff opened this can of worms, and here we all are. Because I think, ultimately, if you start down this path, and I don't think it's a bad one to start down, there are privacy advantages to resolvers as well, because the queries don't go to the authoritative servers, so, as you already imply, there is a tension there. I mean, I think ultimately for maximum privacy you need to look at the balance: you probably don't want to use a single resolver, and you don't want to use 1,000 anycast resolvers, so it is going to have to be somewhere in the middle, I think. There is an unanswered research question as to what the right balance is between having one resolver or a set of resolvers, and I think it probably sits somewhere in between where we are today, where you have no privacy, and Tor. And that is it, really.

JIM REID: We are running over time here; I am going to close after these four people here.

AUDIENCE SPEAKER: Geoff asked is this an IETF thing, and we discussed this very question in the DPRIVE Working Group. So, basically, there are two big drivers away from recursive-to-authoritative, or even stub straight to authoritative. The biggest one is that, in order to do effective crypto, the authoritative servers would have to promiscuously allow its use, exposing them to CPU denial-of-service attacks; for those of you who follow the DNSCurve stuff, Dan is great at crypto but completely ignores the fact that he opened up all the root servers to a trivial CPU denial-of-service attack. That is why I believe we have baby steps. The other reason is that we knew that, in fact, today people do have a relationship somewhat with a recursive; why break that by jumping immediately away, when we knew that those would in fact be the ones that could actually handle the CPU denial-of-service attacks, because they already have ACLs, which the authoritatives don't. If you want to go back and look in the archives, it was discussed for DPRIVE and we cut it off at stub-to-resolver.

Tim: DPRIVE co-chair, filling in for Warren. The whole goal was basically the baby steps thing. Our goal is to go all the way through to the authoritative, but doing recursive-to-authoritative involves not just technology; those ICANN people, the root server operators, basically have to get involved, and that is much more of a process than a technical thing. There is a view that TCP just won't work at scale for DNS, which I personally feel is incorrect, and I think a lot of what we are doing is trying to prove that we can scale TCP for DNS at very high rates. That is why I think this is a great idea; I have been with Alison on this all along.

ONDREJ SURY: Also, to answer Geoff: if you go full stub mode and not the baby steps, you arrive at the hotel and you don't have Internet, because the hotel networks are broken beyond repair; whereas the resolver, the one you reach over TLS, might help you overcome those horrible hotel networks.

DUANE WESSELS: From Verisign. So I think this /24 idea is interesting, but, as you know, we defined strict and opportunistic encryption, and it seems like strict would be really hard to do if you have lots of different operators involved. Has getdns implemented the strict authentication stuff yet?

SARA DICKINSON: Yes, it fully implements the DNS-over-TLS mechanism, including SPKI pins, and experimental support for host-name-based authentication as well, and there is a flag that you just flip to do strict-or-die, so it does that.
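For context on the strict profile Sara describes: RFC 7858 strict mode can authenticate the resolver with an SPKI pin, the base64 of the SHA-256 digest of the server's DER-encoded SubjectPublicKeyInfo, as in RFC 7469. A minimal illustrative sketch of the pin check; the key bytes below are invented and stand in for a real resolver key:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # RFC 7469-style pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo)).
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def strict_ok(configured_pins: set, spki_der: bytes) -> bool:
    # Strict ("or die") mode: refuse the connection unless a pin matches.
    return spki_pin(spki_der) in configured_pins

# Invented bytes standing in for a resolver's real public key:
fake_spki = b"0\x82\x01\x22-example-key-bytes"
pin = spki_pin(fake_spki)
print(strict_ok({pin}, fake_spki))   # matching key: connection proceeds
print(strict_ok({pin}, b"other"))    # mismatch: strict mode fails closed
```

Opportunistic mode would instead fall back to cleartext on a mismatch; strict mode is what makes the "lots of different operators" case hard, since every operator's key must be pinned.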

JIM REID: Thank you very much, Sara.

Next up, Shane wanted to say a quick word about a similar thing to what ‑‑ the Yeti, if you make it from the ‑‑

Kaveh: To have an action item on the RIPE NCC, if everybody agrees: we will come up with a very lightweight short plan based on what I heard and send it to the list. I won't fill in who will coordinate or operate, I will just put in the organisation, coordinating setting up, getting addresses, things like that, and if everybody agrees with that plan we can find out who is willing to participate and how we can move forward.

JIM REID: We have an action item there, then, for Kaveh of the NCC.

SHANE KERR: Very quickly. I sent a message to the list, I believe yesterday, maybe the day before, mentioning that we have been involved with the Yeti DNS project and we would very much like the RIPE NCC to operate a Yeti root server for us, and possibly a resolver if that makes sense. There has been some discussion on the list; I don't know how much more discussion we need here. I have talked with Kaveh off-line and he is very supportive. It also is a limited-term project, I don't think it involves very many resource requirements, and basically for us as researchers it would be very helpful, so that is it.

JIM REID: Thank you. One thing I would ask: you say this has got a clear end-date. Could we please have that published? One of my concerns about some of these initiatives is that they seem to go on and on and don't have a definite end point.

SHANE KERR: Sure. This was a three‑year project and we are on year two.

Kaveh: Thank you. I mostly support this; there are a few questions on the list and we will continue, and I will send the same comments to the list so we have proper documentation, but generally we think it's an interesting research project, with a few questions we want answered. We have one concern, which we openly shared with everyone who asked and which I will post to the list: Yeti is running a parallel root, which is fine, but because the RIPE NCC is a root operator it might matter if something goes wrong in that parallel root. We have a very basic promise from the Yeti project that they will publish IANA's zone file, but that is only a promise by people and by a project; there is no technical safeguard there. I just want to point out that if, for whatever reason, Yeti wants to go a separate way from the IANA root zone file, then it might cause issues, and the RIPE NCC, especially as a root operator as well, will be in a very strange situation. So that is the disclaimer. It's not a show-stopper at all; I want to make sure everybody understands that we are taking that risk, but I don't see it as a show-stopper, and that is for you to decide. From my knowledge, running this is really not that resource-intensive, so we can run it. And finally, on a positive note: a lot of this stuff on the Internet is built on trust, and generally we as the RIPE NCC, and I personally, have a lot of trust in Shane, so since it came from Shane, and many of us know each other personally, that gives me additional support for the project. That is it.

JIM REID: Thank you.

SHANE KERR: Just very quickly, we already have one of the initial coordinators of the project, which is the WIDE project in Japan, who run the M-root server; they probably have the same concern.

JIM REID: Next item on the agenda ‑‑ because we have had an interesting discussion we are way behind on time. But if the panel could assemble, we will have this discussion on key algorithm agility, and this is all down to Ondrej.

ONDREJ SURY: I will start by inviting all our panelists up here. I am from cz.nic and we have this little thing running with the Internet Society that started at the ICANN meeting in Marrakech, where we started to talk about DNSSEC flexibility and how to introduce new algorithms, and for this session I thought we would also focus on how to make any changes to the DNS protocol as it is deployed in the world. So, we already discussed how long it takes to deploy a new algorithm, and it comes to ten years, and we now have Ed25519 as a draft in the IETF, so hopefully it will not take another ten years to get deployed. This was discussed in Marrakech, and we had a wonderful panel discussion in Buenos Aires where we included DNS vendors and talked about the deployment cycles. Here we have some DNS operators, and people who help DNS operators, so I would like to ask our panelists to say something about who they are, what DNS set-up they run or how they are related to DNS, and what their deployment cycles for new DNS software or algorithms are, and then we will continue from that.
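As a reference point for the algorithms being discussed: DNSSEC algorithms are identified by small integers from the IANA registry, and "deploying Ed25519" means software accepting number 15 in DNSKEY and RRSIG records. A small illustrative lookup; the legacy/current split below is editorial shorthand, not an official IANA classification:

```python
# DNSSEC algorithm numbers (a selection from the IANA DNSKEY registry).
DNSKEY_ALGORITHMS = {
    5:  "RSASHA1",
    8:  "RSASHA256",
    10: "RSASHA512",
    13: "ECDSAP256SHA256",
    14: "ECDSAP384SHA384",
    15: "ED25519",   # the draft discussed on the panel
    16: "ED448",
}

# Editorial shorthand only, not an official status:
LEGACY = {5}

def describe(alg: int) -> str:
    name = DNSKEY_ALGORITHMS.get(alg, "unassigned/other")
    status = "legacy" if alg in LEGACY else "current"
    return f"algorithm {alg} = {name} ({status})"

print(describe(15))  # algorithm 15 = ED25519 (current)
print(describe(5))   # algorithm 5 = RSASHA1 (legacy)
```

The ten-year figure in the discussion refers to how long it took earlier numbers, such as the ECDSA ones, to move from specification to broad validator support.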

LARS-JOHAN LIMAN: I work for a company called Netnod; we are sponsoring the dinner tonight, so please be welcome and come and eat our food. I have worked with DNS for more than 25 years by now and have been running a root server for the better part of those 25 years. I have also been involved in the IETF for a good many years; I was the first Chair of this DNS Working Group when it was first formed. One of the branches of Netnod is to provide DNS service: the root is the primary service, and then we have TLDs and other high-level zones. I am a senior systems specialist there, one of the greybeards at Netnod. Thank you.

MARCO D'ITRI: I work for Seeweb; we are an Italian Cloud infrastructure and hosting provider, and we host about 200,000 zones for our customers. Should I talk now about the platform or the details or what? OK, I will be quick. We use Anycast resolvers, both BIND and Unbound, with some interest in trying Knot in the future as well. Our main authoritative platform uses BIND, and we also have a second authoritative platform that uses PowerDNS. The first platform is also a slave for hidden masters provided by Plesk, because it is quite important and there is interesting integration for features like automatically publishing the DKIM keys and this kind of stuff; and there is also a different DNS panel for customers that uses PowerDNS with a MySQL database back end, which works as a hidden master. We use rbldnsd as well, because we use it internally for our own mail operations, and we also publish some DNSBLs.

DAVE KNIGHT: I currently do DNS at Dyn. Previously I have worked at several root operators, a gTLD registry and, once upon a time, at the RIPE NCC. These days at Dyn I am involved in building our edge DNS platforms, and we have quite a different range of customers, so we operate several different platforms. Our main platform serves the enterprise and big web property market, and that is implemented in our private fork of BIND 9, where we implement the DNS tricks. But we also have products that we offer to TLDs, and that stuff runs off-the-shelf open source: BIND, with the option of NSD, Yadifa or Knot DNS, though currently no one has asked for anything but BIND 9. In addition we have other things for bulk web hosters, where we run PowerDNS and PostgreSQL, and we have an Unbound-based public resolver service. So yes, we have quite a range of different things, and each one of those has its different pressures: for example, on the web property one, people are interested in innovation, but not necessarily the same innovation that people are interested in at the IETF, and vice versa: you know, not interested in any kind of trickery, but wanting to see us follow somewhere near the bleeding edge of advances in the protocol.

PHIL REGNAULD: I wear two hats. One of them is working with the Network Startup Resource Center, which has been around for 25 years or so, bringing Internet to some of the remotest places on earth, trying to build research and education networks, and bringing DNSSEC to the most reluctant ones. I have done a lot of trainings in the past five or six years, all around the globe, helping NRENs and TLDs come to grips with DNSSEC, sometimes arriving at the conclusion that it's probably better for them not to deploy it at this time; I can even claim success in getting a TLD to stop their plans of signing until they could do it correctly. My other hat is a small company here in Denmark which has, among other activities, a very small hosting activity; very small, not 200,000 zones, more like a few hundred, but we have a very wide set of customers, from healthcare to private, to government, to defence at some point. I can say that, being the placebo on this panel, most of my customers do not DNSSEC-sign. We try to sign most of our zones, provide DNSSEC service for our customers, and provide resolution using Unbound, and we see an absolute lack of knowledge about DNSSEC and blissful ignorance. So when we talk about flexibility, I would say most of my customers are mostly unaware that we either sign for them or that they are actually behind a validating resolver. That gives me these different angles into the whole problem, from both the operator side and the trainer side of things, and they are very different worlds.

ONDREJ SURY: Can you fill in what your DNS set-up is?

LARS-JOHAN LIMAN: I wasn't aware that that was expected. We run an Anycast network with some 50 or 60 nodes in it, and we operate both BIND and NSD servers. We try to run off-the-shelf software, which is one of the problems when it comes to deploying new stuff, from my point of view, but we try to stay away from fiddling with the code if we can, because we are a small company and don't really have the resources to dive into the DNS code itself unless it's extremely necessary, which it hasn't been so far. We have a very close relationship with ISC, NLnet Labs and cz.nic, and all the DNS server vendors, possibly with the exception of Microsoft, which we don't use; so we rely on our close friends when things go bad, which they very seldom do.

ONDREJ SURY: My next question would be: let's say the Ed25519 draft is approved this year; hypothetically, how long would it take you to deploy that and possibly start signing with the new algorithm? And the same applies, for example, to DNS cookies, which just arrived at the IETF: how long would it take you to grasp the technology and the protocol enhancement and to deploy it for your customers or your network?
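For background on the second example: RFC 7873 DNS cookies are a weak-authentication EDNS option consisting of an 8-byte client cookie plus an 8-to-32-byte server cookie, each a keyed hash over the peer's address. The RFC leaves the exact hash function to the implementation; the sketch below uses truncated SHA-256 purely as a stand-in, and the secrets and addresses are invented:

```python
import hashlib

def client_cookie(secret: bytes, server_ip: str) -> bytes:
    # 8-byte Client Cookie (RFC 7873): a keyed function of the server address.
    # Truncated SHA-256 here is only a stand-in for e.g. SipHash.
    return hashlib.sha256(secret + server_ip.encode("ascii")).digest()[:8]

def server_cookie(server_secret: bytes, client_ck: bytes, client_ip: str) -> bytes:
    # 8- to 32-byte Server Cookie bound to the client's cookie and address.
    h = hashlib.sha256(server_secret + client_ck + client_ip.encode("ascii"))
    return h.digest()[:16]

cc = client_cookie(b"local-secret", "192.0.2.53")
sc = server_cookie(b"server-secret", cc, "198.51.100.7")
print(len(cc), len(sc))  # 8 16
# The EDNS COOKIE option payload is client cookie || server cookie:
print((cc + sc).hex())
```

The point of the mechanism is that off-path spoofers cannot guess the cookies, so servers can treat cookie-bearing clients more generously under attack, which is why it came up alongside rate limiting in this discussion.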

LARS-JOHAN LIMAN: It depends on where the requirement comes from. If it's general development of the DNS, we try to follow: we try to be part of groups like this, we attend the IETF and OARC meetings, try to follow what is going on in the DNS market, and try to evaluate what the generally DNS-clueful clientele thinks is important, and to prioritise that. It also goes with what the software development does; as I said, we don't sit down and develop something ourselves to be able to run new features, so we kind of wait until it appears in released software, because that is how we can rely on having good support for it, and we also know that it's been vetted by people who work on code implementation for DNS, and that is a very strong thing for us.

MARCO D'ITRI: In our company we use Debian for our servers, and I am a Debian developer as well, which helps, and I do not see any major roadblocks to upgrading if some important feature becomes useful. We can easily upgrade, but we tend to use distribution-provided packages, so we will be quite happy to upgrade as long as the maintainer of the software that we use releases a backport for the current stable distribution. The only problem will probably be the PowerDNS LDAP-based platform: that back end is not currently maintained, I understand, so it will probably stay as it is until we move everything to the MySQL platform.

DAVE KNIGHT: We are not doing a lot of signing, but we do serve a lot of signed zones on behalf of customers, so it's quite easy for us to be reactive there, particularly on our TLD platform: because we run a separate name server platform per customer, for us it's really driven by customer demand, and because we are using off-the-shelf open source software, if one customer wants to do something new it's quite easy for us to roll that out; and because we are not having to worry about the difficulties of signing, it's easy for us to roll out new edge software.

ONDREJ SURY: Yes, but that is why I mentioned DNS cookies; it's not signing. So is it the same for DNS cookies: when a customer asks you, will you be able to deploy it?

DAVE KNIGHT: Absolutely. As I said at the opening, certainly for the TLD service that we run it's entirely directed by the customer; if they wanted to change to a completely different implementation we would do that, as they guide us.

PHIL REGNAULD: In my case I think it would be pretty much the same as Marco was saying: we follow standard distribution packages, and as long as we didn't have to rock the boat it would happen. When it comes to training it would be interesting: before I went out and advertised what DNS cookies are and the new algorithms for signing, I would probably wait to see what the adoption and uptake is before I start to preach it. So on the operational side I would be ready; teaching it, I would hold off a little bit and see how it goes before recommending people go for it.

ONDREJ SURY: Do you think ‑‑ it's also a question for the community here ‑‑ there is anything we can do to make these things better known among the people running DNS servers? Like: hey, there is going to be a new curve in DNSSEC and you should deploy it, or: there will be cookies. Because if you look at the HTTP servers, people know that they have to upgrade their stuff quite regularly, because there are new attacks with fancy names and logos and websites, and they know that if they want to run HTTP/2 they need to upgrade to this version of OpenSSL and this version of the web server, so they somehow manage it; while in the DNS we are sometimes still stuck in the '80s running BIND 4. So my question is: is there anything we can do better?

AUDIENCE SPEAKER: I am from the RIPE NCC and I am monitoring the chat, and there is a comment for the panel, so I wondered if I should read it out now. It's from Peter van Dijk of PowerDNS, and he says the PowerDNS LDAP back end has been reverted into maintenance, but they are looking for competent users. That was the comment.

LARS-JOHAN LIMAN: I will have a first go, then. I think it's very important to get the package maintainers of the operating systems on board, because that is how things are distributed. And, I hate to say this, but for general users automated updates are something useful, which actually enhances things in general; I hate it myself, I will admit that. Also, make it possible to run alternate versions of packages within the same operating system: for this specific package I want to be at the bleeding edge and follow what is going on. That is not always easy, because you have dependencies between packages, so I understand it's a problem, but not having to see a stable version of Debian running BIND 9.8 would be helpful.

MARCO D'ITRI: I think that outreach from protocol developers at events like RIPE meetings is helpful, but for the vast majority of users new features should probably be pushed and advertised by the software vendors. I think that is the most useful channel for the general public.

DAVE KNIGHT: Yes; where we are, as part of the underlying infrastructure, we have heard a couple of times earlier, I think in reference to DANE, that it's browser vendors who drive a lot of what the user requirements are. I am not sure how we better engage with that, but that certainly seems to be an obvious place that the pressures, around what it is we are doing in the DNS and where we should be taking it, seem to come from.

PHIL REGNAULD: To comment specifically on running old software: OK, the way I find out from my customers that the web server is running software that is too old and needs to be updated is that they run some kind of SSL Labs tool or similar, don't get an A plus, and they complain: why am I not getting an A plus? At that point we go down the rabbit hole of finding out which software is too old; we are into the business dealings of: are you paying for maintenance, is this a hosted platform, whatever; and you find out the Apache is too old and you need to generate new ‑‑ and you realise that if there was some kind of ‑‑ I wouldn't call it name-and-shame ‑‑ place they could click and check: you are not signing your zone, I am pretty sure there would be more demand from the customer side. The DNS is buried in the background, an if-it-ain't-broken-don't-fix-it approach, but because there is so much focus on SSL and encryption, this is what happens; we are driven by customer demand again. If there was such a service, I am pretty sure customers would be asking us, without knowing why, but it would probably be a good thing.

LARS-JOHAN LIMAN: I would like to add a comment: one of the problems is that the DNS is not viewed as an application. It is an application, but it's glue between the operating system and the network and the application that you actually want to run. No one runs DNS because they want to run DNS; they run it because they want to reach other services.


LARS-JOHAN LIMAN: You and I can form a club, then. The problem is that with a browser, if you run Internet Explorer you see that as a Microsoft product, and Firefox is a product delivered by somebody, but DNS is something that is included in the operating system, not a specific application that you need to keep your eye on to have it updated. So if we can work together to elevate it to a status where it has a right of its own, and we can put some light on it, we might be able to bring things to a better stage.

ONDREJ SURY: I want to reiterate what Phil said: I am quite sure there is, or was, a DNSSEC name-and-shame web page somewhere to check if a zone was signed, but nothing with an A plus.

LARS-JOHAN LIMAN: You had a good point in that it should come with the web service check; that should check the DNS as well, because what you are checking is the web service or the SIP service or ‑‑ no, it's not there in their minds.

ONDREJ SURY: That is quite a clever idea: if you can get SSL Labs to include the DNSSEC test, even without counting it in the score, or if there is something from Mozilla or something, it might actually help.

BENNO OVEREINDER: In the Netherlands there is something like a name-and-shame initiative, the website internet.nl. It's a cooperation of organisations, now a club of organisations, and they work together to check whether ISPs and service providers provide, well, an up-to-date Internet, to use popular words: do they use IPv6, DNSSEC, what is their SSL or TLS configuration. Maybe that can be an inspiration for other countries too; we can export it.

ONDREJ SURY: From experience in the Czech Republic, we had something similar where we tested the banks and some other websites; the problem is they don't see this as a problem, so maybe we need some huge attack on DNS. Anybody?

PAUL HOFFMAN: I think the last couple of comments are maybe aimed at the wrong people. Folks who are running a DNS server intentionally probably update it, unless they are locked into "I don't update my server because I use this distro and I only use what comes in the distro"; but as someone else mentioned before, browsers get updated even if they came in the distro, so there are two classes of software: stuff that doesn't get updated until the whole distro gets updated, and stuff that does get updated in between. Simply getting the authoritative and recursive servers into the class that gets updated in between would certainly ensure that, as we have new algorithms or chain queries or whatever, those would get pulled in, and I think it would rapidly accelerate the bottom 30 or 40 percent into being able to do it. Really, it's not the customers we are aiming at, it's the distro people who don't update easily. That is just my theory.

MARCO D'ITRI: I think I can answer this with some authority, at least for Debian: this is not going to happen, in any case.


MARCO D'ITRI: There are two categories of software that are updated in the stable distribution to new major releases: stuff which is sensitive, like ‑‑ software, or else the browsers; but the browsers are a very, very special case.

AUDIENCE SPEAKER: And we are not special?

MARCO D'ITRI: Because their vendors refuse to support them after six months. That is the only reason why they upgrade Firefox in the stable release.

ONDREJ SURY: I was able to get the Debian security people to push new PHP releases into the stable distribution, so let me work on that for a while before we ‑‑

AUDIENCE SPEAKER: Thank you for that answer because maybe I was being too hopeful there.

DAVE KNIGHT: Just as a general comment: as DNS people we know the DNS is fantastic, we are very enthusiastic about it and want to see it innovate at pace, but to everyone else this is something lower in the stack which they want to be conservative about, and I totally understand the position of operating system maintainers; this could give everybody a very bad day if we get it wrong. In a couple of years, once it has burnt in, then we can think about putting it in there.

LARS‑JOHAN LIMAN: You will have to guide us here because you are leading the session.

AUDIENCE SPEAKER: Peter Hessler. Our basic policy is that we recognise this problem, that it takes a long time for software to come through, which is why we have a very specific policy: we release every six months and don't support old releases, so we only ever have one year of supported software. It's exactly this problem: when you have a five-year release cycle and support two of them, it takes ten years before anything shows up in the actual wild.

LARS-JOHAN LIMAN: I just wanted to make a mental comparison to how quickly you get new TCP options into an operating system. Just as a ‑‑

PETER KOCH: Going back to the headline, more or less: I have a question not on the DNSSEC stuff but on features. You have been talking about software being responsible for us not being able to roll out new features, and so on. Some of us provide infrastructure, and critical infrastructure, and some of us even provide critical infrastructure in the tainted sense of the word. These people want fewer features. We have this habit of coming up with a new EDNS option every other week and standardising matter and anti-matter at the same time, and the nice thing is that then I can pick from things. I wouldn't want the software, or the vendor, to make the decision of what the service, the critical service, or the really badly critical service, looks like. So I would like to ask these operators, who interestingly enough represent different pieces of the hierarchy, to question the question: how many features are desirable, and are we receiving good guidance from those producing the standards?

LARS-JOHAN LIMAN: From a root server operator's perspective, you are quite right: we would like the two-stroke piston engine with three moving parts. Stability is absolutely paramount, and more features, more diverse code and more code paths that are exercised always translate into more bugs, which can have an influence on stability. So I support Peter in his comments; you are quite right. That is why we as root server operators don't want to implement things too hastily: we want to see what the general DNS group of people thinks is useful and where the trends go, and we want to follow carefully, step by step. So you will seldom see the root server operators implement the fastest and most recent features, except when it comes to ‑‑

SHANE KERR: Do you guys run RRL?

LARS-JOHAN LIMAN: We at I-root currently do not run RRL, no. We do other forms of rate limiting when it's necessary, but we don't make use of the BIND implementation of RRL, no. It is on our roadmap, though, so it's not something we ignore or step away from, but for the moment, no.
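For readers unfamiliar with RRL: response rate limiting counts outgoing responses per client netblock (and typically per name and type) and drops or truncates responses above a threshold, to blunt reflection and amplification attacks. A toy sketch of the counting side only; real RRL in BIND also does SLIP (occasionally answering with a truncated reply instead of dropping) and much smarter accounting:

```python
import time
from collections import defaultdict

class ResponseRateLimiter:
    """Toy rate limiter keyed by (second, client /24, qname)."""
    def __init__(self, rate_per_sec=3):
        self.rate = rate_per_sec
        self.counts = defaultdict(int)

    def allow(self, client_ip, qname, now=None):
        now = time.monotonic() if now is None else now
        netblock = ".".join(client_ip.split(".")[:3])  # crude IPv4 /24 grouping
        key = (int(now), netblock, qname)
        self.counts[key] += 1
        return self.counts[key] <= self.rate

rrl = ResponseRateLimiter(rate_per_sec=3)
results = [rrl.allow("203.0.113.9", "example.com", now=100.0) for _ in range(5)]
print(results)  # [True, True, True, False, False] within one one-second window
```

Grouping by netblock rather than single address is what makes RRL effective against spoofed sources, and also what makes root operators cautious: a mis-tuned limit punishes legitimate resolvers sharing a prefix.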

SHANE KERR: The reason I mention it is that it is a feature that at least some of the root operators do implement. My own feeling is that the root operators are like every other operator: they want the features that they want, and they don't want any other features.

LARS-JOHAN LIMAN: That is probably true. Finding out what we want is ‑‑ we are not driven by customers in the root server case but by the general community and what they see as useful options, but you are quite right in your statement.

ONDREJ SURY: Any further comments from the panel on what Peter said?

MARCO D'ITRI: As a user myself, I love having new features to play with. On the other hand, what we do as a company is driven by customers, and customers typically do not request something new in their DNS services, so there has to be, to some extent, a push for new features, maybe; we need to recognise ourselves that we actually need these new features.

DAVE KNIGHT: Like I said in my intro, we serve a lot of different types of customers who have completely different pressures on what they want, so I think my answer is similar to what other people are saying; it depends which department of the company I am standing in when I say it.

PHIL REGNAULD: Yeah, there was some talk about where the software lies, and the buried icebergs are the ones we tend to hit the most: the lower it is, the more static. If the problem is that the software is not visible enough, that it doesn't have enough user attention ‑‑ browsers update all the time, you practically have to update your browser every time you reload a page nowadays ‑‑ then what about all this talk of having the resolver closer to the user's laptop? You could upgrade the thing every 24 hours if you felt like it. It doesn't solve the authoritative part of it, but then you are not running a stale Debian back-ported package from six years ago. I am not saying it's a solution, but if you want the software to have more visibility and dynamics, move it closer to the user.

NIALL O'REILLY: I am getting more and more the sense, with the exception of something Benno said earlier, that we are looking at this from inside a nested set of bubbles, and I am sure it's not just because we have all got super-geek T-shirts this afternoon. We are talking about the DNS and the DNS code, then we are talking about the distros, but the motivation driving all of this has to come from the business cases of end customers, like banks and insurance companies and the rest, and that needs a lot more evangelisation than us in this room wondering whether we need this feature or whether that would be good. We have to sell the advantages to the end customer somehow, and I am not sure how to do that.

ONDREJ SURY: Well, Niall, that is the goal of this panel: how to break these bubbles, how to reach out to the people driving the demand, and to bring ideas on how to do that, because I don't have any, except this panel.

LARS‑JOHAN LIMAN: A quick comment. A question to you: Last time you updated your browser, was it because you needed a new feature?


LARS‑JOHAN LIMAN: The same goes for the DNS users but the browser vendor saw that it would be good that you updated because that would make new features available to other people in the communication with you. And I think the same somewhat goes for DNS, meaning that the user may not see the benefit of the new features, but the infrastructure system and the geeks may see it and it could ‑‑ that could be a reason to update, it doesn't necessarily have to be driven by ‑‑ the user at either end of the communication, it could be people who look at the infrastructure.

NIALL O'REILLY: I think it's not about new features in the DNS. If we don't get people motivated to sign their zones, or to have their DNS hosters sign their zones for them, or to see the point of signing the zone or of layering DANE on top of that, there is a huge marketing job to be done out there, and it's not about incremental upgrades to this component or that; it's about promoting and selling the ecosystem.

LARS-JOHAN LIMAN: Yes, but it's a chicken-and-egg problem. You cannot motivate people unless they have the software, so you push out the software: you have this new version and it can do these funny things, why don't you try it?

NIALL O'REILLY: Fair point.

VICKY RISK: I have two comments. The first one is relative to your idea about rating people's websites. One of my other jobs is running ISC's website; usually it's the marketing part of the company that is responsible for the website, and the only thing the marketing people care about is the Google rankings, so if you want to have a rating you have to get it into the Google Webmaster Tools, so that it comes up when they go and look at their SEO ratings and stuff; otherwise it is some kind of a black box. We don't have a huge appetite for looking at a lot of ratings, but that is the one we care about. I have a second comment. This is something I think about a lot, of course: how to get people to be concerned about standards and keep their software up to date, and I am sure we will still be talking about it a year from now, but I had this idea a while ago: if enough people were getting cyber insurance and the underwriters had some standards... so I actually spent some time talking to people in the insurance business, trying to figure out how to get this in there. First of all, does anybody here have cyber insurance in their organisation? This would be a company that would help to cover your costs if you are sued because of a loss of data, a loss of confidential information. So cyber insurance hasn't caught on enough yet; it's too much in its infancy. DNS generally is not in the path of the most exciting problem in cyber insurance, which is losing customers' credit cards and personally identifying information; that is the thing that gets the big payouts, and we are not in charge of that. However, if you don't have your zones signed and people can phish your customers pretending they are sending e-mail from you, that potentially is in the big-money category for cyber insurance.
It's just a thought that maybe that is a route to getting some standards for what is a minimally secure DNS infrastructure, so look at insurance; and, I hesitate to say this, but there is also the ISO thing. Those were a couple more ideas.

LARS‑JOHAN LIMAN: Didn't see that part coming.

VICKY RISK: Completely non‑technical.

AUDIENCE SPEAKER: From CloudFlare, for those who don't know me; I am the pessimist that Ondrej talked about at the beginning. This panel is very useful to look at one small part of the equation. The DNS ecosystem consists of producers of DNS answers, the consumers of them and the verifiers, and there is something in the middle that enables those sorts of things to happen, like distributing DS records. We are here talking about whether these guys can be ready for something new, like the wonderful new curve that you got from ‑‑

ONDREJ SURY: Let's not speak about him.

AUDIENCE SPEAKER: They can be ready in less than three years, depending on which version. But then we are looking at when it is going to be available to distribute through the registries, registrars, hosting providers, etc., and that is going to be another long time. And then it is the applications and resolvers: when can they be ready? I am actually quite optimistic about the two ends; I am not that optimistic about the middle. And if we have to pick a fight, and we can only fight one or two at a time, we should be trying to make the middle more automated and work better so this can happen faster going forward. Another comment that I wanted to make on this is: DNS is infrastructure, it's not sexy, and we have to think very carefully about what we want to break and how we want to break it. And it would be good to break RSA.

ONDREJ SURY: I am cutting the mic after John.

AUDIENCE SPEAKER: We saw what happened when Google started to rank pages, and when mail providers started showing that this mail was transmitted via a secure link, because this increases the pressure. And the other thing is that this community is doing an awfully good job of not breaking things. So we change things in EDNS and it is still compatible, so there is no need to upgrade; it still works. Which is not the case with websites: the browser is suddenly not displaying YouTube because Chrome disables Flash. So we are just too good, maybe.

ONDREJ SURY: Shane will have some minutes about breaking things.

JOHN: You are asking why people last upgraded their browser. I last updated my browser because I run Chrome and I am scared to death of Flash. The other thing I want to suggest is: if there was a security vulnerability in, say, BIND, there would be packages ‑‑ to fix it, and people running root servers would have no trouble at all deploying those patches. Maybe some things that are not security updates could be classed not just as new features but as security enhancements, so a new algorithm is an enhancement and not a new feature ‑‑ a third class of patch that was more easily deployed.

ONDREJ SURY: Comments from Jabber.

AUDIENCE SPEAKER: It's from Alfred Jenson, and his comment is: given how far behind DNS is when it comes to modern cryptography, how do you see transitions in the DNS system to a world where we have quantum computers? The TLS people have started looking at this, presumably.

ONDREJ SURY: Does this have to be answered at this time? Any opinion on that?

MARCO D'ITRI: It's still quite early. We are not even sure which crypto ‑‑ should be used, and this is still an open research topic, and until that is decided there is not much DNS‑related work to do, I fear.

ONDREJ SURY: We have run out of time. I would like to thank our panelists, I have seen ‑‑ OK.

I have heard very good suggestions about SSL Labs, Google webmaster tools, and maybe stuff like ‑‑ compliance and cyber insurance. Well, can we fit Shane ‑‑

JIM REID: Actually, no.

ONDREJ SURY: Sorry, Shane.

JIM REID: Next time.

ONDREJ SURY: Enjoy the rest of the week.

JIM REID: Just before we close up, one or two small announcements to make. I would like to thank all of the speakers for both sessions today, the NCC staff who were doing the scribing and monitoring the Jabber room, and thanks to the stenographer as well. And one little item under any other business, and it gives me great pleasure to announce this: free beer. Corrine.

CORRINE: I am a researcher at the Oxford institute, working with ARTICLE 19; we are a human rights NGO that focuses on freedom of ‑‑ between protocols and human rights, and we are going to be screening it in the room right next to this one, and, as already mentioned, free beer, so I hope to see you all.

JIM REID: Thanks everyone, and I will see you all in Madrid.