IPv6 Working Group
25 May, 2016, at 2 p.m.:

ANNA WILSON: Hello everyone. This is the IPv6 Working Group, you are all very welcome. This is the first of two sessions. And we have a couple of admin things to start. Your chairs are myself, Anna Wilson, Jen Linkova and Benedikt Stockebrand and we will be working over the next couple of days. I would like to thank, before we begin, the wonderful work done by our stenographer and the RIPE NCC staff who, Oliver is monitoring the chat and Mirjam is scribing the minutes today. Speaking of minutes, we should talk about the minutes from the last meeting. Does everyone approve them? You can't, you haven't seen them, I am sorry about that. We will get those sent out with the minutes from this meeting and we will sort them out next time.

One other thing to mention before we go on: we talked, a year‑and‑a‑half ago now, about Working Group chair selection, and the three of us Chairs were selected at the same time for a maximum three year term. We'd like to make sure that we don't all step down at the same time and have to deal with that, so what we are probably going to do is one of us will step down at the next meeting and the others can follow subsequently after that. If you are interested in taking this spot, do think about it over the next while, and I think according to the procedure the call will go out a month or so before the meeting, maybe a little longer. Keep an eye on the mailing list if you are interested in that. With that, I think ‑‑ oh, yes, this is the agenda for today. We have some interesting talks for the next hour‑and‑a‑half, and of course tomorrow we will be in the main room at 16:00 to finish off there, and we have a bunch more material there. And the last thing to mention before I hand over to our first speaker, who is Geoff Huston: you can rate all the talks, including the ones in Working Groups, so do that. With that, I will hand over to Geoff, thanks.

GEOFF HUSTON: Good afternoon, I am with APNIC, I have 20 minutes including questions, and I have 36 slides. So this is going to run a bit like a movie, OK. This is actually going to talk about large packets in v6, and you kind of go, so what? But three years ago now, not quite three years, there was a presentation done at the IEPG; Fernando Gont and others did the work and it may have slipped past you, but there was one stat that was mind‑boggling: he reported that if you had a fragmentation extension header, he saw in his methodology a loss rate of 47% in that presentation. Think about what that means. If you send a big packet and it gets fragged, odds are it will only just make it. And more than half ‑‑ up to half the time it won't. Is this a problem? Shit, yeah. It couldn't be a bigger problem. What does the DNS do? Large packets. It's about to be an RFC, there is some reference. So what? Well, the one thing we learned from ATM is that fixed‑size packet networks are bullshit. You cannot do it that way. Some things like large packets and some small, so any useful packet network like IP has to handle variable‑size packets, because really big packets on slow media are a disaster, and if you try and do everything small there is a whole jitter argument; go read it, that is fine. V4, as you might be aware, had a forward frag rule, so that when a packet was too big for the next pipe, that router shredded the packet to fit the pipe, so packets always went forwards, right? So it preserved stuff, it just sent the little bits on, and it was up to the receiver to reassemble all the fragments; the interior network didn't bother, it shattered the stuff and sent the frags onwards. This was fine. This is what you are using today, by and large. Your network seems to work, and this seems to work most of the time. But there was this bit that said, if you want, you can send a packet with 'don't fragment' set. And then comes a problem: what does the router do? It can't fragment it, so it drops it.
Drops are unhelpful, so what it normally did in v4 was it tried to send you back an ICMP message: I dropped the packet. Really bloody unhelpful. An amendment: I dropped the packet because it was too big, and the size that would get through is this. So we added in v4 an MTU size. Who lets ICMP packets into their network? Two of you. No, no, the rest can put your hand down because you are lying. Most of you don't, right? So although it's a good idea, most of you aren't listening and most networks don't listen to this, so this is getting to be a problem because you are starting to get into trouble with the path MTU. Fragments are horrible. When you lose a TCP packet, you don't necessarily have to resend everything to rebuild; you can resend that single packet if you are using selective acknowledgement, SACK. But when you lose part of a fragmented packet, you can't just resend the frag that you lost, you have to send the lot again. So fragging is very inefficient if you are getting some loss. The other thing is, too, that if someone is sending you frags and I am listening, I can send you gratuitous frags inside that same stream if I can get them on the wire, saying this is part number 30 and the packet size is a billion bytes big; if you believe me, you will die. It can be easily spoofed and it is vulnerable. The next problem is only the front‑end frag has the transport header, the UDP or TCP; all the rest are just bits. So the firewall is sitting there going: if I admit it there is a problem, if I don't admit it the person doesn't get the packet. Over at the host, packet reassembly demands memory and timers; you know this, you have all read this paper. The whole theology of IP, and this is dating back to 1987, is that we believe fragmentation is really, really bad. So when v6 came along, we cemented the 'don't fragment' bit to on. Now we have got this required piece of new stuff, and that is: the middle has to stop forwarding and send a packet backwards that must get there.
Because if you lose the ICMPv6 type 2 packet too big, you will end up in the path MTU hole of death. Who filters ICMPv6? Well, all of you. You are in a sort of death zone. We know who you are, by the way.
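As an aside, not from the talk: the fragmentation arithmetic Geoff is describing can be sketched in a few lines. The 40-byte base header, the 8-byte Fragment extension header and the 8-octet offset units come from the IPv6 specification; the function itself is an illustrative sketch, not production code.

```python
def fragment_sizes(payload_len: int, mtu: int) -> list[int]:
    """Split an IPv6 payload into per-fragment payload sizes, the way a
    source host fragments when the path MTU is smaller than the packet.
    Each fragment carries the 40-byte base header plus an 8-byte
    Fragment extension header, and every fragment except the last must
    carry a multiple of 8 payload octets (offsets count 8-octet units)."""
    room = (mtu - 40 - 8) & ~7  # usable payload per fragment, 8-aligned
    sizes = []
    while payload_len > room:
        sizes.append(room)
        payload_len -= room
    sizes.append(payload_len)
    return sizes

# A 1,700-octet answer over a 1,500-octet path splits in two:
print(fragment_sizes(1700, 1500))  # [1448, 252]
```

Losing either piece means the whole packet is lost, which is why fragmentation is so inefficient under loss.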

Here are the packet headers, and the other thing that v6 did, which was different to v4: in v4 the fragmentation fields were part of every single header, bits that were turned on or off, but the bits were always there. In v6, those bits are only there if the packet is fragged. There are things out there, see slide 3, where if a router sees an extension header it goes 'oh shit' and drops the entire packet. This is where the problem lies. So, both protocols can fragment at source and support packet too big; only v4 forward fragments, and v6 relies on your router, your routers, our routers pushing through packets that have that extension header. And Fernando reckoned 47% of the time that doesn't happen. Oh shit. But let's just sort of accept that this works: you are a sender, you're sending packets, back comes a packet too big. What does that mean? That is a really good question. Because this is one of these layering problems: fragmentation occurs at the IP level, not at the end‑to‑end transport level and not at the application level. Fragmentation is meant to be invisible to even UDP and TCP. But you and I, and particularly the DNS, do not treat it like that. We actually treat it as a transport and even an application level problem; we do not treat it as an IP problem. So when you are running a TCP session, and you get a packet too big, and you are a clever, switched‑on RIPE kind of person, what should you do in your stack? You should not fragment your TCP packets. You should just use a smaller MSS. Right? You should adjust the stream of packet sizes in the TCP stream to fit through that hole. You should never, ever, ever send a frag if you are running TCP. And if you see it when you are doing a wire snoop you really should look hard, scratch your head and go: that is demented crap you are running, because fragging TCP is just stupid. We know what to do: do not respond at IP, respond at transport. UDP, what are you meant to do in UDP?
If anyone says resend the packet, you will be evicted from RIPE; you don't understand UDP. UDP: I sent it, I've forgotten, that's all over. I've got amnesia. So up comes an ICMP: that was too big. The packet is gone, there was none, it was too big. Great, nothing I can do about this. So for UDP it's kind of the world's worst piece of feedback. There is nothing you can do. A lot of implementations do exactly that: nothing. Some of them go: I feel I have to do something, it's UDP and I have no idea why I have got this, but tell you what, I will put a host entry in a forwarding table, saying only send packets of this size to that host. How many addresses are inside a /64? What if I send you ICMP packet too bigs starting at one and counting up through the /64 ‑‑ at what point will your table explode, or your brain, or something? So the problem is that when you add a local entry ‑‑ that was too big, I will remember that ‑‑ you are kind of consuming resources without knowing why, because ICMP is really readily spoofed; if you are consuming host resources you are down the path of vulnerability, you die, this is horrible, right? Bad idea. The other thing is, we get back to this: no one likes ICMP, so it's filtered like crazy. You can test this out using HTTP: basically do a small GET, large response, it goes into a hole and TCP never recovers. ICMPv6 is a problem. We don't know what to do. What do you do when you get back this message? If I am the DNS, and you ask me a request and I send you back a huge answer and it gets lost in the middle, what do you see? Nothing. How do you interpret nothing? How do you even know I am alive? What is the difference between a dead server and a server sending an answer that is too big? From your point of view, you can't tell. So you ask me again, I send the same packet. We are not going to get very far doing this. So at some point you have really got a problem: if the answer is too big, how do you get it through?
One of the things v6 did was define a minimum size, and it said as long as you get things into 1280, you should be OK. And here is the spec bit: why the hell did someone pick 1280? Now, I have been told by two people, one of them might even be here, that 1280 is 1024 plus 256. Shit, five is two plus three, so what? The fact that these are two powers of two still doesn't mean anything. It was the most arbitrary bullshit number that has been invented since ATM's 48 bytes. There is no science to 1280. What they really should have done is to say: if you are a tunnel and you do damage to a packet, at the end of the tunnel fix up your damage and don't export your badness to anybody else. We should have said 1,500. Hello Steve Deering, time machine, come back: 1,500, we need you. That would have been a much better way of doing things; now we are stuck at 1280 and it's hell. So, you're dual stacked and here because you are just a v6 fanatic, you have a host and it talks v6: what is its MTU setting? You should think about this, because fragmentation and loss is really important. So if you have a large MTU and I am talking to someone with a small MTU, something is going to break; I have to fragment my packets down to your small MTU. So, if I set my MTU low ‑‑ what does Google run, by the way? 1272. On TCP, their MTU is really low. Now, it invites fragmentation in UDP, but for TCP it avoids path MTU black‑holing, so there is a lot of science in their thinking behind setting it low. If you are a DNS resolver and talking UDP, should you set it high or low? Let's find out. So, what I did was use Google Ads, because it's free and they are brilliant, and tested a few million people, giving them UDP answers over DNS at three sizes. 131 octets, and quite frankly if you don't accept a 131 octet v6 packet, there is no Internet for you, give up all hope. But I set one at 1,400 and one at 1,700. And I set up a server that was only reachable on UDP on v6 only; no TCP fallback, nothing.
It was UDP v6 or nothing. So here is what I expected to see when I ran this experiment across a few million people. Small packets should always work. Interestingly ‑‑ well, we will get to that ‑‑ packets that get close to 1280 should work; there is no reason why 1280 shouldn't work, the only reason why it was invented was it was just a number plucked out of adding two other numbers, so anything up to 1,500 should theoretically work. But OK, at 1,400 there is a packet too big problem because someone is behind tunnels; thanks, Hurricane Electric. At 1,700, it's going to frag: the world is 1,500 octets, so if Fernando was right in his presentation and there is a 47% packet loss when you frag, I expect a fetch rate of about 53% and a loss rate that is really, really bad. That is what I expect. Here is what we saw. So, 22% of the v6 resolvers asking me questions asked from a bullshit v6 address. Think about this for a second. The only reason why those work is they have a v4 equivalent. If we ever have to run a v6 only network, or you are stupid enough to set up a DNS service with only a v6 address, one‑fifth of these resolvers are going to go: I am sorry, you can't reach me, because I have given myself fe80:: ‑‑ you can't reach me. That is an amazingly bad number, but it doesn't matter; most of you never looked, because v4 saved your arse every single time. So I filter like crazy and get rid of all that nonsense. What I did next was to get rid of those folk behind unreachables. The other thing I found, too, is that no matter what you do in the DNS, someone is doing it. So you have heard about privacy addresses, haven't you? How often do you change your interface ID? Once a day? How about once a query? Some of them do. Every single query has a slightly different interface address, because you can. I don't know why, but it really makes counting resolvers bloody difficult.
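A sketch, not from the talk, of the aggregation step this counting problem forces: collapse every observed source address into its covering /64. The addresses below are made up for illustration.

```python
import ipaddress
from collections import Counter

def group_by_64(addresses):
    """Collapse individual 128-bit resolver addresses into their
    covering /64s. Privacy addressing can give every query a fresh
    interface ID, so counting full /128s wildly overstates the number
    of distinct resolvers; the /64 is a steadier unit to count."""
    counts = Counter()
    for a in addresses:
        net = ipaddress.IPv6Network((ipaddress.IPv6Address(a), 64),
                                    strict=False)
        counts[net] += 1
    return counts

# Hypothetical sources: two privacy addresses from one /64, one other.
seen = ["2001:db8:0:1::1",
        "2001:db8:0:1:8d2f:1c3a:9e0b:4421",
        "2001:db8:0:2::53"]
print(group_by_64(seen))
```

Three /128s collapse to two /64s, which is the unit the next part of the talk counts in.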
So what I did was combine them all up and look at /64s. Here is the next result: I joined all the individual /128s, only looked at /64s and only took the ones I could reach with a small packet, the ones that had sane addresses that worked. This is getting a bit interesting now. When I am running this experiment, I get around a 1 to 2% loss rate between 1280 and 1,500. And as the packet size gets bigger the loss rate gets slightly higher, but it's interesting, there is a loss point there. Why am I getting loss? Because there are some folk, and it's kind of hard to see from that slide, but there are some folk sending me a path MTU of less than 1450. What the hell are you doing? That is an awful lot of IP in IP in IP in ATM in MPLS kind of shit. Don't do this, it's mad and crazy. There are some folk out there doing this. There is an awful lot of 1480, which is basically six in four, some amount of 1460, six in six, or maybe it's six in v4 in TCP. Whatever. Some 1492, because you can. Can't you? Don't. Large packets, large packets: I didn't get a 47% loss rate, only 23%. Not as bad as you think. More than 20%, but less than 50. So, there is a problem here. What I am kind of wondering is why is there a problem? So I started looking at the responses I got back. 1143 of those v6 /64s can't get a big packet; that is a lot, one in four is a lot. Right? What is the size of the DNSKEY response for '.org'? 1650 octets. None of those 1143 resolvers can do validation on .org today, because they can't get the packet, it won't get there. 331 were giving me frag reassembly errors: the leading frag made it and the trailing didn't. They have a firewall that says trailing frags are bad. God. Let them through, guys. 61 generated packet too bigs. This is getting a little bit weird. And 751 were silent. These are the silent and deadlies; they looked like the extension header drop rates. Then I dropped my MTU, because a lot of you do, you run at 1280.
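The MTU values Geoff lists fall straight out of encapsulation overheads on a 1,500-octet Ethernet. A quick sketch, not from the talk; attributing 1492 to PPPoE is my assumption, the talk only gives the number.

```python
# MTU left over after common encapsulations of a 1,500-octet Ethernet
# frame. The 6in4/6in6 overheads are the encapsulating IP header sizes.
def inner_mtu(overhead: int, base: int = 1500) -> int:
    """Payload MTU remaining once a tunnel header takes its share."""
    return base - overhead

for name, overhead in [("6in4 (IPv4 header)", 20),
                       ("6in6 (IPv6 header)", 40),
                       ("PPPoE (assumed)", 8)]:
    print(f"{name}: {inner_mtu(overhead)}")  # 1480, 1460, 1492
```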
So now I have got a server running not at 1,500 but at 1280, like you do. All of a sudden, at 1,400 bytes, the loss rate goes straight up to 19%. So dropping your MTU causes chronic failure when you try and send packets through a v6 network; 19% of these packets simply did not make it through. That came from 887 /64s out of the total that, as far as I can see, is on the globe ‑‑ we measured 8 million a day for 12 days, you measure it out, about 100 million users from across the globe ‑‑ and out of those 4600 /64s, 887 are broken. You can't stand up a v6 only service with that. So, the first thing to note is that if you have a packet network that drops one‑fifth of the packets, you don't have a packet network; you just have shit, right? It's not a service; you can't charge money for this. If we all said today, let's run a v6 only network, I for one am going on holidays, because it won't work. 20% is simply unacceptable. The next thing is: what is your MSS? What I want is something that is really, really hard for most operating systems. Because the best of all possible worlds is that you run TCP at a 1220 MSS, so 1280 less 60, but you run UDP as big as you can, because the higher you get your UDP MTU size, the less chance of wandering into fragmentation hell. So applications should assume that large packets die. Right? Now, they should be prepared to do a rapid cut‑over to TCP, because packet loss in v6 is often a sign of something else. So this gets you to: how does the DNS even work? If you have got a DNS that kind of works, then what actually happens is you use the EDNS0 buffer size, and when you see nothing you then say: here is a buffer size of 512; the server truncates the response and you shift to TCP. That is how the DNS hobbles through this. How long does this take? Another three RTTs to get it all together. So it just goes slower, but it does work. If you have a high MTU you can get over some of those problems.
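The DNS fallback described here, a big EDNS0 buffer first, on silence retry at 512, on truncation move to TCP, reduces to a small decision function. This is an illustrative sketch of the logic only, not from the talk and not a real resolver.

```python
def next_step(timed_out: bool, truncated: bool, bufsize: int):
    """One step of the resolver fallback: start with a large EDNS0
    buffer; on silence, retry asking for at most 512 octets so the
    server truncates; a truncated (TC=1) answer means re-query over
    TCP. A sketch of the decision only."""
    if timed_out and bufsize > 512:
        return ("retry_udp", 512)   # shrink the EDNS0 buffer, ask again
    if truncated:
        return ("retry_tcp", None)  # TC bit set: re-ask over TCP
    return ("done", None)

print(next_step(timed_out=True, truncated=False, bufsize=4096))
print(next_step(timed_out=False, truncated=True, bufsize=512))
```

Each extra round trip in this chain is one of the "another three RTTs" Geoff mentions.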
But what it really gets down to in v6, and I suppose this is the point of this, is that you can either try and fix the world or you can stop fragmenting. And I suspect that Bonica was right when he put forward a draft saying let's just deprecate fragmentation in v6. It just doesn't work. So that is where we are. Thank you very much. I have left at least one minute for questions.

ANNA WILSON: Thank you Geoff. Any questions for Geoff?

GEOFF HUSTON: There you go, you have got a minute back.

ANNA WILSON: If you do want to ask questions, of course, please come to the microphones and remember to state your name and affiliation. Our next speaker is John Brzozowski, who I haven't seen before the session. Are you here? Wonderful, thank you. And this talk is community wi‑fi and IPv6.

JOHN JASON BRZOZOWSKI: You didn't need to rush on my behalf. Is this Working Group meeting always this full? This is pretty cool, this is really nice to see. It's not a common occurrence to see this many people in the one room at the one time talking about IPv6, so kudos to you. One of the things that I was asked in preparation for coming to my first RIPE was to not only give the plenary talk from yesterday but also to bring some technical subjects here for you guys to ponder and maybe throw things at me. I don't think you will. So one of the things that you heard yesterday is information about our journey and kind of the comprehensiveness of what we are trying to do from an IPv6 point of view. You heard what we do for management and broadband and video, X1. One of the things we kind of jumped over yesterday in all the excitement was some of the stuff we are doing in the wi‑fi space. So just to set context here, the conversation is around a public wi‑fi offering: when you are walking down the street, in a train station, in the airport, and you see an open SSID ‑‑ there is a public SSID that exists in places like airports, train stations, etc. But what we also do is we turn the SSID on in some of our home networking devices, so that if you are a Comcast customer and go to your friend's house and don't necessarily want to be on ‑‑ they don't want you on their private wi‑fi, or perhaps you are in a neighbourhood where one of the neighbours has Comcast but maybe your friend doesn't, you still have the ability to get on the Internet, and we offer that to our customers for free as part of being a customer.

So, one of the things we talked about, and are going to talk about today, is in general kind of what that architecture looks like and how we are trying to take some clever approaches to adding v6 support for it, in the spirit of: it can't adversely affect the customer or the network. We cooked this approach up a few years ago and ultimately documented it in an Internet draft, which is currently going to be revised and is under the v6ops Working Group; we are going to split it up into the technical bits and some of the stuff that is more captive portal in nature. But the initial goal was to make it so that when a device connected to the wi‑fi it would get both v4 and v6. Today, it is only v4, and ultimately what we are really shooting for is to make it so that a laptop, i‑device, or whatever, can actually have a v6 only experience, because inevitably we want to be able to turn off IPv4, and that is also a recurring theme of sorts.

But again, for those of you who have deployed v6 in DOCSIS networks in particular, some of our links, which are basically the CMTSs, are some of the most densely populated links I have ever seen. We are talking tens of thousands of devices all generating neighbour discovery traffic and advertisements. We've kind of made sure from an engineering point of view we have been able to manage that from a DOCSIS point of view, but wi‑fi is more of a wider problem for all of us: we all have wi‑fi, but not everybody does DOCSIS. One of the things we are concerned about is, if we turn on v6 on the wi‑fi links, what sort of neighbour discovery traffic will we see? Going back many years, I remember the first times we did v6 at a large scale venue; I did some work with Cisco folks for Cisco Live. I think, Erik, you were part of that as well, and that was an eye‑opening experience, right: computers trying to neighbour discover other computers and doing addresses, and the chatter that resulted from that was actually paralysing from the wi‑fi network point of view. So, a lot of learnings that we have kind of been able to tuck away over the years. So, let's dive into this a bit.

There are two parts to this story. When we do this wi‑fi, you will notice this top link here is intended to represent what you would commonly use today as your broadband connection: plug a wi‑fi router into a cable or DSL modem, you will get an address and a prefix, and computers on that private SSID will get an address from the same delegated prefix, something something something /64. But if you were to leave the private SSID and join the, I will call it here, the public SSID: when these two computers both do that, you are taking a different path to the Internet. So up here this SSID is attached to a router, which is over here because it's not in the picture, and connects to my access network and then out to the Internet. That same access network can also act as transport for the open SSID, so this SSID is using the DOCSIS network as transport to get to some other device, which we are going to call a wi‑fi aggregator, and that transport happens to be GRE. So this blue line under here, and maybe this is a poor choice of colours, but underneath that blue line is a GRE over v4 tunnel today. Our goal and our intention is to go GRE over v6. Today that carries only IPv4 UE communications. Tomorrow, figuratively speaking, when we turn this on, as the two laptops will do, it will carry dual stack UE traffic, and what happens is, because it's using the GRE tunnels as the transport, the aggregator forwards that off to the Internet. There is a long list of reasons why we do it this way; I am happy to get into that as part of Q&A, I won't go too deeply into it. What we also noticed is that if this wi‑fi aggregator was simply advertising one /64, that means Jen's laptop and Aaron's laptop on the same public wi‑fi SSID are on the same /64. So what, do you care? Maybe you do. Do I care?
I definitely care, because if you guys are now ‑‑ I add Jan to the mix, and Geoff ‑‑ I have a mathematical opportunity for a whole lot of neighbour discovery traffic. What we decided to do years ago is that what that wi‑fi aggregator is going to do, because it can and we made it do this, is that when Aaron comes online he gets an RA with a unique prefix. Jen will get a unique prefix, so will Jan, Geoff and Erik. This guy here is basically like a virtual, an overlay, CMTS, but now I am doing that without prefix delegation. So now I am sending these RAs out, and you get RDNSS; we are probably going to turn on some form of DHCPv6, but only for stateless operation, for configuration information, because we really don't need to use DHCP for addressing at this point: every device under the sun supports SLAAC. It's the simplest feature and one of my favourites, it just works. The other thing that they care about: now that you have a /64, your privacy addressing works. Privacy address away, knock yourself out; maybe Geoff's ad system will see some more traffic. But we felt that there were a lot of interesting benefits doing it this way, for the customer and for me, offering the service, so we said, OK, we like this. By the way, there is an added benefit. So now, if Aaron and Jen want to talk to each other on there, it's routed. There is no kind of link local communication. There are two separate prefixes, so it kind of surprises some of ‑‑ suppresses some of the noisiness of v6 which is out there. It makes it so that, from a performance point of view, I can now make it so that there are like no impacts to the service by adding v6. That was one of our biggest fears: if we turn on v6, and it's on this shared media called wi‑fi, and we have all this multicast traffic, will we melt the wi‑fi down and will everything else suffer with it?
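The per-device /64 allocation described above can be sketched with Python's ipaddress module. The aggregate prefix here is illustrative, not Comcast's; the point is just that a modest aggregate yields a unique /64 per attaching client.

```python
import ipaddress

def per_client_64s(aggregate: str):
    """Yield unique /64s carved from a larger aggregate, one per
    attaching client, as the aggregator hands each device its own
    prefix in a Router Advertisement. Prefix value is illustrative."""
    yield from ipaddress.IPv6Network(aggregate).subnets(new_prefix=64)

pool = per_client_64s("2001:db8:100::/56")  # a /56 holds 256 client /64s
print(next(pool))  # 2001:db8:100::/64
print(next(pool))  # 2001:db8:100:1::/64
```

Because each client sits alone in its /64, client-to-client traffic is routed rather than resolved on-link, which is what suppresses the neighbour discovery chatter.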

I think I covered the picture. The next few slides are basically all the words that go with that picture; I saved you all the pain and suffering by putting it upfront. Some of the sound bites. I don't really talk about v4 here that much; its day is done, we will eventually turn that off. The one thing I will say for v4, the only thing I will say, is this guy is providing NAT today. So one of the other interesting benefits to turning on v6 is that NAT is bypassed. Anyone using v6, once we turn it on, will no longer be subject to what we all know and love as address translation for v4. But other than that we don't really talk a whole lot about v4. SLAAC, privacy addressing; you will see later in the slides I do talk a bit about a future possibility for stateful DHCPv6, but we will see how that turns out. Openly, one of the things we wrestled with ‑‑ those of you who watched the Android DHCPv6 knock‑down, drag‑out battle ‑‑ I don't really care either way. The only thing I cared about with Android was, I didn't have a way until a couple of releases ago to send an Android device a DNS server for v6. I had a situation where I had some devices ‑‑ and Android devices are very popular ‑‑ that were not able to achieve a v6 only experience. Not cool. Right. So when we kind of jumped in on the bug report for Android, we said: give us RDNSS or stateless DHCPv6; there is some religion there, we didn't care, we just needed something. The thing about it for us, and this is the same problem we had in the access network, is we don't get to pick and choose what our customers are going to use for CPE. We had to turn on DHCPv6; we will turn on basically stateless DHCPv6 for DNS server configuration, and RDNSS, and that will allow us to offer a v6 only experience to our customers over wi‑fi. Inevitably, as we go back here and watch v4 traffic drop off, that will be our cue to say maybe we can turn v4 off altogether, right.

Initially our focus is solely on all the stuff in your hands today: your computer, laptop, tablet. But eventually we are talking about the possibility of supporting routers, and I will get to that later; that is where the DHCPv6 conversation comes into play. What we do like about this is we do set some bits; for example, the L bit to one, which should be a cue for hosts to direct all of their ‑‑ again, in an effort to minimise the link local multicast traffic ‑‑ the L bit to one ‑‑ wait a minute ‑‑ sorry, zero, so we basically force all the traffic to be pointed at the router, trying to further minimise all of the link local multicast traffic. So let's talk about the futures. One of the things we are looking at in the future is: hey, wait, this is interesting. I had a conversation with some folks from Google and from Yahoo, and they said you can use this in other places, like corporate networks, enterprises and even some other venues. We said that is not a bad idea. And we tried to expand the work that we did with the IETF to make it more general, so it could be repurposed in other deployment styles. One of the things we did look at from our perspective is to say: what if somebody brought a router that used wi‑fi as the transport to provide other services? So think of a mini router that uses wi‑fi as an uplink and maybe has USB as something it connects devices to. We said, what are we going to do? There is that whole mobile device approach that will pass through the prefix, where one address is used by the phone, and I think it's good they did that. But I think where we will end up going is for that wi‑fi router to be able to do PD over wi‑fi. Why wouldn't it? And I would not actually mind if you guys have any thoughts on that, because some people say: is that heavyweight, are we going too far, trying to do too much over wi‑fi in these kinds of scenarios? I don't think so.
To me this is more ‑‑ kind of a clean architecture for us, right. So with that, I would like to see if you guys have anything that you want to ask or talk about, because this is something that we will want to roll out later this year, perhaps early next.

JAN ZORZ: And it's Towel Day, and I have a comment. Giving this /64 to each device actually brings us many more possibilities than we can probably even imagine. We could start giving an IP address to each daemon or service on a host, which means we can stop using ports. So a web server would be simply an IP address, not an IP address and a port. So we could simplify the firewalling and the security down to layer 3. So, I think this is a great way forward and we need to extend it to all devices.

JOHN JASON BRZOZOWSKI: I would agree with you. And interestingly enough, that comment about a /64 per host ‑‑ this is actually almost a stepping stone towards that. That is a much larger kind of conversation, which I think I would agree with; I mean, the whole idea of a /64 per host offers so many possibilities. This is just the plumbing, one way to plumb that for the future. Anything else?

AUDIENCE SPEAKER: Erik Muller. Question: one thing I didn't see you mention was captive portal; I know that has been a problem for various wi‑fi implementations with v6. Is that something you are addressing or working with at all?

JOHN JASON BRZOZOWSKI: Yes, we have captive portal, we have something that we use internally; a good friend of mine, who is probably around these halls at some point ‑‑ the company he now works for is part of that solution, right? We do talk about captive portal in the Internet draft, but it means so many different things to different people. I didn't want to zero in on the captive portal conversation; we can have that. It is an important part, because otherwise it's basically really wide open with no captive portal, and that has other implications. You have to register and log in with your comcast.net credentials and off you go. We could have a separate, lengthier discussion, but you are correct, it is an important part.

AARON HUGHES: Just because you asked for the feedback: you mentioned something with an L bit of zero, and having that with wi-fi on the inside is great, that would be really cool; I can think of lots of use cases for it.

JOHN JASON BRZOZOWSKI: Any side effects, anything you think would be a problem with doing that over wi-fi?

AARON HUGHES: No, I think it's a useful approach. Even if you think of it for the purposes of backup: if your actual cable line goes down and you just have another box that is sitting there with wi-fi on one side, you can make it your backup RA or whatever for redundancy purposes. I think it's quite a useful piece of equipment and idea.


ANNA WILSON: Any more comments or questions for John? If not, thank you very much, John.

JOHN JASON BRZOZOWSKI: Thank you so much.

ANNA WILSON: Next up is your wonderful Co-Chair Jen, and Jen is going to be speaking about IPv6 only and DNSSEC and DNS 64 and other things that get thrown in the mix.

JEN LINKOVA: Hello. So, I want to make it clear, I am not a DNS person, so I am really glad that some of the DNS people who I invited are here in the room so they could comment on my slides, because I just started looking at DNSSEC recently and I am horrified every second, because things are much worse than I expected. So, just to establish the common ground and make sure that everyone understands what we are talking about, let's talk about DNS 64. We have a traditional dual stack network, which I hope everyone has deployed already; you are most likely talking to the v4 Internet through some NAT device like a router or firewall, and you talk to the v6 Internet natively, right. Now, I want to take v4 away, for many reasons which we have discussed before. Actually, by the way, who is sitting on the v6 only network at this conference? Great. You see, you don't have your v4 address and everything works. The question is how it works, because now if I need to talk to a v4 destination, I need, somehow, to get my packet to a v4 address, and I don't have any v4 address. So if example.net is a v4 only host, my v6 only host asks for a AAAA record and doesn't get any AAAA record, so it could not use v6 to reach the host, and it doesn't have any v4 address to connect to this v4 only host using the A record. Problem. What can I do? Our DNS server could lie to us. Let's say my v6 only host asks for example.net, and example.net doesn't have v6, it's a v4 only server, but my DNS is lying and giving me some fake address which is built by combining some v6 prefix with the v4 address of the host. And I could route this prefix to my router or NAT 64 device. And as those people who raised their hands just saw, everything works fine. But every time I mention DNS 64, some DNS people start crying and say no, no, no, it's a very bad thing because it's going to break DNSSEC. Your DNS server is lying, and we have developed a nice protocol called DNSSEC to prevent DNS servers from lying to clients.
So if my validating client, who cares about DNSSEC and wants to verify that the DNS server is telling the truth, asks the DNS 64 server for a AAAA record and gets the response, it will obviously find out that it's a lie.
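The synthesis Jen describes -- a resolver embedding the v4 address of the host into a v6 prefix -- can be sketched in a few lines. This is a minimal illustration assuming a /96 prefix (the well-known 64:ff9b::/96 from RFC 6052 is used as an example); the function name is mine, not from any DNS software:

```python
import ipaddress

def synthesize_aaaa(prefix: str, v4: str) -> str:
    """Embed an IPv4 address into a /96 NAT64 prefix (RFC 6052 style)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 96, "this sketch only handles /96 prefixes"
    v4_int = int(ipaddress.IPv4Address(v4))
    # OR the 32-bit v4 address into the low bits of the v6 prefix.
    return str(ipaddress.IPv6Address(int(net.network_address) | v4_int))

# Well-known prefix; 192.0.2.1 is a documentation address.
print(synthesize_aaaa("64:ff9b::/96", "192.0.2.1"))  # → 64:ff9b::c000:201
```

A v6-only client then sends its traffic to 64:ff9b::c000:201, which is routed to the NAT 64 device, which translates it to a v4 packet for 192.0.2.1.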

So, first of all, let's clarify the wording. In DNSSEC I think you can use two words: security-aware client or resolver. It's a resolver which can understand DNSSEC information, so if the server sends all these DNSSEC resource records to the client, the client will accept them and understand them and is not going to consider this response invalid. However, it might use this information to validate the response, or it might just ignore it. So if it actually validates the answer, we can call the client a validating client, and that is obviously the case where DNSSEC is not going to work very well with DNS 64. And the client might tell the DNS server: do not check DNSSEC information for me, please return me everything you have got, even if your validation fails. For example, if you ask Google Public DNS and you set this CD bit -- Google Public DNS always validates, so if it found that, oh, this domain is incorrectly signed, so basically this response is invalid, it will still return it to you because you asked for it: I am going to deal with DNSSEC myself, give me all the information.

So, we have two bits, and we can get a kind of matrix of how DNS 64 works. If the client does not care about DNSSEC, we can lie, that's OK. If the client basically relies on the DNS 64 server to validate, it's kind of interesting, because the RFC says if there is no AAAA and DNS 64 could not verify that there is no AAAA, because the domain is not signed, then OK, we assume there is no AAAA, we create a fake AAAA and give it to the client. However, if it can verify that there is no AAAA and this negative answer is signed, we are kind of going to lie to the client, and the client might discover this. So in this case the RFC says you may provide this fake AAAA and basically break DNSSEC. And the main problem is where a client says: OK, I do care about DNSSEC and I want to get all information and I am probably going to use it -- we don't know if it's actually going to use it or not, but it asks the server to provide all DNSSEC related information even if the server believes it is invalid. And in this case the RFC clearly says do not create a AAAA record, so if you have a validating client sitting behind DNS 64, it would not get a AAAA, so it could not talk to v4 only servers. What I found is that the RFC is a bit unclear: it says what to do with all these three cases, but apparently it does not clearly say if it applies for all cases or only for the cases when there is DNSSEC information for the domain. So what to do if you have a validating client who tells the DNS server, OK, I do care about DNSSEC, please give me all DNSSEC information, but there is no DNSSEC information? As I found out, people can read this RFC in two quite different ways: some people believe that even if there is no DNSSEC information we still do not synthesize, and the other option is to do that only if there is DNSSEC information for the domain.
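The matrix of flags Jen walks through can be summarised as a small decision function. This is a sketch of the RFC 6147 behaviour as described in the talk, deliberately ignoring the signed-negative-answer corner case she mentions; the function name is mine:

```python
def dns64_should_synthesize(do_bit: bool, cd_bit: bool) -> bool:
    """Decide whether a DNS 64 resolver may hand out a synthesized AAAA.

    - DO=0: the client doesn't care about DNSSEC, so lying is fine.
    - DO=1, CD=0: the client relies on the resolver to validate; the
      resolver validates the negative answer itself, then synthesizes.
    - DO=1, CD=1: a validating client wants the raw DNSSEC data; the RFC
      says do not create a AAAA record, so v4-only names will fail.
    """
    return not (do_bit and cd_bit)

for do, cd in [(False, False), (True, False), (True, True)]:
    print(f"DO={do} CD={cd} -> synthesize={dns64_should_synthesize(do, cd)}")
```

The last row of that table is exactly the case where a validating client behind DNS 64 cannot reach v4-only servers.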

So, for example, this is what BIND does: there is no DNSSEC record for the domain, and despite my client explicitly setting both flags, I am still getting a AAAA record. In other words, it's basically a question of what the problem area is: are we going to break, for validating clients, only the intersection of v4 only names for which DNSSEC is enabled, or will your validating client fail for every single IPv4 only name? On one hand we try to minimise the problem space; however, another reason to actually break it for everything is that the first case means that by deploying DNSSEC you might break things: you are deploying DNSSEC for a domain and suddenly this name starts failing for validating clients sitting behind DNS 64. I have had a chance to talk this week with DNS people and apparently, yeah, it looks like we have some confusion about what to do, and I think this needs to be clarified.

By the way, I do not have data, unfortunately, but I believe that there are not so many actually validating clients sitting behind DNS 64, because in this case we are talking about stub resolvers on end hosts, on laptops and phones, and probably Geoff has some data. I just do not know how many are around. I hope almost zero. So basically it means that even if there are -- I was curious, if they appear one day, if tomorrow I wake up and find that there are a lot of validating clients in my v6 only NAT 64 network, how big is the problem?

So, I looked at the ALEXA one million list and I resolved the names and found that about 6% of the ALEXA one million have a AAAA record, so for those it's not going to be a problem. And about 1.7% of those names are signed -- they either have a signed A or AAAA, I was too lazy to look into the details -- so there are signed records for about 1.7% of those names. Actually, I was thinking: usually we have technologies which we have been trying to deploy for a long, long time, both IPv6 and DNSSEC, so probably it is the same set of people doing this, all those geeks who like new technologies, so probably we have almost no DNSSEC enabled domains which are not IPv6 enabled. Who knows. So let's look into this. Apparently, that is not true. Quite a lot of signed domains actually do not have AAAA records for their websites. And to summarise: v6 adoption for the ALEXA one million is about 6%, and about 21% of all DNSSEC enabled sites are actually dual stack, one-fifth. And for DNSSEC we have 1.7% of all sites; for IPv6 enabled sites it is slightly higher. I feel it's a different set of people, except probably CloudFlare and Comcast.

So I was trying to, somehow, summarise the data on one slide. So again, depending on how DNS 64 implementations read the RFC, you can basically break, for validating clients, either 1.3% of DNSSEC enabled sites or about 94%. The overall solution is actually just to enable v6, OK. And all problems solved.

It's not the last slide yet. How much time do I have? Because let's say you cannot enable IPv6 because it is not up to you; you are the developer who is dealing with resolvers on the end hosts. What can you do? Oh, there is a solution as well -- a kind of workaround, I should say. You can find out if you are actually sitting behind a DNS 64 / NAT 64 network; it's actually quite simple. You ask for a name which does not have a v6 address -- there is at least one well-known name, ipv4only.arpa -- you just ask for the AAAA, and if you do get a AAAA, great, you have the prefix, so you can actually do the DNS 64 function yourself. If you are smart enough to do validation, I think you are smart enough to do DNS 64 as well, because now nobody can guarantee you that your device is not sitting behind a NAT 64 network, so you should be ready for this. However, the interesting thing, as I mentioned, is that the RFC says you should not set the CD flag; indeed, BIND by default would not return you a AAAA record if your CD bit is set, so I think the better idea is actually to unset both flags, but it means there is no DNSSEC. As was pointed out yesterday, it's a security issue: what if a bad guy lies to me and tells me some fake prefix and gets all my NAT 64 traffic? Because I am asking for this AAAA record and I am getting some prefix, but how can I trust the DNS response? DNSSEC again. Can we still use DNSSEC? Apparently, yes. There is a nice way to reliably discover the prefix. First you ask for the AAAA for the v4 only name, and you get a prefix. Then you ask for the PTR for that IP address, and if your DNS 64 operator was smart enough, they will return you the name of the NAT 64 device, and you ask for the AAAA record again, and this will be signed because your DNS 64 operator supports DNSSEC. So basically, by asking for the v4 only name, then asking for the PTR and the AAAA record again, you can receive an answer which will be validated with DNSSEC. So now we have kind of discovered the NAT 64 prefix in a secure way.
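The ipv4only.arpa trick above can be sketched as code. Per RFC 7050, ipv4only.arpa has only A records (192.0.0.170 and 192.0.0.171), so any AAAA answer must have been synthesized, and the NAT 64 prefix falls out by masking off the embedded v4 address. This sketch assumes a /96 prefix with the v4 address in the low 32 bits; a real implementation would also have to handle the other embedding positions RFC 6052 allows:

```python
import ipaddress

# RFC 7050: the only addresses ipv4only.arpa resolves to over v4.
WELL_KNOWN_V4 = {int(ipaddress.IPv4Address(a))
                 for a in ("192.0.0.170", "192.0.0.171")}

def extract_nat64_prefix(aaaa_answer):
    """Recover the NAT 64 prefix from a synthesized AAAA for ipv4only.arpa.

    Returns the /96 prefix as a string, or None if the answer doesn't look
    like a /96 embedding (i.e. we are probably not behind DNS 64).
    """
    addr = int(ipaddress.IPv6Address(aaaa_answer))
    if addr & 0xFFFF_FFFF in WELL_KNOWN_V4:
        # Clear the embedded v4 address to get the bare /96 prefix.
        return str(ipaddress.IPv6Network((addr & ~0xFFFF_FFFF, 96)))
    return None

# Hypothetical answer a DNS 64 using the well-known prefix would return:
print(extract_nat64_prefix("64:ff9b::c000:aa"))  # → 64:ff9b::/96
print(extract_nat64_prefix("2001:db8::1"))       # → None
```

The secure variant Jen describes then does the PTR and second AAAA lookup on the recovered prefix, so the final answer can be DNSSEC-validated.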

So, conclusions. Depending on how people read the standard, and depending on whether you would like to encourage people to deploy DNSSEC or scare them by telling them that if you deploy it you might break validating clients behind NAT 64, the failure rate might be between 1.3% and 94% for validating clients, which I hope do not exist. Service owners should enable IPv6, and then there is no need for my talks any more, I can go home. If you are dealing with resolvers, you had better assume that you might be behind a NAT 64 network. So please, if you for some reason decided to play with DNSSEC, don't forget about another shiny new technology called IPv6. And I have a few backup slides for those interested, looking at the distribution: I was curious whether there were any obvious patterns, like all DNSSEC enabled, IPv6 enabled sites being at the top of the ALEXA 1 million list. They are not; it's actually quite random. Questions, comments?

ANNA WILSON: Sander, I think you were first.

SANDER STEFFANN: Can you go back to the slide where you are talking about how it is validatable with DNSSEC -- this one. Actually, this doesn't make it any more secure, because in the first response I can lie to you, so I give you an address from my own prefix, and then after that you do the validation, where I can sign the fake address that I just gave to you.

JEN LINKOVA: Wait. I mean, what is not going to happen is that an attacker will spoof your DNS packet and start giving you a prefix, because you ask your DNS server, which you should trust; you have all this chain of trust in DNSSEC for this.

SANDER STEFFANN: Yes, but in the first bit you say somebody could hijack your traffic by making you think you are behind a NAT 64. If I can do that, then you can do all the validation you want, but if I am giving you the data, then I can still provide the signatures on steps 4 to 6.

JEN LINKOVA: I think the information for this domain, example.net, you should be able to validate, because if you are not able to validate information for it, all this doesn't make sense and you shouldn't do this. But we are talking about the situation where you are able to validate the answers for, for example, example.net, yes.

SANDER STEFFANN: So I lie to you and give you my IP address, and you do a PTR lookup on my address, and I tell you you are talking to nat64.steffann.nl, which is also signed.

JEN LINKOVA: No, no, you are not my DNS 64, you are just an attacker sitting next to me --

SANDER STEFFANN: I can still do this and sign it because I am giving you my own domain name and my own address.

JEN LINKOVA: Yes, but that means you lie about all my DNS information, so we have a problem anyway.

AUDIENCE SPEAKER: I think this should work, in that by getting the PTR you get the name, nat64.example.net, which is a name that is somehow trusted by your DNS 64 setup, so you have to have some list of trusted NAT 64 prefixes in your system, but nobody is making any assumption about how this list should be filled with data. Because if you trust it, then it works: you can validate it using DNSSEC and everything works. But this was a problem, and this was my point: the best way to avoid such security issues is to use the well-known prefix wherever possible, because the well-known prefix doesn't suffer from these problems.

JEN LINKOVA: Yes, yes, it's a question about the well-known prefix, but sometimes people prefer not to use it.

AUDIENCE SPEAKER: Paul Hoffman. So, two things. One is, earlier in the slides you said, oh, almost no one is validating on the end host, and that is true. Then you proposed something that would make it so that most people wouldn't want to do that later, because we are hoping that IPv6 adoption continues growing at its rate, which is a faster rate -- even though of course outside this room everyone says it's way too slow -- than the validation rate on end hosts at this point, which is mostly geeks. Geoff is turning around, but we are talking about only end hosts here, not Google and Comcast and such like that. So you are proposing to sort of cut off what many people think is a valuable security service by doing this. But also -- and this slide is a very good example -- you are taking one of the most contentious protocols to come out of the IETF five years ago, one that thrashed through drafts, and making it much more complex. I would propose that is not a good idea.

JEN LINKOVA: OK. Actually, I think what is a good idea is just to get v6 deployed. I am just saying that a workaround exists. I actually did not even say -- or did I? -- that it's a really good idea, but if you don't have any other options...

AUDIENCE SPEAKER: Going back to your answer to the previous person: if we do a lot more with preconfigured things, you can avoid all of this, and there isn't much in the NAT 64 -- in the DNS 64 document about preconfiguration, which is why you had the question there. That is just fine, but that is a separate topic. Preconfiguring will solve a lot of this for you.

AUDIENCE SPEAKER: Andrei again. I have a few more questions. First, which DNS 64 implementation shows this behaviour with AAAA records even with the CD flag on? Because I think it used to be broken in Unbound but was fixed some time ago, so this is the --

JEN LINKOVA: I mean, if the negative response for the AAAA validates, right, it does not do it. The thing is, I tried it on this network. I cannot remember if I tried it on Jan's resolver, I need to double-check, but I think I did. I think it was BIND. I think BIND.

AUDIENCE SPEAKER: OK. And then another note I have here is that this DNS 64 discovery is actually being implemented in the dnssec-trigger software, at least from what I know from the developers -- that is the software that does DNSSEC validation at your end point, at your laptop. The problem is that it has lower priority, in that a much worse thing can happen when you try to validate DNSSEC at your end point, which is DNSSEC signed wildcard DNS data, which is much more broken than this DNS 64. So this is something that is now more interesting for the developers to fix.

JEN LINKOVA: Their priorities, yes. Looking at the overlap of DNSSEC and IPv6 people, I start suspecting it's a completely different set of people and they don't realise they might sit behind DNS 64, and I think they should know that things might fail. I needed this data because I wanted to know how much stuff, in the worst case scenario, is going to break, and when.

GEOFF HUSTON: APNIC. Look, there are kind of two things that v6 has taught us, and the first thing, which came out with 6to4 and keeps on going, is that tunnels are really, really bad. We do not understand them and we don't understand the way fragging works. The second thing is that lying in the DNS is really, really bad. And it's a convenient hack to do this translation of v4 to v6, and aren't I clever -- it's a lie. And all that your DNSSEC workaround is trying to tell you is that that is the machine that is lying; you don't know about the quality of the lie. Quite frankly, it's bad. And what you really should be thinking about is: why have I got myself into this place where the DNS has to lie to me? Because what you are really saying is, if it can lie to me, it can lie about anything. Maybe DNS 64 was a really, really bad idea, and trying to make it worse or better with DNSSEC doesn't get over the fact that it's a really, really bad idea.

JEN LINKOVA: My problem, as someone who operates a network, is that I need to get to a v6 only network, and unfortunately I cannot say it's going to be a v6 only network so forget about the v4 Internet, right. So I still need to use some dodgy hacks, because there is no, I think, proper way to do this. I need to get all these laptops and phones to work on a v6 only network.

GEOFF HUSTON: It's a classic trade-off: if the cost is institutionalised lying in the DNS, you are paying an enormous price for that trade-off. It might look convenient and might look easy, but institutionalised lying in the DNS is what national censorship is all about; it creates all kinds of problems. What we want is for validation to occur on end hosts; what we want is end hosts to reject answers they cannot validate as being the real thing, and you are saying: yes, but except when the network is doing some 64 tricks. I think the cost you are paying is way too high. I understand your need, but I think your trade-off -- lies in the DNS -- is a lousy trade-off point from where I sit.

ANNA WILSON: To be fair to the next speaker we do have to finish up in the next few minutes so keep comments fairly brief.

JAN ZORZ: You said that you tried it on my resolvers, and I would like to ask you which one? I am running three implementations.

JEN LINKOVA: Your website says currently two; I think I tried both of them. I can show you what I found.

JAN ZORZ: The first and third ones have DNSSEC validation turned off because I want them to work.

JEN LINKOVA: Wait, I found only two.

JAN ZORZ: The second one had the validation turned on because I was experimenting with exactly these things, but it's now turned off again. So if you want, I can turn it back on.

JEN LINKOVA: We can discuss it, because I was looking at this as well as the RIPE DNS and your DNS, yeah.

BENEDIKT STOCKEBRAND: The problem with NAT 64 in general is that it breaks all sorts of protocols that depend on things like SIP and whatever. There is an alternative to it, which is basically application level proxies that actually deal with the situation, and with most of the stuff being HTTP anyway, that actually works in quite a number of cases and avoids messing around with the DNS. Basically, NAT 64 -- or DNS 64 -- and DNS security don't mix, except with tremendous pain, and that is causing huge problems operations-wise. So yes, it really boils down to what you say: we should actually get to the point where we can use v6 pretty much everywhere.

JEN LINKOVA: Totally agree, yes. The whole point is we get into all these troubles because there are 94% of ALEXA websites which are not reachable over v6. If we could fix this, great, but we are not there yet. Otherwise, I am closing and going to the MAT session.

JOHN JASON BRZOZOWSKI: So I think NAT 64 breaking stuff is a feature, not a bug. And what I mean by that is, once you find those things being broken, you can rest assured that v6 support will be forthcoming pretty quickly thereafter. I feel the same way and I agree with Geoff; I think we all agree we don't really want to break DNSSEC, but at the same time we are trying to prepare to deploy some v6 only infrastructure, like very new large projects -- on the corporate side, not on the broadband side. And if somebody says, hey, your DNS 64 feature is breaking my website, the first thing I would say is: give me a AAAA record and the problem is solved. So I suspect -- I mean, I don't think either one of us wants to see something like that last a long time -- however annoying it is, they will turn on the AAAA record and be done with it; the real AAAA record is the DNS 64 bypass.

ANNA WILSON: Any last comments or questions for Jen? If not, thank you very much, Jen. And our last speaker is Enno Rey, on real life use cases and challenges when implementing link local addressing only networks.

ENNO REY: Most of you probably know me from being here at other Working Groups in the past years. Suffice to say, I am involved in a number of IPv6 activities on the technical implementation, planning and research levels, and this talk, with the bulky title which I am not going to repeat, is based on a thing we stumbled across in a customer environment. So it's about a certain IPv6 specific way of doing things; I will share some real life problems we encountered and I will derive some conclusions from that. The background of this one is RFC 7404 -- who knows this one and what it is about? Just raise your hands. That is not too many, including one of the authors of this one. It's about link local addressing only on infrastructure links. I will explain this in a bit more detail in a second. For the moment, just to let you know, there was a heavy discussion about this one -- I see some people in the room who contributed on the mailing list -- and there were people who were of the opinion that the IETF should not publish this, or that if it got published it should contain a warning, like: we describe something here, but you should not do this. It's an informational one. I personally think some of the ideas contained in it, or the main idea, is a valid one for certain environments, so the first, I won't say mission, but intent of my talk is to make you aware of this one and of the approach in itself. The approach -- to quote the RFC literally, that's here -- is mainly about not having global unicast addresses on infrastructure links, call them point-to-point or transit links: just go with link local only, and use that, namely, for the routing protocols -- for the IGPs it's used anyway, but use it also for the BGP session once you run one across a link. It assumes, and I think this is a safe assumption, that the device itself has at least one loopback interface with an address, so this can be used for outbound ICMP traffic and inbound management plane traffic.
That is the technical core of the approach. There is a number of potential advantages and disadvantages. I will just cite them, quote them from the RFC, and give some comments. I do not think all of them hold for the environments that I know and that I am involved in, but yeah. The RFC itself, which again went through several iterations -- the IETF draft had a double-digit revision number -- says that going with such an approach might make the routing table smaller, as you don't have the transit links in your IGP for the network; it might help to, or might facilitate, address management, as you don't have to provide distinct addresses to the transit links and manage them in whatever type of address management approach you have. There might be lower configuration complexity. I think this one is a very interesting one: once you go with the same addresses on several links, like fe80::1 and fe80::2 on hundreds or even thousands of links, this can help from a configuration and operations perspective in a number of ways. There are people who have exactly the opposite opinion, that this will complicate things, but let's save this for the case study discussion. There might be simpler DNS. I don't buy this one; I don't think it holds true, as probably you only have the loopback address in the DNS anyway and not the transit link addresses. And it might contribute to a reduced attack surface: doing this for BGP peering means one can't interact with the BGP peering from remote, which can be achieved by other means as well, as most people here in the room will know. These have been the potential advantages. There is also a number of potential disadvantages described in the RFC, and those have been voiced very explicitly in the mailing list discussion, such as that you can only perform a ping to a dedicated interface from the local link.
Personally, I don't think this is a big disadvantage, because I have quite some operational experience, and pinging an interface rather than pinging the device by its loopback doesn't happen too often -- but your mileage may vary here. This approach will obviously change the output of traceroute: you won't see the exact path, you will just see the devices involved. As for the objection about going with EUI-64 based IIDs, I don't buy this one either, as I don't think it makes sense to go with EUI-64 generated IIDs in that case. And of course -- and I think this is the major one -- it might change the way your network management works, your monitoring or pulling data from interfaces, once you can't identify a specific interface, or at least can't reach it; I mean, you can identify it somehow, but you can't reach it from outside the local link. Again, this was just listing the advantages and disadvantages as per the RFC. The specific case study, the anecdotal evidence, is from one of our customers, a large manufacturing organisation, so a classical enterprise, not a provider, but within the group they have their own IT operations provider, an X.Y. Services kind of thing, and they run their own MPLS network spanning several countries; the main platform for the PEs in that case is the Cisco ASR, 1006 and 1013. Since 2006 there has been a large group IPv6 project ongoing, which probably most people here in the room will think is a good thing, especially in the enterprise space -- enterprise is lagging behind; the amplitude of IPv6 traffic during work days versus the weekend shows that on the weekend there is much more IPv6 traffic, as people work from home or access the Internet from home, as opposed to the work week, so enterprise is lagging behind. They have this project, and within this project they identified going with LLA only on PE to CE links as a very viable thing.
I can tell you, at least from my perception, these people have valid reasons for this; they are smart people, so do not fall into the temptation of thinking that's a dumb thing, they shouldn't do this. They have reasons to do this. One of the reasons is that in their organisation's IPAM there are currently 43,000 networks, and about 20,000 of those are just point-to-point links. Again, you can come up with: oh, this is dumb, they shouldn't do this. Don't fall into this temptation -- large enterprise, political reasons, historical reasons as for the network and the way it is managed. Suffice to say, they wanted to do this. This is how it looks in a lab configuration. The main thing here is just the identical addresses on all the PE to CE links. And they run BGP over the PE to CE links, which probably most people here in the room do as well. But once you try to configure this on, say, high end platforms -- or at least, from an enterprise perspective, high end platforms -- with the latest code and release running, what happens is: once you set up the first BGP peering, so you configure BGP, you configure the address family context, this works; and once you touch the second one, the remote AS statement of the first one disappears. So that gets kind of deleted from the config, and obviously the BGP session of the first peer will immediately break, as can be seen here. This is the configuration, or part of the configuration, of the second one, and looking at this, probably most people here in the room can easily interpret what is going on: the session breaks, the neighbour is down, and here is an AS mismatch. So, in fact, simply said, you can't configure this, you can't go with this. So if some of you in the room have been considering doing this exact approach on point-to-point links in your environment with Cisco gear, it just doesn't work. It's documented in this bug ID.

Why do I tell you all this? To share the experience, in case you had considered going with this one and you have Cisco gear; you might approach your account manager to ask: wait a second, your platform doesn't support what your guys have been describing in an RFC authored by Cisco people? But the moral of the story I would like to share here is, as I already stated, that enterprise has been lagging when it comes to IPv6 adoption. Now there is, at least from what I see, a lot of movement in the enterprise space. There are people who realise that IPv6 can bring benefits that haven't been available with IPv4 -- like the changed architecture and the changed paradigm, they can do things that haven't been possible before. That's the idea. And then, again, I mean, it's 2016, and it turns out that things just are not supported on major platforms by major vendors, and I would say this is quite unfortunate. And another part of the conclusion is, obviously, of course, all of you have well equipped test beds with dedicated budgets for the test beds -- this again proves why it is so important to do the actual testing and not just rely on: ah, we were told we could do this that way. Thank you for your attention.

ANNA WILSON: Thank you. I see there are questions.

JAN ZORZ: Yes. Can you hear me? I can't resist -- speaking as a grumpy operator. I would encourage all my competitors to use this to build their networks. One word: troubleshooting. It doesn't work. It's easy to set it up, it works, but when -- can I say shit? -- when shit hits the fan, you are in big trouble. I tried it. Don't go there.

ENNO REY: I won't get into this discussion, as this was not the intent of this talk, and it's not the intent of the RFC; it describes this, and I mean, it's up to any operator to decide whether this is a viable way or not. Those guys decided it could be a viable way. That is all I have to say from that perspective.

JEN LINKOVA: As someone who used to operate two backbones, one done this way and one with traditional numbered point-to-point links, I actually love this. No problem with troubleshooting, everything worked perfectly fine, but yes, it's not best current practice, and I think I said that on the mailing list when it was discussed: it's one solution which might not fit a particular network, but that's true for almost any design, right. And it might not be supported -- there are a lot of features which I want to deploy which are not supported; that is unfortunate, but I don't think we can blame the solution for this. It's not as scary as you might think. I have a really huge backbone working like that with no problems.

ANNA WILSON: This side first and then I think Gert was after.

ENNO REY: It's funny to see here the same people who objected heavily on the mailing list. You were one of them, Gert, on several occasions.

GERT DORING: I am always the one objecting, of course. I am Gert Doring, one of these troublemakers in the ISP world. In the discussions leading to this RFC I was more in the opposing camp. I am perfectly fine with the result: a balanced RFC which says you can do this, here are the benefits and the drawbacks, and this is a good document. I am fully in the "I want numbered links and to ping them from the network management system" camp, but seeing your example, coming from a totally different background and having different deployment goals, like actually having the interface name in the BGP so you immediately know which peer it is, and mapping everything to the interface and not bothering with IPs as an intermediate lookup thingy, actually makes a lot of sense. So it's very good that you brought this. I am still not going to use it because my network looks different, but it makes sense.

ENNO REY: I am stunned, you have made my day.

GERT DORING: I am old and grumpy but I still learn, sometimes.

AUDIENCE SPEAKER: Blake. Thanks for the RFC. I did actually read it and tried to crank this up in my lab to see if it would explode, and not only does it not work on Junos with IPv6, it doesn't work with IPv4 link‑local in RSVP either, for example; trying to get an MPLS backbone up that way was a complete waste of time. But the comment I will make about this is: certainly it makes a lot of sense from a design point of view, but do we really want our vendors to be implementing this and opening up yet another can of worms where they can break things in all sorts of interesting ways?

AUDIENCE SPEAKER: Tom. It was a very interesting presentation. Actually, more of a comment than anything. I think this is very, very cool in the sense that it's an extension to the sort of model we already have at the moment, where you have your loopbacks only in the IGP. I would quite happily not get into that whole mess of trying to work with my transit upstreams or peers to get them to do the same thing; I could quite happily burn a link net there. But for the internal design, I think this is really cool, and from a troubleshooting perspective we already have MPLS labels coming back. I don't see a major problem with this and I really hope my vendor gets on with implementing it.

ENNO REY: Thank you.

AUDIENCE SPEAKER: Marco. Besides that, I appreciate the RFC; I mean, it's very cool, and I do that in my lab every time because I am lazy and I prefer to rely on autoconfiguration. I don't think that it will ever work for MPLS, though; I mean, it breaks traffic engineering. I know there is a draft that allows you to push the interface and interface identifier, but it's not really working, so you cannot decide where to send your traffic, because you basically have the same addresses on different interfaces. So I think it's going to be cool for large enterprises. I want to add another reason for not relying on link‑local addresses in EBGP; we had a nice chat with Gert about this, because Cisco now decided to rely on link‑local addresses when you do EBGP, even with globally routable addresses available, and that breaks a lot of things, like, I don't know, remote recalling, or it forces you to rely on resolving the next‑hop on the box. There are a lot of reasons for not doing that. But thank you again, I love your work and I think that it's good for the IGP but not for the rest.

AUDIENCE SPEAKER:  ‑‑ co‑author of the RFC and working for Cisco. To the comment: very nice that you changed your mind; indeed there are use cases where it applies and other use cases where it does not apply, so there is a balance to be found. To come back on the issue your customer found: that is obviously a bug and we are working to fix it; I am on the case on this. The other point, to the gentleman who said Cisco introduced a feature: no, we didn't introduce a feature, we implemented RFC 2460, which is dated almost 20 years ago. So that is not a single feature; we didn't change our mind, we simply have a bug in a specific case, but it's not a new feature.

ANNA WILSON: Thank you very much, any last response there? No.

We will finish there. Please let us know on the website what you thought of all the talks. We are back at 16:00 in the main room, and thank you again to all the speakers today.