Routing Working Group
26 May 2016
9 a.m.
JOAO DAMAS: Good morning. Welcome to this session of the Routing Working Group. Cooperation, if that is where you wanted to be, is in the other room. We have a pretty packed agenda today, so I would like to get started as soon as possible, while people are still coming in. I am co‑chair of this Working Group; my co‑chair Rob Evans is here in the front row. Preliminaries as quickly as possible: I would like to thank the scribe Anand, the Jabber scribe Michael and the stenographer Aoife, here in the front row, who really help us with making this run as smoothly as possible. Whenever you want to ask or contribute something at the microphones, please state your name and affiliation so we know who is speaking.
Next item, the minutes from RIPE 71. Rob circulated them to the list on March 31st. I would like to ask the room whether there are any amendments that you think are necessary? Seeing none, we declare them final and ask the RIPE NCC to publish them. The first piece of work today is the appointment of a new co‑chair. Rob's term was up for renewal, and he has decided that he already has a very, very full life and will not continue. I would like to say a few words of thanks for Rob. Throughout all these years, I can't remember exactly how many there were, but there were quite a lot. Ten. He has been the best colleague, really. It's been a pleasure working with him, so if you wouldn't mind, before he steps down, I would like to have a round of applause for Rob.
(Applause)
Thank you.
So this is the chair selection process that the Working Group approved some time ago, just over a year ago. It's very simple; we issued a call for new chairs back in the same message, on March 31st. One candidate came forward. I don't quite see where he is right now... ah, there: Paolo. There were no other candidates by the time the nomination period closed. He has been around RIPE and around the Routing Working Group for a long, long time; even though he has changed jobs, he has always been at the Routing Working Group. I think most of you know him, and since there is one position and one candidate that both Rob and I felt was a really good one, I would like to welcome Paolo as the new Routing Working Group co‑chair, unless somebody has anything to say. In that case, thank you for joining us.
(Applause)
So now the rest of the real work comes into play. The first speaker today is Ignas Bagdonas on 64-bit extended BGP communities. I saw him a second ago. There.
IGNAS BAGDONAS: Good morning. So let's start the routing day by extending BGP in yet another way. To start, this is about addressing the need for BGP to carry policy constructs that contain 4-byte AS numbers. 4-byte AS numbers are being deployed and show up in AS paths, and there is a need to express policy constructs of the form "AS this, do not advertise to AS that", where this and that are both 4-byte ASs; or signalling to a 4-byte AS some action which is identified by another 4-byte entity. With existing encodings, today there is no simple way of doing that.
So, the problem space. BGP policy just does not have sufficient mechanisms to express AS4:AS4 policy in a simple and practical way. There are various workarounds and hacks, and a proposal called wide communities which solves that problem but also adds in much more additional functionality; the question is just at what cost that is being done. If we could go to the next slide, please.
Looking at encodings: standard communities have been widely deployed for a while, and they can represent one entity of 32 bits. The way that is interpreted by local systems today is as two 16-bit parts, and that is mostly a local implementation matter.
Extended communities: quite a few of them exist, with different interpretations. Again, the defined encoding provides a container for 48 bits of information, and that is treated as local and global parts; depending on the interpretation, that can be seen as an AS number, just an opaque integer, or an IPv4 address.
With IPv6 there was an extended community defined for expressing policy where one of the parts is an IPv6 address. That is a rather large container. However, there is still not enough space for carrying the second part, which would need to carry the 4-byte AS number. Technically, this could be a starting point for trying to extend it to carry AS4:AS4; however, from a practical perspective it doesn't seem to be widely implemented and deployed by vendors.
4-octet AS communities: that is basically only a reinterpretation of the fields. It's still the same container which carries up to 48 bits of value, just that the global part is now interpreted as a 4-byte AS number. The local part remains a 16-bit entity, and there is still no practical way of expressing policy in AS4:AS4 form. There are other extended community types used for signalling operations in other address families, most notably layer 2 and layer 3 VPNs. They basically reuse the standard extended community encoding, which still provides a container of 48 bits total. The problem of AS4 in the context of those address families is present there too; however, it's not that important, as there are mechanisms for mapping the AS numbers, and on the other hand deployments are typically constrained to a single domain, which can be controlled via translation, mapping or some other mechanism. If we are talking about inter-AS types of solutions, there are mechanisms for translating the values if there is a conflict.
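To make the container arithmetic concrete, here is a minimal Python sketch (mine, not from the presentation) of a 4-octet-AS-specific extended community along the lines of RFC 5668: after one byte of type and one of subtype, the 4-byte global administrator consumes four of the six value bytes, leaving only 16 bits for the local part, which is exactly why a second 4-byte AS cannot fit.

```python
import struct

def encode_as4_ext_community(global_admin_as4: int, local_admin: int) -> bytes:
    """Pack a 4-octet-AS-specific extended community (8 octets total):
    type, subtype, 4-byte global administrator, 2-byte local administrator.
    The subtype value below is illustrative, not a specific assignment."""
    if not 0 <= local_admin <= 0xFFFF:
        # The AS4:AS4 problem in one line: a second 4-byte AS number
        # does not fit in the remaining 2 octets of the container.
        raise ValueError("local administrator field is only 16 bits")
    return struct.pack("!BBIH", 0x02, 0x02, global_admin_as4, local_admin)

encode_as4_ext_community(196608, 100)       # a 16-bit action value fits
# encode_as4_ext_community(196608, 196609)  # another 4-byte AS does not
```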
EVPN once again tries to redefine the encoding within the same extended community container, making it a little bit future-proof: there are several flag fields, and the actual information payload is redefined as an opaque container. The problem, however, is that there is still not enough space: still 32 bits.
Wide communities. That is a different approach, a new approach: not an extension to existing communities, but a definition of a new path attribute. It defines different encoding rules and a slightly different operational model, in the sense that there are mechanisms for controlling the propagation scope and interpretation scope. The good news is that all entities within wide communities are 32-bit; the question is at what cost that functionality comes.
So this is an example of a possible encoding for expressing AS4:AS4: for carrying 64 bits of useful information we have a rather complex structure. It is a flexible structure, but the implementation complexity, and the operational complexity that might change the operational model, is one of the blocking factors why this is not yet widely deployed. Vendors are quite conservative about supporting wide communities, the main reason being that it is complex to implement; and from a practical deployability perspective there needs to be interoperability, so if just a fraction of vendors implement it, that is not good enough.
Possible workarounds; this is nothing new. One is to split AS4:AS4 into two extended communities, each carrying only half of a single AS. That is a nasty hack: it has conflicts, and in order for this to be at least not fragile, you possibly need three extended communities, one used as a marker and the other two carrying the actual information. The other mechanism is a mapping approach, where a 4-byte AS number is mapped onto a 2-byte value and the policy is implemented by referencing the 2-byte AS.
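A sketch of what that split hack could look like on the wire (entirely invented type and subtype values; nothing standardises this), which also shows why it is fragile: the three communities travel as independent values and can be filtered, deduplicated or reordered separately.

```python
import struct

def split_as4_pair(as1: int, as2: int) -> list:
    """Carry AS4:AS4 as three ordinary 8-octet extended communities:
    one marker plus two halves with 4 bytes of payload each.
    All type/subtype values here are invented for illustration."""
    marker = struct.pack("!BB6s", 0x80, 0x99, b"AS4PR\x00")  # marker
    half1 = struct.pack("!BBHI", 0x80, 0x9A, 1, as1)         # part 1 of 2
    half2 = struct.pack("!BBHI", 0x80, 0x9B, 2, as2)         # part 2 of 2
    return [marker, half1, half2]

# A receiver must see all three, grouped consistently, to recover the
# pair; lose any one along the path and the semantics are gone.
communities = split_as4_pair(196608, 196609)
```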
There are vendor extensions, not necessarily standard extensions, for carrying information not directly related to the IPv4 and IPv6 address families. Those can be used; however, they are not necessarily interoperable and not necessarily widely available.
So let's look back at the requirements for expressing 64-bit communities within BGP. The fundamental requirement is that it must be a single path attribute which is able to carry that information without splitting it across, or referring to, any other attributes. The operational model should stay equivalent to what the community is used to with standard and extended communities. Also, this approach needs to be practical: both from an implementation perspective, meaning that vendors should be able to implement it without major investment in new functionality, and from an operational perspective, meaning that existing tooling and systems should be adjustable to this new encoding without major rework.
Comparing, or rather putting this in perspective against wide communities: this is not an attempt to derail that work. Wide communities bring in good functionality; the question, again, is at what cost, and wide communities have been in the specification phase for quite a while with still no implementations. The problem of expressing AS4:AS4 policies exists today, and a solution is needed today.
A couple of options for moving forward with possible encodings. One is to try to reuse the existing extended communities by adding two additional fields. That is mostly due to the error handling requirements: the extended community length must be a multiple of eight, so it is not enough to add two bytes; we need to add a full set of eight. That possibly gives a nice option of having some future-proof functionality, either extending it later or simply having an additional field which can be interpreted as a parameter.
And another option is a minimalistic one: defining a new path attribute which is similar to a standard community, just that it is able to carry two 32-bit fields. No flags, no extensibility, just simple functionality, implemented via a different path attribute. This way, we can also avoid the potential need for capabilities negotiation that would arise if this extension were implemented via extended communities.
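As a sketch of just how small that minimalistic option is (my illustration, not the draft's wire format): the attribute value can simply be a sequence of <4-byte AS, 4-byte value> pairs, the direct analogue of RFC 1997 communities with both halves widened to 32 bits.

```python
import struct

def encode_wide_pairs(pairs):
    """Hypothetical attribute value: a flat sequence of 8-octet
    <4-byte AS, 4-byte value> pairs; no flags, no sub-TLVs."""
    return b"".join(struct.pack("!II", asn, value) for asn, value in pairs)

def decode_wide_pairs(data: bytes):
    if len(data) % 8:
        raise ValueError("value length must be a multiple of 8 octets")
    return [struct.unpack("!II", data[i:i + 8]) for i in range(0, len(data), 8)]

# e.g. "from AS196608, action 196609" expressed as one pair
wire = encode_wide_pairs([(196608, 196609)])
assert decode_wide_pairs(wire) == [(196608, 196609)]
```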
So with this, I would like to open the discussion: is this a problem in your environment; if it is a problem, how painful is it; what functionality might you potentially want to see here; and if we are talking about possible ways forward, which of the encoding options would be the preferred one?
And a general comment about this: for this to be practical, vendors need to implement it. Pretty much anything can be defined here on paper, but as long as there are no interoperable implementations, this is of little practical value. Therefore, the general approach would be to agree on the requirements and how this should look from the encoding perspective, and then bring that to the vendors, saying that this is a requirement from the community, so please do implement it.
That is it from my side. Thank you, and I welcome any thoughts for the discussion.
JOAO DAMAS: Any questions from anyone?
AUDIENCE SPEAKER: Hi, Paul Thornton, speaking for myself. This is a problem, probably for newer adopters who may only have a 32-bit AS; while we do have 16-bit ASs available, they are running out. I have customers who are new to BGP who cannot use community-based filtering as we already do, because they don't have a 16-bit AS and can't just encode it in a standard community. I have lent chunks of AS8676 to customers to do this internally, or used a private AS for it, which goes terribly wrong when it leaks outside, because somebody else might be using the same private AS and announce the route, so you end up leaking that way; I have seen that happen at least once. Two customers I can think of were lucky enough to make an acquisition in which they gained a 16-bit AS, so they use one AS for all of their transit and peering but use the 16-bit AS they gained for their communities. So yes, it's a problem. It's painful, but it does have workarounds. The only potential issue I see is that adding another fix to the mix gives the vendors yet another option to prevaricate over, and we all know it takes time for these things to filter down. So it's great that we are having this discussion, but I think we all need to beat up on the vendors and say: can we pick something and go with it quite quickly.
IGNAS BAGDONAS: Thanks, and that is actually the core of this proposal, so to speak. Yes, there are multiple options for doing this. Some of them look nice theoretically, or at least in the specification phase; wide communities bring in the right functionality. The reality is that we don't have wide communities, and it's not clear when we will get them. Yes, one of the options is to push vendors to implement that, and again, we need interoperable implementations. There is quite a large degree of freedom in wide communities as to what we end up implementing and what not. Maybe one of the options is to take the part of the wide communities approach which deals with 4-byte ASs and try to form that as a separate implementation, basically a subset of the specification which covers only that. We can argue that this is easier or quicker to implement; however, it still carries quite a lot of complexity, and if we start to look into implementing part of wide communities, probably the whole thing should be implemented, and again we come to the scenario where it is complexity which blocks the development, implementation and deployment. Another aspect is having multiple approaches for doing the same thing: yes, that is a risk, and therefore the general approach should be that the community agrees on which way is acceptable and that is signalled back to the vendors: you should implement this or that approach. So, this problem is not new; it has been there for a while, it's just that the actual use of 4-byte AS numbers is not that wide yet, and with time this problem just becomes more visible.
RUEDIGER VOLK: Deutsche Telekom. Yes, the problem should have been recognised during the many years when the introduction of the 4-byte ASs was being done. I am kind of irritated to see that even the previous speaker's comments seem to indicate that the full understanding is missing: everybody who is doing some community-based signalling in his system, regardless of whether he has got a legacy number or a future one, actually is impacted, because he is going to interact with parties that have no choice other than using the 32-bit numbers. Parties who do not use communities for signalling obviously do not have an immediate problem, except when they are dealing with parties that do, and where they want to use it. The discussions in the IETF about extending this have been around for many years and, well OK, no real progress has been made; and, well OK, any proposal that comes with really extended functionality has the problem that the discussion of the additional functionality, well OK, does not have a proof of termination in advance. So, my take would be, well OK, for reasons of getting the implementations easily done and getting consensus, I think the only reasonable approach is to figure out how many bits are needed for a transparent extended community; it probably has to be a different attribute type, and we should just define that as transparent, like the original standard community was done. And yes, it makes sense to keep more or less exactly the same conventions on the use of the bit field, which in the old times essentially said: well OK, consider the first 16 bits as the coding for who owns this name space, who rules about what is going into the other bits. Well OK, I think everything we could do in the old communities can be done there. We might actually add a few more bits so that more things can be done, and more cleanly; and the discussion of whether 64 bits is fine, or 128 should be used, or something in between, can be fairly short if we really insist on getting termination. The implementation obviously is easy. The implementation for operators actually is not completely that easy because, unfortunately, in the policy languages, treating different types of communities actually means different branches, essentially even in the syntax of the configuration languages, so any configuration system that's there at the moment will have to be extended. So my guess is, even if we are going really fast track, in five years' time we will actually be there. Not before.
IGNAS BAGDONAS: Possibly, and again, it's naive to expect that no changes to existing systems are required for this. The question is which option requires the minimum practical level of those changes. Therefore, the minimalistic approach is just a direct extension of standard communities: instead of having one 32-bit field you now have two. And that should be good enough for this particular use case.
RUEDIGER VOLK: Not only for this particular use case. If we do this, again, as a transparent bit string with very few conventions on it, it also allows whatever additional use people may throw on it later. Well OK, replicating something like wide communities on it, well OK, I wouldn't advise that, because the coding will get nasty; but having a transparent string and a minimum set of conventions on how the name space is managed, you can actually put in extension semantics later without real problems.
IGNAS BAGDONAS: Exactly: leaving the encoding the same and just reinterpreting the bit stream in a different way, yes.
GEOFF HUSTON: Two comments. You said 4-byte AS numbers are not widely used. There are 42,000 2-byte AS numbers in the BGP routing table, and there are 9,600 4-byte AS numbers in the routing table. So if "not widely used" means anything less than one in four, you might be right, but they are widely used. Let's get over "not used".
IGNAS BAGDONAS: And getting more widely used.
GEOFF HUSTON: If you want something done, write a draft, submit it to the IDR Working Group, sit on the mailing list and hammer it through. There is nothing wrong with most of these except that they haven't got through standardisation. Give it a shot. All you are going to spend is your time. Go for it.
IGNAS BAGDONAS: The draft is in progress.
GEOFF HUSTON: Submit, go. Yes.
JOAO DAMAS: Thank you very much.
(Applause)
RANDY BUSH: IIJ. Let's be honest, Geoff, we just have a draft going into Working Group approval that has been there for five years.
STAVROS KONSTANTARAS: So good morning, everyone. This is Stavros from NLnet Labs. You guys probably know my organisation; for those who don't know us, keep in mind we are a nonprofit organisation located in Amsterdam, and our other business is DNS and DNSSEC services. However, I am going to speak about ENGRIT, which means Extensible Next Generation Routing Information Toolkit. Before I start talking about this project, which is the only routing project that we have, I would like to ask you guys, clearly and honestly: do you like this name? If you don't like it, please raise your hand; feel free to do that. So everyone likes this name. OK, I am surprised, because, to be honest, I don't like this name. So in front of you, right now, I am going to change the name of the project from ENGRIT to mantaBGP. I hope I will not get fired for it. Why that? Do you know what a manta is? It is a kind of filter feeder, so it feeds itself by filtering plankton in the ocean. Just as the manta feeds itself by filtering, we are going to feed your router by putting filters into it. If you have a cooler name, send a mail to me, no problem.
Let's go to the serious stuff. So, why should we spend time and work on this project? OK, the Internet is big, we all know that. It has been mentioned many times during this conference that the Internet has changed a lot and has grown over the last four years. We actually see a lot of deaggregation, around 600,000 routable prefixes currently. We have IPv4 depletion, so there are discussions at the RIPE meeting about chopping up the last /8 network and giving it to the members. We have a lot of IPv6 deployment, so dual-stack networks introduce a lot of complex filtering, and complex filtering leads to misconfiguration mistakes. You guys probably know the incident with YouTube and Pakistan Telecom around 2008, and also the China Telecom incident which affected 15% of the Internet. But it's not only the big incidents that are known to people; smaller incidents take place all the time. For example, it happened to us in November 2015, when our provider misconfigured their filtering and we didn't have IPv6 traffic, or IPv6 services, for almost two hours, so our clients could not reach us over IPv6. And this happens every day. What is our motivation out of that? World peace, of course: sitting together, drinking beers and having fun instead of accusing each other over who is responsible for mistakes, and instead of angry clients; improving the security and stability of the Internet. However, the low-level goal is to achieve end-to-end configuration, because that automation will minimise mistakes, and it can happen by bridging the gap between the IRR and the router device.
So, before we started working on the project, we had a look at the market for related tools that already exist. You can probably recognise a lot of these tools on this diagram; we classify them based on four variables, so we have the commercial tools versus the open source tools, and the policy construction versus router configuration tools. You probably know the IRRToolSet, you probably know bgpq3 and the power tools, and I am pretty sure I forget some of the tools that exist out there.
So, before building our own tool we had to specify some requirements that we need this project to achieve. As features, we would like this project to be flexible and extensible, and we would like 32-bit ASN support and of course IPv6. We would like to have security on the data, as we said, which we are going to achieve by integrating RPKI, and to be as vendor-independent as possible. As functionality, we would like to be able to query the IRR database, extract peering relations and resolve peering filters, and we would like to have a local file for extra information and tuning. And of course, at the end, push the configuration to the router. And all that stuff is open source only.
If we take the previous diagram, we would place it over there: something between router configuration and policy extraction, and fully open source.
So, before starting to code, we had to make some choices. We decided that our tool is going to be based on Python, which is an easy and flexible programming language and familiar to many network engineers. We decided we are going to query the RIPE database only, as the input source of public data. We can retrieve the objects from the RESTful API, and since the RIPE database is able to mirror other databases as well, we can get those by querying the RIPE database too.
We decided to go for NETCONF as the universal output, which is quite universal, supported by Juniper and Cisco, and a secure protocol. And in the end, for the local file, we decided to go with YAML, similar to Ansible. It's a language that is very easy to read, understand and modify, so people will get used to it very easily. This YAML-based local file contains additional router information and lets us do some fine-tuning: you can, for example, increase the local preference of a prefix if it is validated by RPKI, and similar concepts like that, all through the YAML-based file. What is our current status? We are finalising the input library, which deals with RPSL objects. This is the policy parser and filter resolver; it does exactly those two jobs. It's going to be a separate branch, so people can use it autonomously in their own infrastructure and adapt it to their own home-made tools. I am then going to go back and design, with my colleagues, the overall tool architecture and which component is going to have which role.
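To illustrate what such a YAML tuning file might look like, here is a small sketch loaded with Python; every key name here is invented for illustration, since the talk does not show the actual schema.

```python
import yaml  # PyYAML

# Hypothetical local file: extra router information plus fine-tuning
# rules, e.g. raising local preference for RPKI-validated prefixes.
LOCAL_FILE = """
router:
  hostname: edge1.example.net
  vendor: juniper
tuning:
  - match: {rpki: valid}
    set: {local-preference: 200}
  - match: {prefix: 192.0.2.0/24}
    set: {local-preference: 50}
"""

config = yaml.safe_load(LOCAL_FILE)
for rule in config["tuning"]:
    print(rule["match"], "->", rule["set"])
```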
So, as I said, we are finalising this library, which you can use freely. What is the overview of this library? It's a library that converts and resolves policies into XML format, and it uses the Python language and lots of regular expressions. However, it is not a new peval. We support aut-num objects, route and route6 objects, and as-sets; RFC 2650 says that if you want to create your own home-made tool, you start by supporting these objects, which are the most commonly used.
If you would like to play with our policy parser, there is a link on GitHub; you can download it and play with it. Feel free to do that. This policy parser has an input and an output. The input is XML out of the RIPE database. I spoke with many guys and I realised that this output service of the RIPE database is not really well known, I don't know why. Probably some of you use the Whois service, but there is also this alternative. So if you want to check it, you can do a curl on this link and see the output clearly.
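The link itself is on the slide rather than in the transcript, so as an assumption based on the RIPE database's public REST API, fetching an object as XML looks roughly like this:

```python
import urllib.request

# Assumed URL pattern for the RIPE database REST service; the exact
# link shown on the slide is not reproduced in the transcript.
url = "https://rest.db.ripe.net/ripe/aut-num/AS3333"
request = urllib.request.Request(url, headers={"Accept": "application/xml"})
with urllib.request.urlopen(request) as response:
    xml_text = response.read().decode("utf-8")
print(xml_text[:200])  # the RPSL object, wrapped in XML elements
```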
As output, the policy parser produces a three-section XML document which contains the prefix lists, the filters and your policy. To give you an example, this XML document contains the first two sections, the prefixes and the filters. You can see here that at the top of the XML document you have a prefix list which has the name AS199664, and this prefix list contains a /22 IPv4 network and a /29 IPv6 prefix. After that comes the filter section, and here we have a filter which has a unique hash value; that allows us to refer to the filter many times, so we can avoid duplicates and minimise the size of this XML document. The hash comes from the expression: we take the expression and create the hash value. This filter is composed of statements, so it can have one or multiple statements, and each one is of type accept or deny and has an order; if you have multiple statements and apply them in the correct order, you have the correct filtering. In the last section you can see, for example, the "any" expression, which actually means we accept anything and there is no prefix list applied. So, with these hash values, we can then go to the policy section and apply them on import or export. For example, in this XML document you can see we have the peering policy: we peer with autonomous system 1257 at two different IP addresses and apply different filters on import and export, with preference 100.
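Since the slide's XML is not reproduced in the transcript, here is an invented reconstruction of that three-section structure, parsed with Python's ElementTree; the element names, the hash and the prefixes are placeholders, not the tool's actual schema.

```python
import xml.etree.ElementTree as ET

DOC = """
<policy-output>
  <prefix-lists>
    <prefix-list name="AS199664">
      <prefix>192.0.2.0/24</prefix>   <!-- placeholder prefixes -->
      <prefix>2001:db8::/32</prefix>
    </prefix-list>
  </prefix-lists>
  <filters>
    <filter hash="1a2b3c4d">
      <statement order="1" type="accept">AS199664</statement>
    </filter>
  </filters>
  <policy>
    <peer asn="1257" ip="192.0.2.1" import="1a2b3c4d" export="any"
          preference="100"/>
  </policy>
</policy-output>
"""

root = ET.fromstring(DOC)
for peer in root.iter("peer"):
    print("AS" + peer.get("asn"), "import filter:", peer.get("import"))
```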
So then we tested the policy parser: we tried to see how many of the policies out there we currently support with our regular expressions, and the results are very promising. We managed to achieve almost 99% on tier 1 and tier 2 networks; the ranking comes from CAIDA, a nonprofit organisation in the USA which you probably know. The test was very simple: we parsed all the policies and modified the tool to throw an exception if something is not supported, so the tool breaks and then goes to the next policy. It managed to parse almost all the policies, which is very promising for us. But you might think we did a great job and support everything; no, we don't. We don't support multiple imports and exports per line, or the PeerAS filter; we don't support refine and except, which might disappoint our friends who use them a lot. Logical OR is not supported; range operators and nested operations of depth 2 or more are not supported: if you have parentheses and more, we cannot parse it cleanly. However, we had good results. We also spent a lot of time on improving the performance: we noticed that the majority of the tools out there use a single thread, and they take a lot of time to resolve all the filters. We use multithreading in Python and spawn a lot of threads per core. Our own policy we can resolve in three seconds. The policy of KPN we parse in six minutes; in the beginning it used to be 30 minutes, but it contains 45,000 ASs and 10,000 AS sets, which is a lot of information. We used a server with 8 cores and 16 gigabytes of RAM; if you add more resources to your server, it can be even faster.
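A minimal sketch of that approach, with invented function names: resolution is network-I/O bound, so a thread pool plus a shared dictionary of already-resolved AS numbers is what buys the speed-up described.

```python
from concurrent.futures import ThreadPoolExecutor

resolved = {}  # memo: AS number -> prefixes fetched so far

def fetch_prefixes(asn):
    """Placeholder for the RIPE database lookup of route/route6
    objects with this origin; the real work is network I/O."""
    raise NotImplementedError

def resolve(asn):
    # Check the dictionary first; only hit the database on a miss.
    # (Two threads may race and fetch twice; that is harmless here.
    # A lock would tighten it at the cost of some parallelism.)
    if asn not in resolved:
        resolved[asn] = fetch_prefixes(asn)
    return resolved[asn]

def resolve_members(members, workers=32):
    # I/O-bound work, so threads help despite the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(members, pool.map(resolve, members)))
```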
What are our near-future goals? I would like to come to the next RIPE meeting and show you a complete prototype which is able to do end-to-end configuration. In the beginning we are going to support BGP filters only, so we push configuration with BGP filters only, and we are going to be Juniper-compatible, because we have a new test-bed in our lab and I have Juniper training, so we are going to start with that. For the further future, we would like to do BGP filters and peerings, so we can configure both of them; be more vendor-independent and support Cisco and maybe Brocade; and we want to enhance security on the Internet, so we will integrate with the RPKI validator so you can validate your prefixes before you push them to the router. And we would like to adopt RDL; if you don't remember that, I would like to point you to Benno's talk at RIPE 68, which described the language and the conceptual idea behind it.
If you like this concept and this idea and you want to participate, feel free to do that: with your experience, your real-life cases, ideas, or debugging in your testing environment; that would be really useful. We are very open to collaborations and new ideas. And I would like to thank you for your time; I will take your questions now.
(Applause)
JOAO DAMAS: Thank you very much.
AUDIENCE SPEAKER: Andreas from GRNET. Thank you for this presentation; this is obviously a very useful tool, or it will be. One question about the performance issue that you mentioned: so it takes six minutes to parse a big policy. I assume the biggest problem is to find the prefixes for each AS within the AS set?
STAVROS KONSTANTARAS: Yes.
AUDIENCE SPEAKER: Do you cache these replies, so that if you find the same AS in another peer's policy you will not need to do the same job again?
STAVROS KONSTANTARAS: Currently we don't have any persistent caching mechanism. The tool keeps, in a Python dictionary, the AS numbers it has already resolved; so if inside an AS set there is an AS number, it first goes to the dictionary and checks whether it has already been resolved. If it has, it just uses that information; otherwise it goes to the RIPE database and fetches the new information. So, to be honest, if you are going to resolve big policies, it's going to take more than 1 gigabyte of memory, because all this information is stored in RAM. However, you could use other storage too.
AUDIENCE SPEAKER: From your answer I understand that if you parse a large network and then you go and parse a second large network, the second one will not take so long because ‑‑
STAVROS KONSTANTARAS: It will still take as long, because it's a different run; you have to start the tool again by saying "python..."
AUDIENCE SPEAKER: So you do not keep the dictionary?
STAVROS KONSTANTARAS: Yes, the dictionary is unique within the instance, from when you start Python and say "please resolve this policy".
AUDIENCE SPEAKER: Think about it because it will improve performance a lot.
STAVROS KONSTANTARAS: That is for sure.
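The suggestion amounts to persisting that dictionary between runs; a minimal sketch, with an invented file name and entry shape, and noting that IRR data changes, so entries need an expiry:

```python
import json
import os
import time

CACHE_FILE = "resolve-cache.json"  # invented name
MAX_AGE = 24 * 3600                # expire entries after a day

def load_cache():
    """Reload results from a previous run so a second large policy
    reuses earlier lookups instead of hitting the database again.
    Entries are assumed to look like {"ts": epoch, "prefixes": [...]}."""
    if not os.path.exists(CACHE_FILE):
        return {}
    with open(CACHE_FILE) as f:
        cache = json.load(f)
    now = time.time()
    return {k: v for k, v in cache.items() if now - v["ts"] < MAX_AGE}

def save_cache(cache):
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)
```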
RUEDIGER VOLK: Deutsche Telekom. I am not quite sure whether I am misunderstanding, or whether your use of the word "parse" is kind of misleading. What do you mean by "parse"?
STAVROS KONSTANTARAS: What do I mean by "parse"?
RUEDIGER VOLK: Yes. Parsing some object, in my language, is something very different from evaluating filters that may be embedded in what we are parsing. For the parsing in the strict sense, I would be really surprised if that takes six minutes after optimisation; doing a filter evaluation on the databases for any of those large AS definitions actually seems, well OK, lightning fast if you get it done in six minutes. So what are you doing, and how far do the parsing and the evaluation extend?
STAVROS KONSTANTARAS: Filter evaluation: first, do I need to do filter evaluation? Because RIPE is doing the evaluation; RIPE is responsible for checking whether your syntax and your RPSL text are correct. I am not doing that. I am just reading the sentence and trying to understand what is hidden behind, for example, an AS number or an AS set or a route set, and once I understand that, I can go and resolve it. It is actually the resolving that takes the time, not the parsing; the parsing obviously takes milliseconds.
RUEDIGER VOLK: I would suggest you should be saying, well OK, the evaluation of the AS takes this long. For parsing, yes ‑‑
STAVROS KONSTANTARAS: Parsing is simple, yes.
RUEDIGER VOLK: 40 years ago, parsing Fortran text was taking, well OK, a tenth of a second on the dinosaurs.
JOAO DAMAS: OK, we know what is going on.
RUEDIGER VOLK: For the evaluation and filter building, I actually also would like to ask: what kinds of filters are you including under your term of BGP filters? Does this include AS path regular expressions?
STAVROS KONSTANTARAS: No, AS path regular expressions we don't support.
RUEDIGER VOLK: Is that in the plan?
STAVROS KONSTANTARAS: It is in the plan, yes, it is in the plan.
RUEDIGER VOLK: Are you expecting any difficulties doing that?
STAVROS KONSTANTARAS: Of course.
GERT DORING: Just a small observation. I think these folks at 1103 and 287 need to do some homework, because I think 45,000 ASs is about everything that is in the RIPE database; there are about that many active ASs in the world. So if a single object expands to everything in the RIPE database, somebody has been creative.
STAVROS KONSTANTARAS: Thank you for noticing, because, as you can see, if you parse the policy of KPN you will see only 189 AS numbers or, let's go further, 286 AS sets; but if you open those 286 AS sets you are going to end up with 10,000 AS sets and 5,000 AS numbers. So there is a tendency out there not to maintain your AS sets but to include your AS set in another one. We had situations where we managed to exceed the recursion depth of Python when we tried to resolve; imagine, the Python instance crashed because it had so many AS sets to resolve. Now, which guys should I blame for that?
RUEDIGER VOLK: Well, we know that the IRR data sets have lots of crap in them. Doing the real-world evaluation, it is not untypical, for just a prefix filter evaluation, that a peer which is actually announcing 200 routes may have an AS set that resolves to 350 ASs and, well OK, certainly an order of magnitude more distinct lines of filter than he is actually announcing; and looking at the unique prefixes that he has actually registered, it may actually be even another order of magnitude more. Obviously that is really crap. And, by the way, something that you actually could check: in the AS sets that are resolved, there are actually alien ASs.
STAVROS KONSTANTARAS: Yes, they are.
RUEDIGER VOLK: OK. That is crap in the database, and so this is just documenting, well OK, that the data is not really that useful unless used with very much scrutiny, which very few people take the effort to do.
AUDIENCE SPEAKER: Jared from NTT. We have seen a couple of customers, as has been discussed here, expand their AS set out to greater than 500,000 prefixes. And this is something that we, as a community, need to take seriously; we need to go and stop these people from inserting garbage into the database, because when an AS set or an aut-num object or whatever expands out that large, it causes the operators who are attempting to do filtering to take huge collateral damage on our routers, where we are putting in half a million lines of config. That is very frustrating for us, and it makes our vendors freak out as well, because every time they tell us our config is too big; unfortunately, that is the nature of our business, so we are working with them on that. But this is something that we as a community need to push on these people, and if they have garbage in there, say: I am sorry, I will accept your AS set when you have cleaned it up.
STAVROS KONSTANTARAS: I totally agree.
JOAO DAMAS: We will come back to this one in your talk, I guess. Alex.
ALEX BAND: I am the product manager at the RIPE NCC, and I am doing a talk about RPKI, which I haven't done in an eternity, because nowadays everybody else talks about RPKI. I mean, the last presentation mentioned it, and I heard it mentioned in the Connect Working Group yesterday as well. Usually when I do ‑‑ it doesn't work ‑‑ I would love to go to the next slide. Because I usually talk about RPKI publication, and I show you graphs that go up and to the right. What we initially wanted was for people to get a certificate and then actually do something with it, so publish their routing policy and create ROAs. And then this happened: more graphs that go up and to the right. So you have some sort of reliable data set, and people did studies on it and started measuring data quality, and the result was that the data quality is somewhere in the 90% range, so way better than the IRR quality-wise; there is just relatively not a lot of it.
But all of these people doing presentations on using RPKI data obviously need some sort of toolset in order to get the data. And we made one. We made one in 2011, because when you work on RPKI publication you also want to test your own code, so we made a toolset, and it just needed to be simple: you download a package, start it up, run it, it gets the data set, does the cryptographic validation and gives you the result. And that is sort of the philosophy with which we have written that particular toolset.
There are two other toolsets as well: there is rcynic, and RPSTIR from BBN Technologies, and that is the RPKI validation landscape, the three toolsets available to you. Essentially they offer two things, like I said. First of all you have the cryptographic validation: doing the work of getting all of the certificates and ROAs and seeing whether they pass the crypto checks. When they do, you essentially have a list of ASs and prefixes, and then you need to either shove that into your RPKI-capable router, get it out through an API, or maybe put it in some sort of comma-separated file and apply filtering to it. It's really up to you how you use the data.
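As a sketch of what "applying filtering to it" means, here is the RFC 6811 origin validation outcome computed in Python against such an exported list of validated ROA payloads (the VRP below is illustrative):

```python
from ipaddress import ip_network

def validate(route_prefix, origin_as, vrps):
    """RFC 6811 route origin validation against a list of validated
    ROA payloads (prefix, max_length, asn), e.g. the validator's
    exported list. Returns 'valid', 'invalid' or 'notfound'."""
    route = ip_network(route_prefix)
    covered = False
    for prefix, max_length, asn in vrps:
        roa = ip_network(prefix)
        if roa.version == route.version and route.subnet_of(roa):
            covered = True  # at least one ROA covers this route
            if asn == origin_as and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "notfound"

vrps = [("192.0.2.0/24", 24, 64496)]             # illustrative VRP
print(validate("192.0.2.0/24", 64496, vrps))     # valid
print(validate("192.0.2.0/24", 64511, vrps))     # invalid
print(validate("198.51.100.0/24", 64496, vrps))  # notfound
```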
Like I said, for the RPKI validator that we designed, we get reports, actually quite a lot of them, of people saying that what we built doesn't really match their needs. We would like to communicate with the toolset over HTTPS; we would like to hook it up or set it up behind our proxy server, and it doesn't allow us to do that either; we would like to have some better user management, because you have this kiosk mode where you send a plain-text password over plain HTTP in order to authenticate and get into the tool. That was sort of a hack as well; it's a sort of hacked-together test tool, and when you hear a presentation from, for example, AMS-IX, who have the toolset hooked up to their Falcon servers, we sat with clenched buttocks saying: please keep working. With that graph that goes up and to the right, it uses a lot of memory and CPU power for the task that it does; the RPKI validator currently needs two gigs of RAM to work. There is other stuff: some people would like to use RPKI data in conjunction with route object data, so if a ROA is published and a route object exists as well, or maybe not, you can prefer one over the other. You can say: OK, if it has a ROA but doesn't have a route object, that is fine, I am still going to trust it. So you can apply different trust levels to the availability of the data that you have, but for that it would have to be integrated into a single toolset.
Actually, I want to make something new; that is really what I am here to propose, and I am really standing here as a product manager, because obviously all of you, or a lot of you, would like to use RPKI data, but I would like to get a good understanding of how: what it is that you would like the toolset to do, whether it should be something really basic that does validation and spits out a comma-separated values file and you just take it from there, or whether the toolset should do more; how you would like to set up authentication and hook into it; what, for example, an API needs to conform to. But before I get into all of those details, what I would really like to know from you is: do you even support this? Would you like the RIPE NCC to work on this, or are you relying on other toolsets, or do you have another proposed solution we could go with? Because if the answer is yes, these are the kinds of things I would like to discuss with you: what kind of operating systems should we be supporting, what kind of configuration options should we have, which databases should it support for storage of data, should it keep history so you can compare with a ROA that maybe used to make a BGP announcement valid but today doesn't any more, and go back and look at why that is. These are all things that we could do, and we would like to have an open discussion with you about that.
And that is really it. Thank you very much.
JOAO DAMAS: So general requirements, quick, small, reliable.
ALEX BAND: Quick, small, reliable, yeah.
AUDIENCE SPEAKER: Fast.
AUDIENCE SPEAKER: Peter Hessler. We do have some sort of interest in implementing RPKI in OpenBGPD; we currently aren't doing it yet, primarily because of lack of time on my part. But I have been looking at the RTR protocol, which allows us to have our router query something else and fetch the data, so that is primarily what we would probably do. I have quite a few ideas for improvements I would like to add to RTR, and possibly other things, and we can talk about that off-line. But I would like to use that sort of stuff not just to do RPKI lookups but to do arbitrary lookups: how long has this announcement been out there, is this prefix jumping from AS to AS to AS, and other things that I can utilise.
ALEX BAND: OK, cool, super.
RUEDIGER VOLK: Well, first one nasty remark: I suspect that your motivation for jumping on a completely new thing is that you want to avoid writing the missing documentation of the details of the existing thing.
ALEX BAND: And there is Tim.
RUEDIGER VOLK: The point is, relying parties, that is, users of the validator, that are discerning and actually want to know exactly what happens and what failure modes can hit them, actually need very detailed documentation of what mechanisms in the tool deal with failures, which obviously can always happen in a distributed system. And, well OK, I think you were not in Buenos Aires; Tim was there; we had this requirement up in the SIDR Working Group. With that I just wanted to get rid of the nastiness that I am always carrying. Now the comment and answer to your question, and quite certainly a comment to the previous speaker as well: I do not think it is a good idea to try to have one organisation develop a monolithic, all-dancing, all-whistling, all-blowing, I don't know what, thing that integrates all possible functions. Actually, in RIPE, for the RIPE NCC, well OK, I think the RIPE Stat portal is kind of the place where you do that, and one of the nice things there is, well OK, that it assumes that particular functions are dealt with by specific tools. For what you were kind of throwing up in the air, I think you should keep to a modular design and have distinct tools doing very clearly defined functions in a good way, well documented where necessary, and assume that an ecosystem will absorb this and a lot of other tools that should be easy to connect. Say, for the RPKI-RPSL interactions, there are a couple of ways; I have my way, which seems to be different from yours, and, well OK, I have mine actually in practice, you don't.
ALEX BAND: That is what I mean, that is why I am here.
RUEDIGER VOLK: What probably would make sense, for discussing what functions would be useful and what interfaces can be helpful, is, I guess, some workshop of people who are actually working on those tools, for throwing up in the air ideas like: well OK, I have extended this tool this way and it's helpful this way, and well OK, this interface is really important for doing this. That is not something that is easy to share on, say, the Routing Working Group mailing list, and I think it would benefit from a close interaction.
ALEX BAND: Yes.
RUEDIGER VOLK: Of course, everything that you as a product manager collect from your customer base as requirements, can be reviewed there, but I think, I think it's really, it's really something where the tool builders and tool users should have a close interaction for figuring out which way and actually I think some of the tool builders also will benefit to see a little bit more detail of what the guy next door actually is doing.
ALEX BAND: Yes, because the situation that we find ourselves in is that with the loads of feedback that you get, everybody is sort of getting a feel for how they would like to use the data, and with some things that are mentioned as feedback, you don't really know whether they fall in the "requirements" or the "nice to have" category: it would be nice to have for my specific implementation. You don't know whether you want to be building features just for a very small group of people, so it needs to be broadly supported. Tim, do you want to make a comment first, and then Randy.
TIM: I can stick to the point about documentation, if I may. So, the core RPKI validation algorithm as such is something that we believe actually works well in this tool; it's more the complete package of the application that we are talking about here. That is one thing. The second thing is that we are writing an IETF informational document to describe how it works, so that is in progress as far as I am concerned, and if there are missing documentation requirements, I suggest maybe we talk off-line and figure out what we can do.
RANDY BUSH: IIJ. Genetic diversity: the BBN one isn't used, so I think we need at least the two.
ALEX BAND: Yes.
RANDY BUSH: Secondly, I can come back and say what I think is underneath Ruediger's comment, which is a separation of the core thing that I want to run in my network from what I am going to use to explore the RPKI as a user; separating the user visualisation GUI stuff from running hard-core validators near my routers. The third question goes to Ruediger: you said you are using this and it is deployed. You are deployed? This is news to us all. That was a binary question, Ruediger, that was binary, yes or no, is it in or ‑‑
RUEDIGER VOLK: I am running the full validation every four hours, and the interesting question would be: what use do I make of the results? And I am using some of the results. It's tiny, it's just a few special cases. It is for evaluating other data.
RANDY BUSH: Thank you.
GEOFF HUSTON: This is kind of: well, you asked, so you might as well hear. The RPKI was not built with BGP as its only form of use. If you think about other forms of use of this method of validating attestations about addresses and AS numbers and their use, you kind of see that the way BGP uses it, amass all this sort of crap in one area and sift through it, is not what you do if you are validating a single item for one-off use. And when you talk about what I would like to see: in the DNSSEC world I really like Casey Deccio's tools ‑‑ why is this validating, why is this not validating, what is going on here. If you are looking to expand the gene pool of use of the RPKI away from one particular theoretical model about how to secure BGP, and look at the RPKI as an independent way that would help us understand "is that really your address you are talking about", you might want to look at a validator in the same way that OCSP looks at a single thing in a certificate: why is this particular thing valid or invalid, rather than the whole world. Like I said, you asked, and I am not over there in BGP land thinking of the tool that I would like to see; I am there thinking: what about stuff in registries and other contexts, what kind of validation tool would make sense? And running rsync and its derivatives across the entire world every four hours might be a lot of fun, but it is a shitload of work.
ALEX BAND: Somebody would like a signed statement of resource holdership.
GEOFF HUSTON: There is nothing wrong with that; it makes a lot of sense. Why don't we have tools that help us do that?
AUDIENCE SPEAKER: Mark. Thank you for your talk. I would like to see the tool integrated into your APIs; I think that would be a great thing in the future. By the way, I would also like to run an instance in my network, and I think it would be important to switch to a more common language; Java is not something that everybody can understand. I am thinking maybe Python, or anything that is understandable by the community.
ALEX BAND: OK. Cheers.
AUDIENCE SPEAKER: I would like to state that we use your tool in production; it was pretty easy for us to install it and put it into production. I would like to see you keep developing this tool, and thank you for the great job.
ALEX BAND: Thanks.
JOAO DAMAS: I think that's it, thank you, Alex.
(Applause)
JARED MAUCH: Hello, from NTT. I have decided that all of my presentations this year, for anything, are going to be about making whatever it is great again; I think it's going to be a great theme. So, I want to talk to you just for a little bit about making routing registries great again, because, as I think was observed two talks ago, there is a lot of garbage that exists within the ecosystem. We certainly face operational challenges from the fact that AS sets expand out to objects that basically cover the entire DFZ; there are people who have automated scripts that just go and add objects to the registries based on BGP announcements; and there are the varying practices by which basically everybody inserts into the ecosystem, and we all mirror each other as a consequence of that. The tools are well understood in this room, I think, but when it comes to the broader BGP community, they are not well understood, and people are like: why do I have to register my routes, I don't understand, here is my address space, and such. There is a variety of existing tools for looking at prefixes: you can sit down and do Whois; there is the awesome IRRExplorer tool, which I really love and which Job has given several presentations on in different venues; and you can go and use bgpq3 and kind of look at things. So, when it comes to the business of being a service provider, customers come to us, and the process for accepting routes from customers is: the customer wants something routed, and then we do something, and then hopefully we profit as a result. Everyone who has ever seen South Park will recognise this as the underpants gnomes situation. The reality is that everyone has a different process for how they accept BGP routes from customers. Cogent, for example (I talked to them): their process involves e-mailing your sales representative, and they basically have to go through and check the LOA for every prefix you want announced, and then it gets handed off to the implementation team. At NTT, we have our own routing registry and use the IRR tools to build things, and basically, if you stick your object in one of the registries that we mirror, then we will add it in automatically when we run our daily filter generation. So, the question is: how do we as an industry go and address this challenge of people putting stuff in and then never doing anything about it, with no additional check? One of my thoughts, in talking with several other providers, is that what we need to do is actually create a new registry that includes human-validated objects, which means that we go and do, effectively, a title check on the prefix, similar to what you might see elsewhere.
And that could involve using the ROAs and RPKI data to inform us, looking at the allocation data, present and past, and looking at historical routing announcements so we understand what exactly exists, and inserting a human in there. As a result, we would then insert an object into the registry once it has gone through this human check, and we would attest to it for others in the industry who want to go and mirror that specific database and who meet the same standards; we would also mirror their databases. It says: this is what we believe is true, and I, as NTT, am going to attest to that as well. I think this is a very new approach to thinking about how we use routing registries: if I also go and sign my name on to that and say, yes, I have checked this myself, I think that says a lot.
So, we have been thinking about what the benefits would be for somebody going through this additional process. Because this is not going to be the relatively simple thing of: you send an e-mail, and then through NRTM we get the object within five minutes and are able to push it out to our routers; we have to insert some sort of additional checks into the process. So, the ongoing thoughts here are that we might give people who go through this process improved access to BGP communities, and the opportunity to request filter updates automatically through an API: a carrot, saying, if you are willing to go through this additional process to insert your objects, you derive value from that. I think there is also value in getting your prefixes strictly tied to the origin AS: there are a lot of people, myself included in the past, who have used proxy registration as a way to work around getting things into either AS sets or other types of objects correctly. And I think that there is a lot of scope here.
This slide is in part to pre-empt a few people, but also: I think that RPKI is not really a very accessible technology for the majority of people who are implementing BGP; I think there are some challenges there. As for going out and trying to clean up the existing objects, that is definitely a huge project, and we have had people like Barry Greene in the past; I remember attending presentations 15 years ago that said: we are from the routing police, we are here to get you. While that can be an effective strategy for solving some of the issues that we face, it often lacks the outcomes that we would all like to see. So, with that, I am willing to accept your feedback, questions and abuse.
JOAO DAMAS: Thank you very much, Jared.
(Applause)
AUDIENCE SPEAKER: Peter Hessler from Hostserver. Who do you expect to be utilising this data? So you collect it, we feed in our data, everything is great. Who do you really see utilising this on their side, and isn't that almost a one-to-one relationship with the people who are already checking all the existing route objects in the database?
JARED MAUCH: So, I have had extensive conversations with Level 3 about how we solve this problem. Level 3 run a routing registry and, without talking too much about all of the details that I know about it, they would like to see the data quality improved significantly, to go and solve some of the routing problems, basically the garbage-in/garbage-out problem that we have. I think it's been well documented that spammers, who say "oh, I have to put in reverse DNS and register my route to announce it", have figured out how to clear those hurdles and can find dark space that has been unused and go and abuse it. I want to help institute some sort of policy that allows us to check more thoroughly, to ensure that we have more accurate routing, because when things are misrouted it does damage to everybody involved, I think.
AUDIENCE SPEAKER: Of course, but I guess what I mean is: Level 3 would be good, because that is where several issues have come up in the past. But are they actually going to implement this, or do you only see the same groups of people who are already always checking this type of thing?
JARED MAUCH: I am not going to speak for them.
AUDIENCE SPEAKER: Of course.
JARED MAUCH: But the intent would be to do this in conjunction with several other service providers, such that if you are willing to say "we are going to meet this minimum standard for checking", whatever we decide that is (I outlined what I think those criteria are), then I would sign it and trust their signature, and they would trust my signature. And if they are going to insert a human into the process, and I am going to insert a human into the process, we have that shared cost that we are each consuming as part of it, and I think that the community will derive value out of that. So, my hope is that we would be able to do this and they would do this as well; it's definitely a conversation in progress, and as a result I hope that we can make this data better.
GERT DORING: What I really like about that is that you already have buy‑in from other big ones, because you doing great things alone will not fix anything, but if you have two, three tier 1 providers in on this, it can really make a difference, so that's cool. I am not sure I have any idea of whether this can actually scale with a human in the process and 500,000 objects in the queue, but this is not for me to decide. We are happy to submit our objects to you, and we are happy to value your data quality and pay for your transit, because it means it's good quality transit. And, just for the record, I have been having discussions internally about why we are using NTT when it's so much more expensive; this is why. And we keep using it.
RANDY BUSH: First, we have an old joke that those of us who attempt to redesign TCP learn all the lessons again. We lost ‑‑ in the design of RPKI, which, Geoff, was specifically for BGP, I was there, you weren't, there were those of us who wanted a web of trust instead of a hierarchy, and that is what you are trying to do. The problem is, there has been too little work on that, for all sorts of reasons, mostly layer 9, that I don't think I want to get into. I mean, I would love to, but with a large sledgehammer. I think you have got a problem, as Gert said, of scaling without some more formal work done on non‑hierarchic, let me say, PKI, for lack of a better term, because that is what you want: you want reasonable crypto to be able to validate this garbage, and I would love to see that happen. Love to see that happen, indeed ‑‑ in my heart the Internet is not hierarchic, and unfortunately the bits ain't there.
JARED MAUCH: Yes, so I think when I was corresponding with Erik Osborn and asked him to provide some feedback on these slides, you know, the idea of this being, as you said, web of trust or peer to peer or whatever terminology we want to use about that ‑‑ I think we can attest to that and meet those same standards. When I buy my house, somebody goes out and does a title check and performs some sort of additional step to say, yes, you own this, this is yours. I don't know that I am going to take that to the natural next step ‑‑
RANDY BUSH: The structure behind that is disgusting. The fact is that you and four other providers can do it, but the 10,000 providers that are out there are a little harder problem.
JARED MAUCH: Yes, and I think we have seen that play out in the CA ecosystem, where anyone can generate something. I think there are a few of us who are in a position where we can help exert pressure not only on each other; but we have this issue that Cogent makes up their standards, we have our standards, and rightly or wrongly we are mocked and made fun of for those standards, and to the extent that we can normalise those across the industry, this presentation is an attempt to start that dialogue ‑‑
RANDY BUSH: There is a difference between agreeing to put the same colour lipstick on a pig and running an entire ecosystem.
JARED MAUCH: And we have to start somewhere.
RANDY BUSH: We have.
JARED MAUCH: Unfortunately, not all of the solutions proposed today work for everyone. There are people who can't get ROAs and can't do RPKI, and this may not work for them, either.
RUEDIGER VOLK: Well, OK, first I would like to comment on the title of the talk. I think the hint, by the word "again", that we lost a golden age is a misconception. I think it never was really bright; in particular, looking globally, in certain parts of the world there were times when things were kind of neat and fine and smooth and small, and in other parts it started out already quite chaotic. What I wonder, besides the question of how to really reach out and scale, is, well, OK, how much investment in developing some changes in technology, in processes, in education is actually needed here? I think, in all those directions, effort is needed, and with what I am seeing from you US guys coming over, I have the feeling that, well, OK, the specific context and background you are coming from means the problems have a slightly different flavour for you than for us here, and, well, OK, Latin Americans have a different flavour of the problem, and the Far East as well. I have the feeling that, well, OK, you are biased towards directions that probably are not really the best possible choice for making those additional investments. And, kind of, well, OK, so far the process of how to define interoperable standards here has not been quite open. We know Level 3 has been doing stuff, but it never was really published. Other things are kind of not publishable, because what Cogent does in its sales chain quite obviously is not applicable or negotiable for anybody else. And, well, OK, doing open standards development quite certainly has its additional kind of burdens, but I don't feel very comfortable with the idea that, well, OK, you keep those discussions over there, and I have doubts that my requirements can actually fit in there.
JARED MAUCH: Well, like I said, this is attempting to start a dialogue about this, and being a global provider, I'm concerned that you are perceiving this as a US‑centred approach, as opposed to the fact that we are all part of the same global ecosystem. To that extent, I would like to see us collaborate better to bring the standards higher for what goes into the system, because when I have a customer ‑‑ that does none of us any good; not filtering them would be the worst outcome. I would like to improve the quality of the data that I am putting into the router. It could be by putting a human there, or by putting up a web front end that says, e‑mail this route object in, but to the extent that we can raise the bar in this environment, that is absolutely my goal. What I have seen of the existing systems is that the bar at the bottom is so low that anything gets in, and I would like to stop anything from getting in unless there is some sort of basic truth to it. And looking at this, like I said, the idea is to do effectively a title check or something that allows us to say, yes, I know that this is accurate and I, as NTT, am willing to sign that, and then you, as Deutsche Telekom, can come and say: NTT, why did you accept this? You certified this; what happened?
JOAO DAMAS: You don't have to be doing the same thing to be able to address somebody else.
AUDIENCE SPEAKER: Freedman from ‑‑ I have missed the discussions back and forth about route registries; they were never necessarily great, but the conversation about them was at least vigorous and intellectually stimulating. My simple view would be that there is no one way that anyone can actually use in production to validate all the routes that are out there, so having multiple ways, with people sort of assisting as signing homunculi on the side, is a perfectly valid solution for portions of things, which could be based on ASes or whatever, and whatever winds up getting adopted by people with customers, people with many customers, you know, is probably there. There has been a lot of discussion this week about things that people have to do, including multi‑stage bureaucratic dances to get IP space transferred to places so you can do RPKI. So the effort is interesting and we will watch it.
JOAO DAMAS: Thank you, I think we can talk about this.
(Applause)
We will be running a little bit into the coffee break but Job is here to show us why we call them...
JOB SNIJDERS: Good morning, I too work for NTT and I am happy that my topic will be much less controversial. Bogon ASNs as I will define them later in the slides are something that should not appear in the routing table. For instance, routers support 4 byte ASNs, we support transitional mechanism, so there is no legitimate reasons for us to see occurrences of 23456 in our routing tables. Other than that, private A Ns and reserved ASNs have no place in the global public routing table. It hinders our efforts in terms of security accountability, who is responsible for which efforts, efforts could be spam operations or other things. And concluding on those two aspects I think there is a sort of new paradigm in this industry where we say, do not be Lib rat in what you accept, no. Ensure that they fill hard and fast and that motivates them to improve upon their configurations. Because by accepting routes that are obvious results of misconfigurations, if you accept them you reward that misconfiguration, by not accepting those routes we encourage people to configure their routers more properly.
So, going forward, given the context, there is a piece of policy we want to implement in July 2016: we don't want to accept any announcements that contain bogon ASNs anywhere in the AS path or in the aggregator attribute. As bogon ASNs we have defined 23456, a slightly smaller number to a slightly larger number, and a really large number up to a really, really, really large number, but you can click the link in that regard.
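Spelled out, the ranges behind the joke are, as far as the IANA registrations imply, AS_TRANS plus the documentation, private‑use and reserved 16‑ and 32‑bit blocks; the exact bounds below are taken as an assumption, since the slide only gestures at them. A minimal Python sketch of the path check the policy implies:

```python
# Assumed bogon ASN ranges (the talk only gestures at them): AS_TRANS
# plus the documentation, private-use and reserved 16- and 32-bit space.
BOGON_ASN_RANGES = [
    (23456, 23456),            # AS_TRANS
    (64496, 131071),           # documentation, private use, reserved
    (4200000000, 4294967295),  # 32-bit private use and reserved
]

def is_bogon_asn(asn: int) -> bool:
    return any(lo <= asn <= hi for lo, hi in BOGON_ASN_RANGES)

def path_contains_bogon(as_path: list[int]) -> bool:
    """Reject if any ASN anywhere in the AS path is a bogon."""
    return any(is_bogon_asn(asn) for asn in as_path)

assert path_contains_bogon([2914, 65001, 3333])     # private 16-bit ASN
assert not path_contains_bogon([2914, 1103, 3333])  # clean path
```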
We have done some research, we can back it up with the IANA registry, and there is reason not to allow these ASNs in the public routing tables. If you click this link, there are examples that you can copy and paste into your configurations if you have IOS XR, JUNOS or BIRD. But if you are a network where you anticipate that within the next two years you will go on full auto‑pilot mode, do not copy and paste. Other than that, I would encourage people to implement similar policies; I don't think we should be the only ones. At the whiskey BoF I heard encouraging signs that others will implement it as well, but I have to verify and follow up to see if that is still true today. In other words, bogon ASNs have no place in the default‑free zone, and we intend to filter them out. Please update your configurations accordingly. Thank you.
(Applause)
JOAO DAMAS: Thank you.
RUEDIGER VOLK: I will not really be that verbose, I think. I have been looking at the proposal from one of the other Tier 1s doing things like this, and I have just investigated what I find in the IANA registry and what I find in the wild. Well, OK, I would certainly put AS number 0 in that list; supposedly it should be blocked anyway, but it doesn't hurt to put it there. And in my list of filtering, we are very likely to put in all the large 32‑bit blocks that IANA still has in its pools. And looking into what is in the wild, I not only found the 32‑bit private ASes sailing around, I also found a couple of 32‑bit numbers, probably really typos, but from the 32‑bit IANA pools. So, I think ‑‑
JOB SNIJDERS: If you allow me to comment on that: if you apply stricter filtering than this list, of course that is fine, but please, please do ensure that you maintain that filter as time goes by. So a year from now, review it; two years, review it. Another remark: we are reaching out ‑‑ we have a weekly report that investigates all the occurrences listed here, and we are e‑mailing all affected parties, like, hey, you should really, really, really update your routing because in July there might be an issue.
RUEDIGER VOLK: Kind of, yes; of course, if you filter more finely grained, you have to keep it up to date, that is well understood. There is one additional remark, after the previous presentations, that I would like to make: specifying filtering like this within the RPSL environment is actually tough.
JOB SNIJDERS: I would agree, Ruediger, for the things we have been doing for our private ‑‑
RUEDIGER VOLK: We have an AS set extension that allows you to specify, say, the 4‑billion‑something to something‑else range. It's not working very well, and kind of the interesting question of whether the tools can actually generate AS path filters based on the RPSL comes into the game here. OK.
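To illustrate the tooling question, a small Python sketch that parses a range‑style as‑set member and renders it as a BIRD‑style integer‑set entry. The "ASx - ASy" member notation is an assumption standing in for whatever the extension actually uses, not any registry's real syntax.

```python
# Sketch only: 'ASx - ASy' is an assumed range notation, not real RPSL
# syntax; the output uses BIRD's integer-set range form "lo..hi".
import re

RANGE_RE = re.compile(r"AS(\d+)\s*-\s*AS(\d+)")

def parse_as_range(member: str) -> tuple[int, int]:
    """Parse a hypothetical as-set member of the form 'AS100 - AS200'."""
    m = RANGE_RE.fullmatch(member.strip())
    if not m:
        raise ValueError(f"not a range member: {member!r}")
    lo, hi = int(m.group(1)), int(m.group(2))
    if lo > hi:
        raise ValueError("inverted range")
    return lo, hi

def to_bird_set_entry(member: str) -> str:
    """Render the range in BIRD's integer-set notation."""
    lo, hi = parse_as_range(member)
    return f"{lo}..{hi}"

print(to_bird_set_entry("AS4200000000 - AS4294967295"))
# -> 4200000000..4294967295
```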
JOB SNIJDERS: Thank you.
PETER HESSLER: In our example configuration file for our BGP daemon, we already have a list of the restricted IP addresses that should never be in the global table, and I will start investigating how straightforward it is, essentially whether or not we can have ranges of ASes, and then possibly add that to our default configuration.
JOB SNIJDERS: We are Twitter friends; I would be happy to help you test it.
AUDIENCE SPEAKER: John Heasley, with NTT as well. I want to bring up the point that AS_TRANS in theory should not appear in the BGP table; it should not be possible. So if it is coming in from one of your customers, we would like to know why, because there is obviously a deficiency somewhere in an implementation, or somebody is doing it by policy, they are prepending AS_TRANS. I can't come up with any other way for that to appear in the table. Thanks. E‑mail Job.
RANDY BUSH: AS_TRANS should use a different bathroom. I think you get the greatest reduction if you get the router vendors to add a specific set to remove private ASes.
JOB SNIJDERS: I tried, I failed.
RANDY BUSH: I think if that were plural instead of singular you might be more successful, if we all, a gang of us, went and said it. But I think we would have to be exceedingly specific.
JOB SNIJDERS: I sent patches, they were refused.
RANDY BUSH: I was thinking of a document that says "we all agree this is the AS list" that did not point to the IANA list. I am not saying the IANA list is wrong, but some of that is reserved and some of that might appear ‑‑
JOAO DAMAS: We had a document in the past here that specified it like that.
RANDY BUSH: All I am saying is: I think the vendors might listen to us; that is the easiest way to get the most yield. We are going to say how we are going to do it, and how important we are, and how we will filter. To get the best yield ‑‑ there are 10,000 people out there using MikroTiks; if those just did that by default, I think we would be way ahead.
PETER HESSLER: To address your comment, as one of the vendors in this: we do want to add sane defaults to our example configurations, for exactly that reason, to make it very simple for our users to go, oh, these should never be seen, I already have a configuration, I can just leave it in.
JOAO DAMAS: Thank you very much, Job.
(Applause)
With that, we are done for this session of the Routing Working Group. The PC just asked me to remind you all that there is still some room available for lightning talks tomorrow. If you have something, submit it. And see you next time.