
Connect Working Group
25 May, 2016
At 11 a.m.:

REMCO VAN MOOK: All right. If everyone can find your seats we will get started in a minute. 30 seconds. I guess that is most of you. Thank you for finding this room. Hello, everybody. This is the, your second most favourite Working Group, the connect Working Group, as you will know your most favourite Working Group is NCC services later this afternoon which I couldn't possibly compete with, way more fun bunch than we are. I would like to start with some good news and Florence is back.
(Applause)

REMCO VAN MOOK: So she will be doing the rest of the session. Right. On today's show, we have a good presentation and there is even a lightning talk. If you really feel strongly that there is something that you should be presenting, drop me or Florence an e‑mail in the next 30 minutes and we might have a minute or two to slot you in, maybe, I am not guaranteeing anything at this point because the session has now started. So, the administrative parts of this, there is a scribe, that is Emile, anyone object to Emile being scribe? Anyone else really want to do this job? No. Good. Amazing. So, we have the agenda, we also have the minutes from the previous session, I trust every one of you has read them carefully, and has sent your notes back to me. I haven't seen any notes so everyone is happy with the minutes so I hereby declare the minutes to be approved. That's fine.

This is the agenda. Anyone has any objections or comments on the agenda? No, then that's set as well. Then moving on to another housekeeping item, which is the Connect Working Group Chair selection process. So, back when the Connect Working Group was chartered at RIPE 68, during the BoF we covered the Chair selection process. However, oops, we forgot to document this properly and we couldn't find it again. There are no minutes, no record, no nothing, no slides, no ‑‑ I have no idea what happened, I think the cat ate them. So, I am really sorry, we are going to have to go through this again so we can get in line with all the other Working Groups. In order not to steal too much of your time with this, I shamelessly stole the selection process from Meredith, who carefully wrote this down for the Cooperation Working Group. So this is the process I am proposing: every three years, or whenever a seat is vacated, we will do a call for interested parties, who have two weeks on the mailing list to make their interest known; then the Chairs announce all the candidates and issue a call for discussion, the Working Group members discuss this, and after two weeks the Chairs make a decision based on the mailing list discussion and call it a day, and then we have a new Chair. Does anyone object to this? Does anyone agree? Are you awake? So we are all happy.

AUDIENCE SPEAKER: Yes.

REMCO VAN MOOK: Thank you. Dear scribe, can you please carefully minute that we now have a Chair selection process that is fully documented, that is on slide and can be uploaded to the RIPE NCC website. Thank you. That is good.

Now we go back one more slide. So, with that enough time for me on stage, I am going to announce our first speaker of today, which is Arnaud Fenioux from France IX and talking about route servers, features and security.

ARNAUD FENIOUX: Hello. Selfie time. So, I am working for France‑IX which is a French exchange point based in Paris and Marseille and I will be talking about route servers.

So what are route servers? They provide multilateral peering and are available at almost all exchange points. Members peer with only one BGP session, to the route server, and learn routes from all the other members connected to the route server. And they are provided for free at most of the exchanges.

Some of the benefits: a lot of members use the route server. As you can see, in Paris more than 90% of our members are connected to the route server, and a bit less in Marseille, where we have around 70%. And there are some big benefits of being connected to a route server, like fewer BGP sessions to configure; it's a quick and easy way to learn a lot of routes, but also to announce your routes to all the members connected to the route server, so it's a really quick way to start when you arrive at an exchange point. It's tuneable: you can tune your routes with BGP communities, as I will show you after. And you can save a lot of time, because there is no need to make multiple peering arrangements with other members.

But there are still some drawbacks. Some exchange points use only one type of route server, only one implementation, so if there is a major bug it can be considered a single point of failure. Some of the routing intelligence moves out of the network that is connected to the exchange point and onto the exchange point itself, so there needs to be trust in the exchange point operators. Selective announcement, on which the route server can act, also needs some tweaking if you want to keep paths symmetrical. And, most important, the peer count will vary and increase with time, because new members connecting to the exchange point will also be connected to the route server; that will add new routes and new destinations through the exchange point that might not be wanted by some big networks that have a strict peering policy or do fine-grained traffic tuning. That is why some CDNs prefer to avoid using the route server and connect to each other with bilateral BGP peerings.

There are some RFCs, and still some drafts. I have put three here, two on route servers: the first is the definition and usage of route servers, the second one is feedback on how to run a route server at an exchange point and what should be done or avoided.

And the third one will be explained more by Daniel just after me; it is about RPKI validation.

So, a few implementations of route servers. BIRD is the most used at exchange points, and it's actively developed. Other implementations like OpenBGPD, Quagga or Cisco are much less used, and GoBGP is used in Japan; I think it's a very good implementation, because it's multi-core and a good alternative to BIRD at some exchange points.

So, some features that you can find on route servers. As you can see, when you do direct bilateral peering, the control plane and the data plane are the same, so peers exchange routes and traffic over the same path. But using a route server, the data plane between members and the control plane between the member and the route server are different. This can lead to some problems: if the data plane gets broken, in some cases members can still speak with the route server but won't be able to exchange traffic any more, so there will be some blackholing. This can be solved with BFD, but it's not yet ready for production. So what does a route server basically do? It will select the best path, it won't modify the AS path (so it won't add its AS number to the AS path between members) and it won't modify the next-hop. It should not interpret well-known communities, and it may support other features like ADD-PATH: if the route server has several routes to the same destination and the member has ADD-PATH enabled, the member will receive all the routes for that destination, and the routing decision is no longer on the route server but on the router of the member.
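This "transparent" forwarding behaviour can be sketched in a few lines. This is a hedged illustration, not any real implementation: the tie-break on AS-path length stands in for the full BGP decision process, and `RS_AS` is an invented AS number.

```python
# Minimal sketch of route-server behaviour: pick a best path, but unlike a
# normal BGP router, relay the chosen route with its AS path and next-hop
# untouched (the route server's own AS is never prepended).
# Selection here is a simple shortest-AS-path tie-break; real route servers
# (BIRD, OpenBGPD, GoBGP) run the full BGP best-path selection.

RS_AS = 65500  # the route server's AS number (illustrative)

def rs_best_path(candidates):
    """Return the candidate route with the shortest AS path, unchanged."""
    best = min(candidates, key=lambda r: len(r["as_path"]))
    # No prepend of RS_AS, next-hop preserved: the route is relayed as-is.
    return best
```

With ADD-PATH, a route server would skip this selection entirely and relay all candidates, moving the decision to the member's router.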

You can also get other features from a route server, like using BGP communities to trigger actions such as filtering, AS path prepending or MED override. If you look here, we are using communities like 0:peer-AS, which means "I don't want my route to be announced to this member". So in the example here, member A announces its routes to all the other members through the route server, and C does the same, but member B doesn't want to send his routes to C, so he adds the community 0:AS-of-C, and the route server won't announce the routes of B to C in that case. But member B will still receive the routes from C through the route server, so if he wants to keep paths symmetrical he will need to filter on his side to no longer learn the routes from C. So there is some adjustment here, but it's very easy to do.
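The selective-announcement logic above can be sketched as follows. The `0:<peer-AS>` convention mirrors the example, but the function name and the AS numbers are made up for illustration; real route servers implement this in their filter language, not in Python.

```python
# Sketch of a route server's selective-announcement check, assuming the
# convention that a community (0, <peer-AS>) on a route means
# "do not announce this route to <peer-AS>".

def should_announce(route_communities, target_peer_as):
    """Return False if the route is tagged to be hidden from target_peer_as."""
    return (0, target_peer_as) not in route_communities

# Member B tags its route so it is hidden from member C (AS 65003 here):
b_route_communities = {(0, 65003)}
```

So the route server would still hand B's route to A (AS 65001), but withhold it from C.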

I will now talk a bit about the security that we have on route servers. There are fat-finger errors that are really easy to avoid, so most route servers are filtering martian prefixes, and also apply a max-prefix limit so that the number of prefixes learned from each peer is bounded. And there is also filtering on the prefix length of the routes announced by members to the exchange point.
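These sanity checks are straightforward to express. The sketch below uses an abbreviated martian list and invented limit values, so treat it as an illustration rather than any IXP's real policy.

```python
import ipaddress

# Sketch of basic route-server sanity checks: reject martian prefixes,
# reject overly specific prefixes, and enforce a per-peer max-prefix limit.
# The martian list is abbreviated; a real deployment would use the full
# IANA special-purpose address registries.

MARTIANS = [ipaddress.ip_network(n) for n in
            ("0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8",
             "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16")]

def accept_prefix(prefix, peer_prefix_count, max_prefixes=1000, max_len=24):
    net = ipaddress.ip_network(prefix)
    if any(net.subnet_of(m) for m in MARTIANS):
        return False                 # martian prefix
    if net.prefixlen > max_len:
        return False                 # too specific for a route server
    if peer_prefix_count >= max_prefixes:
        return False                 # max-prefix limit reached
    return True
```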

There are also some verifications done on the route server, like next-hop validation: we verify that the next-hop in the BGP announcement is also the source of the IP packet, or that the left-most AS in the AS path of the announcement is also the peer's AS. This avoids fake BGP announcements and traffic redirection to a victim. But it's not filtering the routes, so a member can still announce whatever he wants to the route server, and it will be relayed to all other members.
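The two verifications can be sketched like this; the function and argument names are invented, and a real route server also has to handle members with several routers on the exchange fabric.

```python
# Sketch of the two per-announcement sanity checks described above:
# the left-most AS in the AS path must be the announcing peer's own AS,
# and the BGP next-hop must match the peer's address on the exchange fabric.

def validate_announcement(peer_as, peer_ip, as_path, next_hop):
    if not as_path or as_path[0] != peer_as:
        return False    # left-most AS is not the announcing peer: fake origin
    if next_hop != peer_ip:
        return False    # next-hop does not belong to the announcing peer
    return True
```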

So, there are some solutions to filter routes, like using IRR data. Most people are using RADB to build filters, but on its own it's not a good idea, because the quality of the data in RADB is just a big mess. Job Snijders made a beautiful presentation last year on IRR Lockdown, and I put the link in my presentation. So one good way of filtering would be to do some IRR Lockdown: import data from different routing registries and validate that the data is reasonably accurate. That is what you can do with bgpq3, which is a very fast binary to query routing registries; as you can see in my example, I am using the server from NTT that does IRR validation and lockdown. So this kind of filtering can protect from prefix hijacking, but you have to take care about which routing registry you are querying.
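Applying an IRR-derived filter then reduces to checking each announcement against a list of (prefix, maximum length) entries, the kind of list such tools emit. The entries below use documentation prefixes and are purely illustrative.

```python
import ipaddress

# Sketch of applying an IRR-derived prefix filter to announcements.
# The filter maps each registered prefix to the maximum prefix length the
# member is allowed to announce within it (invented values for illustration).

irr_filter = {
    "192.0.2.0/24":    24,
    "198.51.100.0/22": 24,
}

def permitted(prefix):
    """True if the prefix falls under a filter entry within its max length."""
    net = ipaddress.ip_network(prefix)
    for entry, max_len in irr_filter.items():
        covering = ipaddress.ip_network(entry)
        if net.subnet_of(covering) and net.prefixlen <= max_len:
            return True
    return False
```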

Another solution is using RPKI and ROA validation, but Daniel from DE‑CIX will be talking more about this just after me, so I won't say more. Just register your prefixes at the URL on the RIPE website to be sure that your prefixes are signed.

Now, the conclusion:

Filtering prefixes on route servers is good for the Internet and good for the community. It forces users to update their IRR records, but it can also lead to rejecting some valid, or I would say legitimate, prefixes. When I was in my previous job at another exchange point in France, we were also doing some filtering, and some of our biggest customers, who had many LIRs due to mergers and acquisitions in their company, didn't know how to manage them, so for this kind of member we were dropping between 20 and 30% of legitimate prefixes.

So, route servers at exchange points are an effective solution to filter routes and enable more secure BGP between members. If you are wondering whether or not you should use a route server at an exchange point, I hope you have some key points here. Thank you.
(Applause)

FLORENCE LAVROFF: All right. Thanks. Do we have any question for Arnaud? Nobody.

ARNAUD FENIOUX: Yes. Thank you.

FLORENCE LAVROFF: So now we are going to have a second presentation, which is a little bit complementary to this one, a presentation from Daniel Kopp from DE‑CIX about RPKI validation at IXPs.

DANIEL KOPP: Thank you very much. I am with the R&D department at DE‑CIX. I talked about this topic before at Euro‑IX, so I know some people here have seen the talk before, but I added some value at the end, so you have to wake up at slide 15.

So I am pretty new to BGP and this whole community, so I can remember when I learned about BGP and that all the route updates are not validated, I was really puzzled that the whole thing can work. So the pictures are not chosen at random; that is how I feel about RPKI: I think you can measure Pinocchio's nose with RPKI, you can find the people who are lying to you in the system and advertising false routes, or, like in the old times with postcards, you put stamps on it and you know where it came from. So I think the best thing we can gain with RPKI is that we can identify prefix hijacking and we can find route leaks, which occur from time to time. And what we try to do at the IXP is to help the customer use RPKI more easily.

So, I will show you the set‑up at an IXP, what we thought about; here you see it's easy. The thing is that some routers are not able to perform RPKI validation due to lack of support in the firmware, or because they don't have the computational power, or it's just too bothersome to implement in production. So the idea is to use the route server and fan out all the results of the RPKI validation.

So the question for us was: how could we do that with the route server, how could we signal the results to the peers? The idea was to use an already available RFC draft, which uses extended communities to signal the result of the RPKI validation from the route server to the peers. And the idea is to use a non-transitive community, so it will not spread further than the IXP. That made us able to support legacy hardware, and it would be pretty easy to use. The idea came up one year ago at Euro‑IX, where all the IXPs were discussing it, and the draft is now in version 1; we only made some minor wording changes from the previous version lately. AMS‑IX, France‑IX and DE‑CIX are working on it at the moment, but everybody is welcome to add something to it.
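Conceptually, the signalling maps each validation result onto a non-transitive extended community. The sketch below assumes the values used by the origin-validation-signaling draft (opaque extended community type 0x43, subtype 0x00, last octet 0 = valid, 1 = not found, 2 = invalid); check the draft itself for the exact wire format.

```python
# Sketch of encoding an RPKI validation state as the non-transitive BGP
# origin validation state extended community, as the draft describes it.
# Encoding details are simplified for illustration.

STATES = {"valid": 0, "not-found": 1, "invalid": 2}

def validation_community(state):
    # 8-octet extended community: type 0x43, subtype 0x00, state in last octet.
    return bytes([0x43, 0x00, 0, 0, 0, 0, 0, STATES[state]])
```

Because the community is non-transitive, a member's router can act on it (e.g. deprefer or drop invalids) without the tag leaking beyond the IXP.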

Lately I did some measurements for RPKI, so I used RIPE NCC data to graph how RPKI is developing. You can see a measurement of IPv4 prefixes as a stacked graph, and you can see that at the moment it's slowly growing; it would be nice to see a spike in it, but we are now at 8.3% of the advertised IPv4 space covered with RPKI.

So I continued my investigation; I went into the RIRs and had a look at how much IP space they advertise. I will drive you through these statistics. We start at the top left, where you can see the advertised space of the different RIRs, of which the biggest is ARIN, and so on. Then I was thinking about how much IPv4 space is really advertised, because this would influence any RPKI statistics. So on the right you can see the utilisation of the IPv4 space which is really advertised, which is of course not 100%, but it didn't really change the statistics. At the bottom you can see the RPKI statistics: on the left it's the coverage of the total IPv4 space, on the right it's only the advertised space, so the RPKI coverage numbers get slightly better if you just look at the advertised IPv4 space. But it's still not that good, and it really jumps to your eye that RIPE is doing a really good job: 22% of the advertised space is covered, while at ARIN it's just 2.9%, for example. But they hold the biggest IPv4 space, so it would be nice to see some better numbers there.

And then I continued this investigation for a number of IXPs. On the top you can see how many prefixes are advertised at the IXPs, which is important so that you get a feeling for the RPKI statistics. On the bottom here we have the amount of valid IPv4 prefixes, and there you can see Dubai is a green outlier: they only advertise 700 prefixes, and this leads to a really odd valid number. But except for that, you can see that we have 7% valid prefixes in Frankfurt, for example, which is what you usually see in any RPKI statistics. And for the invalids, everything is sub‑1%, so it's a really low invalid number, which means that where a prefix is RPKI-covered, you can use the data pretty well.

"Not found" means the prefixes that can't be found in RPKI at all. So, on the bottom, any lower number is better: the less that is not found, the better. You can see in Frankfurt it's 91%, and in New York, for example, which would be more the ARIN space, a lot more is not found; it reflects the state of RPKI deployment there.

Then, this is the part I added to the Euro‑IX presentation. I had a look at the data volume at the IXP, how much data volume is covered by RPKI, because these numbers could be different from the normal statistics about prefixes. So here the red lines are the prefix statistics and the orange ones are the data volume. You can see the numbers get better: not found goes down to 86%, and what especially jumps to your eye is that the valid routes are doubled; that means there is double the amount of data volume covered, if you compare it to prefixes. So RPKI is getting better here if you just compare it to the data. If you go into the invalid prefixes, you can see that the numbers really change as well: "unauthorised AS" goes down to 8%, and "too specific", which accounts for the most data volume, goes up to 91%.

Yes, this brings me to the end of the presentation. To conclude: the idea is that if we can use RPKI at the route server, we will be able to support legacy hardware, and that would be really helpful. We would have added value for any customer at the IXP, and we could make the Internet a little bit more resilient and secure. The challenges we see at the moment are that people need to adopt this feature at the route server, and that people need to start using RPKI more. And we lately discovered, thanks to Martin Levy, that the ARIN relying party agreement may give us some problems, because it forbids using their Trust Anchor to signal the results of RPKI validation to other people, as far as I understood it. Ongoing work will be to develop the draft for using this at IXPs, and I hopefully can do more observation of the RPKI status, and we are planning to implement RPKI at DE‑CIX in the future. All right. That's it for my talk. Any questions?

REMCO VAN MOOK: All right. Thank you, Daniel. Any questions? I see ‑‑

ALEX BAND: From the RIPE NCC. In order for you to do RPKI validation, which toolset are you using?

DANIEL KOPP: Which tools? Your tool.

ALEX BAND: Our tool.

DANIEL KOPP: It was only for the investigation I did, nothing in production or anything.

ALEX BAND: OK. How is it working for you? Does it work, do you have any problems with it, does it need any additional features? Anything? I have the feeling that it needs to be a bit better, the tool we have.

DANIEL KOPP: The statistics it gives are right, they are valid, that is the most important thing. But we can talk a little bit about it later maybe.

ALEX BAND: That sounds like a plan. Good to know you are using ours.

JEN LINKOVA: Two questions. I am surprised you mention only legacy IP protocol in your slides. Any IPv6 data?

DANIEL KOPP: Sorry, again?

JEN LINKOVA: Do you have any data for IPv6 prefixes ‑‑

DANIEL KOPP: I am totally sorry, I forgot this.

JEN LINKOVA: That is our problem, it keeps happening, right.

DANIEL KOPP: I forgot to mention it's only IPv4, yes.

JEN LINKOVA: Any reason?

DANIEL KOPP: Lack of time, because I would have to do everything again.

JEN LINKOVA: Second question: You mentioned wrong origin AS; did you look at whether a lot of them are private AS numbers?

DANIEL KOPP: No, but I would like to, and dive deeper into the unauthorised or invalid states. I know there is even more you can discover, from other presentations which AMS‑IX did, but I am not sure if you can do it in an automated way; I think people are always doing it on a manual basis, but it would be nice to have more possibilities to automate this.

JEN LINKOVA: Because I am curious whether this was caused by people not removing private AS numbers ‑‑

REMCO VAN MOOK: Daniel, if you need an armed escort out of the building, I will see what I can do for you.

AUDIENCE SPEAKER: I have a question about the invalids. I have done a bit of analysis myself, and I found quite a few invalids actually do have valid announcements covering them, so a less specific, or potentially even from a different ASN, and I am wondering, you know, do you have any ideas about how bad those invalids are?

DANIEL KOPP: No, not really. It was only statistics, but I never really looked into the details, who is advertising this or what the causes might be, but I am looking forward to having some automated way to get more information about this.

AUDIENCE SPEAKER: I am mentioning it because I know of at least some instances where invalids occur and they are not made valid on purpose, although there are covering announcements that are supposed to be valid. So if you just look at the amount of invalids as a measure of how good the data is, there are some more considerations.

DANIEL KOPP: Yes, I mean, most invalids are probably due to too-specific prefixes and wrong configuration, but they are used in RPKI and ROAs, so, yes.

AUDIENCE SPEAKER: Malcolm. Please educate me further if I am misunderstanding this, but the only real value that I see in this whole thing is the ability to protect yourself from hijacks by dropping invalids. So all the statistics about take-up and stuff are all very interesting, and great if it's your project to roll out RPKI, but from an actual value point of view, all that really matters, as far as I can see, is: what percentage of invalids are actually caused by hijacks as opposed to something else? That could give us some idea as to whether it's worth doing any of this; otherwise you are introducing complexity, and things that can go wrong, for no benefit. Can we focus on that and see how much of a problem there is that you are discovering and actually protecting against with this? You are trying to...

RUEDIGER VOLK: Let me at least try to fix a little bit of your misconception. The invalids that are observed actually are useful as feedback to the people who are publishing them, because, well, what does an invalid mean? It means something is going wrong, well OK, what does going wrong mean? Where do I have the benchmark what is correct? That is documentation. Or more correctly, technically said, the documented authorisation, and the invalids, even if you are not dropping them, used as feedback to the people who are propagating the stuff, already would be useful. And would be actually addressing a little bit of Tim's questions, because yes, in the end, a more specific that is ‑‑ that is distributed without having the official authorisation out there, only the originator can tell whether this is an unintentional leak or whether it is something bad or something that doesn't matter. So, the question I ‑‑ so, again, using ‑‑ well, OK, your assumption that only figuring out what is really wrong, is certainly more limited than the value we get out here. Agreed? Good.

REMCO VAN MOOK: Ruediger, is there a question to all of this?

RUEDIGER VOLK: Yes, actually, I have a related question for Daniel and that is ‑‑ actually, two: One is, have you considered without going to your community propagation of results, just installing a simple service that gives the feedback that I was talking to ‑‑ was talking about, to your members, to the peers of your route server, telling the essentially the up streams, hey you are sending stuff that looks dubious. I think you could do that right now ‑‑

DANIEL KOPP: Yes.

RUEDIGER VOLK:  ‑‑ without a lot of work, and, well, since I always want to have some nasty side remark about the ARIN policies, I think you could even do it with the ‑‑ using the ARIN ‑‑ the RIPE trust ‑‑ the ARIN Trust Anchor, I don't think ‑‑ I don't think the liability that you are assuming there for just distributing the information off‑line, I think that can be handled.

DANIEL KOPP: Yes.

RUEDIGER VOLK: And you could check that with John, I think. Otherwise, actually, I think the ARIN relying party agreement would even mean that any results you have in your statistics should essentially be counted here as a gotcha, because yes, they improved the agreement so that statistics results can be published, but only in ways that are not easily machine recognisable. But away from ARIN: yes, so please provide feedback on the invalids.

DANIEL KOPP: Yes, a good idea.

RUEDIGER VOLK: The question of, as far as ‑‑ as long as you don't get an agreement with ARIN about using the Trust Anchor, obviously you can't cover them. And that's of course one of the reasons why the take‑up in the region is so slow ‑‑ so little.

REMCO VAN MOOK: Thank you, Ruediger. I think this is, we have discussed this now to death. Thank you, Daniel, for standing up here. Big round of applause for Daniel.
(Applause)

So, the next presentation is going to be Greg, but Greg, I am going to steal a minute first. I know a lot of you do a lot of flying, right? Doesn't this room feel a little bit economy class? I did a quick head count, about 180 in this room, where my sources tell me that Address Policy in the big room has about four people attending right now. Quick straw poll. Who is just here to avoid Address Policy? No shame in it. Most of you are here because you are really interested in Connect. So I would like to take a quick picture from the stage to support my claim to get more space. Here we go. So, I hope I will get a bigger room for this session next RIPE meeting. So keep your fingers crossed and make some noises towards Working Group Chairs and other people organising this stuff.

So, Greg, over to you. And he is going to talk about PeeringDB, big surprise.

GREG HANKINS: We have been doing a lot of evangelism about PeeringDB and the new 2.0, so has anyone here not heard of PeeringDB by now? Aaron has not. Has anyone not heard about 2.0? I will tell you a little bit about it, if I can figure out how to work this: the new features, and then I'll give a short organisation update after that.

PeeringDB is the source of peering information on the Internet; it's incredibly useful to have your network, ISP or facility listed there, and it's now required by a lot of networks, so if you don't have a PeeringDB entry they won't peer with you. PeeringDB 2.0 launched in March; it was a pretty huge success and a pretty painless launch, probably due to the multi-year extended beta that we had before that. Currently we are at 2.0.10, which has just a few minor bug fixes. In general, at the launch there were a ton of support tickets, so we handled hundreds of tickets, and the developer/contractor was very responsive. I broke down the new features into infrastructure features and user features. The infrastructure ones are important because they translate into the user features, so there are a couple of important ones. Number one, the database has been completely rewritten, using Python on the back end and HTML5 on the front end. It looks great on laptops, desktops and mobile devices; I was surprised at how well it worked on my phone, so try it on your mobile or tablet. You can populate all the fields and really do anything from a mobile device that you could on a laptop or desktop. There is a redesigned database schema, and all the data is permissioned now, which allows some fine-grained user controls. We do input validation on the fields; in PeeringDB 1.0 there was no input validation and a bunch of junk in the database, so we are trying to do some better validation of the data, and trying to validate the data in PeeringDB records when we can. For example, if you select your ASN to peer at an IXP, you will notice there is a dropdown box that only allows you to select your own ASN; before, we had people putting in the IXP's ASN or the route server's ASN, so we are trying to avoid some of those errors. The data is now versioned, so things like rollbacks are easy. 
There is a ton of historical data from the nightly PeeringDB dumps that we have; it took about 3 weeks to import, and it's there but not available yet; that is one of the uses of the data versioning. There is a RESTful API; it's stateless and allows a lot of automation, and local database sync. And the goal was to have documentation available simultaneously with the new features; there is a bunch of resources at the end of the presentation that point you to all that stuff.

In terms of user features, it was hard to rank these because there were so many of them, but I think the top feature is that facilities and exchanges can maintain their own information now. In PeeringDB 1.0, the way that was done was to send a message to support, and the admins had to update anything at an IXP or data centre facility; this can be done by individuals now. You can now manage multiple organisations with one account; before, you had to have multiple accounts to manage different ASNs, facilities or other entries in PeeringDB. And users can also manage their own accounts and permissions. This was a challenge before, because you couldn't remove users or delegate any permissions, and a lot of stuff had to be done by the admin team. Among the minor ones, the contact information has permissions now; in the past we had problems with people mining PeeringDB for e‑mail addresses and spamming, so hopefully this will solve some of those problems. As I mentioned, the API and the ability to do a local PeeringDB sync are also key new user features.

I will show a couple of screenshots. This is a fairly complete deck, but I am going to skip through some slides because we only have 15 minutes. This is an example of how multiple records under a single organisation work. For example, LINX: they have one facility, two networks (their route server network and corporate network) and six exchanges that are all under the LINX organisation. So they can delegate permissions for each of these, they can edit their own data, and it makes everything a lot easier and more efficient.

One account can manage multiple organisations; here is an example for Job, and you can assign different administrator privileges. If you are not registered in PeeringDB already, or if you haven't taken ownership of your record, you will probably need to do this. If you are a network, you should have an admin account that was carried over from 1.0. If you have an IXP or facility, since that wasn't possible before, you will have to take ownership, and the way you do this is to search for the facility; there is a little "request ownership" box that you click, which generates a support ticket so that the admins can validate and approve the request.

If you are a new user, or you are trying to request affiliation with an existing organisation, it's really simple. Number one, you log in and go to your profile, which is in the top corner. First you need to validate your e‑mail address; if your e‑mail address is not validated, there will be a box there in order to do that. And then it's really easy: you enter an ASN or organisation, it auto-completes on existing AS numbers or organisations, and you just click affiliate. If it's an existing organisation, it sends a message to that record's admin, not to PeeringDB but to the owner of the organisation, and they can approve it. And if it's a new organisation, it sends a ticket to support for validation, and then the admins will take care of that and give you access.

Here is an example of user management: if there are pending requests, you will see them under this little tab here, where users are requesting affiliation. You can select different privileges: there is the concept of an admin, which has administrative privileges, or just a member, so you can delegate those permissions; I will show the permissions on the next slide. If you want to remove users from the organisation, you can do that too; it removes them from your organisation, but it doesn't actually remove their PeeringDB user accounts, as that can only be done by the admins. So it's really easy to add or remove members from your organisation.

Here is a really good example of permission delegation. We have a number of different permissions, and then fine‑grained permissions under those. So here, in the example, user Paul can manage several network records but he is not allowed to manage any facilities or exchanges, and user Rafael can manage only the Equinix Connect network record and any exchanges or facilities, but even under those he is not allowed to delete anything. So you have the concept of create, update and delete, and each of those permissions is also delegated under this tab. You can have people that can only edit information and not delete it, or people that can only update information but not create it, so the combinations are endless and pretty much up to you what you want to delegate.
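The delegation model described here, per-object-prefix grants of create/update/delete rights, can be sketched roughly as follows. This is illustrative only: the user names, the object-key layout and the longest-prefix-match rule are assumptions for the sketch, not PeeringDB's actual internal model.

```python
# Sketch of PeeringDB-style fine-grained permission delegation.
# Users, object keys and the matching rule here are hypothetical.

CREATE, UPDATE, DELETE = "create", "update", "delete"

# Per-user grants: object-key prefix -> set of allowed actions.
permissions = {
    "paul":   {"net.": {CREATE, UPDATE, DELETE}},          # networks only
    "rafael": {"net.equinix-connect": {CREATE, UPDATE},    # no delete anywhere
               "ix.": {CREATE, UPDATE},
               "fac.": {CREATE, UPDATE}},
}

def allowed(user, obj, action):
    """True if `user` may perform `action` on `obj`, by longest prefix match."""
    rules = permissions.get(user, {})
    for prefix in sorted(rules, key=len, reverse=True):
        if obj.startswith(prefix):
            return action in rules[prefix]
    return False

print(allowed("paul", "net.acme", DELETE))               # True
print(allowed("rafael", "ix.de-cix", DELETE))            # False
print(allowed("rafael", "net.equinix-connect", UPDATE))  # True
```

The point of the example is the same one made in the talk: edit rights, create rights and delete rights are independent toggles per object, so you can combine them however you like.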

Here is the example of the contact permissions. There are several different roles that are pre‑defined: abuse, policy, technical and so on. Each of those has one of three visibility levels. There is private, which is internal to the organisation, so only people affiliated with that organisation can see it. There is users, which is for registered users only. And then there is public, which anyone can see. So, if you want to restrict certain contact information, like some peering aliases, to registered users only, you can do that, while maybe your sales contact information is open to everyone, or something like that. Whatever you want to do.

I will talk a bit about the API. It is pretty complex, and probably a long presentation in itself, but basically in 2.0 the goal was automation from the very beginning. The goal was that anything that can be done over the web interface can be done with automation, so create, read, update and delete are all supported, and each object has an associated tag. There is a list of objects and all the API documentation at these URLs. If you wanted to do a quick example, just get something and return it in JSON output, you could do this for all of the networks, or for a specific network, entry 20, which happens to be 20C. You can see all the data that is in the database by doing a simple curl command. There are two tools that I will tell you about that work with database sync. The local database sync is an option for you to sync PeeringDB to a local database, as often as you like, and it supports incremental sync, so the load is pretty low if you do an incremental sync. Why would you want to do that? In my previous slides I had "in case PeeringDB isn't up", and then I removed that slide, and it turns out we had an outage yesterday, so over the past 24 hours it has been available 76% of the time; over longer periods it is better than that. Regardless, it does improve performance and reduces the load on the PeeringDB servers, so if you want to have a local sync and then do thousands of queries per second, you can do that. It also allows you to customise PeeringDB to your liking. If you don't like the colours that the PeeringDB web interface has, you can write your own. If you would like to add custom fields for your internal information, you can do that. If you would like to ignore certain fields, or customise it for whatever interface, you can do that. You have a choice of database engines: MySQL, Postgres, SQLite. You can also write your own tool that syncs using the API.
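The curl example from the talk maps directly to a few lines of code. The sketch below builds the same URLs and unpacks the API's response envelope; the sample payload is abbreviated and illustrative (the field values are made up), but the `{"data": [...]}` wrapping is how the API returns records.

```python
import json

# Sketch of querying the PeeringDB 2.0 API and unpacking the response.
# Endpoint paths match the talk's curl example; the sample payload below
# is an abbreviated, made-up stand-in for a real response body.

API_BASE = "https://www.peeringdb.com/api"

def net_url(net_id=None):
    """URL for all network records, or for one specific record."""
    return f"{API_BASE}/net" + (f"/{net_id}" if net_id is not None else "")

def unpack(payload):
    """The API wraps results in a 'data' list; return the records."""
    return json.loads(payload)["data"]

# In real use: urllib.request.urlopen(net_url(20)).read()
sample = '{"data": [{"id": 20, "name": "20C", "asn": 64500}], "meta": {}}'
records = unpack(sample)
print(net_url(20))         # https://www.peeringdb.com/api/net/20
print(records[0]["name"])  # 20C
```

The same pattern works for any object tag (networks, exchanges, facilities), which is what makes full automation against the database straightforward.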

The Django library is one of the things that binds to the database schema and gives you a framework to integrate with tools; it supports the three engines, and you can find it here. And there is a Python client, which is a way for you to display PeeringDB information in JSON or YAML or a Whois‑like display; you can find it at this URL, and there are some examples. Now a little bit about the membership and the committees. PeeringDB was officially formed last December. An interim election for a temporary board was held, and then we held the elections for one and two year terms in April. Membership voting was quite high: 94 registered and 80 voted, so that is a pretty good turnout. The board that was elected you can see here on this slide. I like to include pictures, and I make the point that PeeringDB is run by people in the community, so these are people that attend conferences; there are several board members here, so you can find them in the audience if you have questions or want to talk to people about PeeringDB. We have two committees, admin and product. The admin committee is responsible for the day‑to‑day herding and management of tickets, adding users, fixing problems, things like that. The product committee has just been formed and will be responsible for the product roadmap. We are not currently seeking any volunteers because we have full committees now, but if you would like to contact either you can do so at the e‑mail addresses below.
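The incremental sync mentioned above boils down to remembering when you last synced and asking only for records changed since then, then upserting those into your local store. A minimal sketch, with a plain dict standing in for the local database (treat the exact query-parameter name as an assumption; the real client libraries handle this for you):

```python
# Minimal sketch of incremental local sync: track a last-sync timestamp,
# request only the delta, and upsert changed records locally.
# The "since" parameter name is an assumption for illustration.

local_db = {}   # record id -> record, standing in for a real database
last_sync = 0   # epoch seconds of the previous successful sync

def delta_url(obj_tag, since):
    """URL asking only for records of one object type updated after `since`."""
    return f"https://www.peeringdb.com/api/{obj_tag}?since={since}"

def apply_delta(records):
    """Upsert changed records into the local store."""
    for rec in records:
        local_db[rec["id"]] = rec

# Two sync rounds: the second updates record 1 and adds record 2.
apply_delta([{"id": 1, "name": "Old"}])
apply_delta([{"id": 1, "name": "New"}, {"id": 2, "name": "Other"}])
print(len(local_db), local_db[1]["name"])  # 2 New
```

Because only changed records move over the wire, you can run this as often as you like and still keep the load on the PeeringDB servers low, which is exactly the point made in the talk.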

Here is the admin committee. It's been really fantastic and I think... I don't know if Arnold is here, but I think we are well under a 24‑hour response time for tickets. We have people from around the world that work in different time zones, so you should get a response fairly quickly if you have issues. The other thing is that we have added people with local Portuguese language skills; we got a lot of very confused tickets from Brazil from people trying to register who didn't understand what to do, and their English was kind of a problem, so Eduardo has written some training material in Portuguese. Here is the product committee. We are all here in the audience: the admins are here, the product committee is here, the board is here. Feel free to find us. On sponsors: PeeringDB is accepting sponsorship money, which is used to run the organisation, and the rest of the money will be used to fund feature development, so it's important that we have a way to pay for that. If you would like to become a sponsor you can get in touch with the sponsorship team. We want to thank our sponsors. We had a lot of existing sponsors and we just had a couple of new ones: ISOC just joined as a gold sponsor, and RIPE NCC and NLIX are our two recent silver sponsors. So thanks to everyone who was a sponsor and has become a sponsor. Here is a lot of information resources. We have a number of mailing lists that you can join for announcements, governance and technical information. The docs are all at this URL, and the Portuguese training material is at this docs URL. If you would like to contact the board and officers, contact them; we are also on Twitter and Facebook for announcements and things like that.

The last slide is that we want to thank Richard Turkbergen. This is a photo from GPF, where we gave him an award. For those who don't know the history, he was the one who started PeeringDB, kind of as a garage project, and had no idea that it would turn into an indispensable resource. He has donated PeeringDB to the organisation, and we thank him for that and gave him a little award.
(Applause)

That was it.

FLORENCE LAVROFF: Thank you, Greg. Do we have any question here, any PeeringDB super user?

SPEAKER: Google. Thanks for all the work, everything. Just a quick question: the visibility for all the imported data, what was the default set to?

GREG HANKINS: That is a good question. I don't know. Does anyone know? Is it open? OK, it was open. So if you don't want your information to be open, go back and lock down those permissions if that is not what you want.

SPEAKER: Sebastian. It's really good work that you did with 2.0; I was testing the Beta before it was launched and it was really great. I am not an operator, I am a researcher, and the interesting thing is there is one use case you are missing, which is looking up an address: that is, does this address belong to an IX in PeeringDB. That could be very useful, and actually useful for you too. I know researchers that are doing this by using the API to get a full dump of PeeringDB and then searching, so you would be saving some cycles there.

GREG HANKINS: Yes, that sounds like a good feature request. There are a number we will look at, and certainly usability and search enhancements are at the top of the list.

FLORENCE LAVROFF: No more questions?
(Applause)

So our next topic is going to be from Christian about the latest trends in data centre optics. Christian.

CHRISTIAN URRICARIET: Good morning. It's my first time at RIPE, and presenting at RIPE, so thank you for the opportunity. I work for Finisar Corporation; I am not sure if you are familiar with Finisar. We are a US company, the largest supplier worldwide of optical interconnects: pluggables, optical transceivers, modules. We are the largest out there, with around 30% market share, a worldwide presence and so on. We sell mainly to the system OEMs, so you may have used our products as part of Cisco or Juniper equipment, but more and more we are engaging the data centre and user communities and carriers directly. What I am going to talk about is the optical layer, the trends that are happening around data centre applications in particular, and the new needs for speed, density and power.

So, the driver, and you all know this better than I do, is hyperscale primarily, but also other applications: the increase in traffic creates this huge bandwidth demand that flattens data centres and creates the need for faster speeds in data centres, lower power, higher speeds, higher port densities in the switches and so on. That is really the driver, starting in hyperscale but certainly moving after that into smaller data centres and ultimately into the enterprise. So those data centre connections are evolving: in the rack, from the 10 G deployed today to the 25 gigabit Ethernet that is starting to be deployed in the servers, with 50 to be deployed as the next speed. In the switches, the speeds are going from 40 to 100 gigabit Ethernet, which will start to be deployed in a big way this year, and what follows will be 200 GE or 400 GE, depending on who you talk to and the application. The long spans, the data centre interconnects, for example the 100 G on routers, have been there already, with 100 GE coherent for the WAN; that is going to 400 G, which is being standardised, and what happens next is an open discussion: is it 800 gigabits, is it one terabit, or even beyond?

So, what are some of the trends? The first is a significant increase in port density, both at 100 G and 25 gigabit Ethernet: smaller form factors, lower power to handle these densities, and even on‑board optics.

100 G has come a long way over the last five or six years, evolving from modules like CFP or CFP2 in routers to the QSFP28, which is what switches are implementing for 100 gigabit Ethernet ports. These are either optical transceivers or active optical cables, which are assemblies that have two transceivers with a fixed optical cable in the middle and no connector; they are available at lower speeds but also at 100 GE, and are used for connecting to multiple servers running at 25 gigabit Ethernet in the configuration shown on the right. The connections on the server are modules called SFP28, which is the evolution of the 10G SFP module being used in servers today: a very small unit that is now, again as either a transceiver or an active optical cable, being deployed in next generation servers and some switches as well.

Some other ways that people are increasing density is using on‑board optics. These are applications that came out of supercomputers, which are using them in a big way. There are some routers that use them, and a few switches, but people are looking at them for next generation densities at 400 G and beyond. These are modules that are not pluggable; they are installed mid‑board, very close to the ASIC, so the traces are short, the power is lower and the bandwidth density is much higher. Things like 12‑channel or 24‑channel modules running at 10 or 25 gigabits per second per channel are deployed today in supercomputers, and even 50 gigabits per channel is under development. This kind of on‑board optics is something Microsoft is looking at; they have sponsored an effort called COBO, the Consortium for On‑Board Optics, and this is actually the module that COBO is trying to standardise.

So what are some of the other trends? The extension of the types of optical links between boxes beyond what the standards have specified. If you look at the four different quadrants of connection types, single mode and multimode, parallel and duplex: we have things like SR4 or extended SR4, which is part of the standard for multimode applications, up to 100 metres only, or 300 metres. There is 40 gigabit LR4 for longer reaches, between sites or even going into the WAN in the case of 40 kilometres. There are single mode fan‑out applications to 10 gigabit per second ports with distances longer than 100 or 300 metres, which is typical in the hyperscale computing environment. And the fourth quadrant, which is perhaps the most interesting, is the use of duplex multimode fibre for 40 gigabit, which means legacy fibre; I will touch on that later on because it's interesting.

If we look at 100 G we see parallel multimode, with and without FEC. We see long wavelength versions, standards like LR4 or CWDM4/CLR4: a big variety of optics is appearing in the market, because again they serve different needs for different types of data centres. PSM4 is another MSA that some people are planning on implementing, a parallel single mode interface. And the fourth quadrant is the interesting one: how do you run 100 gigabit Ethernet on existing fibre infrastructure, the duplex multimode being used today at 10 gigabits per second? More on that coming up.

Let me talk a bit about latency. This is important because at 10 gigabits per second it wasn't a big deal; there was no significant latency in the link. As networks have gone to 25 G per channel, both for 25 gigabit Ethernet and for 100, the concept of latency is becoming important, since it has been required to use FEC to extend the link length at 25 gigabits per second. The FEC being standardised by the IEEE incorporates latency of around 100 nanoseconds into the link, and some applications are very, very sensitive to that latency: things like high frequency trading, and other future applications around 5G mobile deployments. Things like 100G SR4, 25G SR and CWDM4 all require FEC, which adds latency.

Let's look at the impact of that latency. If you look at a 100 metre link on multimode fibre, the time it takes a bit to go over it is around 500 nanoseconds; if you add 100 nanoseconds of FEC, that is 20% more, so it's significant. If you look at a single mode link, however, of only 500 metres, it's around 2,500 nanoseconds of propagation time, and if you add another 100 that is not very significant. So it's really a multimode problem, for very short links, again in high performance data centres, things like high frequency trading and so on. The market has responded to that: there are now both 25 G and 100 G optical transceivers that do not require the use of FEC. They have a shorter reach, 40 metres only, for example, but they ensure that there is error‑free operation without the latency, and that is very important in some applications. So that is another trend in the market.
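The figures in this comparison follow from simple physics: propagation time is length divided by the speed of light in the fibre (roughly c/1.5 for glass), and the FEC adds a fixed ~100 ns on top. A back-of-the-envelope check:

```python
# Back-of-the-envelope check of the latency figures from the talk:
# propagation time = length / (c / n), with n ~ 1.5 for glass fibre,
# and the standardised FEC adding roughly 100 ns.

C = 299_792_458   # speed of light in vacuum, m/s
N = 1.5           # approximate group index of optical fibre
FEC_NS = 100      # approximate FEC latency, nanoseconds

def propagation_ns(length_m):
    return length_m / (C / N) * 1e9

mm = propagation_ns(100)   # 100 m multimode link
sm = propagation_ns(500)   # 500 m single mode link
print(round(mm), "ns; FEC adds", round(FEC_NS / mm * 100), "%")  # ~500 ns, ~20%
print(round(sm), "ns; FEC adds", round(FEC_NS / sm * 100), "%")  # ~2500 ns, ~4%
```

This is why the no-FEC transceivers matter only for the short multimode links: on a 500 m single mode span the same 100 ns is lost in the noise.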

The third trend, which I hinted at earlier, is the reutilisation of the existing 10 G multimode fibre plant for 40 G and 100 G Ethernet. Why is that important? You have a brownfield application, a data centre that has a large amount of fibre installed; you want to reuse that and not change to MPO‑based ribbon cable, which is what you need for parallel 40 gig or 100 gig, or to single mode, which is more expensive. How do you keep that duplex multimode fibre running at 40 G and 100 G? The market has responded by introducing a concept called SWDM, or shortwave wavelength division multiplexing, which transmits four different wavelengths, four different colours, each one running at 25 gigabits per second, on a single fibre pair. It is similar to the LR4 concept in single mode, but now it's a solution optimised in cost and performance for multimode fibre. This is an industry effort, the SWDM Alliance: fibre manufacturers, equipment manufacturers, Juniper and so on are supporting SWDM, so you are going to hear that term more in the context of 40 gigabit Ethernet and 100. That really completes the fourth quadrant. There are two solutions out there. One is bi‑directional, with limited use; only a few vendors are supporting it, and it has a limited distance of 120 metres. The other is SWDM4, which provides 300 metres, but over duplex multimode, and there is also a similar product, or concept, or technology for 100 gigabit Ethernet as well. These are now being qualified and tested by system vendors, so you are going to see more of that coming from the likes of the switch vendors and router vendors.

So what is the last trend we are seeing? The move beyond 100 G: what happens after 100 is 200 and 400, which means routers and data centre switches and so on. The IEEE is working on two standards already. 400 gigabit Ethernet is being standardised for both multimode and single mode applications, and vendors like ourselves are already investing heavily in the technologies around 400 gigabit Ethernet: new modulation formats like PAM4, for example, and the manufacturing infrastructure around the PAM4 ecosystem, which is new for the data centre market. The other effort is for things like 50 gigabit Ethernet into the server, which is being standardised, and 200, which is potentially the intermediate step for lower power, more cost‑effective next generation switches that you are going to start to hear about. That is again being standardised now by the IEEE, and we are a big part of that effort, as are many other vendors.

So the first module that you are going to see is normally in the router, just like it happened at 100 G, so the CFP8 is the first generation implementation of a 400 GE connection that you will see. Again, this is first generation: somewhat large, like a CFP2 at 100 G, but running at 400, and it is a standard form factor, so you will see it from multiple vendors in the market. From the technology standpoint, PAM4, as I mentioned, is very important. This is multi‑level transmission, so it's not 0s and 1s any more but rather four levels for each symbol, if you will, so two bits per symbol to increase the data throughput. This is an example of a recent demo done by Finisar, 100 gigabit as two by 50 G on two different wavelengths, that we did recently at OFC, and it just shows the importance this will have for the 400 G and 200 G ecosystem, and 200 into the server; everything will be PAM4 in three to four years.
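The "two bits per symbol" point is just log2 of the number of levels: PAM4 doubles the data rate at the same symbol (baud) rate compared with NRZ. A quick check of the numbers:

```python
# PAM4 carries log2(4) = 2 bits per symbol, so at the same baud rate it
# doubles the data rate of NRZ (2-level) signalling.

from math import log2

def data_rate_gbps(baud_gbd, levels):
    """Data rate in Gb/s for a given symbol rate and number of levels."""
    return baud_gbd * log2(levels)

print(data_rate_gbps(25, 2))      # NRZ at 25 GBd  -> 25.0 Gb/s
print(data_rate_gbps(25, 4))      # PAM4 at 25 GBd -> 50.0 Gb/s per lane
print(data_rate_gbps(25, 4) * 8)  # eight 50 G lanes -> 400.0, i.e. 400 GE
```

The same arithmetic is why 50 GE serial lanes and 200/400 GE modules all land on PAM4: more bits per symbol rather than ever-faster symbol rates.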

Finally, the last trend, somewhat disconnected, but I think it will be important for this audience: open networks. We are very active in the open networks community, the Open Compute Project and many others. We are sponsoring two initiatives, one at Open Compute called Open Optical Monitoring and the other as part of sFlow, to essentially create an API that allows open access into the optical layer, into the monitoring functions: to allow you to look at the optical link itself, things like optical transmit power, receive power, temperature of the modules and so on, as well as any other control functionality or added feature that may be available in the future. That is going on right now; there is already Beta code on GitHub, anybody can access it and implement it, and we are working with the white box manufacturers like Edgecore and a few others that are part of the open network ecosystem, so we are leading that effort within those two groups. This will give you things like a network inventory of optical ports, serial ID information to identify each individual port, the digital diagnostics I mentioned, and any new value‑added features, all in an open network ecosystem. So that is something that will be important for the future.

Again, in summary: very large bandwidth demand, we all know that, and the need for lower power and higher speeds. The optical industry is supporting that today with 25 gigabit Ethernet and 100, smaller and lower power, and we are looking into the future: 50, 200, 400. We are ready for the next challenge and the next Ethernet rates and speeds. Open interfaces are coming into the optical layer; that is an important trend. Thank you very much. Any questions?

REMCO VAN MOOK: Thank you, I think that was excellent. Any questions for Christian?

AUDIENCE SPEAKER: Paulo, speaking for myself. I saw in your slide that you are exporting statistics with sFlow. I was just wondering, why sFlow? We are in times where there is streaming telemetry, for example, so why sFlow instead of streaming telemetry, or instead of IPFIX, which is a standard protocol that is easy to parse?

CHRISTIAN URRICARIET: Good question. Give me your card afterwards and I will get you in contact with our optical guy, the physical layer guy. But it's a good question; we can follow up afterwards and we will make sure that we get you an answer from those guys.

AUDIENCE SPEAKER: Gaurab. The question is not for you but for the audience. How many in the audience are using or trying CLR4?

CHRISTIAN URRICARIET: I can go deeper into explaining the two.

AUDIENCE SPEAKER: Any IXPs in the room looking at that? We have a trial unit, and I am wondering if anybody else is using it; I want to talk to them.

CHRISTIAN URRICARIET: CWDM4 and CLR4 are almost identical; there are small differences, and I think in the market they will ultimately converge because they target the same application space: two kilometres with FEC in the data centre, in the datacom space, as a cheaper alternative to LR4, which has higher performance, with a 6.2 dB interface across multiple patch panels.

AUDIENCE SPEAKER: If anyone is using that or trying to use it, I will be happy to talk to you later. I think we are doing both of them. Thank you.

AUDIENCE SPEAKER: Tom Hill. I really quite enjoyed that, thank you very much. The question I had was regarding SFP56 availability; do you have any time to market for when you think those will be available?

CHRISTIAN URRICARIET: SFP56, which is the 50 gig version of the SFP module I mentioned at 25 G, will go into the server and then switches. We believe that it will start to be trialled and deployed in the market around 2018. That is the time frame we believe in, based on what the server guys are doing, basically.

REMCO VAN MOOK: Well, thank you Christian. Big round of applause. Thank you.
(Applause)

And then next up is Arnold. He is here. And he is going to talk about Euro‑IX, briefly.

ARNOLD NIPPER: Good afternoon, already. So I am between you and the lunch now. I will do a short Euro‑IX update. Just to recall what Euro‑IX is: we are meanwhile 15 years old, I will cover that later. We are an association of Internet Exchange points; meanwhile we have 80 affiliated IXPs, 55 regular members mostly from the Euro‑IX region, from 49 countries, operating roughly 100 peering LANs, and 25 IXPs from the rest of the world. Our latest members are Dataline from Russia, CAS‑IX from Morocco and Espanix from Spain. These are the countries our members come from. We still have some white spots in Europe which hopefully we will be able to cover, and we definitely need more members, especially from the Middle East region. Typically, that does not mean that there are no IXs in these countries, but there are still some countries in the world which don't have a local IX.

Seen on the global scale, membership pretty much comes from all around the world: from Canada, the US, Brazil, from Africa, India, Australia, Japan and so on. Besides the regular members, most of the income for the organisation also comes from what we call patrons, which are our sponsors. Sponsors typically came originally from the ‑‑ and also colocation providers have come in, and that is pretty much what we have today. Thank you to the sponsors.

What Euro‑IX basically does: we have two of what we call fora, that is, meetings, one typically in the April time frame and the other around November. On the other side we have a website, a portal for our members, databases and tools; we publish an annual IXP report; and we have two programmes, the Mentor‑IX programme and the fellowship programme. What we also do is benchmark ourselves, and have done for a couple of years already; it's BMC 11 meanwhile.

So I talked already about our fora. We had our last forum almost a month ago; it was held in Luxembourg and our host was LU‑CIX, the local IXP over there. We had a turnout of 120 attendees from 59 organisations, 42 of them operators. The day before, we had an IXP Manager workshop; IXP Manager is a piece of software written by INEX and widely used by IXPs around the world. We had technical, commercial, regulatory and route server sessions; route servers were already mentioned and covered by Arnaud in his presentations, which is BIRD and GoBGP, with new ones on the horizon. What we have also done for a couple of years is not only have meetings but also socialise; this typically takes place the day before, on the Sunday this time, with visits to SES and RTL to look around, and on the Monday we celebrated our 15th anniversary over there. We have the programmes I talked about, Mentor‑IX and the fellowship: Mentor‑IX is to get new members, and the fellowship is for existing members, to enable them to come to the fora. What we have found is that really coming together, talking to each other and exchanging information is the most useful thing. We also have a movie which explains what an IXP is. It's already, I guess, four or five years old, and meanwhile available in a lot of languages. What we added two or three weeks ago is a Chinese version, thanks to the Chinese Internet Exchange. It is still worthwhile to look at because it nicely explains what an Internet Exchange does. We have our monthly newsletter; if you are interested, please subscribe. We are on Twitter and Facebook; follow us. And that's it.
(Applause)

FLORENCE LAVROFF: Thank you, Arnold. Any questions for Arnold? I don't think so. All right. Thank you. We are almost coming to the end, and we have two lightning talks to close this session. The next topic is going to be from Barry O'Donovan about detecting asymmetric routing over IXPs.

BARRY O'DONOVAN: Good afternoon. This is a quick lightning talk on the results of the RIPE Atlas hackathon that happened on the Saturday and Sunday just preceding this meeting. Myself, Jacob from Facebook and Drew from Comcast formed a team, and we wanted to look at asymmetric routing over IXPs. The problem is very simply defined: at an IXP, a lot of the smaller organisations in particular, who may only use BGP to create a peering session or transit, and who join a single IXP in their local country and never use BGP outside of that, will often have some small misconfigurations in terms of how they preference and route traffic. Traceroutes only show one direction of how traffic moves, not the reverse path, and asymmetric routing is a common problem that we deal with at IXPs. So the solution, through the RIPE Atlas project, was to use RIPE Atlas probes to create bi‑directional traceroutes between ASs at an exchange: from two networks that connect to an exchange, we initiate a RIPE Atlas traceroute from one side to the other and from the other back again, and we compare the two paths. There are two constraints. The IXP has to support the JSON export schema, because we need to get information about the peering LANs, the peering addresses connected and the members, and particularly about which IP address is assigned to which member. The other constraint is obviously that a network that wants to use this or participate in it has to have an active and public RIPE Atlas probe within their network.

Just an example of the results. We ran one with HEAnet, and you can see that the system will identify traffic that doesn't flow over the exchange at all; there are four there marked non‑IXP, and this is expected because these four are very selective about peering, mainly transit providers. We can see one trace failed with Eircom, so that looks like a probe that didn't come back to us; we can see that we have got symmetric routing elsewhere, and we found one in particular that has asymmetric routing. So we can dig into that and look at the trace that comes back from the Atlas probes, and we can see that the traceroute goes over GÉANT, something they may be doing on purpose or be unaware of. One other example is to a small Irish regional ISP called Conway Broadband. The trace shows asymmetric routing outbound, so it's not going out the way we would expect, but what is interesting is that it is actually going over the exchange, just via a different member. So we will flag this up to Conway: their traffic out is going over Viatel, whereas it's coming back directly from their own router.
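The comparison at the heart of this can be sketched roughly as follows. The peering LAN prefix, the hop addresses and the member names are made up for illustration; the real tool gets the address-to-member mapping from the IXP's JSON export and the paths from RIPE Atlas traceroute measurements.

```python
import ipaddress

# Sketch of the hackathon's core check: given forward and reverse traceroute
# paths (lists of hop IPs), the IXP peering LAN, and a map of peering-LAN
# address -> member, classify the routing between two members A and B.
# All prefixes, addresses and member names here are illustrative.

PEERING_LAN = ipaddress.ip_network("185.6.36.0/24")
MEMBERS = {"185.6.36.10": "NetA", "185.6.36.20": "NetB", "185.6.36.30": "NetC"}

def ixp_member(path):
    """Member whose peering-LAN address appears in the traceroute, if any."""
    for hop in path:
        if ipaddress.ip_address(hop) in PEERING_LAN:
            return MEMBERS.get(hop, "unknown")
    return None

def classify(fwd, rev, a, b):
    """fwd is the A->B trace, rev is B->A; a and b are the member names."""
    m_fwd, m_rev = ixp_member(fwd), ixp_member(rev)
    if m_fwd is None and m_rev is None:
        return "non-IXP"               # neither direction crosses the exchange
    if m_fwd == b and m_rev == a:
        return "symmetric"             # each direction enters via the far side
    return "asymmetric"

# A->B crosses the IXP towards NetB, but B->A enters via NetC instead of NetA:
print(classify(["10.0.0.1", "185.6.36.20", "192.0.2.9"],
               ["198.51.100.1", "185.6.36.30", "10.0.0.1"],
               "NetA", "NetB"))        # asymmetric
```

This is the Conway Broadband case in miniature: both directions do cross the exchange, but the return path enters via a different member's peering-LAN address than expected.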

A quick disclaimer: this is the fruit of a hackathon, and databases were definitely harmed. Thanks to the organisers and to Comcast for sponsoring. As they say, the proof is in the pudding: there is a live example with INEX, and the code is online. If you would like your IXP added, just send me an e‑mail. Thank you very much.
(Applause)

REMCO VAN MOOK: Thank you, Barry. Do I have time for one very quick comment? No. Then it's off to Martin, who is going to do a lightning talk so fast I am going to stay on stage for it. No pressure, Martin.

MARTIN LEVY: Statistics. When you say a number, you are lying; there is a book about this. This is a request to everybody that runs an Internet Exchange that has route servers and says that they have 97% of their something connected to the exchange, to the route servers. Is that the number of actual real routes, or is it the number of members, independent of whether they are sending routes? Have a look at this issue, and maybe before the next Connect session somebody wants to talk a little bit about it. It's not hard to tell that you are seeing fewer than 100 percent of routes from a member, and I would like an update on the stats. That's it, thank you.
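Martin's point in one worked example: "97% connected to the route servers" can describe two very different quantities. All the figures below are made up purely to show how far apart the two readings of the same headline number can be.

```python
# Two readings of "97% on the route servers" at a hypothetical IXP.
# Every figure here is invented for illustration.

members = 100
members_with_rs_session = 97    # members with a session up on the route servers
routes_expected = 50_000        # routes those members announce in total
routes_on_rs = 31_000           # routes actually received by the route servers

pct_members = members_with_rs_session / members * 100
pct_routes = routes_on_rs / routes_expected * 100
print(f"{pct_members:.0f}% of members, but only {pct_routes:.0f}% of routes")
```

The same exchange can honestly claim 97% by one metric while serving barely 62% by the other, which is exactly why the request is to say which one the published statistic means.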

REMCO VAN MOOK: Thank you. Any immediate comments on that? No. So I am looking forward to a submission of, I think, about five or six presentations on route servers at the next meeting. With that, we are at the end. Do you have any feedback for Florence and me about the content of this session, the way we organise it, tomatoes, bricks, anything else? No, everyone is still happy. Other things we should or should not be addressing in this session? We are still finding our way around. OK, I take it this is a group that is desperate to go for lunch. So I thank you all for your time, and see you in Madrid.