The fine threads of trust that keep the Internet together

The chaotic, self-governing nature of the Internet is truly a wonderful global networking marvel! It should be at the top of one of the 'Seven Wonders of the World' lists! Blogs and newsfeeds rattle with discussion on what the International Telecommunication Union (ITU), in its capacity as chief regulatory agency on telecommunications, is trying to achieve or not achieve with Internet regulation. In the midst of this uncertainty it is great to see human nature, the spirit of co-operation, and just good down-to-earth network engineering talent go into fixing broken 'Internet things', such as the error induced last week by an Indonesian ISP's BGP route advertisements, which affected Google for 30 minutes.

The link that follows looks at the identification and resolution of the problem from a packet-analysis, network-engineering perspective, so if you are a keen network troubleshooter, add some of the lessons here to your skill-set toolbox: http://blog.cloudflare.com/why-google-went-offline-today-and-a-bit-about

That nasty Leap Second problem on July 1st 2012

by Geoff Huston - APNIC http://labs.apnic.net/blabs/?p=233#more-233

The tabloid press are never lost for a good headline, but this one in particular caught my eye: “Global Chaos as moment in time kills the Interwebs”. I’m pretty sure that “global chaos” is somewhat over the top, but there was a problem happening on the 1st of July this year, and yes, it impacted the Internet in various ways, as well as many other enterprises that rely on IT systems. And yes, the problem had a lot to do with time and how we measure it. This month I’d like to look at the cause of this problem in a little more detail.

http://www.heraldsun.com.au/news/leap-second-crashes-qantas-and-leaves-passengers-stranded/story-e6frf7jo-1226413961235

WHAT IS A SECOND?

I’d like to start with a rather innocent question: What exactly is a second? Obviously it’s a unit of time, but what defines a second? Well, there are 60 of these seconds in a minute, 60 minutes in an hour and 24 hours in a day. That would imply that a “second” is 1/86400 of a day, or 1/86400 of the length of time it takes for the Earth to rotate about its own axis. Yes?
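
Spelled out, that arithmetic is simply:

\[ 60 \times 60 \times 24 = 86{,}400 \ \text{seconds per day} \]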

Almost, but this is still a little imprecise. What’s the frame of reference that defines a unit of rotation of the Earth? (As was shown a century ago in the work on establishing a frame of reference for the measurement of the speed of light, these frame-of-reference questions can be quite tricky!)

What is the frame of reference to calibrate the Earth’s rotation about its own axis? A set of distant stars? The Sun? These days we use the Sun, which seems like a logical choice in the first instance. But cosmology is far from perfect, and far from being a stable measurement, the length of time it takes for the Earth to rotate once about its axis relative to the Sun varies month by month by up to some 30 seconds from its mean value. This variation in the Earth’s rotational period is an outcome of both the Earth’s elliptical orbit around the Sun and the Earth’s axial tilt. These variations mean that at the time of the March equinox the “solar day” is some 18 seconds shorter than the mean, at the time of the June solstice it’s some 13 seconds longer, at the September equinox it’s some 21 seconds shorter, and in December it’s some 29 seconds longer.

This variation in the rotational period of the Earth is unhelpful if you are looking for a stable way to measure time. To keep the unit at a constant value, the second is instead based on an idealised version of the Earth’s rotational period: “mean solar time”, the average time for the Earth to rotate about its own axis relative to the Sun. This is a relatively constant value, as the variations in solar time largely cancel each other out over the course of a full year. So a second is defined as 1/86400 of the mean solar day, or in other words 1/86400 of the average time it takes for the Earth to rotate on its axis. And how do we measure this “mean solar time”? Well, that’s derived from long-baseline interferometry of a number of distant radio sources.

So now we have a second as a unit of the measurement of time, based on the Earth’s rotation about its own axis, and from this we can construct a uniform time system to measure not only intervals of time, but to allow us all to agree on a uniform value of absolute time. From this we can not only make calendars that are “stable”, in that the calendar does not drift forward or backward in time from year to year, but also “accurate”, in that we can agree on absolute time down to minute fractions of a second. Or so one would’ve thought, but the imperfections of cosmology intrude once again.

The Earth has the Moon, and the Earth generates a tidal acceleration of the Moon, while, in turn, the Moon decelerates the Earth’s rotation. As well as this long-term factor arising from the gravitational interaction between the Earth and the Moon, the Earth’s rotational period is affected by climatic and geological events that occur on and within the Earth. This means that it’s possible for the Earth’s rotation to both slow down and speed up at times. So the two requirements of a second, namely that it is a constant unit of time and that it is defined as 1/86400 of the mean time taken for the Earth to rotate on its axis, cannot both be maintained. Either one or the other has to go.

In 1955 we went down the route of a standard definition of a second, which was defined by the International Astronomical Union as 1⁄31,556,925.9747 of the 1900.0 “mean tropical year”. This definition was also adopted in 1956 by the International Committee for Weights and Measures and in 1960 by the General Conference on Weights and Measures, becoming a part of the International System of Units (SI). This definition addressed the problem of the drift in the value of the mean solar year by specifying a particular year as the baseline for the definition.
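
A quick check shows that this fraction is just the number of 86,400-second days in the 1900 tropical year:

\[ \frac{31{,}556{,}925.9747}{86{,}400} \approx 365.2422 \ \text{days} \]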

However, by the mid-1960s this definition too was found to be inadequate for precise time measurements, so in 1967 the SI second was again redefined, this time in experimental terms as a repeatable measurement. The new definition of a second was the duration of 9,192,631,770 periods of the radiation emitted by a caesium-133 atom in the transition between the two hyperfine levels of its ground state.

LEAPING SECONDS

So we have the concept of a second as a fixed unit of time, but how does this relate to the astronomical measurement of time? For the past several centuries the length of the mean solar day has been increasing by an average of some 1.7ms per century. Given that the second was fixed by reference to the year 1900, by 1961 the mean solar day was around a millisecond longer than 86400 SI seconds. Therefore, absolute time standards that change the date after precisely 86400 SI seconds, such as International Atomic Time (TAI), get increasingly ahead of time standards that are rigorously tied to the mean solar day, such as Greenwich Mean Time (GMT).
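
The millisecond figure follows directly from the rate of slowing quoted above:

\[ \frac{1961 - 1900}{100} \times 1.7\ \text{ms} \approx 1.0\ \text{ms} \]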

When the Coordinated Universal Time (UTC) standard was instituted in 1961, based on atomic clocks, it was felt necessary that this time standard maintain agreement with the Greenwich Mean Time (GMT) time of day, which until then had been the reference for broadcast time services. Thus, from 1961 to 1971, the rate of broadcast time from the UTC atomic clock source had to be constantly slowed to remain synchronised with GMT. During that period, therefore, the “seconds” of broadcast services were actually slightly longer than the SI second and closer to the GMT seconds.

In 1972 the “leap second” system was introduced, so that the broadcast UTC seconds could be made exactly equal to the standard SI second, while still maintaining the UTC time of day and changes of UTC date synchronised with those of UT1 (the solar time standard that superseded GMT). Reassuringly, a second is now an SI second in both the UTC and TAI standards, and the precise time when time transitions from one second to the next is synchronised in both these reference frameworks. But this fixing of the two time standards to a common unit of exactly one second means that to track the time of day it is necessary to periodically add or remove entire seconds from the UTC time of day clock. Hence the use of so-called “leap seconds”. By 1972 the UTC clock was already 10 seconds behind TAI, which had been synchronised with UT1 in 1958 but had been counting true SI seconds since then. After 1972, both clocks have been ticking in SI seconds, so the difference between their readouts at any time is 10 seconds plus the total number of leap seconds that have been applied to UTC.
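
That relationship is simple enough to write down as a couple of lines of Python; this is just an illustration of the arithmetic above, not a substitute for a proper leap second table:

    # TAI - UTC, following the relationship described above: a fixed
    # 10-second offset accumulated before 1972, plus one second for every
    # leap second applied to UTC since then.
    def tai_minus_utc(leap_seconds_applied: int) -> int:
        return 10 + leap_seconds_applied

    # With the 25 leap seconds applied between 1972 and mid-2012 (noted below),
    # TAI ended up 35 seconds ahead of UTC after the June 30 2012 leap second.
    print(tai_minus_utc(25))   # 35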

Since 1 January 1988 the role of coordinating the insertion of these “leap second” corrections to the UTC time of day has been the responsibility of the International Earth Rotation and Reference Systems Service (IERS). IERS usually decides to apply a leap second whenever the difference between UTC and UT1 approaches 0.6s, in order to keep the absolute difference between UTC and the mean solar UT1 broadcast time from exceeding 0.9s.
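
Expressed as a (much simplified) decision rule — the thresholds are the ones quoted above, and the actual IERS process is of course based on detailed Earth-rotation measurements rather than a one-line test:

    def leap_second_needed(utc_minus_ut1: float) -> bool:
        # Schedule a leap second once |UTC - UT1| approaches 0.6 s, so that
        # the absolute difference never exceeds 0.9 s.
        return abs(utc_minus_ut1) >= 0.6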

The UTC standard allows leap seconds to be applied at the end of any UTC month, but since 1972 all of these leap seconds have been inserted either at the end of June 30 or December 31, making the final UTC minute of the month either one second longer or one second shorter when the leap second is applied. IERS publishes an announcement every six months in its “Bulletin C”, stating whether or not a leap second is to occur. Such announcements are typically published well in advance of each possible leap second date — usually in early January for a June 30 scheduled leap second and in early July for a December 31 leap second. Greater levels of advance notice are not possible because of the uncertainty in predicting the cumulative effect of fluctuations in the Earth’s rotational period relative to the mean solar day.

Between 1972 and 2012 some 25 leap seconds have been added to UTC. On average this implies that a leap second has been inserted about every 19 months. However, the spacing of these leap seconds is quite irregular: there were no leap seconds in the seven-year interval between January 1, 1999 and December 31, 2005, but there were 9 leap seconds in the 8 years 1972–1979. Since December 31 1998 there have been only 3 leap seconds, on December 31 2005, December 31 2008 and June 30 2012, each of which has added one second to the final minute of the month, UTC time of day.
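
The 19-month average quoted above is simply:

\[ \frac{(2012 - 1972) \times 12\ \text{months}}{25} \approx 19\ \text{months per leap second} \]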

 

LEAPING SECONDS AND COMPUTER SYSTEMS

The June 30 2012 leap second did not exactly pass without a hitch, as reported by the tabloid press.

The side effects of this particular leap second appeared to include computer system outages and crashes – an outcome that was unexpected and surprising. This leap second managed to crash some servers used in the Amadeus airline management system, throwing Qantas into a flurry of confusion on Sunday morning, the 1st of July, in Australia. But it was not just the airlines that were affected: LinkedIn, Foursquare, Yelp and Opera were among a number of online service operators whose servers stumbled in some fashion. It also affected some Internet service providers and data centre operators. One Australian service provider reported that a large number of their Ethernet switches seized up over a two-hour period following the leap second.

It appears that one common element here was the use of the Linux operating system.

But Linux is not exactly a new operating system, and the use of the Leap Second option in the Network Time Protocol (NTP) is not exactly novel either. Why didn’t we see the same problems in early 2009, following the leap second that occurred on the 31st December 2008?

Ah, but there were problems then; perhaps they were blotted out in the post-new-year celebratory hangover! Some folk noticed something wrong with their servers on the 1st of January 2009. Problems were recorded with Red Hat Linux following the December 2008 leap second, where kernel versions prior to 2.6.9 could encounter a deadlock condition in the kernel while processing the leap second.

 

“[...] the leap second code is called from the timer interrupt handler, which holds xtime_lock. The leap second code does a printk to notify about the leap second. The printk code tries to wake up klogd (I assume to prioritize kernel messages), and (under some conditions), the scheduler attempts to get the current time, which tries to get xtime_lock => deadlock.” [http://lkml.org/lkml/2009/1/2/373]

 

The advice in January 2009 to sysadmins was to upgrade their systems to 2.6.9 or later, which contained a patch that avoided this kernel-level deadlock.

This time around it was a different problem, where servers encountered 100% CPU utilisation:

“The problem is caused by a bug in the kernel code for high resolution timers (hrtimers). Since they are configured using the CONFIG_HIGH_RES_TIMERS option and most systems manufactured in recent years include the High Precision Event Timers (HPET) supported by this code, these timers are active in the kernels in many recent distributions.

“The kernel bug means that the hrtimer code fails to set the system time when the leap second is added. The result is that the hrtimer representation of the time taken from the kernel is a second ahead of the system time. If an application then calls a kernel function with a timeout of less than a second, the kernel assumes that the timeout has elapsed immediately after setting the timer, and so returns to the program code immediately. In the event of a timeout, many programs simply repeat the requested operation and immediately set a new timer. This results in an endless loop, leading to 100% CPU utilisation.” [http://www.h-online.com/open/news/item/Leap-second-bug-in-Linux-wastes-electricity-1631462.html]
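
To see why a one-second clock skew turns a sub-second timeout into a busy loop, here is a deliberately simplified user-space sketch in Python. The function and the retry loop are illustrative assumptions modelled on the behaviour described in the quote, not the actual kernel or application code:

    import time

    def wait_for_event(poll, timeout=0.5, clock_skew=1.0):
        """Wait up to `timeout` seconds for poll() to return True.

        `clock_skew` models the hrtimer clock running one second ahead of
        the time used to set the deadline, as described above.
        """
        deadline = time.monotonic() + timeout
        while not poll():
            now = time.monotonic() + clock_skew
            if now >= deadline:
                # With a 1 s skew, every sub-second timeout appears to have
                # expired the moment it was set, so we return immediately.
                return False
            time.sleep(0.01)   # poll again shortly
        return True

    # Many programs react to a timeout by simply retrying and setting a new
    # timer, which with the skew above degenerates into an endless loop that
    # pins the CPU at 100%:
    #
    #   while not wait_for_event(lambda: False):
    #       pass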

LEAP SMEARING

Following close monitoring of their systems during the earlier leap second of December 31 2005, Google engineers were aware of problems in their operating systems when processing a leap second. They had noticed that some clustered systems stopped accepting work during that leap second, and they wanted to ensure that this did not recur in 2008. Their approach was subtly different to that used by the Linux kernel maintainers.

Rather than attempt to hunt down bugs in the time management code of the system kernel, they noted that the intentional side effect of the Network Time Protocol was to continually perform slight time adjustments on the systems that synchronise their time to the NTP signal. If the quantum of an entire second in a single time update was a problem for their systems, then what about an approach that allowed the one-second time adjustment to be smeared across a number of minutes, or even a number of hours? That way the leap second would be represented as a larger number of very small time adjustments, which, in NTP terms, is nothing exceptional. The result of these changes was that NTP itself would start slowing down the time of day clock on these systems some time in advance of the leap second by very slight amounts, so that at the time of the applied leap second, at 23:59:59 UTC, the adjusted NTP time would already have been wound back to 23:59:58. The leap second, which would normally be recorded as 23:59:60, was now a ‘normal’ time of 23:59:59, and whatever bugs remained in the system’s leap second code were simply not exercised.

[http://googleblog.blogspot.de/2011/09/time-technology-and-leaping-seconds.html]
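
A minimal sketch of the smearing idea, assuming a simple linear smear over a 24-hour window (Google’s actual smear function and window length are not specified here):

    def smeared_offset(seconds_until_leap, window=86400.0):
        """How much of the leap second has already been absorbed, in seconds.

        The served time is the unsmeared time minus this offset, so the clock
        is a full second behind by the moment the leap second is applied and
        never needs to show 23:59:60.
        """
        if seconds_until_leap >= window:
            return 0.0
        if seconds_until_leap <= 0.0:
            return 1.0
        return 1.0 - seconds_until_leap / window

    # Halfway through the window the served time is already half a second
    # behind; at the leap instant it is a full second behind.
    print(smeared_offset(43200.0))   # 0.5
    print(smeared_offset(0.0))       # 1.0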

MORE LEAPING

The topic of leap seconds remains a contentious one. There was a proposal from the United States to the ITU-R Study Group 7’s Working Party 7-A back in 2005 to eliminate leap seconds. It’s not entirely clear whether these leap seconds would be replaced by a less frequent “leap hour”, or whether the entire concept of attempting to link UTC and the mean solar day would be allowed to drift, and over time we would see UTC time shifting away from UT1’s concept of solar day time. This proposal was most recently considered by the ITU-R in January 2012, and there was evidently no clear consensus on this topic. France, Italy, Japan, Mexico and the US were reported to be in favour of abandoning leap seconds, while Canada, China, Germany and the UK were reportedly against these changes to UTC. At present a decision on this topic, or at the least a discussion on this topic, is scheduled for the 2015 World Radio Conference.

While these computing problems with processing leap seconds are annoying and for some folk extremely frustrating and sometimes expensive, I’m not sure this factor alone should drive the decision process about whether to drop leap seconds from the UTC time framework. With our increasing dependence on highly available systems, and the criticality of accurate time of day clocks as part of the basic mechanisms of system security and integrity, it would be good to think that we have managed to debug this processing of leap seconds.

It’s often the case in systems maintenance that the more a bug is exercised, the more likely it is that the bug will be isolated and corrected. With leap seconds, however, this is a tough ask, as their occurrence is not easily predicted. Whenever we next have to leap a second in time, about the best we can do is hope that we are ready for it.

FURTHER READING

The story of calendars, time, time of day and time reference standards is a fascinating one. It includes ancient stellar observatories, the medieval quest to predict the date of Easter, the quest to construct an accurate clock that would allow the calculation of longitude, and the current constellations of time and location reference satellites, and these days much of this material can be found on the net.

A good starting point for the leap second can be found in Wikipedia under the topic of

http://en.wikipedia.org/wiki/Leap_second

APNIC 34 Conference - August 2012

The APNIC 34 Conference and Workshop convened in Phnom Penh, Cambodia during August, bringing together players from across the Internet spectrum. 237 delegates from 36 economies and 52 organisations assembled for 10 days of workshops, plenaries, presentations and open forums. Representatives from the various standards organisations and Internet Registries, together with regional service providers, gained valuable insights into efficient Internet resource management, and garnered best practices in IPv6 address assignment and routing configuration.

Functions of APNIC

The Asia Pacific Network Information Centre, one of the world’s five Internet address registries, is a non-profit organisation responsible for servicing the Asia Pacific Internet community. As a membership-based organisation, APNIC provides Internet resource distribution and registry services for the region and actively supports the Internet community.

Two concurrent workshops were held over the first five days. The subjects covered included:

IPv4 / IPv6 BGP Routing Workshop

  • OSPF/ISIS design and best practices for Service Provider networks
  • BGP introduction, attributes and policy
  • BGP scalability (including Route Reflectors and Communities)
  • Aggregation
  • BGP Multihoming Techniques (redundancy and load balancing)
  • ISP Best Practices
  • Peering best practices
  • IPv6 Background and Standards
  • IPv6 Extensions for Routing Protocols
  • IPv6 Addressing and Address planning
  • IPv6 Deployment Case Study

Network Security Workshop

Network Security Fundamentals

  • Cryptography
  • Infrastructure security
  • Monitoring and managing access
  • Point protection
  • ACLs
  • Edge protection

Network Analysis and Forensics

  • Understanding TCP/IP
  • Forensics fundamentals

Anatomy of a network attack

  • Miscreants, motivations and misconceptions
  • Modern attacks
  • Botnets
  • DDoS & botnet financials
  • Trends

DNS Security

  • DNS vulnerabilities
  • DNS security mechanisms (TSIG, DNSSEC)

Transition Technologies update

Over the following five days, a range of conference sessions and plenaries were conducted. A number of these sessions focussed on IPv6 implementation, particularly the transition from IPv4 to IPv6, for which many strategies have been formulated and are being deployed by various service providers.

Access Transition technologies are mechanisms that allow operators to deploy and migrate their subscriber-base to IPv6. Transition technologies have been developed by the IPv6 community and vendors to help accelerate IPv6 deployment, and reduce barriers to IPv6 uptake. All should be evaluated carefully to identify which technology or technologies are the best fit for any given operator to deploy.

Some transition technologies have a ‘long term life’. Others are seen as interim solutions to deploy IPv6 quickly while investment or technology catches up. CPE is one of the most important domains for IPv6 deployment – it must support whatever transition technology and long-term strategy is chosen, and it is central to managing cost.

Here are brief descriptions of some transition technologies:

Native Dual Stack

by Alastair Johnson – Alcatel Lucent

Deploying IPv6 services as native dual-stack is the best case approach for most operators and subscribers. However, it is often the most difficult.

Dual-stack is the ‘best-case’ transition design for IPv6 deployment which allows full coexistence of IPv6 and IPv4 services on an incremental deployment basis (e.g. subscribers can take up IPv6 services as their CPE is replaced, after network-wide deployment).

  • Subscriber experience is identical regardless of IPv6 or IPv4 service, which are terminated on the same equipment (CPE, BNG) and share queues, SLA, and authorization and accounting policies.
  • Impact on the customer side of the network is high due to the CPE swap requirement – however a significant number of CPE today are IPv6 capable (including support for many transition technologies – refer to the CPE link in the references).
  • Broadband Forum TR-177 and TR-187 along with TR-124i2 give excellent references for operators looking to deploy dual-stack services into existing TR-101 and PPP based environments, and provide requirements for RG behavior.
  • Depending on topology (IPoE v. PPPoE) the impact in the access/aggregation is variable: PPPoE is very straightforward to deploy IPv6 on and allows easy customer uptake.
  • IPoE does require some changes in the access network, particularly if Lightweight DHCPv6 Relay Agent (LDRA) support is required, and depending on what the access architecture looks like.
  • Debate over SLAAC vs. DHCPv6 in the access attachment continues, however general recommendation and approach is DHCPv6 based to align with DHCPv4 model in existing networks.
  • Impact in the subscriber edge (BNG) is variable: on some legacy BNGs, enabling dual-stack service may substantially impact scalability, or the BNG may lack the features needed for full equivalence with the IPv4 deployment. Operators need to investigate this carefully; however, modern BNGs should have no issues when deploying dual-stack services at high subscriber scale.
  • Dual-stack does have drawbacks in that it does require potential capital investment if equipment forklift upgrades are required, as well as the impact of monitoring two address families in the network (twice the link monitoring, etc).
  • Dual-stack does provide an interesting and easy approach to an IPv6-only network by simply turning IPv4 off in the future (and potentially using NAT64, etc).
  • Allows the status quo to remain for non-Internet services (e.g. VoIP ATA, CPE/RG management, IPTV services, etc.) as the existing IPv4 path is retained.

Dual Stack Lite

by Alastair Johnson – Alcatel Lucent

DS-Lite specifically targets the case where operators wish to immediately remove IPv4 from the access–aggregation and subscriber management edge and run single-stack IPv6 while continuing to support IPv4 connectivity through a classic NAT44 capability rather than address family translation. This view was developed around 2007 and has started to gain deployment traction in 2012.

  • Significant impact in the CPE domain as the CPE must be upgraded to support IPv6 WAN and all associated connectivity (management, VoIP, IPTV etc), however NAT function is removed from CPE which potentially reduces cost (CPU/memory) in maintaining NAT state in the CPE. CPE are commercially available today that support DS-Lite and vendor support is continuing to increase.
  • Access network and subscriber management edge must support IPv6 in the same manner as a dual-stack deployment. DS-Lite typically assumes an IPoE deployment but could be used in the PPP case as well.
  • Debate over SLAAC vs. DHCPv6 in the access attachment continues, however general recommendation and approach is DHCPv6 based to align with DHCPv4 model in existing networks.
  • As the operator must now deploy an AFTR, this node needs to be located near subscriber traffic (e.g. in or adjacent to the BNG) to avoid hauling traffic to centralized locations in the network which may impact TE or interface scaling in the network core. A potential drawback to non-BNG located AFTR is that any DPI or other IPv4 classification may be forced to occur at AFTR or elsewhere in the network, potentially stranding existing investment.
  • As DS-Lite moves the NAT44 function out of the RG and into the service provider environment, the service provider must support transaction logging for the LS-NAT as the subscribers share a common LAN IPv4 prefix (192.0.0.0/29) for the inside prefix.
  • DS-Lite does force re-architecture of existing service offerings such as VoIP and IPTV which may need to be moved to native IPv6 services to avoid transiting AFTR nodes in the network which may present a significant bandwidth bottleneck (in particular with multicast traffic!).
  • Deployment of DS-Lite generally implies a significant migration stage where entire Access Nodes (or regions) are migrated at once, rather than an incremental migration on a per-subscriber basis – however this is up to the individual service provider’s deployment approach.
  • DS-Lite provides an interesting and easy approach to an IPv6-only network by simply turning IPv4 off in the future when it is no longer required.

NAT64

by Alastair Johnson – Alcatel Lucent

NAT64 specifically targets the case where operators wish to immediately remove IPv4 from the access–aggregation and subscriber management edge and run single-stack IPv6 while continuing to support IPv4 connectivity. NAT64 most closely aligns with wireless deployment models rather than wireline, given the drawbacks of NAT64 for application translation and the wider range of applications found in wireline environments compared with wireless.

  • Significant impact in the CPE domain as the CPE must be upgraded to support IPv6 WAN and all associated connectivity (management, VoIP, IPTV etc), however NAT function is removed from CPE which potentially reduces cost (CPU/memory) in maintaining NAT state in the CPE.
  • Access network and subscriber management edge must support IPv6 in the same manner as a dual-stack deployment. NAT64 typically assumes an IPoE deployment but could be used in the PPP case as well.
  • Debate over SLAAC vs. DHCPv6 in the access attachment continues, however general recommendation and approach is DHCPv6 based to align with DHCPv4 model in existing networks.
  • As the operator must now deploy a NAT64, this node needs to be located near subscriber traffic (e.g. in or adjacent to the BNG) to avoid hauling traffic to centralized locations in the network which may impact TE or interface scaling in the network core. All DPI and classification on the IPv4 side of the NAT64 should be translated into the IPv6 side as well to preserve end-to-end behavior in the service provider network.
  • The operator must also deploy a DNS64 node that can provide the DNS synthesis by translating DNS responses containing only A-records into AAAA-records using the well-known Pref64 prefix (a sketch of this synthesis step follows this list). Major DNS vendors support DNS64 translation today.
  • NAT64 will break a number of applications that rely on IPv4-literals (e.g. attempt to establish a socket directly to 192.0.2.1) and applications that will not traverse NAT environments happily. Some experiments have been conducted with IPv6-only networks and NAT64 environments and document the broken applications – refer to reference slide.
  • NAT64 does force re-architecture of existing service offerings such as VoIP and IPTV which may need to be moved to native IPv6 services to avoid transiting NAT64 nodes in the network which may present a significant bandwidth bottleneck (in particular with multicast traffic!).
  • Deployment of NAT64 generally implies a significant migration stage where entire Access Nodes (or regions) are migrated at once, rather than an incremental migration on a per-subscriber basis – however this is up to the individual service provider’s deployment approach.
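
As a rough illustration of the DNS64 synthesis step mentioned above, the Python sketch below embeds the IPv4 address from an A-record into the RFC 6052 well-known prefix 64:ff9b::/96. An operator may equally use a network-specific Pref64, and this is of course not a full DNS64 implementation:

    import ipaddress

    WELL_KNOWN_PREF64 = ipaddress.IPv6Network("64:ff9b::/96")

    def synthesize_aaaa(a_record: str, pref64=WELL_KNOWN_PREF64) -> str:
        """Embed an IPv4 address in the low 32 bits of the Pref64 /96."""
        v4 = ipaddress.IPv4Address(a_record)
        v6 = ipaddress.IPv6Address(int(pref64.network_address) | int(v4))
        return str(v6)

    # Example: an A-record for 192.0.2.1 becomes the synthetic AAAA below,
    # which the NAT64 can later map back to the original IPv4 destination.
    print(synthesize_aaaa("192.0.2.1"))   # 64:ff9b::c000:201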

IPv6 Rapid Deployment - 6rd

by Alastair Johnson – Alcatel Lucent

6rd specifically targets the case where operators wish to immediately deploy IPv6 to their subscriber base, but cannot enable it in the native access. As 6rd encapsulates IPv6 in IPv4, it can be deployed across any existing IPv4 network.

Some constraints faced by operators that drive 6rd as the technology of choice include legacy Access Nodes (e.g. DSLAMs) that cannot support forwarding IPv6 packets, or older access technologies (e.g. DOCSIS 1.1) that cannot support IPv6. L3 wholesale access environments that cannot support IPv6 are another common barrier to deployment.

  • Significant impact in the CPE domain as the CPE must be upgraded to support 6rd. CPE are commercially available today that support 6rd and vendor support is continuing to increase.
  • Access network and subscriber management edge face no changes.
  • As the operator must now deploy a 6rd BR, this node needs to be located near subscriber traffic (e.g. in or adjacent to the BNG) to avoid hauling traffic to centralized locations in the network which may impact TE or interface scaling in the network core. A potential drawback to non-BNG located 6rd BR is that any DPI or other IPv4 classification may be forced to occur at 6rd BR or elsewhere in the network, potentially stranding existing investment or impacting service provider operations.
  • As 6rd may automatically derive the subscriber prefix with variable-length subnetting (e.g. /48–/56–/64) based on the IPv4 address, the operator must consider exactly how many IPv4 bits they wish to embed in the IPv6 prefix, and how this impacts any RIR-allocated IPv6 prefixes (see the sketch after this list). There are multiple approaches for managing the IPv6 addressing in 6rd environments.
  • 6rd does not force re-architecture of existing service offerings such as VoIP and IPTV which may remain on the existing IPv4 service.
  • 6rd can be deployed incrementally with no impact to the subscriber base as and when CPE are upgraded to support 6rd.
  • 6rd does not solve the long term problem of removing IPv4 from the access network or moving to native IPv6 services; however, discussion of this is currently being undertaken in the IETF (refer to the reference slide).
  • Potential MTU issues may occur with the tunnel, but may be mitigated by increasing WAN MTU or implementing fragmentation in the 6rd BR and CPE.
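
To make the prefix-derivation point above concrete, here is a Python sketch of how a subscriber’s delegated prefix can be derived in the manner defined in RFC 5969; the 6rd prefix, its length and the IPv4MaskLen values below are purely illustrative assumptions:

    import ipaddress

    def sixrd_delegated_prefix(sixrd_prefix: str, ipv4_addr: str, v4_mask_len: int = 0):
        """Return the IPv6 prefix delegated to one subscriber.

        The delegated prefix is the operator's 6rd prefix with the subscriber's
        IPv4 address bits (beyond the first v4_mask_len common bits) appended.
        """
        prefix = ipaddress.IPv6Network(sixrd_prefix)
        v4 = int(ipaddress.IPv4Address(ipv4_addr))
        embedded_bits = 32 - v4_mask_len
        embedded = v4 & ((1 << embedded_bits) - 1)   # drop the shared high-order bits
        new_len = prefix.prefixlen + embedded_bits
        base = int(prefix.network_address) | (embedded << (128 - new_len))
        return ipaddress.IPv6Network((base, new_len))

    # A /32 6rd prefix plus all 32 IPv4 bits yields a /64 per subscriber;
    # masking off a common /8 shortens the result to a /56.
    print(sixrd_delegated_prefix("2001:db8::/32", "192.0.2.1"))      # 2001:db8:c000:201::/64
    print(sixrd_delegated_prefix("2001:db8::/32", "192.0.2.1", 8))   # 2001:db8:2:100::/56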

464XLAT

by Masanobu Kawashima - NEC Access Technica, Ltd.

464XLAT provides limited IPv4 connectivity across an IPv6-only network by combining existing and well-known stateful protocol translation (described in RFC 6146) in the core, and stateless protocol translation (described in RFC 6145) at the edge.

What it is

  • Easy to deploy and available today, commercial and open source shipping product.
  • Effective at providing basic IPv4 service to consumers  over IPv6-only access networks.
  • Efficient use of very scarce IPv4 resources.

What it is NOT

  • A perfect replacement for IPv4 or Dual-stack service.

As the name implies, 464XLAT centres on “translation”, employing terms such as:

PLAT: Provider side translator - A stateful translator that performs 1:N translation. It translates a global IPv6 address to a global IPv4 address, and vice versa.

CLAT: Customer side translator - A stateless translator that performs 1:1 translation. It algorithmically translates a private IPv4 address to a global IPv6 address, and vice versa (a sketch of both translation legs follows). Other features are IPv6 router, DHCPv6 Server/Client, Access Control, DNS Proxy, etc. Neither DNS64 (described in RFC 6147) nor any port mapping algorithm is required.
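
A rough sketch of the two translation legs described above, in Python; the CLAT translation prefix, the PLAT Pref64 and the addresses are illustrative assumptions, and the PLAT’s stateful port mapping is only noted in a comment rather than implemented:

    import ipaddress

    CLAT_PREFIX = ipaddress.IPv6Network("2001:db8:aaaa::/96")  # illustrative CLAT-side prefix
    PLAT_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")        # illustrative Pref64 used by the PLAT

    def embed(prefix, v4):
        """RFC 6052-style embedding of an IPv4 address into a /96 prefix."""
        return ipaddress.IPv6Address(int(prefix.network_address) | int(ipaddress.IPv4Address(v4)))

    # CLAT leg (stateless, 1:1): both the private IPv4 source and the public
    # IPv4 destination are represented as IPv6 addresses so the packet can
    # cross the IPv6-only access network.
    src_v6 = embed(CLAT_PREFIX, "192.168.1.2")    # private host behind the CPE
    dst_v6 = embed(PLAT_PREFIX, "198.51.100.7")   # IPv4-only destination

    # PLAT leg (stateful, 1:N): the PLAT recovers the IPv4 destination from the
    # low 32 bits and rewrites the source to a shared global IPv4 address and
    # port (the stateful NAT64 mapping itself is not shown here).
    dst_v4 = ipaddress.IPv4Address(int(dst_v6) & 0xFFFFFFFF)
    print(src_v6, "->", dst_v6, "->", dst_v4)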

Upcoming APNIC Conferences

APNIC 35

– Singapore, March 2013

– with APRICOT 2013

Also, for the avid readers, interesting BLABs on the Internet can be found here:

http://labs.apnic.net/blabs/?cat=7

About Cambodia

Phnom Penh is often referred to as the “Pearl of Asia”; it was considered one of the prettiest French-built cities in Indochina in the 1920s. Founded in 1434, the city is noted for its beautiful and historical architecture and attractions. You will still find surviving French colonial buildings along the grand boulevards. There are scant reminders of the tragedies that have beset Cambodia right up to recent decades, but it is possible for the inquisitive to be reminded of some of the worst atrocities a nation could inflict on its own people.

Attempting to cross Phnom Penh’s streets, you encounter a never ending stream of tuk-tuks plying for trade and motorcycles laden with the entire family. The key lies in the confidence to step off the footpath and walk steadily to the other side – let them avoid you. The automobile is an altogether different proposition. Do not take on a car – the car will not miss you, and Cambodian drivers do not attend the scene of an accident. There is no personal liability insurance or CTP in this country.

Nor are there many beggars. Phnom Penh has made a concerted effort to encourage its people to produce goods and to offer them for sale, and when one considers that over 90 percent of the population survive on less than two dollars a day, you have to admire their drive for independence and their tenacity.

Laurie Benjamin IIT Training