Payment Terminal

Technology is entwined with much of our day-to-day lives, with no better example than the growth of the smartphone, a device now seen as a must-have. Payment and banking are almost unrecognisable from ten years ago with online banking, mobile apps, ‘chip and pin’, contactless payments and online payments.

At events, however, attendees often find that making a simple credit/debit card payment can be a frustrating and unreliable experience. For us as technology providers, ‘credit card machines’, or PDQs as they are known, come top of the list of complaints from event organisers, traders and exhibitors.

These problems not only cause frustration for attendees but also present a serious issue for the financial returns of traders and exhibitors, and in turn their desire to be present at events. It is well documented that the ability to take contactless and chip & pin payments at events increases takings, reduces the risks of holding large volumes of cash and can improve flow and trackability.

So why is it such a problem? Much of the issue comes down to poor communication and misinformation on top of what is already a relatively complex environment. Card payments, and the machines which take them, are highly regulated by the banking industry, meaning they tend to lag behind other technology. However, this can be overcome, and a properly thought-through approach can deliver large-scale, reliable payment systems.

Bad Terminology

A lot of the confusion around PDQ machines comes from the design and terminology used. Although the machines all look the same there are differences in the way they work. Nearly all PDQs use the design of a cradle/base station with a separate handheld unit. The handheld part connects to the base station using Bluetooth. This is where the confusion starts: people often describe these units as ‘wireless’ because of the Bluetooth; however, the actual method of connectivity back to the bank may be one of four different types:

  • Telephone Line (PSTN – Public Switched Telephone Network) – This is the oldest and, until a few years ago, the most common type of device; it requires a physical telephone line between the PDQ modem and the bank. It is slow, difficult and very costly to use at event sites because of the need for a dedicated physical phone line; however, once it is working it is reliable.
  • Mobile PDQ (GPRS/GSM) – Currently the most common form of PDQ, it uses a SIM card and the GSM/GPRS mobile networks to connect back to the bank. Originally seen as the go-anywhere device, in the right situation they are excellent; however, they have limitations, the most obvious being that they require a working mobile network to operate. At busy event sites the mobile networks rapidly become saturated and the devices cannot connect reliably. As they use older GPRS/GSM technology they are also very slow – it makes no difference if you use the device in a 4G area, it can only work over GPRS/GSM. As they use the mobile operator networks they may also incur data charges.
  • Wi-Fi PDQ – Increasingly common, this version connects to a Wi-Fi network to get its connectivity to the bank. On the surface this sounds like a great solution but there are some challenges. Firstly, it needs a good, reliable Wi-Fi network. Secondly, many Wi-Fi PDQs still operate on the 2.4GHz Wi-Fi spectrum, which on event sites is heavily congested and suffers lots of interference, making the devices unreliable. This is not helped by the relatively weak Wi-Fi components in a PDQ compared to, say, a laptop. It is essential to check that any Wi-Fi PDQ is capable of operating in the less congested 5GHz spectrum.
  • Wired IP PDQ – Often maligned because people assume it has no ‘wireless’ handset. In fact the handset is wireless just like the others; it is the base station which uses a physical cable (Cat5) to connect to a network. In this case the network is a computer network using TCP/IP and the transactions are routed in encrypted form across the internet. If a suitable network is available on an event site then this type of device is the fastest and most reliable, and there are no call charges.

All of these units look very similar and in fact can be built to operate in any of the four modes; however, because banks ‘certify’ units, they generally only approve one type of connectivity in a particular device. This is slowly starting to change but the vast majority of PDQs in the market today can only operate on one type of connectivity and this is not user configurable.

On top of these aspects there is also the difference between ‘chip & pin’ and ‘contactless’. Older PDQs typically can only take ‘chip & pin’ cards whereas newer devices should also be enabled for contactless transactions.

Myth or Fact

Alongside confusion around the various types of PDQs there is a lot of conflicting and often inaccurate information circulated about different aspects of PDQs. Let’s start with some of the more common ones.

I have a good signal strength so why doesn’t it work?

The reporting of signal strength on devices does nothing but create frustration. Firstly because it is highly inaccurate and crude, and secondly because it means very little – a ‘good’ signal indicator does not mean that the network will work!

The issue is that signal strength does not mean there is capacity on the network. It is frequently the case at event sites that a mobile phone will show full signal strength because a temporary mobile mast has been installed, but there is not enough data capacity to service the devices, so the network does not work. A useful analogy is a very busy motorway: you can get on, but you won’t necessarily go anywhere. The same can be true on a poorly designed Wi-Fi network, or a well-designed Wi-Fi network which doesn’t have enough internet capacity.

In fact you can have a low signal strength and still get very good data throughput on a well-designed network. Modern systems also use a technique known as ‘beam-forming’, where a device is not prioritised until it is actually transmitting data, which means it may show a low signal strength that increases once it starts doing something.

On the flip side your device may show a good signal strength but the quality of the signal may be poor; this could be due to interference, poor design or sometimes even weather and environmental conditions!

Wi-Fi networks are less secure than mobile networks

There are two parts to this. Firstly, all PDQs encrypt their data no matter what type of connection they use; they have to in order to meet banking standards (PCI DSS) and protect against fraud. The second aspect is that a well-designed Wi-Fi network is as secure as, if not more secure than, a mobile network. A good Wi-Fi network will use authentication, strong encryption and client isolation to protect devices; all PDQs should also be connected to a separate ‘virtual network’ to isolate them from any other devices.

You have to keep logging into the Wi-Fi network

Wi-Fi networks can be configured in many ways, but for payment systems there should be no need to keep logging in. This problem tends to be seen when people try to use a payment system on a ‘public Wi-Fi network’, which will often have a login hijack/splash page and a time limit.

A multi-network M2M GPRS/GSM SIM is guaranteed to work

Sadly this is not true. Although a PDQ with a SIM card which can roam between mobile networks may offer better connectivity, there is no guarantee. Some event sites have little or no coverage from any mobile operator, and even where there is coverage, capacity is generally the limiting factor.

Mobile signal boosters will solve my problem

Mobile signal boosters, or more correctly signal repeaters, are used professionally by mobile operators in some circumstances, for example inside large buildings, to create coverage where signal strength is very weak due to their construction (perhaps there is a lot of glass or metal which can reduce signals from outside). In the UK the purchase and use of them by anyone other than a mobile operator is illegal (they can cause more problems with interference). For temporary event sites they provide little benefit anyway, as it is typically a capacity issue which is the root cause of problems.

A Personal hotspot (Mi-Fi) will solve my problem

Personal hotspots or Mi-Fi devices work by connecting to a mobile network to get connectivity and then broadcasting a local Wi-Fi network for devices to connect to. Unfortunately, at event sites where the mobile networks are already overloaded these devices offer little benefit, and even if they can get connected to a mobile network the Wi-Fi aspect struggles against all the other wireless devices. On top of that these devices cause additional interference for any existing on-site network making the whole situation even worse.

The Next Generation & the Way Forward…

The current disrupters in the payment world are the mobile apps with devices such as PayPal Here and iZettle. Although they avoid the traditional PDQ they still require good connectivity, either from the mobile networks or a Wi-Fi network, and hence the root problem still exists.

Increasingly exhibitors are also using online systems to extend their offerings at events via tablets and laptops, which also require connectivity. An even better connection is required for these devices as they are often transferring large amounts of data, placing more demands on the network. Even virtual reality is starting to appear on exhibitors’ stands, so there is no doubt that the demand for good connectivity will continue to increase year on year.

What the history of technology teaches us is that demand always runs ahead of capacity. This is especially true when it comes to networks. For mobile operators to deliver the level of capacity required at a large event is costly and complex, and in some cases just not possible due to limits on available wireless spectrum.

4G is a step forward but still comes nowhere close to meeting the need in high demand areas such as events, and that situation will worsen as more people move to 4G and the demand for capacity increases. Already the talk is of 5G but that is many years away.

For events, realistically, the position for the foreseeable future is a mixed one. For a small event with limited requirements in a location well serviced by mobile networks, 3G/4G can be a viable option, albeit with risks. No mobile network is guaranteed, and because it is a shared medium performance will always drop as the volume of users increases. There are no hard and fast rules around this as there are many factors, but in simple terms the more attendees present, the lower the performance!

For any sizeable event the best approach is a dedicated event network serviced with appropriate connectivity providing both Wi-Fi and wired connections. This solution facilitates usage for Wired IP-based PDQs, Wi-Fi PDQs, iZettle and other new payment devices, as well as supporting requirements for tablets, laptops and other mobile devices, each managed by appropriate network controls.

With the right design this approach provides the best flexibility and reliability to service the ever-expanding list of payment options. What is particularly important is that an event network is under the control of the event organiser (generally via a specialist contractor) and not a mobile operator, as this removes a number of external risks. For those without existing compatible PDQs the option of rental of a wired or Wi-Fi PDQ can be offered at the time of booking.

The key in all of this is planning and communication. Payment processing has to be tightly controlled from a security point of view, so it is important that enough time is available to process requests, especially where temporary PDQs are being set up, as they often require around 10 working days.


The problem with modern IT is that on the whole it just works. Its reliability has made us lazy and overly confident so that when it does fail the pain is all the more intense. Twenty years ago a damaged floppy disk might have lost you 1.44 MB of data, now even a humble USB memory stick can have 64 GB of data on it.

The loss of some data is one thing but nearly all businesses are incredibly dependent on their IT systems, laptops, smartphones and internet connectivity. Businesses spend many thousands of pounds deploying systems which become integral to the operation of the business but frequently do not spend any time considering the what-if disaster scenarios or any approaches to mitigating those risks.

With many small and medium sized businesses now moving to cloud-based solutions there seems to be an even more relaxed attitude, due partly to the belief that cloud systems are 100% reliable. Unfortunately the cloud is no more than a buzzword behind which sits computer and networking equipment no different to any other IT system, and in the same way it will fail from time to time.

Hardware is very reliable and with redundant systems the physical side can be designed very effectively; however, there will be a single point of failure somewhere, and more often than not today that point of failure is human – typically when making a configuration or software change.

Even an outage of a few hours can cost a business large amounts, whether from lost sales, production delays, shipping delays or a host of other impacts depending on the business type. It’s not just the IT systems directly though – fire, flood, terrorism, loss of building access, cyber-attack, loss of internet access and so on can all have a potentially devastating impact, albeit with varying degrees of probability.

Whether you are a sole trader or a large corporation, a sensible approach to business continuity and disaster recovery is essential. For a small business it may be very straightforward, but nonetheless it is important that the risks are reviewed and appropriate actions taken.

The first step is to identify the risks and run through each scenario noting down the potential impact. Each scenario can then be scored based on probability of occurrence and impact to the business. The next step is then to mitigate these risks as much as possible, looking at aspects such as processes, system design and environmental factors, from which a prioritised list of actions can be generated based on feasibility and cost.
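As an illustration of the scoring step, a minimal sketch in Python might look like the following; the scenario names, the 1 to 5 scales and the scores are entirely made up for the example.

```python
# Illustrative sketch of the risk-scoring step described above.
# Scenario names, scales (1 to 5) and scores are hypothetical examples.

scenarios = [
    # (scenario, probability 1-5, impact 1-5)
    ("Loss of internet access", 4, 4),
    ("Office flood",            2, 5),
    ("Ransomware infection",    3, 5),
    ("Single server failure",   3, 3),
]

# Score each scenario as probability x impact, then prioritise highest first.
prioritised = sorted(
    ((name, p * i) for name, p, i in scenarios),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in prioritised:
    print(f"{score:>2}  {name}")
```

The output is simply a prioritised list; in practice each entry would then be matched with mitigation actions, owners and costs.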

This process is broad, covering physical building and operational aspects through to much maligned data backups but it is important that everything is looked at as it will always be the smaller details which cause the problems. One common issue for example is that in many buildings the internet connectivity comes into the building via the basement with sensitive networking equipment located in the area most at risk from flooding!

Not all risks can be eliminated so for those that remain the next step is to look at contingency. For example, a building fire or flood is likely to necessitate a relocation so a disaster recovery plan should be in place which details the steps and actions to be taken in the event such a disaster occurs. This may include pre-identified space in which to move to, stand-by equipment and a recovery plan for bringing services back online.

The biggest risk for most disaster recovery and business continuity plans is that they frequently do not get tested. Only when a disaster strikes does it get discovered that the system backups have been failing all along! (Yes, I have seen that happen) Checking and testing plans on a regular basis is a key part of the process, just like a fire drill.

Disaster recovery and business continuity planning is not necessarily as big a job as it might be perceived to be, but without it the reality of a disaster is all the more painful. Relying on an ‘it won’t happen to us’ strategy is not good business practice.


In part one of this series we looked at the physical network, part two covered the logical network and now in the third and final part we reach the edge network. Everything that has gone before is purely to enable the users and devices which connect to the network to deliver a service. For this blog we’ll take a journey through the different user groups and look at how the network services their requirements and the way technology is changing events.

Event Production

Making everything tick along from the first day of build until the last day of derig is a team of dedicated production staff working whatever the weather. It is perhaps obvious that they all need internet access, but the breadth of requirements increases year on year. Email and web browsing are only part of the demand; cloud-based collaboration tools sharing CAD designs and site layouts, along with event management applications dealing with staff, volunteers, traders, suppliers and contractors, all add to the wider consumption of bandwidth.

Just about everything to do with the delivery of an event these days is done in a connected way and as such reliable connectivity is as important as power and water.

Across the site, indoors and outdoors, are carefully positioned high-capacity Wi-Fi access points delivering 2.4GHz and 5GHz wireless connectivity to all the key areas such as site production, technical production, stewarding, security, gates and box offices. Different Wi-Fi networks service different users – from encrypted and authenticated production networks to open public networks – each managed with specific speeds and priorities. To deliver a good experience to a high density of users, careful wireless spectrum management is essential, in some cases using directional antennas to focus the Wi-Fi signal in specific directions (rather like using a torch to focus light in a specific area). With so many wireless systems used on event sites, interference can be a real challenge, so wireless scanners are used to look for potential problems, with active management and control making sure there are no ‘rogues’.

Not everything is wireless though: many devices, such as VoIP phones, and some users require a wired connection, so many cabins have to be cabled back to network switches. Some sites may have over 200 VoIP (Voice over IP) phones providing lines for enquiries, complaints, box offices and emergency services, as well as a reliable communications network where there is no mobile service or the service struggles once attendees arrive. Temporary cabins play host to an array of IT equipment such as printers, plotters and file servers, all of which need to be connected.

As equipment evolves more and more devices are becoming network enabled. Power, for example, is a big part of site production, with an array of generators across the site; the criticality is such that a modern generator can be hooked into the network like any other device and monitored and managed remotely. On big sites even the 2-way radios may be relayed between transmitters across the IP network. Technical production teams also use the network to test sound levels and EQ from different places.

Event Control

Once an event is running it is event control that becomes the hub of all activity. Alongside laptops, iPads and phones, large screens display live CCTV images from around the site – anywhere from two to over a hundred cameras may be sending in high-definition video streams, with operators controlling the PTZ (Pan/Tilt/Zoom) functionality as they deal with incidents. A modern PTZ camera provides an incredible level of detail with a high optical zoom, image stabilisation, motion detection and tracking, picture enhancement and low-light/infra-red capability. CCTV may be thought of as intrusive, but at events its role is very broad, playing as much a part in monitoring crowd flows, managing traffic and locating lost children as it does in assisting with crime prevention.

Mast & Cameras

These cameras may be 30m up but they can deliver incredibly detailed images across a wide area

Full-HD and 4K Ultra HD cameras can deliver video streams upwards of 10Mbps, with 360-degree panoramic cameras reaching 25Mbps depending on frame rate and quality. This creates many terabytes of data which has to be archived ready to be used as evidence if needed, requiring high-capacity servers to both record and stream the content to viewers. One event this year created over 12TB of data – the equivalent of 2,615 DVDs!
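To put those bitrates in context, here is a rough, back-of-envelope storage calculation; the camera count, bitrate and recording length are assumed values rather than figures from a real deployment.

```python
# Rough storage arithmetic for CCTV archiving, as an illustration only.
# Camera count, bitrate and event length are assumed values.

CAMERAS = 60          # number of HD streams
BITRATE_MBPS = 10     # per-camera stream, megabits per second
DAYS = 4              # recording duration

bytes_per_camera = BITRATE_MBPS * 1_000_000 / 8 * 60 * 60 * 24 * DAYS
total_tb = CAMERAS * bytes_per_camera / 1_000_000_000_000

print(f"Approximate archive size: {total_tb:.1f} TB")
# 60 cameras at 10 Mbps for 4 days comes to roughly 26 TB of raw footage.
```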

As everything is digital, playback is immediate allowing incidents to be quickly identified and footage or photos to be distributed in minutes. Content is not only displayed in a main control room but is also available on mobile devices both on the site and at additional remote locations.

Special cameras provide additional features such as Automatic Number Plate Recognition (ANPR) for use at vehicle entrances or people counting capability to assist with crowd management. Body cameras are becoming more common and now drone cameras are starting to play a part.

At the gates staff are busy scanning tickets or wristbands, checking for validity and duplication in real-time across the network back to central servers. The entrance data feeds to event control so they can see how many people have entered so far and where queues may be building. Charts show whether flow is increasing or decreasing so that staff can be allocated as needed.
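Conceptually the duplicate check is very simple; the sketch below, with made-up ticket codes and an in-memory set standing in for the central database, shows the idea.

```python
# Sketch of the real-time duplicate check performed as tickets are scanned.
# Ticket codes and the in-memory set are stand-ins for the central servers.

seen = set()

def scan(ticket_code: str) -> str:
    """Return 'OK' on first use, 'DUPLICATE' if the code was already admitted."""
    if ticket_code in seen:
        return "DUPLICATE"
    seen.add(ticket_code)
    return "OK"

print(scan("ABC-123"))  # OK        - first presentation at the gate
print(scan("ABC-123"))  # DUPLICATE - same code presented again
```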

For music events especially, noise monitoring is important and this often requires real-time noise levels to be reported across the network from monitors placed outside the perimeter of the event. Other monitors are increasingly important, ranging from wind-speed to water levels in ‘bladders’ used for storing water on site. The advent of cheap GPS trackers is also facilitating better monitoring of large plant and key staff.

External information is also important for event control with live information required on weather, transport, news and increasingly social media. Sources such as Twitter and Facebook are scanned for relevant posts – anything from complaints about toilets to potential trouble spots.

Bars, Catering, Traders & Exhibitors

For those at an event selling anything from beer to hammocks, electronic payment systems have been one of the biggest growth areas, from more traditional EPOS (Electronic Point of Sale) systems through to chip & pin/contactless PDQs, Apple Pay, iZettle and other non-cash solutions. These systems are particularly critical in nature, transacting many hundreds of thousands of pounds during an event, with some sites deploying hundreds of terminals.

High-volume sales points such as bars also require stock management systems linking both onsite and offsite distribution to ensure stocks are maintained at an appropriate level. A recent development is traders operating more of a virtual stand with limited stock on site; instead the customer browses and orders on a tablet and has the product delivered to their home after the event.

Sponsors

Most events have an element of sponsorship with each brand wanting to lead the pack in terms of innovation and creativity. Invariably these ‘activations’ involve technology in some form – from basic internet access to more involved interaction using technology such as RFID, GPS, augmented reality and virtual reality.

There are often multiple agencies and suppliers involved with a short window in which to deploy and test just as the rest of the event is reaching its peak of build activity. To be exciting the sponsor wants it to be ‘leading edge’ (or ‘bleeding edge’ as it is sometimes known!), which typically means on the fly testing and fixing.

Media & Broadcast

Media Centre

Busy media centres create demanding technical environments

From a gaggle of photographers wanting to upload their photos to a mobile broadcast centre, the reliance on technology is huge at a big event. Live streaming is increasingly important, both across the site and also out to content distribution networks. These often require special arrangements with guaranteed bandwidth and QoS (Quality of Service) controls to ensure the video or audio stream is not interrupted. It is not unusual to get requests for upwards of 200Mbps for an individual broadcaster.

More and more broadcasters are moving to IP solutions (away from dedicated broadcast circuits) requiring higher capacity and redundancy to ensure the highest availability. These demands increasingly require fibre to the truck or cabin with dedicated fibre runs back to a core hub.

Alongside content distribution, good quality, high density Wi-Fi is essential in a crowded media centre with the emphasis on fast upload speeds. Encoders and decoders are used to distribute video streams around a site creating IPTV networks for both real-time viewing and VoD (Video-on-Demand) applications. The next growth area is 360 degree cameras used to provide a more immersive experience both onsite and for remote watchers.

Attendees

Then after all this there may be public Wi-Fi. For wide-scale public Wi-Fi (as opposed to a small hotspot) it is typical over the duration of an event for at least 50% of the attendees to use the network at some point – the usage being higher when event specific features are promoted such as smartphone apps and event sponsor activities.

The step-up from normal production services to a large scale public Wi-Fi deployment is significant – a typical production network would be unlikely to see more than 1,000 simultaneous users, but a big public network can see that rise beyond 10,000, requiring higher density and complex network design, as well as significantly greater backhaul connectivity with public usage pulling many terabytes of data over a few days.
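A rough way to think about the backhaul sizing is sketched below; the attendance, take-up, concurrency and per-user figures are illustrative assumptions only.

```python
# Back-of-envelope backhaul estimate for public Wi-Fi, purely illustrative.
# Attendance, take-up rate, concurrency and per-user bandwidth are assumptions.

attendance = 40_000
takeup = 0.5                 # ~50% of attendees use the Wi-Fi at some point
concurrency = 0.15           # fraction of those users online at the busiest moment
per_user_kbps = 250          # average sustained demand per active user

active_users = attendance * takeup * concurrency
backhaul_mbps = active_users * per_user_kbps / 1_000

print(f"Peak active users: {active_users:.0f}")     # 3000
print(f"Backhaul needed:   {backhaul_mbps:.0f} Mbps")  # 750 Mbps
```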

With a significant number of users, a large amount of data can be collected anonymously and displayed using an approach known as heat mapping to show where the highest density of users are and how users move around an event site. This information is very useful for planning and event management.


Public Wi-Fi has to deal with thousands of simultaneous connections

Break It Down

As the final band plays its encore, or the show announces it is time to close, the team switches to follow the carefully designed breakdown plan. What can take weeks to build is removed within a couple of days, loaded into lorries and shipped back to the warehouse to be reconfigured and sent out to the next event. Sometimes tight scheduling means equipment goes straight from one country or job to the next. But not everything is removed at once: a subset of services remains for the organisers whilst they clear the site, until the last cabin is lifted onto a lorry and we remove the last Wi-Fi access point and phone.

The change over the last five years has been rapid and shows no sign of slowing down as demand increases and services evolve. Services such as personal live streaming, augmented reality, location tracking and other interactive features are all continuing to push demands further.

So yes, we provide the Wi-Fi at events, but when you see an Etherlive event network on your phone, spare a thought for what goes on behind the scenes.

 


In the first Behind the Wi-Fi blog we looked at some of the physical aspects of building out a large-scale temporary network; this time we look at how it all comes together as a ‘logical network’, or more simply how all of the networking components work together. With some event networks servicing 10,000+ simultaneous users and consuming anywhere between 100Mbps and 1Gbps of internet connectivity, chaos would ensue unless it was carefully designed and implemented.

Although networks are thought of as one big entity, in reality they are broken down into many ‘virtual networks’ which operate independently and are isolated from each other. This approach is very important from a management, security, reliability and performance point of view. For example, you would not want public users being able to access a network that is being used for payment transactions.

All of our events are rated based on a complexity score and this helps define how the network is designed. Larger and more complex events are designed using a fully routed topology rather than a simple flat design. This approach provides the best performance and resilience, operating a bit like the electricity ‘grid’, where a number of nodes are connected together in a resilient manner to provide a multipath backbone and the customer services are then connected to the nodes. This means that each node is provided with a level of isolation and protection which is not possible on a simpler flat network.

This isolation becomes important as a network grows because, when devices connect, they are designed to send out ‘broadcasts’ to everyone on the network. With a large number of devices these broadcasts can become overwhelming on a flat network, but on a routed network they can be filtered out at the appropriate node. Faulty or incorrectly configured equipment can sometimes cause ‘network storms’, where huge amounts of network traffic are created in milliseconds, reducing performance for all users; a routed topology offers much more protection against this, isolating any problems to a small subsection of the network.

Every site has different network requirements, so there may be anywhere between 5 and 50 virtual networks, known as VLANs, to ensure all the appropriate users and network traffic are kept separate. Traffic shaping rules are applied to these different networks to prioritise the most important ones, along with filtering and logging as required.
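As a purely hypothetical illustration, an extract of such a VLAN plan might be modelled like this; the IDs, names and limits are invented for the example.

```python
# Hypothetical extract of a VLAN plan: each virtual network is kept separate
# and tagged with its own priority and bandwidth policy. IDs, names and
# limits below are invented for illustration only.

vlan_plan = {
    10: {"name": "payments",   "priority": "high",   "limit_mbps": 20,  "isolated": True},
    20: {"name": "voip",       "priority": "high",   "limit_mbps": 10,  "isolated": True},
    30: {"name": "production", "priority": "medium", "limit_mbps": 100, "isolated": False},
    40: {"name": "cctv",       "priority": "high",   "limit_mbps": 300, "isolated": True},
    99: {"name": "public",     "priority": "low",    "limit_mbps": 500, "isolated": True},
}

def policy_for(vlan_id: int) -> str:
    """Return a one-line summary of the shaping policy applied to a VLAN."""
    v = vlan_plan[vlan_id]
    return f"VLAN {vlan_id} ({v['name']}): {v['priority']} priority, capped at {v['limit_mbps']} Mbps"

for vid in sorted(vlan_plan):
    print(policy_for(vid))
```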

At the heart of this is what we call the ‘core’, the set of components which control the key aspects of the network such as the internet access, filtering, firewall, authentication, routing, wireless management, remote access and monitoring.

With several different connections to the internet, traffic is distributed across the different connections – this may be by load balancing, bonding, or policy routing. This is a complex area as different types of network traffic may only be suitable for certain types of connection. For example, voice traffic and encrypted VPNs do not work well over a satellite link due to the high latency (delay) of satellite.
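A minimal sketch of that policy-routing decision, with assumed link names, latencies and rules, could look like this.

```python
# A minimal sketch of the policy-routing idea: latency-sensitive traffic is
# steered away from the satellite link. Link names and rules are assumptions,
# not a real configuration; a production router would also weigh load and cost.

LINKS = {"fibre": {"latency_ms": 15}, "satellite": {"latency_ms": 600}}

def choose_link(traffic_class: str) -> str:
    """Pick an uplink for a traffic class; voice and VPN avoid high latency."""
    if traffic_class in ("voice", "vpn"):
        # Real-time calls and encrypted tunnels degrade badly over ~600 ms satellite paths.
        return min(LINKS, key=lambda link: LINKS[link]["latency_ms"])
    # Bulk traffic (web browsing, file sync) can be pushed to the link with spare capacity.
    return "satellite"

print(choose_link("voice"))  # -> fibre
print(choose_link("web"))    # -> satellite
```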

The core routers also contain a firewall, this is the protection between the external internet and the internal network. Protecting against intrusion and hacking is sadly a very important factor with all internet connected systems subject to a constant stream of attacks from remote hackers in places such as China and Russia.

Additional firewalls also exist to control traffic across the internal networks. By default, everything is blocked between networks, but for some services limited access may be required across VLANs, so specific rules are added – an approach known as pin-holing. Filtering can be used to block particular websites or protocols (such as BitTorrent and peer-to-peer networking); this may be done to protect users from undesirable content or to ensure the performance of the network is maintained.
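The pin-holing idea reduces to ‘deny by default, allow only what is listed’; a toy version, with invented VLAN names, ports and rules, is shown below.

```python
# Default-deny with explicit pin-holes, sketched in a few lines. The VLAN
# names, ports and rules here are illustrative, not a real rule set.

PINHOLES = [
    # (source vlan, destination vlan, protocol, port)
    ("production", "voip",     "udp", 5060),   # phones registering with the PBX
    ("payments",   "internet", "tcp", 443),    # PDQs talking to the bank over TLS
]

def allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Everything between VLANs is blocked unless a pin-hole rule matches."""
    return (src, dst, proto, port) in PINHOLES

print(allowed("payments", "internet", "tcp", 443))  # True  - pin-holed
print(allowed("public",   "payments", "tcp", 443))  # False - blocked by default
```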


Prioritisation of voice traffic from phones is important to ensure call quality, especially in a media centre

Rate shaping and queuing are additional important controls to manage bandwidth to specific groups and users ensuring everyone gets the speeds they asked for. This is especially important for real-time services such as voice calls and video streaming. Traffic is managed at a user and network level using dynamic allowances so that all available bandwidth is utilised in the most effective manner without impacting any critical services. Users or networks may be given a guaranteed amount of bandwidth but this may be exceeded in a ‘burst’ mode provided there is spare capacity on the incoming internet links.
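The ‘guaranteed rate plus burst’ behaviour is commonly implemented with a token bucket; the sketch below shows the principle, with the rate and bucket size chosen purely for illustration.

```python
# A token-bucket sketch of the 'guaranteed rate plus burst' behaviour described
# above. Rates and bucket sizes are illustrative only.

import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens at the guaranteed rate and spend them per packet."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes   # within allowance (or accumulated burst)
            return True
        return False                      # would exceed the shaped rate: queue or drop

# e.g. a user guaranteed 2 Mbps with a 256 KB burst allowance
shaper = TokenBucket(rate_bps=2_000_000, burst_bytes=256_000)
print(shaper.allow(1500))  # a single 1500-byte packet passes immediately
```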

The core also houses the PBX, the onsite telephone exchange which manages all the phones and calls with big sites having as many as 200 phones and generating thousands of calls. All the features of a typical office telephone system are implemented with ring groups, voicemail, call forwarding, IVR, etc. As all of the phones are Voice Over IP (VoIP) they are connected via standard network cabling so can easily be moved between locations. Additional numbers and handsets can also be added very quickly.

The vast majority of users these days are connected via the Wi-Fi network which requires careful management and design. The detail behind this would run to several pages so for the purposes of this blog we will keep things relatively simple and look at a few key aspects.

Frequency/Standard – Wi-Fi currently operates at two frequencies, 2.4 GHz and 5 GHz. As discussed in previous blogs there are many issues around 2.4 GHz so all primary access we provide is focussed on 5 GHz with only public access and some other legacy devices connected via 2.4 GHz. All of the Wi-Fi access points we use are at least 802.11n capable with the majority now 802.11ac enabled to provide the highest speeds and capacity.

Wireless Network Names – When you look for a wireless network on a device you see a list of available networks, these identifiers are known as SSIDs and control the connection method to the network. Different SSIDs will be used for different audiences, with some SSIDs hidden such that you can only try to connect to it if you know the name. Wireless access points can broadcast multiple SSIDs at the same time but there are limits and best practice as to how many should be used. Some SSIDs may be available across the entire network whereas others may be limited to specific areas.

Encryption & Authentication – These two areas are sometimes confused but relate to two very different aspects. Encryption deals with the way the information which is sent wirelessly is scrambled to avoid any unauthorised access. It is similar to using a website starting with ‘https’ but in this case all information between the device and the wireless access point is encrypted. There are several standards for doing this and we use WPA2 which is the current leader. Not all networks are encrypted and, as is the case with most public Wi-Fi hotspots, public access is generally unencrypted.

Authentication deals with whether a user is allowed to use a particular network and ranges from ‘open access’, where a user just clicks a button to accept the terms and conditions, through classic username/password credentials, to RADIUS or certificate-based systems which offer the highest levels of protection. One common approach is the use of a pre-shared key or pass-phrase as part of the WPA standard; knowing the pass-phrase is in effect an authentication challenge. The pass-phrase is also the seed for the encryption, and the longer the pass-phrase the harder it is for a hacker to crack the encryption. The pass-phrase approach is simple to manage but has an inherent weakness in that it is easily compromised by being shared between users with no control.
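For the curious, the way a WPA2 pass-phrase seeds the encryption can be reproduced in a few lines: the pairwise master key is derived from the pass-phrase and the SSID using PBKDF2 with 4,096 rounds of HMAC-SHA1, as specified in IEEE 802.11i. The SSID and pass-phrase below are of course made up.

```python
# How a WPA2 pass-phrase seeds the encryption: the pairwise master key (PMK)
# is derived from the pass-phrase and the SSID with PBKDF2 (4096 rounds of
# HMAC-SHA1, per IEEE 802.11i). SSID and pass-phrase here are made up.

import hashlib

ssid = "EventProduction"
passphrase = "correct-horse-battery-staple"

pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
print(pmk.hex())

# A longer, less predictable pass-phrase makes an offline dictionary attack on
# a captured handshake far more expensive, which is why length matters.
```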


Large scale Wi-Fi is a particularly complex area with many different requirements and challenges

On top of this various other services are employed to protect and manage the Wi-Fi. Client isolation for example stops a user on the network from seeing any network traffic from another user, whereas band steering & load balancing seamlessly move users between frequencies and wireless access points to ensure each user gets the best experience.

The rise of the smartphone has had a major impact on Wi-Fi networks at events due to the way they behave. If a smartphone has its Wi-Fi turned on, then it constantly hunts and probes for Wi-Fi networks so even in this ‘un-associated’ state it still creates an element of load on the network. Mechanisms have to be employed to drop the devices from the network unless they are truly connected (‘associated’) and active (accessing a web page for example). Even connected devices are typically dropped fairly quickly once they cease to be active so that other users can connect. This all happens very fast and transparently to the user with the device reconnecting automatically when it needs to.
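In pseudocode terms, the ageing-out logic is roughly as follows; the timeout value and data structures are simplified stand-ins for what a real wireless controller does.

```python
# A simplified sketch of the idle-timeout behaviour described above: clients
# that stop passing traffic are aged out so airtime and capacity free up.
# The timeout value and client records are illustrative only.

import time

IDLE_TIMEOUT_S = 300          # drop clients idle for more than 5 minutes
clients = {}                  # MAC address -> timestamp of last real traffic

def saw_traffic(mac: str) -> None:
    """Record activity whenever a client actually sends or receives data."""
    clients[mac] = time.monotonic()

def expire_idle() -> list[str]:
    """Disassociate clients that have been quiet for too long."""
    now = time.monotonic()
    stale = [mac for mac, last in clients.items() if now - last > IDLE_TIMEOUT_S]
    for mac in stale:
        del clients[mac]      # the device will transparently re-associate when needed
    return stale
```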

This array of logical controls processes millions of pieces of information every second, routing them like letters to the correct address, discarding damaged or undesirable ones and acknowledging when they have been received. Each of the components has to work in harmony, with sites having anywhere up to around 30 routers, 200 network switches and 200 Wi-Fi access points. To manage this, standard, pre-tested configurations and builds are used, as this reduces the risk of introducing a problem via a new firmware or configuration change.

Next time in the final part of this series we will look at how this all comes together to deliver the end services for the users and the impact it all has on the event.

 



“You guys do Wi-Fi at events, right?” is typically the way most people remember us; the irony is that the invisible part of our service is in reality the most visible. Unless you know what you are looking for at a large event site you are unlikely to notice the extensive array of technology quietly beating away like a heart.

From walking up to the entrance and having your ticket scanned, watching screens and digital signage, using a smartphone app or buying something on your credit card before you leave, today’s event experience is woven with technology touchpoints. Watching a live stream remotely or scrolling through social media content also relies on an infrastructure which supports attendees, the production team, artists, stewards, security, traders & exhibitors, broadcasters, sponsors and just about everyone else involved.

During a big event the humble cables and components which enable all of this may deal with over 25 billion individual electronic packets of data – all of which have to be delivered to the correct location in milliseconds.

In the first of three blogs looking behind the scenes we take a look at how the core network infrastructure is put together.

Let’s Get Physical

When an event organiser starts the build for an event, often several weeks before live, one of the first things they need is connectivity to the internet. Our team arrives at the same time as the cabins and power to deliver what we call First Day Services – a mix of internet connectivity, Wi-Fi and VoIP telephony for the production team.

Connectivity may be provided by traditional copper services such as ADSL or via satellite but more typically is now via optical fibre or a wireless point to point link as the demands on internet access capacity are ever increasing. Even 100Mbps optic fibre connections are rapidly being surpassed with a need for 1Gbps fibre circuits.

Distribution Board

PSTN, ISDN, ADSL and fibre all are commonplace on a big site

Wireless point-to-point links relay connectivity from a nearby datacentre or other point of presence; however, this introduces additional complexity with the need for tall, stable masts at each end to create the ‘line of sight’ required for a point-to-point link. To avoid interference and improve speeds the latest generations of links now utilise frequencies as high as 24GHz and 60GHz to provide speeds over 1Gbps. Even with the reliability of fibre and modern wireless links, redundancy is still key, so a second connection is used in parallel to provide a backup.

From there on the network infrastructure is built out alongside the rest of the event infrastructure working closely with the event build schedule. Planning is critical with many sites requiring a network infrastructure as complex as a large company head office, which must be delivered in a matter of days over a large area.

The backbone on many sites is an extensive optical fibre network covering several kilometres and running between the key locations to provide the gigabit and above speeds expected. On some sites a proportion of the fibre is installed permanently – buried into the ground and presented in special cabinets – but in most cases it is loose laid, soft dug, flown, ducted, and ramped around the site. Pulling armoured or CST (corrugated steel tube) fibre over hundreds of metres at a time through bushes, trees, ditches and over structures is no easy task!

Optical fibre cable can run over much longer lengths than copper cable whilst maintaining high speeds; however, it is harder to work with, requiring, for example, an exotically named ‘fusion splicer’ to join fibre cores together. On one current event which uses a mix of 8, 16 and 24 core fibre there are over 1,200 terminations and splices on the 5.5km of fibre. With the network now a critical element, redundancy is important, so the fibre is deployed in ‘rings’ with all locations serviced by two independent pieces of fibre – a tactic known as ‘diverse routing’ – so that if one piece of fibre becomes damaged the network continues to operate at full speed.
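The value of the ring layout can be checked with a simple reachability test: remove any one fibre link and confirm every POP can still be reached from the core. The sketch below uses a made-up four-node topology, not a real site plan.

```python
# A small check of the 'diverse routing' idea: with the fibre laid as a ring,
# every POP should stay reachable from the core after any single cable failure.
# The node names and links below are a made-up example topology.

ring = {("core", "pop1"), ("pop1", "pop2"), ("pop2", "pop3"), ("pop3", "core")}

def reachable(links, start="core"):
    """Walk over the remaining fibre links and return every node we can reach."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, [start]
    while queue:
        for nxt in adj.get(queue.pop(), set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

nodes = {n for link in ring for n in link}
for broken in ring:
    survivors = reachable(ring - {broken})
    assert survivors == nodes, f"losing {broken} isolates {nodes - survivors}"
print("Every POP survives any single fibre break")
```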

Each secure fibre break-out point, known as a Point of Presence (POP), is furnished with routing and switching hardware within a special weatherproof and temperature controlled cabinet to connect up the copper cabling which is used to provide the services at the end point such as VoIP phones, Wi-Fi Access points, PDQs and CCTV cameras.

Each cabinet is fed power from the nearest generator on a 16-amp feed and contains a UPS (Uninterruptible Power Supply) to clean up any power spikes and ensure that if the power fails not only does everything keep running on battery but also an alert is generated so that the power can be restored before the battery runs out.

Although wireless technology is used on sites there is still a lot of traditional copper cabling using CAT5, as this means power can be delivered along the same cable to the end device. Another aspect is speed: with most wireless devices limited to around 450Mbps shared between multiple users, the actual speed is too low for demanding services, whereas CAT5 will happily run at 1Gbps to each user.

Wireless also carries risks from interference, so where possible it is kept to non-critical services, but there are always times when it is the only option; in those cases dedicated ‘point-to-point’ links are used – similar to normal Wi-Fi but using special antennas and protocols to improve performance and reliability.

Cherry picker

A head for heights is important for some installs!

Another significant technology on site is VDSL (Very High Bit-Rate DSL), similar in nature to ADSL used at home but run in a closed environment and at much higher speeds. It is the same technology as is used for the BT Infinity service enabling high speed connections over a copper cable up to around 800m in length (as opposed to 100m for Ethernet).

All of these approaches are used to build out the network to each location which requires a network service, be it a payment terminal (PDQ) on a stand or a CCTV camera perched high up on a stage. Although there is a detailed site plan, event sites are always subject to changes, so our teams have to think on their feet as the site evolves during the build period. Running cables to the top of structures and marquees can be particularly difficult, requiring the use of cherry pickers to get the required height.

After the event all of the fibre is coiled back up and sent back to our warehouse for re-use and storage. The copper cable is also gathered up but is not suitable for re-use so instead it is all recycled.

The deployment of the core network is a heavy lift in terms of physical effort but the next step is just as demanding – the logical network is how everything is configured to work together using many ‘virtual networks’ and routing protocols. In part 2 we will take a look at the logical network and the magic behind it.

 



Computer users are familiar with viruses and malware, but the term ‘ransomware’ is a relative newcomer brought to prominence after several highly publicised cases. In 2014 the Sony attack brought ransomware into the headlines, costing the company millions and effectively taking the entire company’s computer network offline. Attacks have continued to rise, with 2016 expected to reach a new peak and bring more sophisticated forms. In April 2016 a CryptoLocker variant which included users’ home addresses started to appear, tricking people into thinking it was a legitimate link.

The principle behind ransomware is straightforward: a user’s computer becomes infected via one of the normal routes, such as clicking on a URL in an email, but instead of installing a virus which is merely annoying or disruptive, the software encrypts all, or a subset, of the user’s files, rendering them unreadable unless the user agrees to pay a ransom to recover the key to decrypt them. With modern encryption techniques there is no realistic way of decrypting the files without the key.

Alongside the rise of ransomware, users are increasingly taking advantage of file synchronisation services such as Google Drive, Microsoft OneDrive, Dropbox and Box, which are great for maintaining files across multiple devices and providing a transparent backup. The downside of these services is that if a file becomes corrupted or infected with ransomware such as CryptoLocker on one device, the damaged or infected file quickly replicates across all devices.

For event staff sharing files across teams and sending out links to files on cloud based services the risk is high. It only takes a moment, one click on a URL in an email from a known source and suddenly you have a potential disaster on your hands at a critical moment.

Avoiding infection is always the most desirable approach and there is no excuse for not running a real-time virus scanner with up-to-date virus definitions. There are plenty to choose from, some of them free or built into the operating system, as with Microsoft Windows 8 and 10. No virus scanner is infallible but they are an important line of defence.

Taking a few moments to double-check an email or URL before clicking on it can save hours of frustration – the scammers are well versed in how to make an email and URL look genuine. Better still, don’t click the link but log in to the cloud service directly from a browser and navigate to the new content – it takes a few moments longer but is much safer.

The proliferation of file synchronisation services has tended to mean people focus less on traditional backups but this can create a data recovery disaster if a user suffers a ransomware attack as all instances of the files become infected. The solution is to ensure that multi-version file history is enabled. Each of the synchronisation services provide this in slightly different ways and to different levels (in some cases it is a paid extra) but the principle is the same – when a file is changed the previous version (or versions) are still stored and can be reverted to. If you suffer an attack you can revert to an earlier, non-infected version.
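The same principle can be applied to your own critical files: keep the previous copy rather than overwriting it. A very small sketch, with invented paths and a naive timestamp scheme, is shown below.

```python
# A very small illustration of the 'previous versions' safety net: before a
# file is overwritten, the existing copy is kept with a timestamp so it can be
# reverted to if the new version turns out to be encrypted by ransomware.
# Paths and the versioning scheme are invented for the example.

import shutil
import time
from pathlib import Path

def save_with_history(path: Path, new_content: bytes) -> None:
    """Keep the old version alongside the new one instead of destroying it."""
    if path.exists():
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(path, path.with_name(path.name + f".{stamp}.bak"))
    path.write_bytes(new_content)

# save_with_history(Path("accounts.xlsx"), new_bytes)
# If accounts.xlsx is later scrambled, the timestamped .bak copies remain readable.
```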

For extra peace of mind, especially for critical documents, a weekly backup onto a USB memory stick or a writeable DVD which is then put away in a secure location is cheap and effective. Spending a few minutes now to make sure you have a backup strategy can save hours of time, stress and potential cost at a later date, as sadly these attacks will continue to increase in frequency and sophistication.


Event technology plays a major role in the way we plan and organize our events today. According to the below infographic, which takes a close look at the impact of technology on the success of events in 2016, a huge 75% of event professionals are expected to buy apps to facilitate engagement with their audience. Many companies have also stepped up their live streaming activities to reach a larger audience and stand out from the competition. Social media, which offers companies powerful opportunities to promote event awareness or create a new information channel, remains another top favourite.

Of course all of this introduces potential complexity which requires detailed knowledge and planning across a broad spectrum of technology. With the summer season of events already ramping up fast it is critical that organisers plan well in advance and work with the right experienced people to ensure all the different aspects are integrated into a realistic and workable solution. Last-minute panics on site are not desirable and generally push up costs; a well-planned, integrated approach is much better!

Source: http://www.losberger.co.uk/

Event Technology: Will This Define Success in 2016?

Sorry to disappoint, but yes, our blog last week on Li-Fi at festivals was an April Fool’s joke. The response to it, though, highlights just how much importance people put on remaining connected whilst at events.

Li-Fi is a real technology and does hold promise, but in practice it is much more suited to indoor environments and certainly not outdoor lighthouses! As with many technologies, theoretical speeds are indeed very fast in the lab but real-world use is some way off; in the meantime Wi-Fi and 3G/4G remain the primary options for keeping connected.

All is not lost though as these technologies continue to develop, and more and more events are deploying infrastructure to improve attendee experience. Wi-Fi has moved a long way from the days of 11Mbps 802.11b, one of the first standards. Modern 802.11ac wireless access points support far more users, offer much higher speeds and contain a raft of technology to create the best user experience. A well designed high-density Wi-Fi deployment using 802.11ac and directional antennas can support thousands of simultaneous users and still provide good speeds.

The rapid deployment of 4G infrastructure by mobile carriers has improved connectivity at smaller events, but events attracting more than a few thousand people quickly overload cell towers, which are limited by spectrum availability and coverage size.

Testing is underway with new technologies which may help – the first is LTE-U (Long Term Evolution Unlicensed) which more simply put is using unlicensed spectrum such as 5 GHz to deliver additional 4G capacity. The challenge is that this technology introduces yet another connectivity method into what is becoming very congested spectrum. It is in effect robbing Peter to pay Paul and therefore the approach has split the industry due to concerns over the impact it may have on Wi-Fi installations.

Another approach in testing, supported by Ruckus and Qualcomm amongst others, is OpenG using shared spectrum at 3.5 GHz in the US. It is not dissimilar to LTE-U but because it uses different shared spectrum does not clash with existing Wi-Fi. With the Ruckus solution the 3.5GHz radio is being integrated into existing dual-band Wi-Fi access points providing a triple radio solution in one unit which can be deployed easily.

Wi-Fi also continues to evolve with 802.11ac now at ‘wave 2’, a fuller implementation of the standard featuring ‘Multi-User MIMO’, a way of better utilising spatial channels across devices giving increased capacity. Then there is 802.11ax, touting speeds of 10 Gbps but we won’t see that any time soon as the standard is unlikely to be ratified until at least 2019 by which time Li-Fi may also be a reality!

Unfortunately, as is typical with these mobile technology evolutions, once testing and approval is complete there is a lag whilst the mobile handset manufacturers catch up with integrating the technology and penetrating the market which can add several years before mass market adoption is reached.

In the meantime, well implemented 802.11ac Wi-Fi remains the best approach for high density connectivity, and that’s certainly what we will be using this summer.