'The AI conversation is important in terms of both urgency and impact.'

No matter what field of work you are in these days, it is getting hard to go one day without hearing about artificial intelligence (AI) and machine learning.

To understand AI, it helps to return to a fundamental question: what is intelligence? The definition of intelligence has been debated throughout history. For a functional definition, we'll consider intelligence to be the ability to accomplish a complex set of goals. Artificial intelligence, by extension, means an artificial entity - a system or program - that possesses such an ability.

By this definition, any system endowed with logic that can solve a class of problems or achieve well-defined goals reasonably well compared to its human counterparts can be classified as AI. In many cases, AI applications can match, if not outperform, their human counterparts.

A Brief History of AI

If you are a fan of science fiction, you'll recall one of the most well-known sources on AI, I, Robot. I, Robot became a movie in 2004 and was based on the collection of short stories of the same title published by Isaac Asimov in 1950. Coincidentally, in the same year Alan Turing published his seminal paper, 'Computing Machinery and Intelligence,' which opens with the question 'Can machines think?' and introduced the Turing Test as the first proposed means to examine whether a system can be considered artificially intelligent.

Figure 1 - Booms and Busts in AI Development [1]

The above chart depicts the booms and winters of AI dating back to its first emergence in science as well as in pop culture. The pivotal moment when AI became the concept known today was the Dartmouth Summer Research Project in 1956, where scientists boldly hypothesized that a 'significant advance [on one or more problems related to machine intelligence could] be made with a selected group of scientists working together.' While the workshop yielded limited practical advancement, it sparked wide academic interest in the field. In 1970, Marvin Minsky famously (and exuberantly) claimed that in '3 to 8 years, we will have a machine with the general intelligence of an average human being.' The GOFAI (Good Old-Fashioned AI) approach, which leveraged brute-force and heuristic search algorithms, was predominant in the 1960s and 70s.

When GOFAI failed to deliver on the hyped expectations, the field of AI went into its first winter. In the early 80s, expert systems - systems that encode domain knowledge and make expert-like decisions, pioneered by Edward Feigenbaum - rose to prominence. This line of work ultimately enabled IBM to build its famous chess-playing AI running on the powerful Deep Blue supercomputer, which defeated then-reigning world chess champion Garry Kasparov in 1997. Though this was impressive and helped boost IBM's share price, industry application of the approach remained limited because of the effort required to manually construct the immense knowledge base needed for each application domain. The AI market fell into another winter from the 90s to the mid-2000s.

However, research in another domain of AI, machine learning, went on despite the AI winter. In the late 2000s, advances in a branch of machine learning called deep learning drastically catapulted the potential of AI far beyond traditional AI paradigms. Unlike traditional AI approaches (heuristic search and expert systems), deep learning trains networks of artificial neurons using a mathematically sophisticated technique called backpropagation.

Backpropagation, or backward propagation of errors, works out how much each weight (how much a particular input figures into the result) and each bias (an offset that shifts the point at which a neuron activates) in a network of artificial neurons contributed to the error in the output, and then adjusts them to reduce that error. The process repeats over many examples until the network's actual outputs closely match the target outputs, at which point each artificial neuron passes a useful signal along to the neurons in the next layer. This much more closely mimics how the human brain learns and acquires new skill sets.
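To make this concrete, here is a minimal sketch of backpropagation on a toy problem, written in Python with NumPy. It is purely illustrative - a tiny two-layer network learning the XOR function - and not the architecture of any production system mentioned in this article.

```python
import numpy as np

# Minimal backpropagation sketch: a tiny one-hidden-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # target outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # layer 1 weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # layer 2 weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                           # learning rate

for step in range(10_000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the network
    # to measure how much each weight and bias contributed to it.
    err = out - y                                  # prediction error
    d_out = err * out * (1 - out)                  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)             # gradient at the hidden layer

    # Nudge weights and biases downhill to reduce the error (gradient descent).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # outputs should approach the targets [0, 1, 1, 0]
```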

The applications enabled by deep learning have become so prevalent that most of us are not even aware that we are using them in our everyday lives. From making appointments by conversing with our smartphones to getting movie recommendations, from investing to fighting identity theft, AI-powered applications are omnipresent. The advent of deep learning and its applications has brought the field of AI out of its second winter.

The Impact - Hype and Reality

According to PwC and CB Insights, venture capital funding of AI companies hit a record high of $9.3 billion in 2018 - a 72% increase from the previous year. The largest American AI venture deal of 2018 was the $500 million funding round for self-driving car start-up Zoox Inc, which produces AI-enabled software to recognize people and objects. Seven of the world's 10 most valuable brands power their primary product offering with AI. The other three all embed AI deeply in their service offerings to help recommend to and protect consumers. In the image below, you can see where Gartner ranks most of the well-known AI-enabled technologies on its famous Hype Cycle.

Figure 2 - Gartner Hype Cycle for Emerging Technologies

Key AI Interests in Transportation

Transportation is one of the most important areas where modern AI demonstrates its compelling advantage over the conventional algorithms used in classic AI paradigms. To demonstrate the effectiveness and promise of AI-based solutions in transit, we will look at self-driving vehicles, traffic management systems, and on-time performance and real-time predictions. The implications of AI for transportation are especially interesting because transportation is one of the oldest industries known to humanity: its history is estimated to stretch back 40,000 to 60,000 years, to when human beings first crossed the ocean by boat and settled Oceania.

Self-Driving Vehicles

Self-driving cars are of high interest in the transportation industry. It is hard to read the tech news without seeing autonomous vehicles in the headlines. With the maturing of AI technology, the development of autonomous vehicles has accelerated drastically from concept and early prototypes to reality. Deep learning research and affordable, powerful GPUs (graphics processing units) enable real-time decision making based on image and obstacle recognition systems built with LiDAR technology and large arrays of cameras.

Pioneered by innovative companies such as Waymo, Tesla, and Navya, self-driving cars leverage learning algorithms and GPUs to process the enormous amount of information fed by sensors, extract key intelligence from these streams of data, and make just-in-time decisions based on that intelligence. Self-driving cars can also learn road conditions and improve their driving over time. It is worth pointing out that, in 2018, the largest sums of venture capital invested in AI went to companies working on self-driving cars and related technologies (image analysis and object recognition).

Navya, one of Trapeze's partners, is a leader in this space. In 2018, Trapeze purchased a Navya vehicle to run an integration pilot in our Switzerland office, where we completed the integration of the LIO ITS solution with the Navya autonomous vehicle. Currently, in North America, we are working on integrating TransitMaster with a Navya vehicle. This will enable a network approach to managing all autonomous buses with the next generation of cloud software.

We have all heard about the promises of self-driving cars by now. So, how capable are they, and when can we expect them? Based on SAE International's widely used taxonomy of driving automation, there are five levels of autonomous driving above fully manual operation:

Figure 3 - Five Levels of Autonomous Driving Vehicles

Level 1 - driver assistance: Control is still in the hands of the driver, yet the car can perform simple activities such as controlling the speed. We already have this

Level 2 - partial automation: The driver's responsibility is to remain alert and maintain control of the car. This level has been available on commercial cars since 2013

Level 3 - conditional automation: The car can drive by itself in certain contexts, under speed limits, and under vigilant human control. The automation could prompt the human to resume driving control. This has been available since 2015

Level 4 - high automation: The car performs all the driving tasks (steering, throttle, and brake) and monitors any changes in road conditions from departure to destination. This level of automation doesn't require human intervention to operate, but it's available only in certain locations and situations, so the driver must be available to take over as required. Vendors expect to introduce this level of automation around 2020. Tesla has claimed its Autopilot will reach this level, although it is currently considered partial automation (Level 2).

Level 5 - full automation: The car can drive from departure to destination with no human intervention, with an ability comparable or superior to a human driver. Level-5 automated cars won't need a steering wheel. This level of automation is expected by 2025. Waymo has demonstrated driverless operation approaching full automation in a limited number of cities

Navya, one of Trapeze's partners in the autonomous vehicle space, has completed some impressive development towards Level 4 and Level 5 autonomous vehicles, particularly in transporting passengers over short distances in low-complexity traffic settings.

When the age of fully autonomous vehicles arrives, massive economic benefits will be realized: fewer accidents, lower insurance costs, and fewer driving jobs, freeing human operators for other, more productive tasks. For personal vehicles, this could allow individuals to work during their commute. It could also eliminate taxi, truck, and, possibly, bus drivers - a big concern in public transit. However, in almost all the active AV pilots, these people are transitioning to more customer-service roles on board the vehicle, still able to provide information, directions, and stop details.

Self-driving vehicles are poised to disrupt public transit as well. Driverless buses can already be seen on the streets of Europe. The world's first driverless bus was introduced in the French city of Lyon back in 2016, and there has been great progress ever since. In 2018, Stockholm also introduced driverless buses that travel at 20 mph. In Switzerland, AMoTech and the local transit agency implemented a model for integrating self-driving vehicles into their operations control system and are refining it continuously. This work can set an important foundation for how autonomous vehicles are operated.

Using sensors, cameras, GPS technology, and AI, these buses can carry passengers to their destinations. This will have deep and far-reaching implications for many aspects of transit in the long run as more transportation modes become automated.

Traffic Management Systems

The other equally prominent area of AI application in transportation is traffic management. The quality of transportation in a city is greatly affected by its traffic flow patterns, so understanding these patterns is paramount.

Traffic congestion cost Americans $87 billion in 2018. AI could streamline traffic flow and reduce that congestion. Smart traffic light systems can manage intersections more efficiently, saving drivers and cities substantial time and money. AI can also process complex data and suggest the best route to drivers in real time based on traffic conditions. With the help of machine learning, AI systems can predict and prevent traffic jams.

Thanks to their immense processing power, GPUs are now used in various IoT (Internet of Things) devices to do the heavy lifting of real-time image recognition and prediction that traditionally happened in data centers. This decentralized architecture greatly accelerates the implementation of machine learning and AI. Recognition algorithms running at the edge can provide better information on the mix of traffic, its density, and its rate of flow, while optimization algorithms aggregate these data points by region to produce control patterns that reduce traffic jams and distribute flow optimally. This architecture allows for much more rapid decision making and gives the control system a significantly higher degree of failure tolerance and redundancy compared to the traditional hub-and-spoke model.
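As a hypothetical illustration of the aggregation step described above, the following Python sketch turns per-approach vehicle counts (as roadside cameras might report them) into a proportional green-time split for a single junction. The approach names, counts, and timing bounds are invented for illustration and do not represent any vendor's actual algorithm.

```python
# Hypothetical sketch: allocate green time at one junction in proportion to
# the vehicle counts reported by edge cameras. All values are illustrative.

CYCLE_SECONDS = 90             # total signal cycle length (assumed)
MIN_GREEN, MAX_GREEN = 10, 60  # bounds on any single green phase (assumed)

def green_time_split(counts: dict[str, int]) -> dict[str, int]:
    """Give each approach a share of the cycle proportional to its demand."""
    total = sum(counts.values()) or 1
    split = {}
    for approach, n in counts.items():
        share = round(CYCLE_SECONDS * n / total)
        split[approach] = max(MIN_GREEN, min(MAX_GREEN, share))
    return split

# Counts aggregated over the last few minutes from roadside cameras (made up).
observed = {"northbound": 42, "southbound": 31, "eastbound": 9, "westbound": 12}
print(green_time_split(observed))
```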

Smart cameras at junctions can automatically identify different road users, allowing the traffic management system to adapt according to their needs

Innovative companies such as Vivacity, NoTraffic, and Siemens Mobility are experimenting with intelligent camera systems that integrate with traffic lights to change how traffic management is done today. Intelligent traffic management systems, driven by machine learning, can advise transit agencies to dynamically change routes to reduce inefficiencies and time spent in traffic. The positive implications will be lower costs, fewer environmentally harmful emissions, and a better rider experience thanks to shorter travel times.

On-time Performance and Real-time Predictions

The most important aspect of transit is the quality of service. A big component of ridership satisfaction is real-time prediction of bus arrival times. This applies to both fixed-route transit and on-demand transit. The end-user experience is closely tied to how accurately the system can predict arrival times given the many factors involved, such as distance between stops, geography, traffic, weather, and timing.

Traditional algorithms typically use a fixed travel time for each segment between stops. The issue is that once a bus deviates from its planned arrival time, the prediction is thrown off, and the inaccuracy cascades through the predictions for every subsequent stop. This approach also does not consider time of day, historical trends for certain stops at certain times, weather, and other modalities of information that can affect the prediction.
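The Python sketch below illustrates the fixed-segment approach with made-up segment times: because every prediction is built from the same constant segment times, a bus that is already eight minutes late shows up as the same eight-minute error at every downstream stop.

```python
# Illustrative sketch of the traditional fixed-segment predictor: each
# stop-to-stop segment is given a constant travel time, so the prediction
# never adapts and any accumulated delay repeats at every later stop.
# Segment times (in minutes) are made-up values.

SEGMENT_MINUTES = [4, 6, 5, 7, 3]       # planned travel time between stops 0..5

def scheduled_eta(current_stop: int) -> list[float]:
    """Minutes-until-arrival at each remaining stop, from the schedule only."""
    etas, t = [], 0.0
    for seg in SEGMENT_MINUTES[current_stop:]:
        t += seg
        etas.append(t)
    return etas

predicted = scheduled_eta(current_stop=1)
actual_delay = 8.0                      # the bus is already 8 minutes behind
actual = [eta + actual_delay for eta in predicted]
errors = [a - p for a, p in zip(actual, predicted)]
print(errors)                           # the same 8-minute error at every remaining stop
```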

Leveraging a rich set of data accumulated over more than 20 years of operation, Trapeze Group is developing a data lake and building a set of predictive features to increase prediction accuracy, shorten your passengers' wait times, and enhance their experience. This service combines historical arrival data against the fixed schedule with other modalities of information - such as weather patterns, rider counts (obtained from our CAD/AVL system), geography, and time of day - to create a data model using all these relevant features.

Machine learning will filter and predict arrival times based on the selected features, greatly boosting accuracy by cross-examining the multitude of seemingly discrete factors that impact travel time with advanced machine learning techniques. Once complete, this is expected to elevate average prediction accuracy into the mid-90% range.
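As a rough illustration of how such a feature-based model might be trained, the sketch below fits a gradient-boosting regressor on synthetic trip data built from a handful of the features named above (time of day, weather, rider count, distance). The data, feature set, and model choice are assumptions made for illustration only, not Trapeze's actual data lake or model.

```python
# Hedged sketch: train a regressor on synthetic historical trips described by
# a few of the features discussed above. All data and choices are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 5_000
hour = rng.integers(5, 23, n)             # hour of day
raining = rng.integers(0, 2, n)           # simple weather flag
riders = rng.integers(0, 60, n)           # rider count (e.g., from a CAD/AVL feed)
distance_km = rng.uniform(0.3, 3.0, n)    # distance to the next stop

# Synthetic "true" travel time: base speed plus rush-hour, rain, and dwell effects.
rush = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 18))
minutes = distance_km * 2.5 + rush * 3.0 + raining * 1.5 + riders * 0.03
minutes += rng.normal(0, 0.5, n)          # measurement noise

X = np.column_stack([hour, raining, riders, distance_km])
model = GradientBoostingRegressor().fit(X, minutes)

# Predict one upcoming segment: 5 pm, raining, 35 riders on board, 1.2 km to go.
print(model.predict([[17, 1, 35, 1.2]]))
```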

Our sister company, TripSpark, has used a simple yet ingenious algorithm that dynamically adjusts the predicted time based on observed delays scaled by the distance travelled, together with historical arrival times at each stop on every route for a particular city, combining the two with a Kalman filter[2] to produce a composite prediction. This elevated prediction accuracy from 60%, using a naive algorithm, to a 90% average.
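The Python sketch below shows the composite idea in miniature: a scalar Kalman filter starts from a historical estimate of the arrival time and repeatedly folds in new delay-scaled estimates as the bus reports its position, weighting each source by how noisy it is. The numbers and variances are illustrative assumptions, not TripSpark's actual tuning.

```python
# Minimal scalar Kalman-filter sketch: fuse a historical arrival-time estimate
# with successive delay-scaled estimates. All values are illustrative.

def kalman_update(estimate, variance, measurement, meas_variance):
    """One scalar Kalman step: blend the current estimate with a new measurement."""
    gain = variance / (variance + meas_variance)   # how much to trust the new value
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1 - gain) * variance
    return new_estimate, new_variance

# Prior: historical arrival time at this stop for this time of day (minutes).
eta, eta_var = 12.0, 4.0

# Delay-scaled estimates arriving as the bus reports its position (made up),
# each paired with an assumed measurement variance.
delay_scaled_estimates = [(14.5, 2.0), (13.8, 1.5), (13.2, 1.0)]

for measurement, meas_var in delay_scaled_estimates:
    eta, eta_var = kalman_update(eta, eta_var, measurement, meas_var)
    print(f"composite ETA: {eta:.1f} min (variance {eta_var:.2f})")
```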

These advances greatly improve the satisfaction of riders by shortening the wait for transit and reducing the total travel time.

There are many exciting discoveries and applications of AI in transportation. The superior predictions and just-in-time decision making of AI, combined with IoT devices and sensors, will fundamentally change how transit operates. Overall, AI and machine learning will mean a much more cost-effective, user-friendly, and pleasant transit experience.

Endnotes:

[1] Matsuo, Yutaka. 'Does Artificial Intelligence Go Beyond Humans: Beyond Deep Learning' (2015); and https://www.technologystories.org/ai-evolution/#_ftnref2

[2] An algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe.


Alex Ni is Chief Technology Officer at Trapeze Group North America. He is a seasoned technology leader who has held many titles, from developer and architect to delivery manager, innovation lead, and start-up CTO, across industries spanning mobile, telecom, digital media, biotech, and fintech. A true tech nerd at heart, Alex is most passionate about building world-class technology companies by creating an open, transparent, and accountable engineering culture that delivers quality. Alex's technical interests include building scalable cloud-based software systems, machine learning/AI-driven applications, web/mobile application development, and blockchain. Beyond his technical skill set, Alex also holds a CPA/CMA designation and has an MBA focusing on technology management and valuation.


(C) 2020 Electronic News Publishing, source ENP Newswire