October 2017 - Bitly


CONTENTS

Eight things your BOT must have to be considered intelligent
Three future career paths data analysts must be looking at
Five things you should know about blockchains
Five ways insurance companies can adopt deep learning
Three biggest trends changing the data analytics world
Five AI & machine learning tech to look out for
Ten amazing ways chatbots are making life easier

1 CHATBOTS

Introduction

As the internet has grown, so has people's tendency to chat instead of talk. Whether it's booking an appointment at your local pediatrician or asking your boyfriend if he wants to go out for some pasta, people like texting. This is what chatbots want to capitalize on: people's aversion to talking on the phone. Companies all over the world have embraced chatbots with open arms. They bring with them a sense of humanity that no IVR can match. One of the reasons is the complete lack of any voice-based interaction. While those beautiful-sounding ladies on IVR systems might speak correct English (or any other language), they speak it with such robotic emotion that the only thing the person on the other end of the phone wants to do is strangle themselves (or the IVR lady). The lack of direct human interaction allows chatbots to be much more than a computer program. Employing the help of mankind's greatest asset, imagination, chatbots can (almost) substitute for a real live human on the other end.

However, the huge praise that chatbots have come to receive has also muddled the market. There is a huge difference between a generic chatbot that can only choose between a set of canned responses and an intelligent chatbot that tries to make sense of the user's messages and respond accordingly. If you are a business, you need the latter, but chances are the agency you hired to develop your chatbot is trying to fool you with a very cleverly disguised version of a generic chatbot.

Eight things your BOT must have to be considered intelligent

Computer scientists from all over the world are doing great work in making sure that chatting with a bot can really feel like talking to another live human. Conversational Artificial Intelligence (the field of study that chatbots fall under) has seen tremendous progress in the last few years. Tech giants like Facebook, Google and Apple are constantly publishing both highly valuable research papers and user-facing apps that show off the work they have done while also training their chatbots to become even better.

01 Carry an Intelligent Conversation:

A conversation is much more than saying yes or no. Moreover, the longer a conversation goes on, the more complex it gets. To carry an intelligent dialogue, the bot must be able to maintain the context of the conversation at all times (a minimal sketch of this kind of context tracking follows point 04 below).

It also has to understand that natural conversations don't always progress linearly: the bot must be able to process an unexpected reply and adapt to changes in the course of the conversation.

02 Build Contextual Engagement:

A smart bot has to understand who it is chatting with. To provide a truly personal experience, the chatbot has to know the user's interests, attributes and personal information, then tailor the conversation to fit them.

The bot needs to provide content, advice, and offers that exactly fit the user. If all the information is generic, it will be shallow, unengaging, and in many cases, not very useful.

03 Leverage Real-Time Transaction Data:

Connected with the need for contextual engagement, an intelligent bot must be able to access real-time insights on transactions.

Without real-time data access and analytics, the power of artificial intelligence (AI) and contextual advice (whether human-based or delivered by chatbots) is limited.

04 Reuse Existing Content:

To have a meaningful impact, it is crucial for the bot to be able to access content created and maintained in digital repositories across all channels.

From digital 'brochureware' to FAQs, rules and regulations, and rate information, bots must be able to access and leverage this material in real time.
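To make point 01 concrete, here is a minimal, hypothetical sketch of what "maintaining context" can look like in code. The intents, slots and replies are invented for illustration; a real bot would sit behind an NLU model and a real messaging channel.

```python
# Minimal sketch of per-user conversation context (point 01).
# The intents, slots and canned replies here are hypothetical examples.

class ConversationContext:
    """Keeps track of what the user is trying to do across turns."""

    def __init__(self):
        self.intent = None          # e.g. "book_appointment"
        self.slots = {}             # details collected so far, e.g. {"day": "Friday"}

    def update(self, intent=None, **slots):
        if intent:                  # an unexpected reply may switch the intent mid-dialogue
            self.intent = intent
        self.slots.update(slots)

    def missing(self, required):
        return [s for s in required if s not in self.slots]


def reply(context, user_message):
    # A real bot would use an NLU model here; this toy rule stands in for it.
    text = user_message.lower()
    if "appointment" in text:
        context.update(intent="book_appointment")
    if "friday" in text:
        context.update(day="Friday")

    if context.intent == "book_appointment":
        still_needed = context.missing(["day", "time"])
        if still_needed:
            return f"Sure - what {still_needed[0]} works for you?"
        return f"Booked for {context.slots['day']} at {context.slots['time']}."
    return "How can I help?"


if __name__ == "__main__":
    ctx = ConversationContext()
    print(reply(ctx, "I need an appointment"))   # asks for the day
    print(reply(ctx, "Friday please"))           # remembers the intent, asks for the time
```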

05 Build Deep Knowledge:

To build engagement, a bot needs to be able to provide advice, not just balances. Personetics believes bots need to be purpose-built, with deep knowledge of the issues important to the customer.

With PayPal supporting payments through Facebook Messenger, the bar for transactions through the bot channel has been set and is being raised.

06 Work Seamlessly Across Channels:

Customers expect a consistent experience across the digital landscape: online, mobile app, Facebook Messenger, Amazon's Alexa, etc. A bot cannot be a silo; it should be able to traverse multiple channels.

This may be a challenge for organizations that still cannot achieve this within internal channels (mobile, branch, online, call center).

07 Get Smarter Over Time:

An intelligent bot must get to know customers better over time as more conversations and transactions take place.

It must improve based on how a customer reacts to the information and advice provided by the bot over time.

08 Anticipate Customer Needs:

Almost half of all bots are only used once. This happens when the bot experience does not meet or exceed expectations.

To get customers into the habit of conversing with your bot, it needs to proactively reach out to them with information, insight, and advice, presented at the right time and place based on predictive analysis of individual customer needs.

2 CAREER PATHS

Introduction

Data has unanimously become the single most valuable resource for any organization. The more you know, the better you can do whatever it is that you do. As such, the demand for analysts, the people who can make sense of all that data, grew exponentially in the last decade. From the most basic levels of operational architecture to monitoring thousands of users in search of the one tiny detail that might transform their experience, analysts allowed decision makers to actually see what their decisions were doing. To plan ahead. To think out strategies decades in advance. But then came the AI boom, and nothing remained the same.

The good thing is, due to the still unstable nature of the data industry ecosystem, now is the best time to move up the ladder. You might have to work really hard picking up new skills along the way, but combined with your pre-existing knowledge of data, you can add irreplaceable value to a business.

Three future career paths data analysts must be looking at

Far surpassing the capabilities of the human brain, machine learning-based artificial intelligence solutions are now able to do the work that a team of data scientists could do in a week in less than an hour. The only catch is that building such smart software takes time. The bigger companies have already moved on to AI, but the younger ones are still in the process of building it. Data analysts have to accept the reality: their role IS going to change. There is no stopping it.

01 Data Explorer :

A Data Explorer is expected to be able to identify and connect to new data sources, merge and prepare the data, and build production-ready data pipelines. The purpose of the products you’ll be helping to build is for them to run in production, and so you’ll be obsessed with automation and reproducibility. You’ll be the local expert on the details of the data – when a new data source is added, you’ll know what fields it contains and which new features you might be able to engineer from it.

You will also have your eyes open to new open data sources that you might be able to use to enrich your internal data. And although a good portion of feature engineering will be done by the Data Modeler, you will be in charge of engineering features like KPIs, which require your deep familiarity with the business implications of the data. You’ll still need to be familiar with machine learning algorithms, and you’ll probably need to have a firm grasp on data architecture concepts, such as distributed computation.
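As a rough illustration of the pipeline work described above, here is a hedged sketch using pandas. The file names, columns and the KPI are hypothetical stand-ins, not a prescribed schema.

```python
# Hypothetical sketch of a Data Explorer task: merge two raw sources,
# clean them, and engineer a simple KPI-style feature.
import pandas as pd

def build_customer_features(orders_path="orders.csv", customers_path="customers.csv"):
    orders = pd.read_csv(orders_path, parse_dates=["order_date"])
    customers = pd.read_csv(customers_path)

    # Basic cleaning: drop duplicates and rows missing the join key.
    orders = orders.drop_duplicates().dropna(subset=["customer_id"])

    # Merge the two sources on a shared key.
    df = orders.merge(customers, on="customer_id", how="left")

    # Engineer a KPI feature: average order value per customer.
    kpis = df.groupby("customer_id")["amount"].agg(["sum", "count"])
    kpis.columns = ["total_spend", "n_orders"]
    kpis["avg_order_value"] = kpis["total_spend"] / kpis["n_orders"]
    return kpis

if __name__ == "__main__":
    print(build_customer_features().head())
```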

Good Fit For: Analysts with skills in SQL, SaaS, Excel, Data Visualization
Things You Need To Learn: Business Intelligence, Data Cleaning, Machine Learning, Automation, Distributed Computing


Study Resources:
• Excel guide by Trello founder Joel Spolsky: You Suck at Excel
• Online course on the basics of data cleaning by the European Data Portal
• Udacity course on Python
• Udacity course on Machine Learning

02 Data Modeler :

A Data Modeler is in charge of building predictive models and generating either a product or a service from those models, and then implementing them. You will create checks and metrics for monitoring these models, because there will be a huge amount of them in production! You will be a master of machine learning models and the frameworks used to validate their quality.

You will apply your creativity in feature engineering: using abstract mathematical techniques to select and combine the right variables and use them in the right model. This will often require you to reduce the number of variables from an enormous number down to something more manageable. In short, you will be the go-to person on your team for all things math, stats, and algorithms, and for knowing how to use different types of data in the many models available at your fingertips.
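A hedged sketch of that workflow, assuming scikit-learn and synthetic data standing in for a wide business dataset: dimensionality reduction with PCA feeding a simple classifier, validated by cross-validation.

```python
# Hedged sketch of the Data Modeler workflow: cut a large feature space down
# and validate a predictive model. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a wide dataset: 500 rows, 100 raw variables.
X, y = make_classification(n_samples=500, n_features=100, n_informative=10, random_state=0)

# Pipeline: scale, reduce 100 variables to 15 components, then fit a classifier.
model = make_pipeline(StandardScaler(), PCA(n_components=15), LogisticRegression(max_iter=1000))

# The validation framework: 5-fold cross-validated accuracy as a quality metric.
scores = cross_val_score(model, X, y, cv=5)
print("mean accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```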

Good Fit For: Statisticians, Computer Scientists, Financial Analysts, and Mathematicians
Things You Need To Learn: Python, R, Data Visualization, Data Modelling, Machine Learning
Study Resources:
• Andrew Ng's course on Machine Learning
• Anand Rajaraman and Jeffrey Ullman's book, Mining of Massive Datasets
• Oxford professor Nando de Freitas's deep learning lecture series on YouTube
• Python Machine Learning: a practical guide around scikit-learn

03 Data & Analytics Product Owner :


A Data & Analytics Product Owner is a jack of all trades. There are many paths that lead to this role. You might already have been a Data Explorer or a Data Modeler; you might lead an analytics team, or you might come from outside the analytics team altogether. No matter their background, these Product Owners have established a good, well-rounded expertise in the world of data and machine learning, coupled with complementary skills in management and communication. Their main job is supporting Data Modelers and Data Explorers by gathering requirements, prioritizing tasks, and making sure the products and services being built are working for the end users within and beyond the organization.

They have to be able to explain data and analytics products and have deep knowledge of the user profiles. They are the bridge between the data team and those who rely on the data team. Product Owners are expected to apply user experience (UX) and design-thinking concepts to data products and services that will no longer be used only by technical users but by the broader organization and even users and customers outside the organization. They are the person the organization relies on to ensure value comes out of all the data and analysis.

Good Fit For: Analytics Managers, Senior Analysts, Product Managers with Data Science exposure
Things You Need To Learn: Product & Team Management, Scrum, everything else needed by Data Modelers and Explorers
Study Resources:
• All of the resources for Data Modeler and Data Explorer
• The Agile Manifesto, for those new to management
• The Elements of Scrum by Chris Sims and Hillary Louise Johnson

Conclusion

The world is rapidly changing, with new factors and trends coming into play all the time. A good career requires constant hard work. You have to train yourself regularly to stay on top of the game. The above three are great paths to steer your career towards. But if not, don't forget: there is a whole world waiting for you.

3 BLOCKCHAINS

image source : blockgeeks.com/guides/what-is-blockchain-technology/

Five things you should know about blockchains

01 Don't call it "the" blockchain:

The first thing to know about the blockchain is that there isn't one: there are many. Blockchains are distributed, tamper-proof public ledgers of transactions.

The most well-known is the record of bitcoin transactions, but in addition to tracking cryptocurrencies, blockchains are being used to record loans, stock transfers, contracts, healthcare data and even votes.

02 Security, transparency: the network's run by us:

There's no central authority in a blockchain system: participating computers exchange transactions for inclusion in the ledger they share over a peer-to-peer network. Each node in the chain keeps a copy of the ledger, and can trust others' copies of it because of the way they are signed. Periodically, they wrap up the latest transactions in a new block of data to be added to the chain. Alongside the transaction data, each block contains a computational "hash" of itself and of the previous block in the chain. Hashes, or digests, are short digital representations of larger chunks of data.

Modifying or faking a transaction in an earlier block would change its hash, requiring that the hashes embedded in it and all subsequent blocks be recalculated to hide the change. That would be extremely difficult to do before all the honest actors added new, legitimate transactions, which reference the previous hashes, to the end of the chain. (A minimal code sketch of this hash chaining appears at the end of this chapter.)

03 Big business is taking an interest in blockchain technology:

Blockchain technology was originally something talked about by anti-establishment figures seeking independence from central control, but it's fast becoming part of the establishment: companies such as IBM and Microsoft are selling it, and major banks and stock exchanges are buying.

04 No third party in between:

Because the computers making up a blockchain system contribute to the content of the ledger and guarantee its integrity, there is no need for a middleman or trusted third-party agency to maintain the database. That's one of the things attracting banks and trading exchanges to the technology, but it's also proving a stumbling block for bitcoin as traffic scales.

The total computing power devoted to processing bitcoin is said to exceed that of the world's fastest 500 supercomputers combined, but last month the volume of bitcoin transactions was so great that the network was taking up to 30 minutes to confirm that some of them had been included in the ledger. By contrast, it typically takes only a few seconds to confirm credit card transactions, which do rely on a central authority between payer and payee.

05 Programmable money:

One of the more interesting uses for blockchains is storing a record not of what happened in the past, but of what should happen in the future. Organizations including the Ethereum Foundation are using blockchain technology to store and process "smart contracts," executed by the network of computers participating in the blockchain on a pay-as-you-go basis.

They can respond to transactions by gathering, storing or transmitting information, or by transferring whatever digital currency the blockchain deals in. The immutability of the contracts is guaranteed by the blockchain in which they are stored.
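The hash chaining described in points 02 and 03 can be sketched in a few lines of Python. This is a toy illustration only: it omits the peer-to-peer network, signatures and proof-of-work that real blockchains rely on.

```python
# Minimal sketch of hash chaining: each block stores the hash of the previous
# block, so editing history anywhere in the chain is detectable.
import hashlib
import json

def block_hash(block):
    # Hash a block's contents (excluding its own stored hash) deterministically.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # the chain of references is broken
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                        # True

chain[0]["transactions"][0]["amount"] = 500   # tamper with an earlier block...
print(is_valid(chain))                        # False: every later hash would need recomputing
```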

4 INSURANCE COMPANIES & DEEP LEARNING

Introduction

New technologies often trigger unrealistic expectations in the market. It happened with the introduction of computers and again during the early years of the Internet. While adopting a new technology, an organization may often overestimate its benefits and, at the same time, underestimate the prerequisites for its success. Insurance companies are going through a similar experience with the adoption of AI. It might take a while for the new frameworks and technologies to mature before delivering a handsome Return on Investment (ROI). The only way to validate this premise is to put it to the test.

Five ways insurance companies can adopt deep learning

Insurance companies have traditionally operated in silos, adopting proprietary software and following data secrecy practices. It will be a culture-shift to adapt to the new normal. However, insurers have to invest in technology today, and reinvent their business to remain relevant in future. Industry leaders must pursue strategic long-term growth over short-term maneuvers. Driven by this imperative, some insurers have opened up their data sets and partnered with startups to explore the benefits of AI.

01 Rapid Experimentation:

Understanding the limitations of deep learning provides critical context for designing use cases. The best way to understand the capability of a technology is to experiment. Many pilot projects make the mistake of spending more time on setting up the experiment than on running and learning from it. The risk and cost of inaction is higher than that of pursuing a mediocre use case. Experts recommend implementing multiple pilot projects at once instead of rolling them out one after the other.

For example, a pilot project on customer discovery can be implemented alongside the adoption of new customer support tools. The two use cases complement, yet do not interfere with, each other. An organization can attempt deep learning either at the task level (classification, recommendation, etc.) and/or at the functional level (underwriting, claims processing, etc.). The actual application of deep learning depends on the end objectives: reduction in operating costs, and increases in revenue and efficiency.

02 Gathering / Generating Training Data Sets:

The efficiency of a DL algorithm depends on the quality and size of the training data sets. A continuous stream of transactional data is often not enough to train the machine. The data needs to be indexed and labeled appropriately for the machine to make sense of it. Take the example of credit card transactions: the raw data might not be enough; each transaction needs to be labeled as either 'genuine' or 'fraud' so that the algorithm can identify the patterns that distinguish the two types (a small sketch of this follows below).

Sometimes these data sets are not linear but relational. For example, to monitor risk (fraud or compliance), it is important to understand the context of the entities involved; this context would be gathered from third-party sources or from another internal dataset.
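A hedged sketch of the labeling-and-training step from point 02, assuming scikit-learn and a hypothetical labeled transactions file; the column names are invented for illustration.

```python
# Hedged sketch: labeled transactions (genuine vs fraud) feeding a classifier.
# The file name and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

transactions = pd.read_csv("transactions_labeled.csv")   # must include an "is_fraud" label column

features = transactions[["amount", "merchant_category", "hour_of_day", "days_since_last_txn"]]
features = pd.get_dummies(features, columns=["merchant_category"])   # encode categorical context
labels = transactions["is_fraud"]                                    # 1 = fraud, 0 = genuine

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```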

03 Strategic Growth vs Long-Term Benefits:

There is always a trade-off when choosing between complex use cases and simple ones. Complex use cases can take longer pilot times but can deliver higher ROI. On the other hand, simple use cases take less effort and fewer resources but deliver short-term business outcomes.

Hence, the product development roadmap becomes an important consideration while adopting deep learning technologies. The best approach is to plan product development in small upgrades, such that the ROI and outcomes can be demonstrated at every product upgrade.

04 Creating a Man + Machine Ecosystem:

It is important to test the outcomes generated by AI algorithms through manual validation. The AI predictions must be compared with actual outcomes to understand the effectiveness of the algorithm. Comparing this outcome with current workflow results will help the AI system with continuous learning (a small sketch of this comparison follows point 05 below).

Alternative feedback channels must be created for the AI system to validate its outcomes, particularly for use cases where reasoning is crucial. For example, while determining the medical admissibility of a claims application, it is important to consider the mandatory documentation before processing the claim. A man + machine ecosystem can gather enough relational information to design a complex system that automates such high-level tasks in the future.

05 Creating Internal Competency and Resources:

In any large enterprise, the adoption of deep learning is not limited to a few use cases. Although insurers may embark on the ML/AI journey with an external partner or vendor, it is important to create in-house resources and experts to extend the learning to other aspects of the business.

This can reduce customization, tuning and integration cycles. Additionally, it is crucial to develop strong AI product development skills to better manage future AI investments. Developing strong SMEs internally can accelerate adoption and deliver better business outcomes.
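As a small illustration of the manual validation described in point 04, the sketch below compares hypothetical AI claim decisions with the human adjusters' outcomes using standard scikit-learn metrics; the numbers are invented.

```python
# Hedged sketch of point 04: compare the AI system's claim decisions with the
# outcomes of the existing manual workflow. The data here is hypothetical.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# 1 = claim admissible, 0 = not admissible
manual_decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # ground truth from human adjusters
model_decisions  = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]   # what the AI predicted for the same claims

print("confusion matrix:\n", confusion_matrix(manual_decisions, model_decisions))
print("precision:", precision_score(manual_decisions, model_decisions))
print("recall:   ", recall_score(manual_decisions, model_decisions))

# Cases where the two disagree are the ones worth routing back to a human reviewer,
# and the corrected labels become fresh training data for the next model version.
disagreements = [i for i, (m, p) in enumerate(zip(manual_decisions, model_decisions)) if m != p]
print("claims to review manually:", disagreements)
```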

5 DATA ANALYTICS WORLD

Introduction

You don’t need us to tell you that the data world – and everything it touches, which is, like, everything – is changing rapidly. These trends are driving the opportunities that will fuel your career adventure over these next few years. At the heart of these trends is a massive wave of data being generated and collected by organizations worldwide.

With this data we can shift our focus as analysts from explaining the past to predicting the future. And in order to do this, we need to spend less time doing the same things over and over and more time doing brand new things. Accomplishing all these changes will require us to work together differently than we do now.

Three biggest trends changing the data analytics world

01 Bigger, Larger and Faster Data:

You've probably already heard that every two years we, as humans, double the amount of data in the world. This literally exponential growth of data is impacting analysis in some big ways:
• Big data means new infrastructure: distributed computing like Hadoop.
• Large datasets mean new tools. Excel can no longer do the work it once did. We've seen analysts using Access to cut datasets down into Excel-digestible pieces.
• We are on the cusp of the real-time data revolution. Services like Kafka will enable organizations to apply their data products in real time, which will revolutionize everything from operations to customer service. The urgency of top-notch analytics will be paramount!

02 Predictive Analytics:

The vast majority of time spent by the vast majority of today's analysts is on understanding data collected in the past, often in the form of reports and dashboards. Those days are coming to an end. The data and tools now available are allowing analysts to go beyond just convincing someone to do something and instead to often just do it themselves. For example (a small sketch follows this list):
• Using customer data to identify which customers are most likely to churn (stop being subscribers/customers) and automatically offering them special deals in order to keep them.
• Using Internet of Things (IoT) data to identify which machines in a factory are most likely to break down, and fixing them before they cause a disruption to production. This is called "predictive maintenance", and not only does it reduce downtime, it can also substantially lower insurance rates.
• Using customer behavior data to narrow down potential fraud cases for insurance companies. As the predictive model gathers more data, it becomes even better at figuring out which cases the company should focus its investigative resources on.
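A hedged sketch of the churn example above, assuming scikit-learn and two hypothetical CSV files (column names invented): train on customers whose outcome is known, then score current customers and flag the riskiest for an automatic offer.

```python
# Hedged sketch: score active customers and flag the ones most likely to leave
# so a retention offer can be triggered. Files and columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("customer_history.csv")      # past customers with a known "churned" outcome
current = pd.read_csv("current_customers.csv")     # customers we can still act on

feature_cols = ["tenure_months", "monthly_spend", "support_tickets"]
model = LogisticRegression(max_iter=1000).fit(history[feature_cols], history["churned"])

current["churn_risk"] = model.predict_proba(current[feature_cols])[:, 1]
offers = current[current["churn_risk"] > 0.7]       # the threshold is a business decision
print(offers[["customer_id", "churn_risk"]])        # hand these to the retention campaign
```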

03 Automation of Tasks:

Once upon a time, analysts built a model in Excel and, once a month or so, exported it to PowerPoint and sent it to (or even printed it out for) the managers who relied on regular reports. Soon there were too many reports, so maybe they used macros in Excel to automate report creation, or maybe they were lucky enough to have a dashboard program with some automation built in. The future promises even more than this (a small monitoring sketch follows this list):
• Replicable data preparation flows/recipes that can be applied and customized easily and quickly to brand new sources of data and for brand new applications.
• Models scheduled to re-run regularly and produce a set of metrics that determine whether or not they are performing as needed.
• Meta-reports: regular reports on the state of the many models deployed in production, so that analysts can feel comfortable and in control.
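A small, hypothetical sketch of that kind of self-monitoring: a job that re-scores a model on fresh data and appends a row to a "model health" log, which a scheduler such as cron could run daily. File names, columns and the threshold are illustrative.

```python
# Hedged sketch of a scheduled monitoring job: re-score a model on yesterday's
# data and log the result, alerting when performance drops below a threshold.
import datetime
import pandas as pd
from sklearn.metrics import roc_auc_score

def monitor(model, fresh_data_path="yesterday.csv", log_path="model_health.csv", threshold=0.75):
    fresh = pd.read_csv(fresh_data_path)
    scores = model.predict_proba(fresh.drop(columns="outcome"))[:, 1]
    auc = roc_auc_score(fresh["outcome"], scores)

    row = {"run_at": datetime.datetime.now().isoformat(), "auc": auc, "ok": auc >= threshold}
    pd.DataFrame([row]).to_csv(log_path, mode="a", header=False, index=False)

    if not row["ok"]:
        print("ALERT: model performance dropped to %.3f - flag for retraining" % auc)
    return row
```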

6 AI & MACHINE LEARNING TECH

Introduction

Distilling a generally accepted definition of what qualifies as artificial intelligence (AI) has become a revived topic of debate in recent times. Some have rebranded AI as "cognitive computing" or "machine intelligence", while others incorrectly interchange AI with "machine learning". This is in part because AI is not one technology. It is in fact a broad field constituted of many disciplines, ranging from robotics to machine learning. The ultimate goal of AI is to build machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. In order to get there, machines must be able to learn these capabilities automatically instead of having each of them explicitly programmed end-to-end. It's amazing how much progress the field of AI has achieved over the last 10 years, ranging from self-driving cars to speech recognition and synthesis. Against this backdrop, AI has become a topic of conversation in more and more companies and households that have come to see AI as a technology that isn't another 20 years away, but as something that is impacting their lives today.

Five AI & machine learning tech to look out for

Indeed, the popular press reports on AI almost every day, and technology giants, one by one, articulate their significant long-term AI strategies. While several investors and incumbents are eager to understand how to capture value in this new world, the majority are still scratching their heads to figure out what this all means. Meanwhile, governments are grappling with the implications of automation in society. Given that AI will impact the entire economy, actors in these conversations represent the entire distribution of intents, levels of understanding and degrees of experience with building or using AI systems. As such, it's crucial for a discussion on AI, including the questions, conclusions and recommendations derived therefrom, to be grounded in data and reality, not conjecture. It's far too easy (and sometimes exciting!) to wildly extrapolate the implications of results from published research or tech press announcements, speculative commentary and thought experiments.

01 Reinforcement learning (RL) :


RL is a paradigm for learning by trial-and-error inspired by the way humans learn new tasks. In a typical RL setup, an agent is tasked with observing its current state in a digital environment and taking actions that maximise accrual of a long-term reward it has been set. The agent receives feedback from the environment as a result of each action such that it knows whether the action promoted or hindered its progress. An RL agent must therefore balance the exploration of its environment to find optimal strategies of accruing reward with exploiting the best strategy it has found to achieve the desired goal. This approach was made popular by Google DeepMind in their work on Atari games and Go. An example of RL working in the real world is the task of optimising energy efficiency for cooling Google data centers. Here, an RL system achieved a 40% reduction in cooling costs. An important native advantage of using RL agents in environments that can be simulated (e.g. video games) is that training data can be generated in troves and at very low cost. This is in stark contrast to supervised deep learning tasks that often require training data that is expensive and difficult to procure from the real world.
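A toy, hedged illustration of that trial-and-error loop: tabular Q-learning with epsilon-greedy exploration on a tiny corridor environment. Real systems like the ones cited use deep networks rather than a table, but the explore/exploit/update cycle is the same idea.

```python
# Toy Q-learning sketch: an agent in a 6-state corridor learns to walk right
# toward a rewarded goal state by trial and error.
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward only when reaching state 5
ACTIONS = [-1, +1]             # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore sometimes, otherwise exploit the best known action.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 3) for q in Q])   # values grow toward the goal; the greedy policy is to go right
```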

Applications: Multiple agents learning in their own instance of an environment with a shared model or by interacting and learning from one another in the same environment, learning to navigate 3D environments like mazes or city streets for autonomous driving, inverse reinforcement learning to recapitulate observed behaviours by learning the goal of a task (e.g. learning to drive or endowing non-player video game characters with human-like behaviours). Principal Researchers: Pieter Abbeel (OpenAI), David Silver, Nando de Freitas, Raia Hadsell, Marc Bellemare (Google DeepMind), Carl Rasmussen (Cambridge), Rich Sutton (Alberta), John Shawe-Taylor (UCL) and others. Companies: Google DeepMind, Prowler.io, Osaro, MicroPSI, Maluuba/Microsoft, NVIDIA, Mobileye, OpenAI.

02 Generative models :


In contrast to discriminative models that are used for classification or regression tasks, generative models learn a probability distribution over training examples. By sampling from this high-dimensional distribution, generative models output new examples that are similar to the training data. This means, for example, that a generative model trained on real images of faces can output new synthetic images of similar faces. For more details on how these models work, see Ian Goodfellow's awesome NIPS 2016 tutorial write-up. The architecture he introduced, generative adversarial networks (GANs), is particularly hot right now in the research world because it offers a path towards unsupervised learning. With GANs, there are two neural networks: a generator, which takes random noise as input and is tasked with synthesising content (e.g. an image), and a discriminator, which has learned what real images look like and is tasked with identifying whether images created by the generator are real or fake. Adversarial training can be thought of as a game where the generator must iteratively learn how to create images from noise such that the discriminator can no longer distinguish generated images from real ones. This framework is being extended to many data modalities and tasks.
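A hedged sketch of that generator-versus-discriminator game, shrunk to one-dimensional data so it stays readable; it assumes PyTorch and illustrates the training loop, not a production GAN.

```python
# Toy GAN sketch: the generator learns to mimic samples from N(4, 1.25)
# while the discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(3000):
    real = torch.randn(64, 1) * 1.25 + 4.0            # samples from the "true" distribution
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the discriminator: real -> 1, generated -> 0.
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```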

Applications: Simulate possible futures of a time-series (e.g. for planning tasks in reinforcement learning); super-resolution of images; recovering 3D structure from a 2D image; generalising from small labeled datasets; tasks where one input can yield multiple correct outputs (e.g. predicting the next frame in a video); creating natural language in conversational interfaces (e.g. bots); cryptography; semi-supervised learning when not all labels are available; artistic style transfer; synthesising music and voice; image in-painting. Companies: Twitter Cortex, Adobe, Apple, Prisma, Jukedeck*, Creative.ai, Gluru*, Mapillary*, Unbabel. Principal Researchers: Ian Goodfellow (OpenAI), Yann LeCun and Soumith Chintala (Facebook AI Research), Shakir Mohamed and Aäron van den Oord (Google DeepMind), Alyosha Efros (Berkeley) and many others.

03 Networks with memory :


In order for AI systems to generalise in diverse real-world environments just as we do, they must be able to continually learn new tasks and remember how to perform all of them into the future. However, traditional neural networks are typically incapable of such sequential task learning without forgetting. This shortcoming is termed catastrophic forgetting. It occurs because the weights in a network that are important to solve for task A are changed when the network is subsequently trained to solve for task B. There are, however, several powerful architectures that can endow neural networks with varying degrees of memory. These include long short-term memory (LSTM) networks (a recurrent neural network variant) that are capable of processing and predicting time series, DeepMind's differentiable neural computer that combines neural networks and memory systems in order to learn from and navigate complex data structures on their own, the elastic weight consolidation algorithm that slows down learning on certain weights depending on how important they are to previously seen tasks, and progressive neural networks that learn lateral connections between task-specific models to extract useful features from previously learned networks for a new task.
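As a small illustration of sequence memory, here is a hedged PyTorch sketch of an LSTM learning to predict the next value of a sine wave from the preceding window; the architecture and hyperparameters are arbitrary choices for the example.

```python
# Toy LSTM sketch: predict the next value of a sine wave from the 20 values before it.
import math
import torch
import torch.nn as nn

# Build (input window, next value) pairs from a sine wave.
series = torch.tensor([math.sin(0.1 * i) for i in range(400)])
window = 20
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = torch.stack([series[i + window] for i in range(len(series) - window)]).unsqueeze(-1)

class NextValue(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # the hidden state carries memory across the window
        return self.head(out[:, -1])   # predict from the last time step

model = NextValue()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training MSE:", loss.item())
```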

Applications: Learning agents that can generalise to new environments; robotic arm control tasks; autonomous vehicles; time series prediction (e.g. financial markets, video, IoT); natural language understanding and next word prediction. Companies: Google DeepMind, NNaisense (?), SwiftKey/Microsoft Research, Facebook AI Research. Principal Researchers: Alex Graves, Raia Hadsell, Koray Kavukcuoglu (Google DeepMind), Jürgen Schmidhuber (IDSIA), Geoffrey Hinton (Google Brain/Toronto), James Weston, Sumit Chopra, Antoine Bordes (FAIR).

04 Learning from less data and building smaller models :


Deep learning models are notable for requiring enormous amounts of training data to reach state-of-the-art performance. For example, the ImageNet Large Scale Visual Recognition Challenge, on which teams challenge their image recognition models, contains 1.2 million training images hand-labeled with 1,000 object categories. Without large-scale training data, deep learning models won't converge on their optimal settings and won't perform well on complex tasks such as speech recognition or machine translation. This data requirement only grows when a single neural network is used to solve a problem end-to-end; that is, taking raw audio recordings of speech as the input and outputting text transcriptions of the speech. This is in contrast to using multiple networks each providing intermediate representations (e.g. raw speech audio input → phonemes → words → text transcript output; or raw pixels from a camera mapped directly to steering commands). If we want AI systems to solve tasks where training data is particularly challenging, costly, sensitive, or time-consuming to procure, it's important to develop models that can learn optimal solutions from fewer examples (i.e. one-shot or zero-shot learning). When training on small data sets, challenges include overfitting, difficulties in handling outliers, and differences in the data distribution between training and test sets.

An alternative approach is to improve learning of a new task by transferring knowledge a machine learning model acquired from a previous task, using processes collectively referred to as transfer learning. A related problem is building smaller deep learning architectures that reach state-of-the-art performance using a similar number of, or significantly fewer, parameters. Advantages would include more efficient distributed training because less data needs to be communicated between servers, less bandwidth needed to export a new model from the cloud to an edge device, and improved feasibility in deploying to hardware with limited memory. Applications: Training shallow networks by learning to mimic the performance of deep networks originally trained on large labeled training data; architectures with fewer parameters but equivalent performance to deep models (e.g. SqueezeNet); machine translation. Companies: Geometric Intelligence/Uber, DeepScale.ai, Microsoft Research, Curious AI Company, Google, Bloomsbury AI. Principal Researchers: Zoubin Ghahramani (Cambridge), Yoshua Bengio (Montreal), Josh Tenenbaum (MIT), Brendan Lake (NYU), Oriol Vinyals (Google DeepMind), Sebastian Riedel (UCL).
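A hedged sketch of the transfer-learning idea, assuming PyTorch/torchvision: reuse an ImageNet-pretrained network, freeze its feature extractor, and train only a small new head on a small labeled dataset. The dataset path and class count are hypothetical.

```python
# Transfer-learning sketch: reuse pre-trained ImageNet features, retrain only a new head.
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

model = models.resnet18(weights="IMAGENET1K_V1")   # knowledge acquired on a previous task
for param in model.parameters():
    param.requires_grad = False                    # keep the pre-trained features fixed

model.fc = nn.Linear(model.fc.in_features, 3)      # new head for our 3 hypothetical classes

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)   # hypothetical small dataset
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)  # only the new head is trained
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```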

05 Hardware for training and inference :


A major catalyst for progress in AI is the repurposing of graphics processing units (GPUs) for training large neural network models. Unlike central processing units (CPUs), which compute in a sequential fashion, GPUs offer a massively parallel architecture that can handle multiple tasks concurrently. Given that neural networks must process enormous amounts of (often high-dimensional) data, training on GPUs is much faster than with CPUs. This is why GPUs have veritably become the shovels to the gold rush ever since the publication of AlexNet in 2012, the first neural network implemented on a GPU. NVIDIA continues to lead the charge into 2017, ahead of Intel, Qualcomm, AMD and more recently Google. However, GPUs were not purpose-built for training or inference; they were created to render graphics for video games. GPUs have high computational precision that is not always needed, and they suffer memory bandwidth and data throughput issues. This has opened the playing field for a new breed of startups and projects within large companies like Google to design and produce silicon specifically for high-dimensional machine learning applications. Improvements promised by new chip designs include larger memory bandwidth, computation on graphs instead of vectors (GPUs) or scalars (CPUs), higher compute density, and better efficiency and performance per Watt.

This is exciting because of the clear accelerating returns AI systems deliver to their owners and users: faster and more efficient model training → better user experience → the user engages with the product more → a larger data set is created → model performance improves through optimisation. Thus, those who are able to train faster and deploy AI models that are computationally and energy efficient are at a significant advantage. Applications: Faster training of models (especially on graphs); energy and data efficiency when making predictions; running AI systems at the edge (IoT devices); always-listening IoT devices; cloud infrastructure as a service; autonomous vehicles, drones and robotics. Companies: Graphcore, Cerebras, Isocline Engineering, Google (TPU), NVIDIA (DGX-1), Nervana Systems (Intel), Movidius (Intel), Scortex.
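A minimal, hedged way to see the CPU/GPU difference for yourself, assuming PyTorch: time the same large matrix multiplication on both devices. Absolute numbers depend entirely on the hardware at hand.

```python
# Time one large matrix multiply on the CPU and, if available, on a GPU.
import time
import torch

def time_matmul(device, size=4096):
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()              # make sure setup has finished before timing
    start = time.time()
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()              # wait for the asynchronous GPU kernel to complete
    return time.time() - start

print("cpu :", time_matmul(torch.device("cpu")))
if torch.cuda.is_available():
    print("cuda:", time_matmul(torch.device("cuda")))
else:
    print("no GPU available on this machine")
```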

7 CHATBOTS

Introduction

Chatbots can schedule meetings, tell you the weather, and provide customer support. And that's just the beginning. Want to order pizza, schedule a meeting, or even find your true love? There's a chatbot for that. Just as apps were the hot new thing that would solve whatever problem you had back in 2009, now we're moving into the age of chatbots. Chatbots make life even easier for consumers. With chatbots, there are no more long waits on hold to talk to a person on the phone, and no more going through multiple steps to research and complete a purchase on websites.

Ten amazing ways chatbots are making life easier

Millions of people already get it. They’re using chatbots to contact retailers, get recommendations, complete purchases, and much more. Adoption of chatbots is increasing. People are discovering the benefits of chatbots. All of this is good news for entrepreneurs and businesses because pretty much any website or app can be turned into a bot. Now is the perfect time to hop on the bandwagon.

01 Order Pizza:

It's ridiculously easy to order pizza with the help of chatbots. You can order by texting, tweeting, voice, or even from your car.

Domino's was one of the early adopters of chatbots. Today, Domino's lets you easily build a new pizza (or reorder your favorite pizza) and track your order, all from Facebook Messenger.

02 Product Suggestions:

Many consumers know they want to buy some shoes, but might not have a particular item in mind. You can use chatbots to offer product suggestions based on what they want (color, style, brand, etc.).

It's not just shoes. You can replace "shoes" with any other item: clothes, groceries, flowers, a book, or a movie. Basically, any product you can think of. For example, tell H&M's Kik chatbot about a piece of clothing you have and it will build an outfit for you.

03 Customer Support:

Last year, brands including Airbnb, Evernote, and Spotify started using chatbots on Twitter to provide 24/7 customer service.

The goal of these customer support chatbots is to quickly provide answers, address customer complaints, or simply track the status of an order.

04 Weather:

There are numerous weather bots to choose from. Most are pretty basic, though a few are designed to be a bit more fun.

You can use these to ask about the current conditions in your area and find out whether you should bring the umbrella before you leave for work. Some bots allow you to set regular reminders for a certain time of day.

05 Personal Finance Assistance:

Chatbots make it easy to make trades, get notifications about stock market trends, track your personal finances, or even get help finding a mortgage.

Banks have created chatbots that let you check in on your account, such as your current balance and most recent transactions. And there are tax bots that help you track your business and deductible expenses.

06 Schedule a Meeting:

With so many schedules to juggle, setting up meetings can be a pain. Unless you let a chatbot do the work for you. Meekan is one such example.

Simply request a new meeting and this Slack chatbot will look at everyone's calendars to find times when everyone is available.

07 Search for & Track Flights:

You can use chatbots to get some vacation inspiration. Others will let you search for and compare flights based on price and location. Kayak's chatbot even lets you book your flights and hotels entirely from inside Facebook Messenger.

Once you're all booked, there are other chatbots that will let you track current flights, wait times, delays, and more.

08 News:

Chatbots help you stay up to date on the news or topics that matter to you. You can get the latest headlines from mainstream media sources like CNN, Fox News, or the Guardian, or the latest tech headlines from TechCrunch or Engadget.

09 Find Love:

A match made by chatbots? It could happen. Instead of swiping left or right on an app, you could use Foxsy.

This Messenger bot promises to help you find a "beautiful and meaningful connection with the right person."

10 Send Money:

You can easily send payments to your team or friends with chatbots.

All you have to do to send money with PayPal on Slack is type /paypal send $X to @username.