From Left: Open University of Sri Lanka Electrical & Computer Engineering Head Dr. Ajith Madurapperuma, Zone24x7 Software Architect Rashan Peiris, VirtusaPolaris Associate Director Janaka Pitadeniya, and Informatics Institute of Technology Dean Dr. Ruvan Weerasinghe
By Chandeepa Wettasinghe
While the words artificial intelligence (AI) may conjure up popular science fiction narratives, the recently held Colombo AI Meetup highlighted how most people now come into contact with AI on a daily, or even hourly, basis.
Organized by the Sri Lanka Association for Artificial Intelligence in partnership with VirtusaPolaris, the AI Meetup Series familiarizes participants with the ins and outs of real world AI applications as well as academic research into the subject.
The latest edition saw the gathering addressed by Informatics Institute of Technology Dean Dr. Ruvan Weerasinghe, who is an expert in natural language processing, and Zone24x7 Software Architect Rashan Peiris, who is an expert in retail applications, both of whom have had international exposure in AI.
The following was gleaned after leaving out the highly technical discussions that dominated the session.
Natural language processing
Examples of natural language processing can be seen in voice recognition software on computers and smart phones, with applications like Apple’s Siri, as well as in word prediction in texting apps and on web browsers.
Dr. Weerasinghe said that what initially started as creating computer programmes to mimic how humans speak and process language turned into statistical modelling as the tech industry revolutionized the process.
He noted that initial attempts to ask ordinary people how language is processed failed: native speakers draw on a great deal of cultural background to process their mother tongue, and while people can tell when another speaker makes a mistake, they are not technically equipped to explain how and why it is wrong.
“So we said let’s ask the next group of people who know. Linguists. So they told us there’s a process, and we liked it because we’re people who like to solve big problems. First there’s phonetics. We used software to ‘see’ our voices, but even with spectrograms it’s really messy. We think we’re keeping spaces when we speak but it’s a continuous signal. So it’s a challenge of speech recognition,” he said.
Further, Dr. Weerasinghe noted that spoken words are harder to process due to varied accents and languages. He also noted the irregular grammar of languages, where rules hold only partially; for example, with prefixes, ‘unregulated’ is the opposite of ‘regulated’, yet the opposite of ‘regular’ is ‘irregular’, not ‘unregular’.
“Then once you have the words and word forms, you need syntax. We compiled thousands of parse trees with grammar, to capture sentence structure. But once we get the form, we need the meaning, which is semantics, and for sentences we need compositional semantics,” he said.
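The parse trees Dr. Weerasinghe mentions can be pictured as nested structures that group words into phrases. A minimal sketch in Python, using an invented toy sentence and phrase labels purely for illustration:

```python
# A toy parse tree for "the cat sat", written as nested tuples:
# (label, child, child, ...) for phrases; plain strings are words.
tree = ("S",
        ("NP", ("DET", "the"), ("N", "cat")),
        ("VP", ("V", "sat")))

def leaves(node):
    # Walk the tree and collect the words at the leaves, in order.
    if isinstance(node, str):
        return [node]
    label, *children = node
    return [word for child in children for word in leaves(child)]

leaves(tree)  # recovers the original sentence words, in order
```

Reading the leaves back out gives the original sentence, while the nesting records which words form a noun phrase, a verb phrase, and so on.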
He explained that attempts to encode a lexicon/dictionary to build the meaning of logical sentences were successful, but that progress stalled there.
“The process doesn’t stop there unfortunately. We don’t just speak unrelated sentences. Almost every sentence you speak is related to the previous sentence. This works fine in a monologue or face to face, but on the internet or in dialogue or a multiparty conversation we mess it up. Turn taking is a really complicated thing,” he said.
Finally, he said that the literal meaning of words and their meaning in real life differed too, complicating matters further. For example, the question ‘Can you close the door?’ could ask whether the person is physically capable of closing the door, but in reality it is a request to close it.
“So after doing all these, the people got really tired. We have to do all this to process language. Then some bright folks came and said, ‘We don’t have to ask the linguists. We just look at the body of the text.’ The lead scientist at IBM did most of the work. Now we talk about Google and Apple, but all that stuff was done then at IBM. He had a team with linguists, psychologists, programmers and all. He said, ‘Whenever I fire a linguist, my recognition rates go up.’ Very controversial at that point,” Dr. Weerasinghe said.
He said that dropping the requirement that computers should learn languages the way humans do disrupted the process, paving the way for the statistics based machine learning systems used today.
“Machine learning is statistics: finding the probability of words. We count how many times one word is followed by another, divided by how many times the first word occurs, and you get an estimate of the probability of a word given the previous word. You can apply this to any word. The probability of a sentence then comes from the probabilities of the word pairs it contains. Suddenly we started getting really good language modelling,” he added.
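The counting procedure Dr. Weerasinghe describes is a bigram language model. A minimal sketch in Python, with a made-up three-sentence corpus standing in for the large text collections he refers to:

```python
from collections import Counter

def train_bigram(sentences):
    # Count each word and each adjacent word pair in the corpus.
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        words = sent.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))

    def prob(prev, word):
        # P(word | prev) is estimated as count(prev word) / count(prev).
        return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

    return prob

p = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
p("the", "cat")  # how often "cat" follows "the" in the corpus
```

As the talk notes, such estimates only become reliable with very large corpora, which is why companies holding vast amounts of text dominate the field.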
However, he said that the initial successes only applied to common sentence structures. In order to account for different structures, languages, speech, etc., programmers created supervised and unsupervised ‘deep learning’ programmes where computers were provided additional data to create better modelling, such as when Google recently provided Sinhala translation after locals helped to input various Sinhala translations and grammars into Google servers.
“For translations and deep learning we need more data and bigger and better algorithms. Why big companies like Google and Apple are now big in machine learning is because they have zillions of data,” he said.
Artificial intelligence in retail
Peiris said that the largest department stores in the world had long seen their profit margins shrink, and that even engaging in price wars amongst themselves by providing discounts did not make customers happy.
“So they spent money on research to find out how to increase revenue. They have aisles of products, but it’s very boring. There’s very little interaction with the store, and it’s also very difficult to find the product because department stores have a large area, with specific products having specific areas. Even their websites had little interaction and were boring,” he said.
He noted that the difficulty of shopping, combined with the 30-day return policies in those countries, led to huge losses, as further discounts had to be given on the returned, refurbished products.
“Firstly they tried a financial solution. Lower prices. That didn’t work. So they needed a different way of looking at this problem. So we came up with a retail platform that can sense a customer, and sense the needs and wants of the customer, and give them informed decisions to make them happy. A retail platform can do all of this, but it should know when, where and what to present,” Peiris said.
He said that applying AI to retail started with natural language processing as well, by analyzing customers’ web searches to show ‘recommended products’ or ‘others who purchased this product also bought this’ panels on store websites. This was eventually expanded into social media, and into using e-mails to remind customers to purchase an unsold product, or to drive repeat purchases.
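The ‘others who purchased this product also bought this’ panel is, at its simplest, co-purchase counting. A minimal sketch in Python, with invented product names and shopping baskets for illustration (real systems use far more signals than this):

```python
from collections import Counter
from itertools import combinations

def co_purchase_recommender(baskets):
    # Count how often each pair of products appears in the same basket.
    pair_counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1

    def recommend(product, k=3):
        # Rank other products by how often they were bought together
        # with the given product.
        scores = Counter()
        for (a, b), n in pair_counts.items():
            if a == product:
                scores[b] += n
            elif b == product:
                scores[a] += n
        return [p for p, _ in scores.most_common(k)]

    return recommend

rec = co_purchase_recommender([
    ["bread", "milk"],
    ["bread", "milk", "eggs"],
    ["bread", "eggs"],
])
rec("bread")  # products most often bought alongside bread
```

The same counts drive the e-mail reminders mentioned above: a product frequently co-purchased with a customer’s past orders is a natural candidate for a follow-up offer.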
Peiris added that this has now been developed into physical in-store experiences, where a consumer’s phone is sensed by computers at the store, with large screens at store entrances displaying tailor-made advertisements depending on the tastes of each customer.
He went on to say that each retailer’s smartphone app also provides pricing and location of different products, making shopping easier.
Peiris noted that such a retail platform could address common problems that even Sri Lankan consumers suffer daily, where store workers either hover over shoppers or are unresponsive, and price tags are absent.
“We have different channels and different sensors to communicate with and sense customer requirements in different environments. They track the consumers, locations and promotions. Each application, each component may not have a high level of intelligence, but as a collective organism, it has a new level of intelligence. This is where the concept of super organism comes in. This is the thinking behind this platform,” he said.
While Dr. Weerasinghe said that what was achieved in the recent decade, which he termed the Second AI Revolution, was impressive, he noted that there was a big gap between it and the First AI Revolution of the 1980s and 1990s. He said that the trend of AI development coming in spurts is likely to continue in the future as well.