Dataset schema (column: type, range/classes):
article_id: int64, 6 to 10.2M
title: string, lengths 6 to 181
content: string, lengths 1.17k to 62.1k
excerpt: string, lengths 7 to 938
categories: string, 18 classes
tags: string, lengths 2 to 806
author_name: string, 605 classes
publish_date: date, 2012-05-21 07:44:37 to 2025-07-11 00:01:12
publication_year: date, 2012-01-01 00:00:00 to 2025-01-01 00:00:00
word_count: int64, 200 to 9.08k
keywords: string, lengths 38 to 944
extracted_tech_keywords: string, lengths 32 to 191
url: string, lengths 43 to 244
complexity_score: int64, 1 to 4
technical_depth: int64, 2 to 10
industry_relevance_score: int64, 0 to 7
has_code_examples: bool, 2 classes
has_tutorial_content: bool, 2 classes
is_research_content: bool, 2 classes
10,065,380
How to handle dynamic data with chaotic neural networks?
In data science, we often encounter the chaotic nature of the environment. This environment consists of data, layers, mathematics, and a lot of other things. In normal practice, we tend to use standard neural networks, which are static, against problems with dynamic or chaotic behaviour. Chaotic neural networks are specialised in dealing with the dynamic and chaotic nature of the environment and the data. In this article, we are going to discuss the chaotic neural network. The major points to be discussed in the article are listed below. Table of contents: What is a chaotic neural network? Mathematics behind chaotic responses. Modeling the chaotic responses. Where are the chaotic neural networks used? The architecture of the chaotic neural network. Let’s start by introducing chaotic neural networks. What is a chaotic neural network? In general English, the word chaotic can be explained as a state of complete confusion and disorder. In technology and the industrial world, we can see implementations of the chaotic phenomenon everywhere. In data science, we can say that if a model has complex dynamics and an abundant ability to detect chaotic patterns, it can be applied to neurocomputing much more effectively. We can clarify the significance of chaotic phenomena in neural networks by taking the example of an artificial neural network, where a chaotic neural network can be used to measure its dynamic characteristics. Just like other networks, this network also has mathematical aspects, or we can say it is also a kind of mathematical model. Talking about chaotic dynamics, we can find one example in nerve membranes. Many experiments have determined that real neuron membranes in the resting state respond to periodic pulse stimulation not just synchronously, but also chaotically, depending on the intensity and timing of the stimulating pulses. There are equations like the Hodgkin-Huxley equation and the FitzHugh-Nagumo equation that can be used to analyze such dynamics in neurocomputing. We can say that this network is the network that can be used in the analysis of the chaotic nature of networks and data. Let’s take a look at a simple network that can generate chaotic responses. Mathematics behind chaotic responses In this section, we will look at how a simple neural network generates chaotic responses and how we can model them to make them synchronous. We discussed the FitzHugh-Nagumo equation, which is one method to model chaotic responses. A simple chaotic neuron model assumes that the chaotic nature of past data increases the chaotic nature of the present, and it can be given by the following equation: x(t+1) = u(A(t) - Σ_{d=0}^{t} k^d x(t-d) - θ), where u is the unit step function, A(t) is the variation in the magnitude of the stimulation strength at time t, k is the damping factor, and θ is the threshold. Here we can define a new internal-state variable using the following formula: y(t+1) = A(t) - Σ_{d=0}^{t} k^d x(t-d) - θ, so that x(t+1) = u(y(t+1)). Here we can see how to calculate the chaotic internal state of the neuron. Now let’s see how we can model it. Modeling the chaotic responses In the above explanation, we have seen that neurons with such chaotic dynamics can be made part of a neural network, which can then be called a chaotic neural network. Such a network has two kinds of inputs: one is feedback input and the other is externally applied input.
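The recursion above is easy to simulate directly. Below is a minimal Python sketch of a single chaotic neuron following x(t+1) = u(A(t) - Σ k^d x(t-d) - θ); the parameter values and the constant stimulation sequence are illustrative choices, not taken from the article.

```python
import numpy as np

def unit_step(z):
    """u(z): 1 if z >= 0, else 0."""
    return 1.0 if z >= 0 else 0.0

def chaotic_neuron(A, k=0.7, theta=0.5, x0=0.0):
    """Iterate x(t+1) = u(A(t) - sum_{d=0}^{t} k^d * x(t-d) - theta).

    A     : sequence of external stimulation strengths A(t)
    k     : damping factor of the refractory (history) term, 0 < k < 1
    theta : firing threshold
    x0    : initial output x(0)
    Returns the firing sequence x and the internal state y.
    """
    x, y = [x0], []
    hist = x0                        # damped history sum: sum_{d=0}^{t} k^d x(t-d)
    for a in A:
        y_next = a - hist - theta    # internal state y(t+1)
        x_next = unit_step(y_next)   # output x(t+1)
        y.append(y_next)
        x.append(x_next)
        hist = x_next + k * hist     # update the damped sum for the next step
    return np.array(x), np.array(y)

# Constant stimulation: whether the firing pattern is periodic or chaotic
# depends on the stimulation strength and the damping factor.
firing, state = chaotic_neuron(A=np.full(50, 1.0), k=0.7, theta=0.5)
print(firing[:20])
```

The accumulator `hist` uses the identity S(t) = x(t) + k·S(t-1), so the damped sum over the whole history is maintained in constant time per step.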
A network of M such chaotic neurons can be modeled in terms of similar dynamics. This model has three properties: it has a continuous output function; it can perform a spatio-temporal summation of both types of input; and it exhibits relative refractoriness. Expanding the model can help us in dealing with the chaotic nature of neurons. With a few expansions, this network resembles a discrete-time neural network, which can then be converted into a back-propagation network, which means that, in the whole scenario, we are able to model chaotic neurons as a natural extension of neural networks. Where are the chaotic neural networks used? In the last section, we discussed how chaotic dynamics can be generated in the neurons of neural networks and used to model the chaotic nature of data. Some of the uses of this network can be found in the following fields. We can utilize such networks in motion control systems, where the change in the position of any object is determined by a motion function, and the calculation of that function can be derived from the idea of this network. These networks can be used in combinatorial optimization problems: since standard neural networks have a tendency to become trapped in local minima, the chaotic nature of the neurons can help them escape from local minima. Adding a dynamic nature to the neurons makes the network promising against weak or fluctuating input signals; in a variety of cases, such as fluctuating time series and audio data, these networks are very helpful in making inferences. Internal parameters of standard neural networks are not capable of adjusting to the outside system; such a system can facilitate the biomedical applications of chaotic resonance. These networks can be used in solving problems like the travelling salesman problem, traffic detection problems, etc. The architecture of the chaotic neural network As discussed above, the architecture of such a network can be similar to that of a standard neural network, but the main difference between them is that in the layers of a chaotic neural network we find chaotic maps that follow the rule of the generalized Luroth series (GLS). Applying such maps to the network enhances its capability for compression, cryptography, and computing XOR and other logical operations. [Figure in the original article: architecture of the GLS chaotic neural network.] Since we are focusing on the chaotic neural network, we only need to understand the middle portion of that figure, which represents the architecture of the GLS chaotic neural network that can be utilized for classification tasks. In this kind of architecture, the firing of a GLS neuron is chaotic: its activity value starts from q, an initial neural activity, and the neuron halts when its activity reaches the stimulus. Final words In this article, we discussed chaotic neural networks, which handle the chaotic nature of the environment. Along with this, we have also gone through the mathematics behind this type of modelling and its use cases. We have also discussed its architecture based on a network that was designed to deal with classification problems.
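As a rough illustration of the GLS-neuron behaviour described in the architecture section above, here is a sketch of how a single neuron's firing time might be computed: the neuron starts at an initial activity q, iterates a chaotic map, and halts once its activity lands within a small neighbourhood of the stimulus value. The skew-tent map used here is one common instance of a GLS map, and all parameter values (q, b, eps) are assumptions for illustration, not taken from the article.

```python
def skew_tent_map(x, b=0.47):
    """One instance of a GLS (generalized Luroth series) map on [0, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def gls_firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=10_000):
    """Iterate the chaotic map from initial activity q until the activity
    enters the eps-neighbourhood of the stimulus; return the number of
    iterations (the 'firing time'), or None if it never halts."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:   # activity has reached the stimulus: halt
            return n
        x = skew_tent_map(x, b)
    return None

# Each input feature (normalised to [0, 1)) maps to a firing time, which can
# then serve as the neuron's extracted feature for a downstream classifier.
print([gls_firing_time(s) for s in (0.12, 0.55, 0.91)])
```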
We can clarify the significance of chaotic phenomena in neural networks by taking the example of an artificial neural network, where a chaotic neural network can be used to measure its dynamic characteristics.
["AI Trends"]
["AI (Artificial Intelligence)", "Data Science", "Data Scientist", "Deep Learning", "Machine Learning", "Python"]
Yugesh Verma
2022-04-21T11:00:00
2022
1,099
["data science", "Go", "API", "TPU", "AI", "neural network", "Machine Learning", "Python", "ViT", "programming_languages:Python", "Deep Learning", "Data Science", "Data Scientist", "R", "AI (Artificial Intelligence)"]
["AI", "neural network", "data science", "TPU", "Python", "R", "Go", "API", "ViT", "programming_languages:Python"]
https://analyticsindiamag.com/ai-trends/how-to-handle-dynamic-data-with-chaotic-neural-networks/
3
10
0
true
true
true
10,129,618
Zoom Wants to be More Than Just a Video-Conferencing Platform
Zoom, the video conferencing giant, recently announced the launch of Zoom Workplace, an AI-powered collaboration platform designed to transform how teams work together. While every enterprise is salivating at the thought of AI giving a massive push to their bottom line, Zoom has been compelled to adopt it facing intense competition from the likes of MegaMeeting, and Cisco’s Webex – each with their own unique selling points. Zoom is transitioning from being just a meeting application to a comprehensive collaboration platform called Zoom Workplace. In addition to its core video conferencing capabilities, the platform now includes features such as mail, chat, calendaring, and integrations with over 2,500 third-party applications in the Zoom App Marketplace. In an exclusive interview, Ricky Kapur, the head of APAC at Zoom, unveiled the company’s ambitious plans for growth, AI integration, and its strategies to penetrate tier 2 & 3 cities in emerging markets like India. The Diversification Strategy Kapur emphasised Zoom’s commitment to a federated AI model, leveraging multiple providers. “We use multiple models. We use OpenAI’s models and models from Anthropic. We allow you to bring your own model, which means you can bring your colloquialism into it,” Kapur explained. The company’s AI strategy extends beyond just using natural language processing during meetings for real-time translation and transcription. Zoom is integrating AI capabilities across its entire product suite, including AI Companion, a digital assistant that automates tasks, prepares meeting summaries, and aids in content composition. The platform also offers translation and transcription services for multilingual meetings and AI-powered insights for customer service agents using Zoom Contact Center. Moreover, since it’s repositioning itself, the company now also offers business services solutions for marketing (Zoom Events). Kapur shared examples of how companies are using this solution to reimagine customer interactions, particularly in industries like banking and retail. One notable example Kapur provided was a large banking group in Indonesia that has implemented Zoom Contact Center to provide insurance advisory services in tier 2 and tier 3 cities. The bank has set up kiosks where customers can connect with advisors via video, providing a more personal and efficient service experience. They have established a network of well-equipped kiosks across the city. Each kiosk is equipped with a Zoom Contact Center and a Zoom Room, featuring a large display. “With just a click of a button, customers can connect with an advisor who guides them through a series of questions to determine their insurance needs and provide tailored advice,” said Kapur. This diversification strategy aims to capture a larger share of the enterprise software market and compete with established players in the productivity space. Competitors and a Race at Differentiation Zoom is not alone in this race, several major companies are following similar strategies, particularly in the unified communications and collaboration space. For instance, Microsoft Teams has positioned itself as a comprehensive collaboration platform that goes well beyond video meetings. Like Zoom, Teams integrates chat, video conferencing, and file sharing capabilities. However, Teams has a significant advantage in its deep integration with the Microsoft 365 suite, including Word, Excel, and PowerPoint. 
This integration allows for seamless collaboration on documents within the Teams environment. Microsoft has also incorporated phone system capabilities and supports a wide range of third-party app integrations, similar to Zoom’s approach. Cisco Webex has also evolved into an all-in-one collaboration platform that closely resembles Zoom’s full platform strategy. Webex offers video meetings, messaging, and calling features, as well as digital whiteboarding capabilities. Like Zoom, Webex has expanded into the webinar and events space. Cisco has also heavily invested in AI-powered features, including real-time translation and transcription, which aligns with Zoom’s focus on AI integration. Google Workspace takes a slightly different approach by offering a suite of integrated tools that work together seamlessly. While not as focused on video conferencing as Zoom, Google Meet is tightly integrated with other Workspace components like Gmail, Google Chat, and Google Docs. This integration allows for a cohesive workflow across communication and productivity tools, similar to Zoom’s vision of a comprehensive platform. But Kapur stressed that the key differentiator in customer experience lies not in AI agents alone, but in the smooth handover between AI and human agents, coupled with AI-enhanced support for human representatives. “Consider a scenario where you’re facing internet connectivity issues. An AI agent can start the conversation and collect initial details, but it might hit a wall with intricate troubleshooting,” Kapur elaborated. “This is where the seamless handover to a human agent becomes invaluable, particularly when visual guidance is necessary.” He highlighted the advantage of video support in such situations, allowing customers to show their setup to agents for more effective problem-solving. Kapur noted that many existing systems lack the flexibility to switch between chat, voice, and video without requiring additional software installations for customers. Furthermore, he detailed how Zoom is enhancing human agent capabilities through AI. “We’re using advanced algorithms to analyse past tickets, conduct sentiment analysis, and provide agents with relevant product information from knowledge bases,” he said. This approach aims to equip human agents with comprehensive insights to resolve customer issues more efficiently. Focusing on SMEs and Emerging Markets The APAC head also emphasised the cost-effectiveness and simplicity of Zoom’s platform for SMEs, particularly in emerging economies like India. “We charge $20, $25 a month, whereas a competitor in that space charges $70-75. This saves an SME $50 a month for a full capability,” Kapur stated. A significant focus of Zoom’s growth strategy is capturing the SME market in emerging economies. The company is tailoring its approach to meet the unique needs of tier 2 and tier 3 cities in countries like India. Kapur outlined several initiatives, including partnering with local companies to integrate cultural and linguistic nuances into their AI models. Zoom is also exploring opportunities in sectors such as education, healthcare, and agriculture. The company is working with edtech firms to embed its video SDK into their platforms, enabling seamless integration of video conferencing capabilities for online learning. Kapur added, “We’re collaborating with healthcare providers to integrate our video platform into their existing health management systems. 
This initiative extends to tier 2 and tier 3 cities, where we see tremendous potential for improving patient care and connectivity.” The company is also exploring innovative use cases in agriculture, such as integrating with Skylark Drones in India, where Zoom’s technology is being used for aerial inspections and collaboration.
The APAC head also emphasised the cost-effectiveness and simplicity of Zoom’s platform for SMEs, with costs like $20, $25 a month, against competitors charging $70-75.
["Deep Tech"]
["AI Video Generation Models"]
Shyam Nandan Upadhyay
2024-07-19T15:30:41
2024
1,063
["Anthropic", "Go", "OpenAI", "AI", "sentiment analysis", "ML", "Git", "AI Video Generation Models", "RAG", "Aim", "R"]
["AI", "ML", "OpenAI", "Anthropic", "Aim", "RAG", "sentiment analysis", "R", "Go", "Git"]
https://analyticsindiamag.com/deep-tech/zoom-wants-to-be-more-than-just-a-video-conferencing-platform/
4
10
3
false
true
false
10,005,906
My Experiences With ABU Robocon, Asia’s Largest & Oldest Robotics Competition
The scourge of Covid 19 has felled many a popular sporting event across the world. A victim of this pestilence is a little heard-of (outside its fan base) game called Robocon which is promoted by the Asia-Pacific Broadcasting Union (ABU). The ABU is the biggest broadcasting union in the world. Currently, the ABU has 272 members in 76 countries on four continents. Through its members’ network, the ABU can reach 3 billion or 300 crore people across the Asia – Pacific region. Since 2002, ABU Robocon has been held every year in August in a different country. This year’s championship, slated for 23 August in Fiji, has now been postponed indefinitely. My involvement with the Robocon for the initial three years was a thrilling experience. One morning in October 2000, I got a note from my boss in Doordarshan regarding an imminent visit by two officials from the ABU in connection with the proposed ABU Robocon to be held in Japan two years later. Robocon? No one knew what that was. But, as I had been looking after the International Relations in DD, I was asked to co-ordinate with the ABU officials. Robocon, I gathered, was an acronym for Robotic Contest. In this unique amalgam of TV, sports and engineering skills, Robocon was designed to help students of engineering to translate their theoretical knowledge into an ingenious and innovative practical form. And, that too in a fun-filled way!  In other words, the undergraduates in a team of three were to design and fabricate robots themselves on a given theme, the objective being to complete certain predefined tasks within a maximum time of 3 minutes before a competing team did that or to prevent the robots of the opponent to achieve the goal. I also gleaned that Prof. C. Amarnath of IIT Bombay had been organising an inter college robotic competition. Armed with this information, Messrs Minoru Kurita and Nobuhiro Sato of ABU and I decided to meet Prof. Amarnath. Nothing much came out of this meeting, though. Meanwhile, I had a change in my assignment and the Robocon project was mothballed as were my plans to produce a 13 part series on this novel idea. But in order to honour its commitment to participate in the inaugural ABU Robocon in Tokyo scheduled for August 2002, DD, after a gap of one year, asked me to revive the project but sans any monetary support for it. Of the several agencies that were approached, it was the MHRD, thanks to Mr. K. S. Sarma, Additional Secretary which agreed to finance the project. It preferred IIT Kanpur over IIT Bombay to be the nodal agency. Months went by in meetings and travels to and from Kanpur. It was only in February 2002 that IITK was finally ready to host the National Robocon. But given the short lead time to the ABU contest which was just six months away, it was not found feasible to hold an open contest. Instead, only a few teams had to be invited for the final selection based on their past performances in other smaller robotic events. Out of the four teams invited by the contest Director, only three — IIT Kanpur, Institute of Technology, Nirma University, Ahmedabad and Vivekanand Education Society Institute of Technology [VESIT], Mumbai — agreed to participate. Thus, all my grandiose plans to hold a nation-wide contest and to make a 13 part series on DD Robocon had to be abandoned. Not only that. 
There were many sceptics who felt that the designing, fabrication and testing of robots and that too by students and then the staging of the contest could not be achieved in such a short time of 4-5 months, especially when it was also the examination time and each concerned institution had a different examination schedule to follow. Therefore, instead of wasting time, money and effort, it was wiser to participate as an observer rather than as a contestant in the inaugural ABU Robocon. However, a “can do and will do” attitude of the core committee comprising the mentors of the three teams, the team members and this writer finally prevailed. This resolve helped us to negotiate the many bumps and blockades that we faced at every step in the ensuing months. Staging the contest Staging the final event as a TV show or spectator sport had its own problems. The foremost being the short duration of the game: at the most 3 minutes. Hence, for a match consisting of the best of 3 sets/rounds of 3 minutes each, the maximum possible duration of the event could only be 9 minutes. And in case of a straight sets win by a team, the play time would not exceed 6 minutes. Add a couple of minutes for the introductions, preparation time, and time out by the teams, the whole event would not last for more than 15 minutes. As it was to be a new sport for both the in-stadium spectators and the TV audiences, some additional elements were needed to make the game more entertaining and appealing. We, therefore, decided to bring in separate music bands which also featured especially composed songs by Doordarshan’s producer of the show, anchors, cheer leaders (six years before IPL promoted this concept) for each team and pit them against each other — in a kind of a duel– to lend support to their designated teams. A fair with giant wheels, puppets and a crafts mela was organised outside the contest venue. The endeavour was to turn the event into a visible, fun-filled enterprise. Doordarshan Robocon Contest Given the background of the endemic uncertainties and problems, and despite working almost round the clock, the students were unable to complete the building and testing of their robots until the kick off time. This meant that the robots had not really been tested under match conditions – no practice games could be organised and no one, including the players, had any idea as to how the game would develop. All the teams had designed “extremely competent, lightweight autonomous robots [once started, the robots would work automatically overcoming obstacles and negotiating the opponent’s machines].” Equally good were the manual robots [every team had to use one robot which had to be steered by a player]. Each team employed innovative strategies to reach the targets, recover from errors, disable the opponent’s robots etc. Though he himself was in charge of the IITK team, Mr, Amitabh Mukerjee, the contest director, particularly commended the VESIT team’s robots. Incidentally, this was the only team to have girls in it. Two at that! However, the results belied Mr. Mukherjee’s assessment. The VESIT team could manage to score only one point in the league match against the three by IITK and eight by Nirma. In the finals, Nirma trounced IITK by 21 to 7 points to earn the right to represent Doordarshan and India in the inaugural ABU Robocon on 31 August, 2002 in Tokyo. 
The league matches that were played earlier on the day of the domestic contest in Kanpur on 21st July, 2002 gave the first intimations of the production problems of televising the sport. For days, the production team had been mentally preparing itself to face the new reality – there wasn’t going to be either only one ball in action or just one area in the field where action would occur — features common to all sports, and something that the TV crews had learnt to deal with. Instead, many robots, moving at varying speeds, could spring up in action simultaneously from different directions and head for one of the 17 targets. To pan the camera or to zoom, to linger on a robot hitting a target or to cut to another in action was the great Hamletian dilemma. There was to be no opportunity to make amends later as the game itself would end in 3 minutes at the most. In practice, one game ended in less than a minute. It would be apt to mention that for three consecutive years, I raised with ABU this problem [of many robots being in action simultaneously] of the Robocon being TV unfriendly. Finally from the 4th Robocon held in 2005, ABU changed the rules to allow only two robots – one autonomous and one manual – per team. Notwithstanding the aforementioned impedimenta, and even the stress of tilting at the proverbial windmills, the infectious energy of the young players and their spirit of adventure motivated the crews, who had literally stood on their toes for a good part of 12 hours that day, to deliver a commendable performance. The result was the peak primetime airing of the programme on 30th July, 2002 on Doordarshan’s National Network. This endeavour resulted in Doordarshan notching up three first time achievements in its history: a Championship was named after Doordarshan, a trophy was awarded by it to an outside entity, and that Doordarshan was involved in the selection of a national team and to lead it in an international event. As the leader of the national team, I also had to carry with me the 15 minute documentary film, mandatory for every participating team leader, to Tokyo, the purpose being to introduce India to the audience and to also document the construction of the robots by the students. While the first part was accomplished easily, the filming of the designing and assembly of robots was an ordeal due to the unpreparedness of the students. Ideally, only one camera team should have filmed all the three teams’ construction activities. As that couldn’t be, different film units were deployed at intervals to film the building of the robots. Among the 20 teams that participated in the 1st ABU Robocon in Tokyo were many unfancied teams: from Fiji, Macao, Vietnam, Nepal, Mongolia, Kazakhstan, Egypt, Pakistan, Sri Lanka, and Turkey. Besides, the more “technologically advanced” countries like Japan, China, South Korea, Thailand, Malaysia, Singapore, Indonesia and Australia contested. A team leader from Thailand remarked to me that given India’s prowess in computers and software, he saw India and Japan competing in the final. Well, neither team could reach the final. Our team was knocked out in the 2nd round itself. To everybody’s great surprise, it was Vietnam which won the championship emphatically. The reason for the Indian team’s poor showing became clear later on. 
Many teams participating in the ABU Robocon had beaten a large number of contestants in their respective domestic contests: Thailand had 51 teams competing for the top honours, Indonesia had 35; China saw the participation of 32 teams, while Vietnam selected its national team from 26 teams that were in the fray. Some of these teams had not only prepared and conducted their national contests months in advance, but their national broadcasters had enthusiastically supported the project and had instituted prize monies too as an incentive to the participants. Astonishing as it may sound, Vietnam has won the top prize, beating all the fancied teams, a record 7 times, followed by China with 5 wins – 4 years in a row from 2007 to 2010 – while NHK of Japan, which had been organising its domestic championships for a couple of years prior to the commencement of ABU Robocon, could manage only 2 wins in the 18 championships held so far. And India has only clocked a zero. Another irony or embarrassment concerns the elite IITs, who have managed to win the Doordarshan National Championship title only 3 times compared with the astounding 8 sweeps by the Institute of Technology, Nirma University, Ahmedabad. In fact, the western Indian states of Gujarat and Maharashtra have dominated the national championships over the years. I led the Indian team to the first three ABU Robocons (2002-2004), which were held respectively in Tokyo, Bangkok and Seoul. These championships, as is mandatory, were hosted by the respective national broadcasters. It was a very humbling experience to see their professionalism, technical competence, production quality and venue management. Moreover, the hospitality provided by them to the participants was first-rate. India did host two ABU Robocons, in 2008 and 2014, at Pune. But then, that was just that! Not with a bang, but a whimper.
The scourge of Covid 19 has felled many a popular sporting event across the world. A victim of this pestilence is a little heard-of (outside its fan base) game called Robocon which is promoted by the Asia-Pacific Broadcasting Union (ABU). The ABU is the biggest broadcasting union in the world. Currently, the ABU has 272 […]
["Deep Tech"]
[]
Sudhir Tandon
2020-09-01T17:00:36
2020
2,031
["Go", "ELT", "programming_languages:R", "AI", "ML", "programming_languages:Go", "Ray", "ViT", "GAN", "R"]
["AI", "ML", "Ray", "R", "Go", "ELT", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/deep-tech/my-experiences-with-abu-robocon-asias-largest-oldest-robotics-competition/
3
10
2
false
false
false
10,078,346
Meet the AI Pioneers Who Won The 2022 Princess of Asturias Award
The Princess of Asturias Foundation, a non-profit private institution in Spain, recognised four scientists for their advanced work in artificial intelligence last week. The four pioneers – Geoffrey Hinton, Yann LeCun, Yoshua Bengio and Demis Hassabis – were honoured with the ‘2022 Princess of Asturias Award for Technical and Scientific Research’. Hinton, LeCun and Bengio were honoured for their breakthroughs in and advancement of machine-based deep learning, where computers are able to learn automatically through complex algorithms. The AI-innovating trio is considered the ‘godfathers of deep learning’, which uses neural networks for computer vision, voice recognition, and natural language processing. Google-owned DeepMind’s CEO Demis Hassabis was the fourth award recipient. The Princess of Asturias Foundation aims to promote scientific and cultural values, consolidating the links between the title traditionally held by the heirs to the Crown of Spain and the Principality of Asturias. Geoffrey Hinton In 1986, Hinton first invented the backpropagation algorithms, which are fundamental for training neural networks. Later, in 2012, the algorithms allowed the innovator to create a convolutional neural network, ‘AlexNet’, which was made up of 650,000 neurons trained with 1.2 million images. This registered an error rate in object recognition of 26%, about half of the previous AI systems. In 2021, Hinton published a document on the platform arXiv presenting ‘GLOM’, an innovative project which involved the usage of a new vector model for representing and processing visual information in a neural network. The project is still in the development phase. Yann LeCun Meta’s VP and chief AI scientist Yann LeCun made contributions to the development of Hinton’s backpropagation algorithms. In 1989, LeCun created LeNet-5 – a recognition system used for characters written on bank checks – representing major advancements for optical character recognition technology. He later pioneered the development of DjVu image compression technology, which is used by millions of users on hundreds of websites to access scanned documents on the internet today. His other stints include deep learning methods for human-computer interaction, document recognition, and speech recognition. Yoshua Bengio Canadian computer scientist Yoshua Bengio has contributed to probabilistic sequence models used for handwriting and speech recognition, along with unsupervised learning. Bengio is currently studying advanced algorithms for extracting patterns and data representations, which also help in understanding complex relationships and high-level concepts. He is the author of three famous books on deep learning, along with being one of the promoters of the ‘Montreal Declaration for a Responsible Advancement of Artificial Intelligence’. Demis Hassabis Hassabis is the innovator behind the creation of a neural network model combining the capabilities of an artificial neural network with the algorithmic power of a computer. In 2021, the DeepMind team, in a joint project with the European Bioinformatics Institute, predicted the structure of over 350,000 human proteins (44% of all known proteins) with a high level of accuracy. The foundation’s jury said that the work “represents a huge advance in techniques, ranging from voice recognition, the processing of natural language, and the perception of objects”.
The Princess of Asturias Foundation recognized four pioneers for their advanced work in artificial intelligence
["AI News"]
["DeepMind", "Geoffrey Hinton", "Meta", "Yann LeCun"]
Bhuvana Kamath
2022-10-31T15:56:49
2022
506
["Go", "Yann LeCun", "Meta", "artificial intelligence", "programming_languages:R", "AI", "Geoffrey Hinton", "neural network", "computer vision", "Aim", "deep learning", "AI research", "R", "DeepMind"]
["AI", "artificial intelligence", "deep learning", "neural network", "computer vision", "Aim", "R", "Go", "AI research", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/meet-the-ai-pioneers-who-won-the-2022-princess-of-asturias-award/
4
10
0
false
false
true
10,021,301
QpiAI Partners With IISc To Launch Joint Certification In AI & Quantum Computing
QpiAI and IISc have launched a joint certification program on AI and Quantum Computing to help enterprises, schools and colleges to train professionals and students. QpiAI™ Quantum & AI Certification course will start from May 1, 2021, and will be available across Asia and Europe. QpiAI is an AI modelling and quantum computing company that offers the most advanced quantum modelling platforms to generate high-performance models. The course was launched after the company realised the huge talent gap across Asia and Europe to work with such advanced technologies. “We have launched QpiAI-Explorer to bridge this gap,” said Dr Nagendra Nagaraja, CEO and Founder of Qpi Technology holdings, the parent company of QpiAI. QpiAI-Explorer is an entry-level AI modelling and quantum computing platform that can run on a laptop without the use of expensive cloud resources. QpiAI, along with IISc, intends to offer three certifications currently: AI-level 1, AI-level 2 and joint AI & Quantum Computing Certifications to help train the workforce in large enterprises, schools and colleges. QpiAI comprises a strong technical team of 30 engineers who will work on the courses. “Our initiative of collaborations with various universities like IISc, IISER and TIFR have met with great enthusiasm and optimism. We also want to reach out to grass-root enterprises, schools and colleges to proliferate AI and Quantum Computing,” said Dr Amlan Mukherjee, who has a PhD from TIFR and is a post-doctoral researcher from Stuttgart University, Germany. He returned to India, to take up the challenging role as the Director of Quantum Hardware and Research at QpiAI. “We would also like to bring these certification courses in Hindi, to have a wider reach and truly democratise AI and Quantum,” he further added. Another engineer Dr. Pinakin M. Padalia also returned to India from TU Delft, Netherlands to join QpiAI as a Director of Quantum Circuits. With an aim to develop Quantum Computing in India, he strongly believes in the need to upskill the students and workforce to be ready for the Quantum world. Sachin Kumar, who is a Senior Data Scientist at QpiAI, said that hands-on AI modelling experience is absolutely crucial to accelerating Intelligent Digital Transformation. QpiAI-Explorer is designed to achieve the same. Further QpiAI is scheduled to tape-out 128-qubit Quantum Control Chip codenamed “BumbleBee” this September/October of 2021. It will be a state-of-the-art quantum control chip intended to work at 4 Kelvin (-269 oC) designed using 22 nm TSMC CMOS process. QpiAI intends to jointly deploy this chip with various institutes, combining it with semi-conductor and super-conductor qubits. Enterprises, schools and colleges can enrol for certification program here.
QpiAI and IISc have launched a joint certification program on AI and Quantum Computing to help enterprises, schools and colleges to train professionals and students. QpiAI™ Quantum & AI Certification course will start from May 1, 2021, and will be available across Asia and Europe.  QpiAI is an AI modelling and quantum computing company that […]
["AI News"]
["high performance computing", "IISc"]
Srishti Deoras
2021-03-03T14:47:59
2021
431
["programming_languages:R", "AI", "ML", "digital transformation", "Git", "IISc", "Aim", "R", "emerging_tech:quantum computing", "high performance computing"]
["AI", "ML", "Aim", "R", "Git", "digital transformation", "programming_languages:R", "emerging_tech:quantum computing"]
https://analyticsindiamag.com/ai-news-updates/qpiai-partners-with-iisc-to-launch-joint-certification-ai-quantum-computing/
2
8
1
false
false
false
10,142,715
2025 is the Year of AI Agents, and India is Leading the Charge
India is emerging as a pivotal player in the AI landscape thanks to its engineering talent and focus on application-driven innovation across sectors. And the biggest theme for next year, without a doubt, is going to be AI agents. “I’m actually convinced that a faster adoption will be seen in agentic AI. Each one of us will have an AI agent that knows us really well; it will analyse our business and routines and support us in becoming more productive and efficient,” said former CEO of Tech Mahindra and co-founder of AIonOS CP Gurnani at AIM’s MachineCon GCC Summit 2024. Echoing a similar sentiment, the VP of AI product management at Redis, Manvinder Singh, told AIM that India was perfectly positioned to lead the charge. “The Indian tech ecosystem is going to play a very critical role in agentic AI,” he added optimistically, citing companies like Kore AI and others, which are currently using Redis as a data platform to power their virtual AI agents. At Redis, Singh leads innovations in vector search, semantic caching, and agent memory. Prior to joining Redis in July 2024, he worked for a decade at Google in AI as director of product management, where he focused on building LLMs, AI frameworks, and other developer products. He is also a lead SME for Google’s AI Essentials Course on Coursera. “I helped build a partnership between Redis and Google Cloud during my time at Google,” recalled Singh. A former McKinsey associate partner, Manvinder holds an MBA from Kellogg and a BTech from IIT Delhi. “I grew up in Delhi, and my personal connection to India keeps me deeply invested in its tech ecosystem,” Singh said, emphasising his belief in India’s potential to tackle hard AI challenges and how Redis is perfectly positioned to scale this to a whole new level. Redis, FTW! “Redis has always been synonymous with speed like it was the performance database,” said Singh, emphasising its deep-rooted reliability. These capabilities, essential for building next-generation AI agents, address critical pain points like memory, context, and latency. Redis’s integration with Amazon Bedrock, Microsoft Azure, and LangChain also positions it as a developer-friendly choice. Redis owes much of its performance to its single-threaded architecture—a design choice that has sparked debates but remains a cornerstone of its success. By avoiding locks and minimising system calls, Redis achieves lightning-fast operations. As one user aptly noted, “If you don’t take locks, aren’t making syscalls nonstop, and aren’t fighting cache, you get really good performance.” While some question the limitations of a single-threaded model, others highlight its simplicity and efficiency as the key reasons behind Redis becoming a favourite for high-speed data workloads. “Redis was built to be single-threaded, which was the right design choice for caching use cases,” explained Singh, “but now we support multi-threading for things like vector search.” This move has positioned Redis as a formidable competitor to players like Milvus and Qdrant. Unlike these vector-only databases, Redis offers unmatched flexibility, allowing developers to handle multiple data types in a single platform. “Using a vector-only database is like buying a car that only takes you to the grocery store,” quipped Singh, emphasising Redis’s edge in catering to diverse use cases with a unified solution. While competitors like SingleStore, Milvus, Qdrant, and hyperscalers offer Redis-compatible solutions, it distinguishes itself with performance and flexibility. 
It is the most downloaded database on Docker Hub, and its ability to serve as both a caching and vector database gives it a unique edge. For instance, Asurion, a global insurance provider, optimised API usage by using Redis for semantic caching and routing, achieving a 70% hit rate and significantly reducing costs. What’s Next for Redis? In August, the company announced Redis 8, a new update that brings advanced features like JSON, search, and vector databases to its Community Edition. This update is also accompanied by Redis for AI—a package designed to power GenAI applications—marking a major leap in developer access and AI-driven innovation. Redis told AIM that it is now doubling down on its core strengths—speed, memory, and flexibility—while introducing innovations like Redis Flex, enabling terabyte-scale data handling. “We’re building new products to solve challenges like semantic caching, memory optimisation, and guardrails for responsible AI,” he shared. Strategic partnerships with AWS Bedrock and Microsoft Azure underline its commitment to becoming a cornerstone of the GenAI ecosystem. The company is also eyeing deeper integration into enterprise AI workloads, with a focus on reducing developer friction through tools like LangChain and investments in disk-based data capabilities.
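As a rough illustration of the semantic-caching pattern described above (not code from Redis or Asurion), the sketch below assumes a local Redis Stack instance with the RediSearch module and the redis-py client; the index name, field names, vector dimension, and the stand-in embed() function are all illustrative assumptions.

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)
DIM = 4  # toy dimension; a real deployment would use the embedding model's size

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (hypothetical)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.random(DIM).astype(np.float32)
    return v / np.linalg.norm(v)

# One-time setup: an index with a vector field (requires RediSearch / Redis Stack).
r.ft("semcache").create_index([
    TextField("response"),
    VectorField("embedding", "FLAT",
                {"TYPE": "FLOAT32", "DIM": DIM, "DISTANCE_METRIC": "COSINE"}),
])

def cache_put(key: str, prompt: str, response: str) -> None:
    """Store a prompt embedding together with the expensive-to-compute response."""
    r.hset(key, mapping={"response": response,
                         "embedding": embed(prompt).tobytes()})

def cache_get(prompt: str, max_distance: float = 0.2):
    """Return a cached response whose prompt is semantically close, else None."""
    q = (Query("*=>[KNN 1 @embedding $vec AS dist]")
         .return_fields("response", "dist")
         .dialect(2))
    res = r.ft("semcache").search(q, query_params={"vec": embed(prompt).tobytes()})
    if res.docs and float(res.docs[0].dist) <= max_distance:
        return res.docs[0].response   # cache hit: skip the model call
    return None                       # cache miss

cache_put("doc:1", "How do I reset my router?", "Hold the reset button for 10 seconds.")
# With a real embedding model, a paraphrased prompt would also produce a hit.
print(cache_get("How do I reset my router?"))
```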
“The Indian tech ecosystem is going to play a very critical role in agentic AI,” says Redis’ Manvinder Singh.
["AI Features"]
["AI (Artificial Intelligence)"]
Aditi Suresh
2024-12-09T11:55:29
2024
750
["Qdrant", "GenAI", "agentic AI", "AWS", "AI", "vector databases", "LangChain", "Aim", "Milvus", "Azure", "AI (Artificial Intelligence)"]
["AI", "GenAI", "agentic AI", "LangChain", "Aim", "vector databases", "Qdrant", "Milvus", "AWS", "Azure"]
https://analyticsindiamag.com/ai-features/2025-is-the-year-of-ai-agents-and-india-is-leading-the-charge/
3
10
2
false
false
false
10,050,075
IoT Has Not Lived Up To The Hype: Sunil David, AT&T (India)
Along with AI and machine learning, the Internet of Things has quickly emerged as one of the most prominent technologies and an integral part of Industry 4.0, says Sunil David, the regional director (IoT) at AT&T (India). His genuine interest in this technology has helped him emerge as one of the influential names in IoT in India. He also mentors young aspirants willing to join this field. In a conversation with Analytics India Magazine, David spoke about IoT, its current state and predictions for the future. Edited excerpts from the interview: AIM: What was the inspiration behind pursuing a career in the field of IoT? Sunil David:  My interest in IoT developed sometime around 2012 when I was in my first stint with AT&T India as Regional Sales Head for South. I used to attend industry events in India as a delegate which were related to digital transformation and IoT, which was in its early days as far as awareness was concerned. I would attend a lot of conferences on IoT held in different cities and countries. In February 2013, I attended the Mobile World Congress held in Barcelona, a fantastic experience. I could see many Telecom providers showcasing M2M and IoT solutions in their booths, and that is when I really took a significant interest in understanding IoT. I realised then that it was not just a new revenue stream for Telecom providers but also a transformative technology that can positively impact enterprises, consumers and society at large. It was still early days in India as far as I was concerned, but I used to attend pretty much every IoT Industry event across India, follow IoT related news and developments across the world. AT&T, at that point in time, had limited IoT capabilities, but I used to still make an effort to position our IoT capabilities to our enterprise customers in India. I will always cherish the one unforgettable moment when I met Kevin Ashton, who coined the term “Internet of Things” at an AT&T organised customer event held in Singapore in mid-2015. I left AT&T India in late 2015 and joined Telstra India as Head of Enterprise Sales for India for a year. I rejoined AT&T to Head the IoT Business for India and ASEAN. Even though my role was restricted to Sales and Business Development, I focused a lot on our marketing initiatives by using opportunities at industry forums to position AT&T IoT capabilities, connecting with IoT startups and attempted to build our partner ecosystem in India, given that IoT is an ecosystem play. AIM: How has IoT changed over the years? Has there been any significant improvement? Sunil David: In March 2017, when I took up my new role Heading the IoT Business, the adoption of IoT in India was still very low compared to other markets in Asia, Europe and North America. But on the positive side, there was increased awareness. Earlier, while speaking to our clients, we had to explain what IoT was. Now, they already understand the tech and want to know how to get started. The conversation has changed. The focus of the discussions is centred around discussing use cases, technical architecture, how to get started with a POC and how they could get an ROI from their investment in IoT. Currently, while IoT adoption has picked up in India amongst large enterprises, it is still relatively low compared to developed markets like China, Singapore, Europe and the US. Enterprises have not been able to scale up their IoT projects barring a few. 
Also, adoption amongst the MSMEs in India (especially manufacturing MSMEs) is abysmally low, and this is a matter of concern given that MSMEs are a key contributor to the economic growth of our country and towards employment and exports. The MSMEs should understand that implementing IoT is now needed for survival and growth. A simple IoT mantra: “Think of the big picture, start small and scale fast”. What we see today is POCs or pilots lasting a very long time, and secondly, scaling happening at a very low pace. True value and ROI can be realised only when projects scale. Interestingly, ever since the outbreak of COVID-19, we have seen increased adoption of IoT, albeit in specific industries. For example, in manufacturing, we see IoT use cases around remote monitoring of industrial assets in a factory and products in the field, and IoT-enabled safety and health solutions. However, most IoT implementations have been used in areas around cost reduction, efficiency improvement, etc., which is very bottom-line focused. We have not seen IoT spend in areas around revenue generation and customer experience improvement. AIM: Where does India stand in the IoT race? How does it compare with China, which is considered a leader in this field? Sunil David: In India, IoT adoption is still confined to key sectors – the manufacturing, transportation and logistics, and energy and utilities sectors. Smart city projects have also leveraged IoT to a certain extent, but adoption is still on the lower side. However, the use of IoT in healthcare and consumer IoT (smart homes) definitely has scope for improvement. Enterprises should realise that IoT is not just a nice-to-have but a strategic necessity for survival and growth. Some of the challenges that India faces are: the cost of IoT devices, which is still higher compared to China and acts as a barrier to adoption for any IoT solution (the hardware cost alone makes up 40 per cent of the overall solution cost); telecom infrastructure in India, which needs to be improved; security; a lack of IoT skills; and the lack of a stable and consistent regulatory regime covering all important dimensions – IoT device procurement with e-SIMs, certification and security of devices, IoT network connectivity and regulations around permanent roaming, and finally data storage, residency and governance. China’s major advantage is that it is a manufacturing powerhouse, contributing almost 29 per cent to global manufacturing output. In the case of electronics manufacturing, and especially IoT devices, China has a very good ecosystem that has been built over many years. The cost of IoT devices in China is very low, given the huge ecosystem it has built with suppliers of memory and network modules and semiconductor chips. The second advantage is that its policies are tailored for IoT adoption. These policies are consistent, and there is very little room for ambiguity. The government has also been pushing hard for the adoption of Industry 4.0, of which IoT forms an integral part. Enterprises are far more mature in adoption, as they have realised that IoT can give them a competitive edge. Most market reports forecast that China will continue to lead when it comes to IoT adoption in the next five years. In terms of policy-level intervention in India, the government has launched the Production Linked Incentive Scheme (PLIS), which intends to give a huge thrust and impetus to electronics manufacturing, including IoT devices.
A lot of the assembly work for IoT devices is being done in India, as we are still dependent on countries like China, Taiwan, Korea, etc., for imports of sensors, network modules, semiconductor chips, etc. If we build a good ecosystem in India of all the different components that go into making an IoT device, which might take a few years, I am confident that we will emerge as a leader in IoT device manufacturing. Further, if we have the required scale that factors in the large domestic market and the export opportunities, the cost of IoT devices will come down drastically. The cellular communication networks in India are still patchy, given the limited amount of spectrum they need to manage, and hence there is scope for improvement here as well. The Indian telecom providers have their own challenges to deal with – for example, high spectrum costs, high taxation and low ARPUs. Another important aspect is skill development in IoT and applied areas. We need skills on both the end-user (consumer of IoT services) and supply (provider) side to ensure we have enough IoT-skilled talent to deliver, implement and manage complex IoT solutions. AIM: What is the role of AI in IoT? Sunil David: In the near future, we will see an amalgamation of AI and IoT. A new term called AIoT is being talked about now and will definitely be the future of Industry 4.0. Just extracting information from the physical world is not enough. We need to get value from the humongous amount of data that has been collected and then use the insights from the data to help drive decision making. While IoT can extract data from physical assets, it is important to add context to the data and correlate data from multiple data sources. This is where AI and machine learning can play a huge part by ensuring we get the right insights, predict outcomes and eventually get to a prescriptive stage where action can be taken to prevent anomalies from occurring. AIM: IoT technology is not immune to security scares. Especially in a healthcare setting, IoT security becomes a major concern. What are your comments on this? Sunil David: Today, one of the biggest barriers to IoT adoption is security. In IoT, we connect physical assets to the Internet. The moment an IoT device is connected to an asset and thus exposed to the Internet, it becomes potentially hackable. The IoT device is the weakest link in the chain. If it is compromised, the attacker can take full control of the system and create havoc, and the damage can be irreversible. Many IoT device manufacturers lay less emphasis on the security of the device since it is an additional cost. We need to engineer the device in such a way that it does not compromise on security. Securing the IoT device alone is not enough; we must also certify these devices through a stringent certification process. Such certifications will give confidence to the companies using the devices. When an IoT device is built, security is often considered an afterthought. When you are designing, security should be the topmost priority and consideration. The Telecom regulatory body of India came up with guidelines in 2017 saying that every IoT device manufacturer would need to follow security-by-design guidelines. It is still a guideline and not a policy. If it becomes a policy, it will go a long way in ensuring IoT security for devices. Finally, when it comes to security, it is not enough to secure the IoT device alone; one needs to secure the network, the applications, and the data on the cloud as well.
Hence, a holistic approach is needed to address security from an IoT standpoint. AIM: Has IoT lived up to the hype? Sunil David: No, it hasn’t lived up to the hype. There was a lot of hype earlier, and IoT’s potential has still not been explored fully. There is a lot of work that needs to be done by IoT solution providers, industry bodies, etc.; they need to come together and constantly advocate the positive business impact that IoT technology can generate and how it can lead to better business outcomes – be it reducing costs, increasing revenue, or better customer experience. With the cost of IoT devices coming down, our communication networks getting better, cloud adoption increasing, and AI being democratised, I see no reason why adoption will not increase. All key stakeholders – the government for building the policy and framework, industry bodies, industry and academia – need to collaborate very closely to make this happen. From a consumer standpoint, given that one needs to pay an extra cost to leverage IoT-enabled consumer appliances and other IoT-enabled gadgets like smart wearables, one needs to see the value that can be derived from using the technology. If the customer sees value and it enhances their convenience and provides a better user experience, they will be willing to pay a premium for the service. (The views expressed by the interviewee are their own and do not represent those of AT&T.)
There is a lot of work that needs to be done from the IoT solutions providers, industry bodies, etc. to constantly advocate the importance of IoT technology
["AI Features"]
["Interviews and Discussions", "IoT", "IoT India"]
Shraddha Goled
2021-09-30T17:00:00
2021
2,013
["Go", "machine learning", "TPU", "AI", "IoT India", "Git", "RAG", "Aim", "analytics", "Rust", "R", "IoT", "Interviews and Discussions"]
["AI", "machine learning", "analytics", "Aim", "RAG", "TPU", "R", "Go", "Rust", "Git"]
https://analyticsindiamag.com/ai-features/iot-has-not-lived-up-to-the-hype-sunil-david-att/
4
10
6
true
true
false
10,045,133
The Ethical Challenges Of AI In Defence
The strength of its military is often an indicator of how powerful a country is. In some of the most developed countries, investment in this sector is the highest. A large part of this investment is utilised to research and develop modern technology such as AI for military applications. AI-equipped military systems are capable of handling volumes of data efficiently and have superior computing and decision-making capabilities. That said, in the case of defence, the implications of every decision have to be weighed very carefully. Artificial intelligence is still in an adolescent stage, and the practical applications of the technology are often brittle. More often than not, the ethical implications of using AI in defence have been raised by policymakers and activists alike. Controversy around AI in defence The chief concern with using AI in defence and weaponry is that it might not perform as desired, leading to catastrophic results. For example, it might miss its target or launch attacks that are not approved, leading to conflicts. Most countries test the reliability of their weapon systems before deploying them in the field. But AI weapon systems can be non-deterministic, non-linear, high-dimensional, probabilistic, and continuously learning. For testing a weapon system with such capabilities, traditional testing and validation techniques are insufficient. Furthermore, the race between the world’s superpowers to outpace each other has also made people uneasy, as countries might not play by the norms and consider ethics while designing weapon systems, leading to disastrous implications on the battlefield. Technical Challenges As defence starts leaning towards technology, it becomes imperative that we evaluate the loopholes of AI-based defence technologies that bad actors might exploit. For example, adversaries might seek to misuse AI systems by tampering with training data or by figuring out ways to gain illegal access to training data by analysing specifically tailored test inputs. Furthermore, the AI black box and the resulting lack of explainability open it up to risk in highly regulated or critical environments. An opponent can also craft an attack using a method similar to the way machine learning models are trained: for example, instead of training the model on a designated dataset, it could be trained on errors so that it gives false results every time it is used. In addition, several other operational risks arise from the reliability, fragility, and security of AI systems. From a humanitarian standpoint UN chief António Guterres once said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Another issue is the strategic risk that AI increases the likelihood of war globally and further escalates ongoing conflicts. There can be no answer to this question until states reach such a stage of AI. Humanitarians have always advocated against the deployment of such technologies in the field. Despite extensive efforts to ban the technology in the United Nations, it is unlikely that a complete ban can be enforced. The best way forward is to define a set of broad guidelines for its deployment to secure the world. To begin with, AI alone should never be allowed to make judgement calls in matters of arms. There should be human surveillance of its decisions before they are executed in the field.
In addition to that, persons entrusted with deploying AI must have a thorough knowledge of this tech. Furthermore, it should be governable. Humans should have sufficient oversight and the ability to disengage a malfunctioning system immediately. “If you want to do strategy planning, then you’re gonna have a mashup of machine learning with, maybe, game theory and a few other elements,” said William Scherlis, director of the Information Innovation Office at the Defense Advanced Research Projects Agency of the United States.
Ethical implications of using AI in defence have been raised by policymakers and activists alike.
["AI Features"]
["AI in defence", "Ethical AI"]
Meenal Sharma
2021-08-03T13:00:00
2021
623
["Go", "machine learning", "artificial intelligence", "programming_languages:R", "AI", "innovation", "Ethical AI", "Scala", "RAG", "Rust", "R", "AI in defence"]
["AI", "artificial intelligence", "machine learning", "RAG", "R", "Go", "Rust", "Scala", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/the-ethical-challenges-of-ai-in-defence/
2
10
0
true
true
true
10,009,462
India’s Leading Organisations Under MeitY Which Are Driving AI For India
Governments around the world have a unifying role in facilitating safe and ethical innovation through the use of artificial intelligence. Over the past decade, the Indian government has worked on various aspects of AI policy and its implementation. The Indian AI ecosystem in the government currently consists of innovation centres, projects, capacity building, reskilling and policies, all of which are being carried out via organisations under MeitY. The rapid pace of change in the AI and ML space requires an agile regulatory environment. The development and deployment of AI technologies require vast infrastructural resources, computing power as well as network connectivity. There are a number of agencies working under the Ministry of Electronics and Information Technology which are pushing forward India’s AI infrastructure directly or indirectly. The functions of these agencies closely align with making sure that there are plenty of resources to create a healthy technological ecosystem for data-related innovation. Such organisations and units are working closely with MeitY, taking the task of deployment of AI forward. These organisations are also consolidating the efforts made by all stakeholders to drive change in advancing India in AI. In this article, we take a look at various MeitY organisations which are advancing India’s AI stack. National Informatics Centre (NIC) National Informatics Centre (NIC) is India’s leading government agency on data intelligence in government. It has accumulated India’s critical data on the census, weather, stock market, budget, elections and citizen services collected over decades and uses it for generating insights. NIC also formed an open government data platform under the direction of MeitY, which is accessible at data.gov.in. As of today, it has 4,25,000 datasets available in a machine-readable format which have been released in the open domain by various ministries and departments across the government of India. All of these datasets can be consumed through APIs directly. The portal has received around 29 million views so far and has 400,000 registered users who have used the data for research, data journalism, analytics, statistical modelling, etc. NIC has around 350 chief data officers working across various government departments who are driving the data journey of their units and releasing government data in an open format which can be used by startups, industry, research or academia. In the last few years, NIC has set up a number of CoEs across India, including CoEs on data analytics, AI and microservices. NIC also runs many hackathons and academic engagements on the datasets for developing various apps. Centre for Development of Advanced Computing (CDAC) The Centre for Development of Advanced Computing (CDAC) was formed in 1987, and since then it has built supercomputing systems for India, popularly known as the PARAM series of supercomputers. When it comes to building exascale infrastructure, CDAC under MeitY has been working on a national grid of high-performance computing (HPC) systems. This will provide the computing power needed in India’s AI stack. CDAC is building the next generation of supercomputers, allowing firms to run their models on machines with high capabilities. The supercomputing systems and facilities of CDAC are used to solve computationally intensive problems across sectors. Its infrastructure has been used by startups, researchers and faculty members across India to put the country on a faster track towards its AI plans. 
One of the systems CDAC has designed recently for managing very large-scale AI workloads is Param-Siddhi AI, a supercomputer with 100 AI petaflops and 2.5 million cores. Software Technology Parks of India (STPI) Another important government body under the Ministry of Electronics and Information Technology (MeitY) is Software Technology Parks of India. STPI is an autonomous society formed in 1991 with the goal of boosting software exports from India. STPI, along with other agencies, has played a crucial role in helping India become a global leader in IT services. In recent years, STPI has shifted its focus to software products. Given that India hasn’t done very well in software product development, MeitY came up with the idea to capitalise on the nation’s talent, IT services industry and market, and resolved to make India a software product nation. First, it came up with the National Policy on Software Products 2019 and tasked STPI with making India a software product nation. Given that many software products these days utilise artificial intelligence, STPI has opened eight centres of excellence (CoEs) where AI/ML, computer vision and automation are the primary focus. The CoEs are working on sectors like fintech, med-tech, agritech, electronics systems development, manufacturing, game tech, autonomous vehicles and associated fields where AI can be applied. There are CoEs on other technologies which STPI has started as well. IndiaAI The National Artificial Intelligence Portal was launched in 2020. Known as India AI, the portal has been developed by the National Association of Software and Service Companies (NASSCOM) and supported by the National e-Governance Division of MeitY. It is the meeting point for students, entrepreneurs, AI experts, companies, and the government for nationwide sourcing and distribution of the best AI ideas and practices. The portal is intended to give information on the complete AI ecosystem covering startups, VC funds, research bodies, big tech companies, and educational institutions. The portal will reinforce the capability of AI for various stakeholders and startups in utilising data-driven innovation. The portal will also disseminate documents, case studies, research reports, datasets and training programs in AI.
Governments around the world have a unifying role in facilitating safe and ethical innovation through the use of artificial intelligence. The past decade, the Indian government has worked on various aspects of AI policy and implementing it. The Indian AI ecosystem in the government currently consists of innovation centres, projects, capacity building, reskilling and policies, […]
["AI Features"]
["India AI", "managing hpc data", "MeitY"]
Vishal Chawla
2020-10-12T10:00:00
2020
887
["India AI", "Go", "API", "artificial intelligence", "AI", "ML", "computer vision", "microservices", "analytics", "MeitY", "GAN", "managing hpc data", "R"]
["AI", "artificial intelligence", "ML", "computer vision", "analytics", "microservices", "R", "Go", "API", "GAN"]
https://analyticsindiamag.com/ai-features/indias-leading-agencies-under-meity-which-are-driving-ai-for-india/
3
10
3
true
false
false
29,526
Philips Launches First Global Startup Collaboration Programme For AI In Healthcare
Philips on Wednesday announced the launch of their first global startup collaboration program involving Philips’ innovation hubs in Cambridge (US), Eindhoven, Bengaluru and Shanghai, focused on the application of artificial intelligence in healthcare. The programme focuses on the application of AI-based clinical decision support tools, such as image interpretation, analysis and integration; and workflow tools like intelligent treatment plans for radiology, ultrasound and oncology. After careful analysis, the most promising 19 start-ups out of 750 applicants were selected for inclusion in Philips’ proven accelerator program for early-stage startup companies. Speaking on the occasion, Srinivas Prasad, CEO Philips Innovation Campus, Bengaluru, said, “The Indian Start-up ecosystem is demonstrating an increasing trend of applications based on deep learning and AI in the healthcare domain. Philips is engaging with entrepreneurs who are developing AI-enabled solutions for improving clinical and operational outcomes. This year, the chosen start-ups are unique and the team at Philips Innovation Campus is committed to helping startups strengthen their value proposition and become successful sustainable businesses.” The startup program at Philips Innovation Campus, Bangalore caters to innovation ecosystems across India, Japan, South East Asia, Australia and New Zealand, Middle East and Turkey. The team screened more than 150 healthcare start-ups that had AI and radiology as part of their proposition and the most promising five start-ups joined the global cohort. They will now be coached and facilitated from Philips Innovation Campus, Bangalore and gain access to some of the best experts from the ecosystem. Alberto Prado, Head of Philips HealthWorks added, “At Philips, we use intelligent technology to improve people’s health across the health continuum – from healthy living and prevention to diagnosis, treatment and home care – while also increasing the efficiency of healthcare delivery. We are already working closely with clinical partners to develop AI-enabled solutions that are grounded in scientific research and validated in clinical practice. This new collaboration program recognizes the role that start-up companies play in bringing breakthrough healthcare innovations to the market.” Also see:
Philips on Wednesday announced the launch of their first global startup collaboration program involving Philips’ innovation hubs in Cambridge (US), Eindhoven, Bengaluru and Shanghai, focused on the application of artificial intelligence in healthcare. The programme focuses on the application of AI-based clinical decision support tools, such as image interpretation, analysis and integration; and workflow tools […]
["AI News"]
["AI (Artificial Intelligence)", "AI Healthcare", "Healthcare Automation", "Philips"]
Prajakta Hebbar
2018-10-24T06:29:32
2018
330
["Philips", "artificial intelligence", "programming_languages:R", "AI", "AI Healthcare", "innovation", "BERT", "Healthcare Automation", "llm_models:BERT", "deep learning", "R", "AI (Artificial Intelligence)", "startup"]
["AI", "artificial intelligence", "deep learning", "R", "BERT", "innovation", "startup", "llm_models:BERT", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/philips-launches-startup-collaboration-programme-ai-healthcare/
2
9
3
false
false
true
10,116,910
Mustafa Suleyman is Now Microsoft’s Problem
In a pivot to his eventful career, Mustafa Suleyman, the Inflection AI and DeepMind co-founder, has joined Microsoft to steer its AI initiatives. “I’ll be leading all consumer AI products and research, including Copilot, Bing and Edge,” said Suleyman, sharing this surprising career update on X. His career in AI began in 2010, alongside Demis Hassabis and Shane Legg. The three of them co-founded DeepMind, an AI research firm focused on developing powerful algorithms. As the head of product at DeepMind’s applied AI division, Suleyman oversaw projects in healthcare and energy. He also co-authored several influential papers, including ‘The kinetics human action video dataset’ and ‘Teaching machines to read and comprehend’. He was also instrumental in Google’s 2014 acquisition of the company for over $500 million. Following the acquisition, he was quietly shuffled from DeepMind to Google in 2020, where he was the VP of AI product management and AI policy. Two years later, Suleyman left Google and started Inflection AI. This company offers a personal AI chatbot, Pi, which gives empathetic responses, can be used like a therapist, and has a ‘single mission of making you happier, healthier and more productive’. It isn’t surprising that Mustafa was interested in building something like Pi. At the age of 17, much before DeepMind, he had co-founded the Muslim Youth Helpline in 2001, which later became one of the largest mental health support services for the community in the UK. It is, however, surprising that he chose to abandon Inflection to join Microsoft when Pi had only recently received its massive funding. Quickly giving up on his dream of building the AI chatbot at Inflection that would help humanity has raised a few eyebrows. “Not a good sign for Inflection.ai,” said Yann LeCun. For Microsoft, though, it is a mind-boggling choice to poach the head of a company it has invested in. This ‘acqui-hire’ of the top talent from Inflection could be one way to avoid antitrust scrutiny. While Microsoft’s investment gave it a front-row view of Inflection’s progress, it also made Microsoft’s hiring raid look opportunistic. But is Mustafa Suleyman the right fit for Microsoft’s work culture? Who is Mustafa Suleyman? Suleyman grew up in a working-class family, born in London to a Syrian father, who was a taxi driver, and an English mother, who worked as a nurse. He attended state schools before studying philosophy at Oxford University but dropped out to build what became one of the largest mental health support services for Muslims. It was at Oxford that Suleyman met his future DeepMind co-founder, Hassabis. The duo bonded over their shared interest in AI and its potential to positively impact the world. As the product head of DeepMind, Suleyman played a pivotal role in developing AlphaGo. He, however, left his post after an investigation into complaints about his management style. He has since publicly apologised, stating in an interview, “I really screwed up. I was very demanding and pretty relentless. I remain very sorry for the hurt that people felt there.” Suleyman reflected that the experience “gave me the opportunity to really take a step back and reflect and grow and mature a little bit as a manager and a leader.” He has been working with a coach to improve his management approach. During his short stint at Google, Suleyman turned sceptic about the unchecked growth of AI. He has consistently cautioned against the dangers of unchecked development in AI and has warned of a possible “catastrophe of an unimaginable scale”. 
He also suggested that a pause in development might be necessary in the near future, saying, “I don’t rule it out. And I think that at some point over the next five years or so, we’re going to have to consider that question very seriously.” In his 2023 book ‘The Coming Wave,’ Suleyman argued that biological developments with AI and other burgeoning technologies could allow “a diverse array of bad actors to unleash disruption, instability, and even catastrophe on an unimaginable scale”. Despite his flip-flopping stance on AI’s potential dangers, Suleyman has consistently proposed solutions to manage these risks. Recognised by TIME as one of the 100 Most Influential People in AI, Suleyman may have differed in his assessment of AI’s threat level, but remains focused on pragmatic governance. He has emphasised the need for good institutions and a conceptual framework for thinking about AI, saying, “AI governance must be targeted, risk-based, and modular, rather than one-size-fits-all.” Currently, in his new position as the CEO of Microsoft AI, Suleyman will oversee a significant portion of the tech giant’s AI endeavours. Mikhail Parakhin, CEO of Microsoft’s advertising and web services, and his entire team, including those working on Copilot, Bing, and Edge, will report directly to Suleyman. Additionally, Misha Bilenko, corporate vice president of GenAI at Microsoft, and his team will also fall under Suleyman’s purview. This new set of responsibilities, a contrast to the ones he held at Google, will “double down on innovation”, pausing his efforts in ethical AI. Suleyman’s transition to Microsoft aligns with his conviction that, “The competitive nature of companies and of nation states is going to mean that every organisation is going to race to get their hands on intelligence. Intelligence is going to be a new form of capital.” His choice to join Microsoft could indicate his belief in the company’s potential to win this race, while Satya Nadella is optimistic that Suleyman will navigate the balance between innovation and responsible AI.
Suleyman has been moving quite rapidly from DeepMind to Google to Inflection, and has landed at Microsoft thanks to Nadella.
["Global Tech"]
["Mustafa Suleyman"]
K L Krithika
2024-03-21T18:01:50
2024
907
["Go", "API", "GenAI", "ELT", "Mustafa Suleyman", "AI", "ETL", "RAG", "Ray", "Rust", "R"]
["AI", "GenAI", "Ray", "RAG", "R", "Go", "Rust", "API", "ETL", "ELT"]
https://analyticsindiamag.com/global-tech/mustafa-suleyman-is-now-microsofts-problem/
3
10
0
true
false
true
10,114,243
Why Did Tesla Build a ChatGPT for Vehicles?
Soon after ChatGPT became an internet sensation, a comparable development was underway at Tesla’s Palo Alto headquarters in December 2022. Dhaval Shroff, an engineer working on the company’s autopilot system, pitched a concept to CEO Elon Musk. Shroff proposed a system similar to ChatGPT but tailored for automobiles. Instead of relying on predefined rules to determine the car’s optimal path, they aimed to use a neural network that learns from extensive training data. This data consisted of millions of examples of human driving behaviour, explained Shroff, a seasoned member of the Tesla team with a decade of experience. Eight months later, Musk experienced an improvement in the performance of a Full Self-Driving (FSD) vehicle compared to the hundreds he had driven earlier. The smoothness and reliability were attributed to the new version, FSD 12, which introduced this new approach. (Source: Elon Musk FSD 12 Livestream) Musk believed that this innovation had the potential not only to transform autonomous vehicles, but also to represent a leap toward artificial general intelligence capable of operating in real-world scenarios. Instead of traditionally relying on hundreds of thousands of lines of code, the new system proposed by Shroff learned to drive by processing billions of video frames depicting human driving behaviour. This approach mirrored the self-training method employed by new LLM chatbots, which generate responses by processing billions of words from human text. Fully Accelerated Tesla is not the only company employing an end-to-end approach; there is also Comma.ai with OpenPilot, which Bengaluru boy Mankaran Singh used to power his Alto through an old Android phone. The news of his FSD journey in India has attracted attention, since auto manufacturers often tell us how much computing power is on board to make it happen. Even Wayve.ai ventured into some of the toughest streets of London to test its self-driving skills. The team broke some impressive ground. Eight months ago, they released a 9-billion-parameter world model that uses video, text, and action inputs to train the systems for on-road behaviours. In May 2022, Wayve collaborated with Microsoft to leverage Azure, the tech giant’s cloud-based supercomputer, for training its neural network. Musk has pointed out a consequential aspect of the end-to-end approach: vehicles no longer receive explicit instructions such as “stop at a red light” or “verify before changing lanes”. Instead, the system autonomously discerns these actions by “imitating” behaviours observed in the 10 million videos used during training. This means they’ve been using a dataset of millions of videos and have assessed the drivers in each of them. The machine learning model has been trained to mimic the behaviours of what were deemed “good drivers”. In theory, this holds huge potential, since the models can generalise more effectively when facing unfamiliar scenarios. Essentially, the model can identify the most appropriate behaviour based on its training rather than getting stuck in predefined instructions. Hit a Brake One problem, however, is yet to be overcome. Human drivers, even the most skilled ones, often bend traffic rules. For instance, over 95% of humans tend to roll slowly through stop signs rather than coming to a complete halt. And since the new FSD system is intentionally designed to imitate human behaviour, the head of the National Highway Safety Board is currently investigating whether this behaviour could be deemed acceptable for self-driving cars. 
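For readers curious what "learning to drive by imitating human behaviour" looks like mechanically, here is a deliberately tiny, hypothetical behaviour-cloning sketch in Python (PyTorch). It is in no way Tesla's FSD 12 system; the network, frame shape and steering targets are made-up placeholders that only illustrate the idea of predicting a driver's action from a camera frame instead of following hand-written rules.

# Hypothetical behaviour-cloning sketch: predict a driving action from a camera frame
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)  # e.g. steering angle and acceleration

    def forward(self, frames):
        return self.head(self.encoder(frames))

model = TinyDrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for (frame, human action) pairs from driving logs
frames = torch.randn(8, 3, 64, 64)
human_actions = torch.randn(8, 2)

pred = model(frames)
loss = nn.functional.mse_loss(pred, human_actions)  # imitate what the human driver did
loss.backward()
optimizer.step()

The point of the sketch is only that the "rules" live in learned weights fitted to human examples, which is what distinguishes the end-to-end approach from hand-coded driving logic.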
Moreover, despite a decade-and-a-half of reckless spending and extensive road testing, driverless technology is stuck in the pilot phase. “We are seeing extraordinary amounts of spending to get very limited results,” noted Alex Kendall, founder and CEO of Wayve. This has prompted UK-based firms like Wayve and startups such as Waabi and Ghost to focus heavily on neural networks. Branded as AV2.0, they are optimistic that more competent and cost-effective technology will let them surpass current market leaders. Self-driving cars have made headlines all these years for various high-profile errors that were hard to overlook. Investors have put in over $100 billion into developing autonomous vehicles, amounting to a third of the cost NASA incurred to put humans on the Moon. As of now, one giant leap for humankind is less expensive than a vehicle that can drive itself.
The new AI system learns to drive by processing billions of video frames depicting human driving behaviour.
["AI Features"]
["ChatGPT"]
Tasmia Ansari
2024-02-28T12:00:00
2024
691
["Go", "ChatGPT", "machine learning", "AI", "neural network", "chatbots", "R", "RAG", "Aim", "Azure"]
["AI", "machine learning", "neural network", "ChatGPT", "Aim", "RAG", "chatbots", "Azure", "R", "Go"]
https://analyticsindiamag.com/ai-features/why-did-tesla-build-a-chatgpt-for-vehicles/
3
10
2
false
true
false
10,058,469
The dark side of Web3
Decentralization and interoperability – powered by blockchain tech – are the defining characteristics of Web3. While whether the next iteration of the internet will be truly democratic remains a moot point, a huge section of society is worried about the damage Web3 can wreak on our planet. Cryptomining is energy-intensive, with a bulk of it coming from fossil fuels, particularly coal. The soaring Bitcoin price drives miners to run more and more rigs, leading to increased energy consumption. According to a study by Cambridge University, cryptocurrency mining can consume as much as 121.36 terawatt-hours (TWh) a year, greater than the annual energy consumption of countries like Argentina, the Netherlands, and the United Arab Emirates. Tesla and Artstation face backlash In March 2021, Tesla announced cryptocurrency as a payment option, drawing flak from investors and activists alike. Later, in May, the company cancelled the move. Elon Musk wrote: “We are concerned about rapidly increasing use of fossil fuels for Bitcoin mining and transactions,” claiming that “cryptocurrency is a good idea…but this cannot come at great cost to the environment.” He further said the company won’t be selling any of its own Bitcoin, and will only start using it for transactions once mining cryptocurrencies becomes sustainable. Tesla & Bitcoin pic.twitter.com/YSswJmVZhP — Elon Musk (@elonmusk), May 12, 2021 Similarly, ArtStation—an online marketplace for digital artists—also shelved its plans to launch an NFT platform after facing backlash on social media for dealing in environmentally damaging cryptoart. https://twitter.com/Bleeeach/status/1369089764700217349 Sustainability The crypto community is slowly gravitating towards sustainable forms of energy to make mining less damaging to the environment. CurrencyWorks—a fintech company from Alberta, Canada—is turning oil waste into environmentally friendly energy to fuel Ethereum mining. The waste is processed at a plant using a technique called pyrolysis and generates enough electricity to power 200 mining machines. Meanwhile, the company StarkWare, run by Israeli engineers Eli Ben-Sasson and Uri Kolodny, is trying to reduce the carbon impact of cryptocurrency mining by packing more information into each block. Now, the company can accommodate more than a million NFTs in a single block, and big brands like Disney and Marvel have already enlisted StarkWare’s technology for their upcoming NFT launches. The digital artist Beeple aims to be carbon “neutral” or “negative”. He is making up for the energy consumed by his NFTs by investing in renewable energy and conservation projects. KlimaDAO is another environment-friendly cryptocurrency. The company is designing a real carbon-backed, algorithmic digital currency—the supply of which will grow incrementally with greater investment in pro-climate projects across the world. The goal is to hasten the price appreciation of carbon assets to incentivise companies to invest in low-carbon technologies. While former Twitter CEO Jack Dorsey is skeptical of Web3, he has defended Bitcoin in a tweet: “#bitcoin incentivizes renewable energy.” His argument is that Bitcoin mining will offer the financial incentive for companies to accelerate their search for new, innovative ways to transition to renewable energy. You don’t own “web3.” The VCs and their LPs do. It will never escape their incentives. It’s ultimately a centralized entity with a different label. 
Know what you’re getting into… — jack (@jack), December 21, 2021 “#bitcoin incentivizes renewable energy” https://t.co/KCe5bwdVs4 — jack (@jack), April 21, 2021 Such efforts are already underway. EnviroNFTs is a good case in point. The NFT Series for 100,000,000 mangroves is a collaboration between Regenerative Resources (RRC), Regen Network, Chainlink, and Elevenyellow to raise sufficient funds to grow 100 million mangroves. This series will emit ~120 tons of carbon, but is expected to sequester 20,000,000 tons of carbon over 25 years, a 160,000:1 ratio of C sequestered to C emitted.
The soaring Bitcoin price drives miners to run more and more rigs, leading to increased energy consumption.
["IT Services"]
[]
Srishti Mukherjee
2022-01-15T11:00:00
2022
599
["Go", "API", "programming_languages:R", "AI", "Git", "BERT", "Aim", "llm_models:BERT", "ViT", "R"]
["AI", "Aim", "R", "Go", "Git", "API", "BERT", "ViT", "llm_models:BERT", "programming_languages:R"]
https://analyticsindiamag.com/it-services/the-dark-side-of-web3/
4
10
1
false
false
false
10,113,074
How Fractal is Leveraging Generative AI for Insurance
The global insurance industry is undergoing a profound transformation powered by a cutting-edge technology: generative AI. Gone are the days of experimental tests; Fractal Analytics, a leader in the field, reveals how this AI revolution is scaling up, reshaping internal operations, and paving the way for a brighter future. “Generative AI is no longer the future, it’s the present,” declares Amarava Roy, Principal Consultant at Fractal. This shift isn’t just hype; it’s driven by the immense potential AI holds to streamline processes and unlock deeper insights. Insurance companies, traditionally seen as tech adoption laggards given the risk management nature of the industry and the regulatory side of it, are embracing this change with open arms. “At the heart of this revolution lies internal efficiency,” says Rashid Khan, Engagement Manager at Fractal, explaining how document-heavy workflows are being streamlined with AI-powered knowledge management. Underwriters get instant access to critical guidelines, and claims adjusters leverage past case insights, empowering them to make faster, more informed decisions. But the magic extends beyond simple automation. Code generation AI, capable of boosting programmer productivity by 80%, empowers coding professionals to advance their skills in an industry where regulatory changes demand frequent updates to pricing models, for example. Generative AI aids in code maintenance and updates, ensuring compliance and accuracy. And self-serve analytics solutions built on generative AI are putting powerful insights at the fingertips of everyone, not just data scientists. Khan says that one of the standout areas where generative AI is making a significant impact is in streamlining internal operations. “Traditionally document-heavy and reliant on manual processes, the insurance industry is leveraging generative AI to enhance productivity and efficiency,” he added. In the realm of knowledge management, underwriters now have rapid access to guidelines and policy documents, facilitating quicker decision-making during the critical underwriting process. Similarly, claims adjusters benefit from knowledge bases built on adjuster notes, empowering less experienced professionals by providing insights from past cases. “Insurance industry is very excited about Generative AI” The roles of underwriters and claims adjusters are constantly and rapidly evolving with generative AI. Roy and Khan highlighted the importance of setting clear expectations and reassuring users that generative AI is meant to augment human capabilities, not replace them. Addressing fears and curiosities among employees, especially in a technology-enabled ecosystem, is crucial for successful adoption. One of the things that Khan pointed out was that Generation Z is increasingly not looking for jobs as underwriters or in the insurance industry as a whole. “Generative AI can attract talent from Generation Z, offering an edge to early adopters. However, it is essential to clarify that while it enhances efficiency, it does not replace the core functions and expertise of underwriters and claims adjusters,” Khan added. “Generative AI is not here to replace humans, it’s here to empower them,” emphasises Khan. Fractal’s “Crux,” an in-house AI copilot, exemplifies this philosophy. Crux doesn’t just answer questions; it explains the reasoning behind its answers, fostering understanding and collaboration between humans and AI. 
Positioned as a bridge between generative AI and business intelligence, Crux provides a conversational interface for users to interact with data. This tool not only provides answers but also explains the reasoning behind them, enhancing the user’s understanding of the insights generated. Yet, with such power comes responsibility. Roy acknowledges the ethical concerns surrounding “hallucination,” where AI models generate inaccurate information. Fractal tackles this head-on with retrieval augmented generation (RAG), an approach that verifies information against trusted sources, ensuring data integrity and user trust. This focus on responsible AI isn’t just lip service. Fractal embraces a robust multi-cloud approach, ensuring flexibility and data privacy. They champion open-source language models, allowing for customization and control over algorithms. And their commitment to governance guarantees ethical implementation, mitigating risks, and building trust. This journey doesn’t end with internal optimisations. As Roy points out, “the focus is on driving customer value ethically and securely.” From personalised risk assessments to faster claims processing, the future of insurance promises a seamless customer experience powered by responsible AI. All the tech Fractal has a cloud-agnostic approach and does not depend on any specific cloud provider. Khan said that when ChatGPT was released, it was obvious for them to rely on Microsoft Azure for OpenAI services. But with the increasing number of models and companies, they have experimented with AWS and Google Cloud as well. Furthermore, Khan said organisations are adopting a multi-cloud strategy, choosing cloud providers based on existing infrastructure, preferences, and strategic partnerships. The use of open-source language models adds flexibility and transparency to generative AI implementations, allowing for customization and control over algorithms. He also stressed that open-source language models have been crucial to meeting their goals in the insurance industry, particularly around data privacy and flexibility across cloud providers. Data privacy and security are also paramount concerns, along with potential biases in the data used for training generative AI models. Adherence to ethical standards is crucial for building trust with customers and upholding the industry’s reputation. To solve the hallucination problem, Fractal has been implementing RAG to fetch information from specific sources, instead of relying on the model alone. “We have been using RAG and it has been working pretty well for us,” added Khan. For this, Khan said that there is no specific solution provider they focus on; they decide the provider as per specific use cases. The company is also working to figure out how hallucinations can be reduced even further by employing agents in their frameworks. Looking Ahead: The Future of Generative AI in Insurance As the insurance industry continues its journey with generative AI, the focus remains on security, ethics, and internal productivity. The technology is not just a novel addition but a transformative force shaping day-to-day operations. The transition from experimentation to scaling indicates a future where AI seamlessly integrates into insurance workflows, contributing to enhanced efficiency, informed decision-making, and a more empowered workforce. “It is important to emphasise that across industries, organisations are experimenting with generative AI, each striving to drive customer value ethically and securely. 
The focus on responsible AI, keeping humans at the centre, and setting clear expectations emerged as critical considerations for successful adoption,” Roy added. This is not just a tech revolution; it’s a cultural shift. Fractal’s roadmap includes building a robust “Generative AI Center of Excellence” to fuel experimentation and rapid scaling. By empowering Insurance workforces with human-centric AI solutions, they’re paving the way for a future where insurance thrives on efficiency, informed decisions, and a deeply human touch. Governance is also identified as a crucial layer to ensure responsible and ethical AI implementation. The need for a comprehensive governance structure involving technical, legal, and leadership perspectives was emphasised to address technical, regulatory, reputational, and revenue risks. In conclusion, Fractal provides a comprehensive view of the transformative journey of Generative AI in the insurance sector. From internal empowerment and ethical considerations to technology choices and the roadmap for adoption, the insights shared highlight the industry’s commitment to leveraging AI responsibly. As generative AI continues to evolve, the insurance sector stands on the cusp of a technological revolution that promises enhanced efficiency, informed decision-making, and a workforce empowered by human-centric AI solutions.
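To make the retrieval-augmented generation idea discussed above a little more tangible, here is a minimal, purely illustrative Python sketch of a RAG-style flow: retrieve a few trusted documents, then build a prompt that asks the model to answer only from that context. This is not Fractal's implementation; the toy keyword retriever and the generate_answer placeholder are assumptions made only for illustration.

# Minimal illustrative RAG-style flow (hypothetical; not Fractal's system)
from dataclasses import dataclass
from typing import List

@dataclass
class Document:
    source: str
    text: str

def retrieve(query: str, corpus: List[Document], top_k: int = 3) -> List[Document]:
    # Toy keyword-overlap retriever; production systems typically use vector search over embeddings
    query_terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(query_terms & set(d.text.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, docs: List[Document]) -> str:
    # Ask the model to answer only from the retrieved, attributable context
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return ("Answer the question using only the context below and cite the sources.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# A hypothetical generate_answer(prompt) would then call whichever LLM and cloud provider a team has chosen.

Grounding the prompt in retrieved, citable sources is what lets a system of this kind explain where an answer came from, which is the hallucination-mitigation behaviour the article attributes to RAG.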
“Generative AI is no longer the future, it’s the present,” declares Amarava Roy and Rashid Khan from Fractal Analytics.
["AI Highlights"]
["Fractal", "Fractal AI", "Gen AI in Insurance", "Generative AI"]
Mohit Pandey
2024-02-16T14:00:00
2024
1,204
["ChatGPT", "OpenAI", "AI", "AWS", "ML", "RAG", "Fractal", "Aim", "Gen AI in Insurance", "analytics", "generative AI", "Generative AI", "Azure", "Fractal AI"]
["AI", "ML", "analytics", "generative AI", "ChatGPT", "OpenAI", "Aim", "RAG", "AWS", "Azure"]
https://analyticsindiamag.com/ai-highlights/how-fractal-is-leveraging-generative-ai-for-insurance/
3
10
2
true
true
false
10,117,444
Salesforce Chief Ethicist Deems Doomsday AI Discussions a ‘Waste of Time’
At some point over the past year, you are likely to have engaged in conversations about the prospect of superintelligent AI systems taking over human workers. Or perhaps even more alarmingly, AI ushering in a world reminiscent of science fiction doomsday scenarios. Paula Goldman isn’t with you on this. The chief ethical and humane use officer at Salesforce believes such discussions are a total ‘waste of time’. AI systems are already in the copilot phase and are only going to get better over time. “With each advancement in AI, we’re continually refining our safeguards and addressing emerging risks,” she told AIM in an exclusive interview on the sidelines of TrailblazerDX 2024, Salesforce’s developers conference. The field of AI ethics is not a recent development, Goldman said; it has been in progress for decades. Issues such as accuracy, bias, and ethical considerations have long been studied and addressed. “In fact, our current understanding and solutions are built upon this foundation of past research and development. While predicting the future capabilities of AI is challenging, our focus should be on continuously enhancing our safeguards to manage potential risks effectively, ensuring we’re equipped to handle even the most advanced AI technologies,” she added. Big Fan of the EU AI Act As AI becomes increasingly ubiquitous, the focus has shifted to its ethical implications, prompting governments and lawmakers to introduce stringent policies regarding its use. The European Union (EU) became the first jurisdiction in the world to introduce the AI Act, the first of its kind, to regulate the technology, which, many lawmakers in the EU believe, could pose potential harm to society if left unregulated. While different jurisdictions are approaching AI regulation differently, including India, Goldman stated that she is, in fact, a big fan of the EU’s AI Act. “While there’s ongoing debate about regulating AI models, what’s truly crucial is regulating the outcomes and associated risks of AI, and that’s the general approach the EU AI Act takes,” Goldman said. The Act categorises AI systems into four risk levels – unacceptable risk, high risk, limited risk, and minimal or no risk. It focuses on identifying and controlling the highest-risk outcomes, such as fair consideration in job applications or loan approvals, which significantly impact people’s lives. “Moreover, the EU applies standards not only to those creating the models but also to the apps built on them, the data used, and the companies utilising these products. This comprehensive approach, treating it as a layered process, is often overlooked but is essential for effective regulation.” However, with regulation comes the fear of hampering innovation, stifling creativity, and impeding the pace of technological advancement. Nonetheless, Goldman believes that AI regulation is urgently important. “These regulations should be established through democratic processes and involve multiple stakeholders. I am proud of the efforts being made in this regard and emphasise the significance of regulations that transcend individual companies,” she said. Human at the Helm At TrailblazerDX, Goldman and her colleagues, who include Silvio Savarese, chief scientist at Salesforce, stressed the importance of building trust in AI among consumers. Savarese even stated that the inability to build consumer “trust could lead to the next AI winter”. 
Goldman, too, along with her colleagues, emphasised the critical need to establish transparency and accountability in AI systems to foster consumer trust and prevent potential setbacks in AI adoption. “At Salesforce, we believe trusted AI needs a human at the helm. Rather than requiring human intervention for each AI interaction, we’re crafting robust controls across systems that empower humans to oversee AI outcomes, allowing them to concentrate on high-judgement tasks that demand their attention the most,” Goldman said. Salesforce’s approach has been to empower its customers by handing over control, acknowledging that they are best positioned to understand their brand tone, customer expectations, and policies. “For instance, with the Trust Layer, we plan to allow customers to adjust thresholds for toxicity detection according to their needs. Similarly, with features like Retrieval Augmented Generation (RAG), customers can fine-tune the level of creativity they desire in AI-generated responses. “Additionally, the incidents concerning AI ethics underscore the importance of government intervention in establishing regulatory frameworks, as these issues may vary across different regions and cultures. Hence, AI regulation by governments is deemed crucial,” she added. Safeguarding is a Balancing Act Moreover, as companies ship their AI products, it also becomes critical for them to work with their customers to ensure ethical and responsible use and eliminate risks. “At Salesforce, we release products when we deem them ready and responsible, but we also maintain humility, recognising that technology evolves continuously. While we rigorously test products internally from various angles, it’s during pilot phases with customers that we truly uncover potential issues and areas for improvement,” she said. According to Goldman, this iterative process ensures that products meet certain thresholds before release, and the company continues to learn and enhance them in collaboration with its customers. “It’s about striking a balance between confidence in our products and openness to ongoing refinement.”
Focus should be on ensuring that we’re equipped to handle even the most advanced AI systems.
["AI Features"]
["Interviews and Discussions", "Salesforce"]
Pritam Bordoloi
2024-04-01T15:38:07
2024
833
["Go", "programming_languages:R", "AI", "innovation", "AI ethics", "RAG", "Aim", "ViT", "Salesforce", "Rust", "R", "Interviews and Discussions"]
["AI", "Aim", "RAG", "R", "Go", "Rust", "ViT", "AI ethics", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/salesforce-chief-ethicist-deems-doomsday-ai-discussions-a-waste-of-time/
2
10
0
false
false
false
10,019,661
Guide to Torchmeta- A Meta-Learning library for PyTorch
Torchmeta is an open-source meta-learning library built on top of the PyTorch deep learning framework. The objective of Torchmeta is to allow easy benchmarking, make it easy to reproduce existing pipelines and research work in meta-learning, and make the field accessible to larger communities. Torchmeta was first presented in a research paper called Torchmeta: A Meta-Learning Library for PyTorch. The authors are Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen and Yoshua Bengio. The project is supported and tested by the Montreal Institute for Learning Algorithms (MILA). Torchmeta is inspired by OpenAI Gym, which helped Reinforcement Learning’s progress by giving access to multiple environments under a unified interface. Torchmeta provides data loaders for most of the standard datasets for few-shot classification and regression. It also includes extensions of PyTorch called meta-modules, to simplify the creation of models compatible with classic meta-learning algorithms that sometimes require higher-order differentiation. Torchmeta is fully compatible with torchvision and PyTorch’s DataLoader. Requirements & Installation: Python 3.6 or above, PyTorch 1.4 or above, Torchvision 0.5 or above. Install Torchmeta via pip:

!pip install torchmeta

DataLoaders for few-shot learning Torchmeta automates the creation of each meta-training dataset. The data loaders in Torchmeta are fully compatible with the data components of PyTorch, such as Dataset and DataLoader. The library provides a collection of datasets corresponding to classic few-shot classification and regression problems from the meta-learning literature. Few-shot Regression Most few-shot regression problems are simple regression tasks in which a parametrised function (for example, y = ax + b) maps inputs to outputs. Torchmeta provides an object called MetaDataset from which meta-training sets are inherited. Each inherited dataset corresponds to a specific set of parameters for that function. We can then create the dataset by sampling all the known parameters in a particular range and feeding them to the function. The library currently contains 3 toy problems: sine waves (Finn et al., 2017), harmonic functions (Lacoste et al., 2018), and sinusoid & lines (Finn et al., 2018). A simple regression task, based on sinusoids, is shown below. It instantiates the meta-training set for the sine waves problem:

import torchmeta
torchmeta.toy.Sinusoid(num_samples_per_task=10, num_tasks=1000000, noise_std=None, transform=None, target_transform=None, dataset_transform=None)

You can check the full documentation here. Few-shot Classification For few-shot classification problems, the creation of each dataset follows two steps: first, N classes are sampled from a large collection of candidates, and then k examples are chosen per class. 
These steps are automated by Torchmeta under an object called CombinationMetaDataset (inherited from MetaDataset). The library currently contains the following few-shot image classification problems: Omniglot (Lake et al., 2015, 2019), Mini-ImageNet (Vinyals et al., 2016, Ravi et al., 2017), Tiered-ImageNet (Ren et al., 2018), CIFAR-FS (Bertinetto et al., 2018), Fewshot-CIFAR100 (Oreshkin et al., 2018), Caltech-UCSD Birds (Hilliard et al., 2019, Wah et al., 2019), Double MNIST (Sun, 2019), and Triple MNIST (Sun, 2019). An example of how to instantiate the meta-training set is shown below:

import torchmeta
dataset = torchmeta.datasets.MiniImagenet("data", num_classes_per_task=5, meta_train=True, download=True)

Training and Testing dataset splits It is important to divide the dataset into training and testing sets for evaluation and meta-optimization. One thing to ensure is that these train and test sets should not contain common instances. For this, Torchmeta introduces a wrapper over the datasets called a Splitter to split the dataset. Shown below is an example of splitting the dataset via Torchmeta:

import torchmeta
dataset = torchmeta.datasets.MiniImagenet("data", num_classes_per_task=5, meta_train=True, download=True)
dataset = torchmeta.transforms.ClassSplitter(dataset, num_train_per_class=1, num_test_per_class=15, shuffle=True)

Meta DataLoaders The objects generated in few-shot regression and classification can be iterated over to generate datasets. These datasets are PyTorch Dataset objects, and as such can be included as part of any standard data pipeline (combined with DataLoader). Most meta-learning algorithms operate better on batches of tasks. Torchmeta divides the dataset into batches of tasks with the help of a MetaDataLoader, and those batches can be iterated over:

# Helper function
dataset = torchmeta.datasets.helpers.miniimagenet("data", shots=1, ways=5, meta_train=True, download=True)
dataloader = torchmeta.utils.data.BatchMetaDataLoader(dataset, batch_size=16)

for batch in dataloader:
    train_inputs, train_labels = batch["train"]  # shapes (16, 5, 3, 84, 84) and (16, 5)
    print('Train inputs shape: {0}'.format(train_inputs.shape))
    print('Train targets shape: {0}'.format(train_labels.shape))

Advanced example of Torchmeta In this part, we combine all of the pieces discussed above into a single data pipeline.

from torchmeta.datasets import Omniglot
from torchmeta.transforms import Categorical, ClassSplitter, Rotation
from torchvision.transforms import Compose, Resize, ToTensor
from torchmeta.utils.data import BatchMetaDataLoader

dataset = Omniglot("data",
                   # Number of ways
                   num_classes_per_task=5,
                   # Resize the images to 28x28 and convert them to PyTorch tensors (from Torchvision)
                   transform=Compose([Resize(28), ToTensor()]),
                   # Transform the labels to integers (e.g. ("Glagolitic/character01", "Sanskrit/character14", ...) to (0, 1, ...))
                   target_transform=Categorical(num_classes=5),
                   # Creates new virtual classes with rotated versions of the images (from Santoro et al., 2016)
                   class_augmentations=[Rotation([90, 180, 270])],
                   meta_train=True,
                   download=True)
# Split the data of each task into train and test
dataset = ClassSplitter(dataset, shuffle=True, num_train_per_class=5, num_test_per_class=15)
# Create batches of tasks from the dataset
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

Meta-Learning Module Models in PyTorch are created from basic components called modules, and each basic module represents a layer in the neural network containing both the computational graph and its parameters. 
However, some meta-learning algorithms require higher-order differentiation to update the parameters via backpropagation. Torchmeta therefore provides modules called MetaModules (similar to nn.Module in PyTorch) for easy implementation of meta-learning algorithms; they give you the option to provide new parameters as an additional input. A MetaModule treats these new parameters as part of the computational graph, and backpropagation works as expected. Note that with no additional parameters, a MetaModule behaves in the same way as a regular PyTorch module. The figures below show the MetaLinear module of Torchmeta with and without additional parameters: the first shows the initialization of the MetaLinear module, the second shows the MetaLinear module’s flow in the default manner, and the third shows the flow of the MetaLinear module with additional parameters. Given below is an example built on MetaModule (the base class). These modules accept an additional argument, params, in their forward method. The architecture of a neural network built via MetaModule is shown below.

# Import the required libraries and meta-modules from torchmeta
import torch.nn as nn
from torchmeta.modules import (MetaModule, MetaSequential, MetaConv2d, MetaLinear)

class Model(MetaModule):
    def __init__(self, in_channels, num_classes):
        super(Model, self).__init__()
        # MetaSequential is similar to nn.Sequential: a sequential container.
        # Modules are added to it in the order they are passed in the constructor;
        # here MetaConv2d is the convolutional layer, followed by ReLU and MaxPool.
        self.features = MetaSequential(MetaConv2d(in_channels, 64, 3),
                                       nn.ReLU(),
                                       nn.MaxPool2d(2))
        # MetaLinear is similar to torch.nn.Linear:
        # it applies a linear transformation to the incoming data
        self.classifier = MetaLinear(64, num_classes)

    def forward(self, inputs, params=None):
        features = self.features(inputs, params=self.get_subdict(params, 'features'))
        logits = self.classifier(features.view((inputs.size(0), -1)), params=self.get_subdict(params, 'classifier'))
        return logits

Conclusion In this article, we have discussed Torchmeta and its parts, such as the DataLoaders and MetaModule. To learn more about Torchmeta, you can check the examples available in the repository of the project, as well as this implementation of MAML (MAML article) for a more detailed showcase of all the features of Torchmeta. Colab Notebook: Torchmeta Demo. Official code, docs & tutorials are available at: Github, Research Paper, Documentation, Video Tutorial. You can check other articles related to Meta-Learning here.
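To make the role of the extra params argument more concrete, below is a short, hedged sketch of a MAML-style inner-loop adaptation step built around a MetaModule such as the Model class above. It assumes MetaModule exposes meta_named_parameters() (as in recent Torchmeta releases) and should be read as an illustrative outline under those assumptions, not as the article's or the library's official training loop.

# Illustrative MAML-style inner-loop update with a MetaModule (sketch only)
from collections import OrderedDict
import torch
import torch.nn.functional as F

def adapt(model, inputs, targets, step_size=0.4):
    # Start from the model's current meta-parameters
    params = OrderedDict(model.meta_named_parameters())
    # Forward pass with these parameters and compute the task loss
    logits = model(inputs, params=params)
    inner_loss = F.cross_entropy(logits, targets)
    # Differentiate w.r.t. the parameters, keeping the graph so a meta-loss
    # computed with the adapted parameters can backpropagate through this step
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    # One gradient-descent step yields task-specific parameters
    return OrderedDict((name, param - step_size * grad)
                       for (name, param), grad in zip(params.items(), grads))

# Usage sketch (shapes depend on the dataset and model you actually build):
# adapted_params = adapt(model, task_train_inputs, task_train_targets)
# meta_logits = model(task_test_inputs, params=adapted_params)

Because the adapted parameters are passed back through forward(params=...), the outer meta-optimisation can differentiate through the inner update, which is exactly the higher-order differentiation that MetaModules are designed to support.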
Torchmeta is an open-source meta-learning library built on top of Pytorch deep learning framework. The objective of Torchmeta is to allow easy benchmarking and reproduce the existing pipelines/ research work in meta-learning and make it accessible to larger communities. Torchmeta was first presented in a research paper called Torchmeta- A meta-learning library for PyTorch. The […]
["Deep Tech"]
[]
Aishwarya Verma
2021-02-04T12:00:00
2021
1,144
["OpenAI", "AI", "neural network", "PyTorch", "ML", "Colab", "Python", "deep learning", "few-shot learning", "R"]
["AI", "ML", "deep learning", "neural network", "OpenAI", "PyTorch", "Colab", "few-shot learning", "Python", "R"]
https://analyticsindiamag.com/deep-tech/guide-to-torchmeta-a-meta-learning-library-for-pytorch/
4
10
0
true
true
true
10,092,606
Reddit, Stack Overflow Chase Fool’s Gold in Generative AI Rush
Everyone is going after generative AI these days, from big tech to IT companies to tech influencers, and now online communities. Last week, Reddit announced changes to its API that will now start restricting the content pipeline used to train AI models by big tech companies like Microsoft, Google, and OpenAI. This well-thought-out move will now enable Reddit to put the fuel for chatbots like ChatGPT or Bard, the content, behind a paywall. But this begs a question: why the sudden shift towards monetisation? Reddit chief Steve Huffman recognises the importance and value of the corpus of data that the community platform hosts. And interestingly, Reddit is planning an initial public offering (IPO) this year. Since most of its revenue comes from advertising, the company’s plan to monetise the generative AI landscape with the most valuable offering it has is a smart move. “We don’t need to give all of that value to some of the largest companies in the world for free,” Huffman told The New York Times. The current restriction of Reddit’s data API is just for big tech companies that are building AI chatbots using LLMs. The data API has been available in a structured form for developers since 2008. Unlike unstructured data that is available on the internet through web scraping, Reddit’s API allows developers to research and build moderation and other tools by providing “data dumps”. The company says that it will still allow free access to the Reddit data API for developers. Following in Reddit’s footsteps, the ‘LLM-obsessed’ Stack Overflow also announced that it is planning to begin charging large AI developers for access to its programming-driven community questions. Stack Overflow chief Prashanth Chandrasekar told Wired that he was very supportive of Reddit’s approach. “Community platforms that fuel LLMs absolutely should be compensated for their contributions so that companies like us can reinvest back into our communities to continue to make them thrive,” explained Chandrasekar. Reddit and Stack Overflow have not yet released the exact pricing details for access to their data APIs. But with Musk recently charging $42,000 per month for access to 50 million tweets, it is possible that these two platforms will also charge somewhere around that number. Chandrasekar said that companies that are building LLMs are violating the terms of service of the platform. Even though companies can use the data to train models freely, the content posted by users on the platform falls under a Creative Commons licence, which means it needs proper attribution to where the data came from, in this case to the questions and answers of the specific users. This is not possible in the case of LLMs and is therefore clearly a violation. This is similar to how Musk accused Microsoft and OpenAI of illegally using Twitter data and stopped their access. Sailing Against the ‘Generative AI’ Tides In a rather absurd turn, Stack Overflow had previously banned the posting of chatbot-generated answers. But later, the company announced that it is planning to integrate generative AI services within the community. Now, by putting the data behind a paywall, the community is clearly trying either to surf the generative AI wave or to stop it from rising higher. Chandrasekar said that to ensure future chatbots perform better than the current ones, it is essential that they are trained on evolving and progressing data. Fencing off valuable data might deter AI training and slow improvements in LLMs. 
He believes that proper licensing of the data API will accelerate the development of high-quality LLMs. Similarly, publishers have also been wary about the usage of their websites for training AI chatbots. According to the Washington Post, Google’s Bard uses data from Wikipedia, the New York Times, The Guardian, and a lot more websites in its CommonCrawl database. It is quite possible that Wikipedia might also put up some walls around the usage of its data for AI, since it has been seeking donations for the last few years. Jimmy Donal Wales, the founder of Wikipedia, believes that generative AI could actually help improve the online encyclopaedia. On the flip side, Discord has announced no plans to modify its API offerings, which are going to remain free. Swaleha Carlson, the company’s spokesperson, said the API is provided under terms that forbid AI training anyway. When it comes to Reddit, the situation might be tricky. The company mostly has a very healthy relationship with Google and Microsoft. The search engines “crawl” the community platform’s pages for indexing information in the search results. This has been boding well for Reddit, as its pages appear higher in the search results. The dynamic is clearly a little different when it comes to data-gobbling LLMs. Now that the company is putting the data behind a paywall, that too for big AI makers like Google and Microsoft, it might run into a situation where the search engines stop crawling the community platform’s pages for search results. This might result in platforms like Reddit and Stack Overflow losing out on the revenue they currently generate through visitors and advertisers. Everyone is chasing generative AI’s fool’s gold (if data is the gold, data APIs are the fool’s gold). When it comes to community platforms like Stack Overflow and Reddit, the move of monetising data APIs has a high possibility of backfiring. At the same time, this could be the best bet they can make.
Will Wikipedia follow in the footsteps of Reddit, Stack Overflow, and other platforms by limiting access to its data for AI purposes?
["AI Features"]
["AI (Artificial Intelligence)", "AI Chatbot", "ChatGPT", "Elon Musk", "Generative AI", "GPT-4", "OpenAI"]
Mohit Pandey
2023-05-02T15:00:00
2023
900
["Go", "ChatGPT", "API", "OpenAI", "AI", "AI Chatbot", "chatbots", "IPO", "GPT-4", "Elon Musk", "GPT", "generative AI", "Generative AI", "R", "AI (Artificial Intelligence)"]
["AI", "generative AI", "ChatGPT", "OpenAI", "chatbots", "R", "Go", "API", "GPT", "IPO"]
https://analyticsindiamag.com/ai-features/reddit-stack-overflow-chase-fools-gold-in-generative-ai-rush/
2
10
0
false
false
true
69,343
Top 8 Python Tools For App Development
Python is one of the popular languages among data scientists and developers because of its availability of the number of libraries and tools. According to the TIOBE Programming Community index for July 2020, Python language is in the third position among the top 20 programming languages used by skilled engineers around the globe. In one of the surveys by AIM, 53.3% of data scientists prefer this language as it helps them build specific analytics capabilities and data science skills. In this article, we list down the top 8 Python tools one can use for app development. (The list is in alphabetical order) 1| BeeWare About: BeeWare is a collection of tools and libraries for developing and distributing native applications in Python. The suite of tools and libraries works together to help a developer write cross-platform native GUI Python applications. BeeWare includes the following: – Toga, which is a Python native, OS native, cross-platform GUI toolkit.Briefcase, which is a tool for packaging Python projects as distributable artefacts that can be shipped to the end-users.Rubicon ObjC- It is a library for working with Objective C libraries on iOS and macOS using Python code.Rubicon Java, which is a library for working with Java libraries using Python code.Pre-compiled builds of Python that can be used on platforms where official Python installers aren’t available. Know more here. 2| Bottle About: Bottle is a fast and simple micro-framework for small web applications. It is distributed as a single file module and has no dependencies other than the Python Standard Library. It offers request dispatching with URL parameter support, a built-in HTTP Server, adapters for many third party WSGI/HTTP-server, etc. and with no dependencies other than the Python Standard Library. Know more here. 3| CherryPy About: CherryPy is an object-oriented web framework in Python. It allows the users to develop web applications in a similar way they would develop any other object-oriented Python programs. Some of the features of this framework are: – Easy to run multiple HTTP servers at once.A powerful configuration system for developers and deployers alike.A flexible plugin system.Built-in tools for caching, encoding, sessions, authentication, static content, and many more.Built-in profiling, coverage, and testing support.Runs on Python 2.7+, 3.5+, PyPy, Jython and Android. Know more here. 4| Django About: Django is an open-source, high-level web framework in Python that encourages rapid development and clean, pragmatic design. It is used for backend web applications that are based on Python language. Some of its features include- Django was designed to help users take applications from concept to completion in a faster manner.The tool exerts security seriously and assists the developers to avoid various common security mistakes. Know more here. 5| Falcon About: Falcon is a reliable, high-performance Python web framework for building large-scale app backends and microservices. Falcon apps work with any WSGI or ASGI server, and it runs under CPython version 3.5+ and PyPy version 3.5+. Some of the features are mentioned below:- Highly-optimised, extensible codebase.Falcon performs intuitive routing via URI templates, REST-inspired resource classes, etc.It provides easy access to headers as well as bodies through request and response classes.Allows snappy unit testing through WSGI helpersDRY request processing via middleware components as well as hooks. Know more here. 
6| Flask About: Flask is one of the most popular Python web application frameworks. It is a lightweight WSGI web application framework. The framework is designed with the ability to scale up to complex applications. Flask offers suggestions but doesn't enforce any dependencies or project layout (a minimal example is sketched after this list). Know more here. 7| Kivy About: Kivy is an open-source Python library for rapid development of applications that make use of innovative user interfaces, such as multi-touch apps. It is cross-platform, GPU-accelerated and business-friendly. Kivy depends on many Python libraries, such as GStreamer, PIL and Cairo, among others. Know more here. 8| Pyramid About: Pyramid is a small, fast web framework for Python 2 and 3. It is designed to make creating web applications easier. Pyramid provides only the core tools needed by nearly all web applications: mapping URLs to code, security, and serving static assets, for instance, files like JavaScript and CSS. Pyramid encourages standard Python development practices with packaging tools, virtual environments, logging, and so on. Know more here.
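To give a feel for how lightweight these micro-frameworks are, here is a minimal Flask application; the route and the message are illustrative choices, and a near-identical handful of lines would work in Bottle as well.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Flask maps the URL "/" to this function and returns its result as the response
    return "Hello from a minimal Flask app"

if __name__ == "__main__":
    # Development server only; a production deployment would sit behind a WSGI server
    app.run(debug=True)

Running the file and visiting http://127.0.0.1:5000/ returns the greeting, which is essentially the entire setup cost of a Flask project.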
Python is one of the popular languages among data scientists and developers because of its availability of the number of libraries and tools. According to the TIOBE Programming Community index for July 2020, Python language is in the third position among the top 20 programming languages used by skilled engineers around the globe.  In one […]
["AI Trends"]
["App Development", "business analysis tools", "django python", "Python", "python frameworks", "python tools", "simple python project"]
Ambika Choudhury
2020-07-09T18:17:26
2020
701
["data science", "Go", "AI", "simple python project", "R", "business analysis tools", "RAG", "App Development", "Python", "Aim", "django python", "python tools", "analytics", "microservices", "JavaScript", "python frameworks"]
["AI", "data science", "analytics", "Aim", "RAG", "microservices", "Python", "R", "JavaScript", "Go"]
https://analyticsindiamag.com/ai-trends/top-8-python-tools-for-app-development/
3
10
1
false
false
false
10,088,625
Meet Tesla Optimus Clone
Recently, AI robotics startup Figure has finally made the public release of the world’s first commercially available general-purpose humanoid robot Figure O1, the prototype of which bears a strikingly close resemblance to Tesla’s robot Optimus. Figure 01 will have the power to acquire new knowledge and skills over time, which enables it to take on more complex tasks and operate in a wider range of environments. It can interact with its environment, which means that it is equipped with sensors and other devices that allow it to perceive and respond to its surroundings. This enables it to perform tasks such as navigating through a warehouse or picking up objects in a manufacturing plant. Meet Figure – the AI Robotics company building the world's first commercially viable autonomous humanoid robot.We spent the last 9 months assembling our world-class team and designing our Alpha build – now we're ready to introduce you to Figure 01. pic.twitter.com/pas6rgncTW— Figure (@Figure_robot) March 2, 2023 The purpose of this humanoid robot is to address labour shortages in industries such as manufacturing, logistics, warehousing, and retail by providing an efficient and adaptable workforce. By deploying these robots into these sectors, companies can reduce their reliance on human labour, increase productivity, and improve their bottom line. Meanwhile, at the ‘Tesla Investor Day’ event, chief Elon Musk recently showed a video of Optimus aka Tesla bot which Musk said he is planning to use in Tesla factories and eventually open for public purchase. Optimus was first unveiled to the public in 2021. Very impressive progress from @TeslaAIBot in just a few months! At the same time a staged demo is easy – dozens of teams have gotten this far.The hard part is getting a humanoid to be useful & efficient in a real application which no one has done. pic.twitter.com/MC2dmiDV2w— Simon Kalouche (@simonkalouche) March 1, 2023 Figure’s Founding Story Founded in 2022 by Brett Adcock, the Founder of electric vertical takeoff and landing company Archer Aviation and online job recruitment platform Vettery with the aim of revamping the robotics sector. According to the chief, millions of jobs are considered risky in today’s workforce, causing unprecedented labour shortages. In order to boost productivity and maintain economic growth, automation is required. The workforce will generate more fairly priced goods and services than ever before thanks to their integration of this technology. Source: Figure “We believe general-purpose humanoid robots have far more potential than single-purpose robots, which are currently ubiquitous within the field,” he added. While initially, the activities that Figure’s humanoids would perform will be inflexible and monotonous, he believes that software developments in robot learning will eventually allow tasks to be executed far better than by humans. As for the future of Figure, Adcock has great confidence in the potential of the technology, claiming that it will impact the most significant industry globally, contributing to research in AI and robotics in the hopes of promoting a positive AI future for humanity. The team at Figure consists of 40 industry leaders with more than 100 years of collective experience in AI and humanoid technologies, including members of the Boston Dynamics, Tesla, IHMC, GoogleX, Cruise, and Apple SPG professional networks. 
With 20 years of experience in humanoid technology from IHMC, including his leadership in DARPA’s Humanoid Robotics Competition, Dr Jerry Pratt is Figure’s chief technology officer. The team also has former Apple engineer Michael Rose and Google roboticist Gabe Nelson as chief scientists. The Figure team finished building their full-scale humanoid robot in just six months, and they intend to start testing it soon. Figure is currently hiring across various sectors. Growing Need for Humanoid Robots With the regular breakthroughs in AI, the market for humanoid robots is expected to experience substantial growth with the market value of humanoid robots estimated to reach $3.9 billion in 2023, with a compound annual growth rate (CAGR) of 52.1% from 2017 to 2023.  The market for humanoid robots is predicted to grow at a CAGR of 63.5% between 2022 and 2027. Bipedal robots are predicted to have the highest CAGR among all types of humanoid robots during the forecasted period. This growth is mainly attributed to the rapid advancements in the capabilities of humanoid robots and their expanding range of applications. According to a report by Goldman Sachs, the market for humanoid robots is expected to value $6 billion or more in the next 10 to 15 years, which might help to meet 2% of the world’s need for elderly care by 2035 and 4% of the US manufacturing labour deficit by 2030. The paper also projects a “blue-sky” scenario in which the market could touch $154 billion by 2035 and close up to 53% of the gap for senior carers and anywhere between 48% and 126% of the labour shortfall. There are still obstacles to be solved, such as boosting movement and agility, extending battery life, and lowering production costs. India’s Big Bets India is among the top 15 countries in the world with the highest number of recorded robot installations and has produced some interesting humanoid robots. These include Manav, India’s first 3D-printed humanoid robot that can perform tasks such as walking, talking and dancing by listening to voice commands. HDFC Bank’s IRA and Mitra are customer-service robots designed to assist customers with banking and healthcare-related tasks respectively. Vistara Airlines’ RADA and Kerala police headquarters’ RoboCop are customer-assistance robots that provide information and entertainment options. INDRO is India’s tallest humanoid robot capable of carrying 150 kg of payload, while KEMPA is a customer-assistance robot deployed at Kempegowda International Airport. Finally, AcYut, developed by undergraduate students at BITS Pilani, is India’s first indigenous robot. Read more: Most Popular Robotics Stories of 2022
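As a quick sanity check on the growth figures quoted above, the standard compound-growth formula is value_future = value_now * (1 + CAGR) ^ years. The short sketch below applies it; the $3.9 billion 2023 base and the 63.5% CAGR come from the article, while applying that rate from 2023 out to 2027 is an illustrative assumption rather than the report's own projection.

# Rough compound-growth extrapolation; everything except the formula is an assumption
def project(value_now: float, cagr: float, years: int) -> float:
    return value_now * (1 + cagr) ** years

market_2023_bn = 3.9   # estimated 2023 market value, per the article
cagr = 0.635           # forecast CAGR for 2022-2027, per the article
print(f"Illustrative 2027 market size: ${project(market_2023_bn, cagr, 4):.1f}B")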
The new humanoid robot is the current talk of the town.
["AI News"]
["figure", "humanoid robot", "robot"]
Shritama Saha
2023-03-03T15:05:13
2023
948
["Go", "API", "startup", "programming_languages:R", "AI", "robot", "RPA", "automation", "Aim", "ViT", "figure", "R", "humanoid robot"]
["AI", "Aim", "R", "Go", "API", "ViT", "automation", "RPA", "startup", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/say-hello-to-figure-o1-the-game-changing-humanoid-robot-of-the-future/
3
10
5
true
false
true
10,052,590
What Are The Major Challenges Of Data Unification
Managing corporate data often necessitates numerous software tools, including customer relationship management (CRM) systems, email marketing platforms, and enterprise resource planning (ERP) systems. Each of these programmes uniquely collects data. Additionally, there is third-party data; to make sense of this seemingly unending data stream and make data-driven decisions, the data has to be brought together. Data unification is the process of consuming data from several operating systems or sources and consolidating it into a single source through transformations, schema integrations, deduplication, and general record cleaning. One can see various benefits from data unification, including an improved client experience, increased account retention rates, and more income. First, however, enterprises must overcome obstacles to achieve unified data. The Challenges with Data Unification To better understand the primary problems associated with data unification, consider the numerous applications utilized by your organization. Each one uniquely collects data. Now consider attempting to consolidate all of your organization's data into a single master source. To give a clearer understanding of what this process comprises, below is a high-level description from Michael Stonebraker's white paper on The Seven Tenets Of Scalable Data Unification (a small illustrative sketch of these steps appears at the end of this article): Data is ingested, most commonly from company operational data systems. Data cleansing is performed; for example, -99 is frequently a code meaning "null", and some data sources may contain old customer addresses. Conversions are performed, for example, from euros to dollars or from airport code to city name. Schemas are integrated; for example, "salary" in one system is referred to as "wages" in another. Deduplication (entity consolidation) is performed when, for example, an individual is "Mike Stonebraker" in one data source and "M.R. Stonebraker" in another. Classification or other advanced analytics are applied, for example, classifying expenditure transactions to determine where a company spends money. Finally, consolidated data is exported to one or more downstream systems. As one can see, data unification is a complicated process, which is why the vast majority of enterprises today are experiencing a data mastering problem. However, enterprises must overcome obstacles to achieve unified data. Problem – Creating a Single Source of Data Managing corporate data typically entails the use of multiple software solutions. For instance, you may utilize a customer relationship management (CRM) system, email marketing tools, or enterprise resource planning (ERP) systems. Each of these programmes uniquely collects data. Then there's third-party data, or data obtained from sources other than your organization, which is used for validation, updating, and enrichment. The problem is to centralize everything and make it accessible to those who require it. Solution: Adopt a customer data platform that consolidates and organizes data from disparate sources into a single source of truth. Problem – Data Cleaning Customer data also presents another challenge: ensuring its accuracy. Data unification entails more than simply consolidating data on a single platform. Data quickly becomes stale and out of date. Asking employees to fact-check all client data stored in the systems is time-consuming, costly, and creates the possibility of error. Solution: One can avoid this issue with a customer data platform, as it automatically updates (and adds) information to increase accuracy. 
Additionally, it discovers duplicates during data consolidation. Eliminating Data Silos One of the benefits of unified data is that it promotes cross-departmental collaboration. On the other side, silos can make it more difficult for a business to accomplish goals involving several divisions’ collaboration. For instance, it is critical for the sales, customer service, and marketing departments to collaborate to increase client retention rates. When there is a connection across departments, revenue targets will continue to be elusive. Solution: Reconnect the departments by implementing a customer data platform that eliminates data silos and liberates your data for use by all departments. Conclusion Each year, the volume of data in most organizations doubles, compounding data concerns and increasing the difficulty of managing databases. In addition, organizations are increasingly having difficulty unifying data across systems such as e-commerce platforms, customer relationship management, warehouse management, finance, product management, resource planning, and electronic point of sale (EPOS), to mention a few. To derive true value from data analytics, organizations should consider aggregating data from diverse systems into a cohesive picture that spans the entire organization.
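As a rough illustration of the unification steps listed above (null-code cleaning, currency conversion, schema integration and deduplication), here is a minimal pandas sketch. The column names, the -99 null code, the euro-to-dollar rate and the alias map are assumptions drawn from the examples in the text; a real pipeline would use proper entity matching rather than a hand-written dictionary.

import numpy as np
import pandas as pd

# Two illustrative source systems with differing schemas
crm = pd.DataFrame({"name": ["Mike Stonebraker", "Jane Doe"], "salary": [-99, 90000]})
erp = pd.DataFrame({"name": ["M.R. Stonebraker"], "wages_eur": [80000]})

# Data cleansing: -99 is a code meaning "null"
crm["salary"] = crm["salary"].replace(-99, np.nan)

# Conversion (assumed euro-to-dollar rate) and schema integration ("wages" -> "salary")
erp["salary"] = erp["wages_eur"] * 1.08
erp = erp.drop(columns=["wages_eur"])

# Deduplication (entity consolidation); an alias map stands in for fuzzy matching here
aliases = {"M.R. Stonebraker": "Mike Stonebraker"}
unified = pd.concat([crm, erp], ignore_index=True)
unified["name"] = unified["name"].replace(aliases)
unified = unified.groupby("name", as_index=False)["salary"].max()
print(unified)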
Data unification is a highly complex process that is one of the most significant difficulties facing many large businesses today.
["AI Features"]
["Data Analytics"]
Dr. Nivash Jeevanandam
2021-10-29T16:00:00
2021
676
["Go", "programming_languages:R", "AI", "data-driven", "programming_languages:Go", "Scala", "GAN", "analytics", "programming_languages:Scala", "Data Analytics", "R"]
["AI", "analytics", "R", "Go", "Scala", "GAN", "data-driven", "programming_languages:R", "programming_languages:Scala", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/challenges-of-data-unification/
2
10
3
false
true
false
10,067,466
Why the high accuracy in classification is not always correct?
Classification accuracy is a statistic that describes a classification model's performance by dividing the number of correct predictions by the total number of predictions. It is simple to compute and comprehend, making it the most often used statistic for assessing classifier models. However, accuracy is not the best metric to evaluate a model in every scenario. In this article, we will discuss the reasons not to rely on the accuracy metric alone. Following are the topics to be covered. Table of contents About Classification About classification accuracy Scenarios where accuracy fails Alternatives to accuracy Example of accuracy failure Let's start with understanding the failure of the accuracy parameter to capture the actual performance of a machine learning algorithm. About Classification The Classification method is a Supervised Learning approach that uses training data to identify the category of fresh observations. The algorithm in Classification learns the pattern from a given dataset or observations and then classifies additional observations into one of many classes. Classes can also be referred to as targets/labels or categories. In contrast to regression, the outcome variable of Classification is a category rather than a continuous value, such as "Yes or no", "0 or 1", and so on. Because the Classification method is a Supervised Learning approach, it requires labelled input data, which implies it comprises both input and output. About classification accuracy Predicting a class label for instances in a problem domain is what classification predictive modelling is all about. Classification accuracy is the most frequent parameter used to assess the effectiveness of a classification prediction model. Because the accuracy of a predictive model is often high (over 90%), it is also usual to characterise the model's performance in terms of its error rate. Classification accuracy is obtained by first employing a classification model to predict each sample in a test dataset. The predictions are then compared to the known labels of the test set examples. Accuracy is then determined as the number of correctly predicted examples divided by the total number of predictions made on the test set. Accuracy = Correct predictions / Total predictions In contrast, the error rate may be computed by dividing the total number of incorrect predictions made on the test set by the total number of predictions made on the test set. Error rate = Incorrect predictions / Total predictions Since accuracy and error rate are complementary, each can always be computed from the other. Accuracy = 1 - Error rate Error rate = 1 - Accuracy Scenarios where accuracy fails When the distribution of instances across classes is skewed, accuracy fails to capture the actual performance of the algorithm. Consider a binary unbalanced dataset with a class imbalance of 1:100, which means that for every case of the minority class, there will be 100 examples of the majority class. In this sort of challenge, the majority class symbolises "normal" while the minority class represents "abnormal", such as a flaw or a fraud. Good performance on the minority class usually matters more than good overall performance across both classes; a small sketch after this paragraph illustrates the problem. 
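To make the failure mode concrete, the following is a minimal sketch (not from the original article) that builds an artificial 1:100 imbalanced label vector and scores a "model" that always predicts the majority class; the counts and variable names are illustrative assumptions.

import numpy as np
from sklearn.metrics import accuracy_score

# Illustrative 1:100 imbalance: 100 minority ("abnormal") cases among 10,000 majority ("normal") cases
y_true = np.array([0] * 10000 + [1] * 100)

# A degenerate classifier that always predicts the majority class
y_pred = np.zeros_like(y_true)

accuracy = accuracy_score(y_true, y_pred)
print(f"accuracy:   {accuracy:.3f}")      # about 0.99, despite never detecting a single minority case
print(f"error rate: {1 - accuracy:.3f}")  # the complementary error rate described above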
A model that predicts the majority class for all cases in the test set will have a classification accuracy of about 0.99, reflecting the distribution of majority and minority examples in the test set. Many machine learning models are built on the premise of a balanced class distribution, and they frequently learn simple rules, explicit or implicit, that amount to always predicting the majority class, resulting in an accuracy of 0.99 while in practice doing no better than an untrained majority-class classifier. The accuracy reports a correct result; the point of failure is the practitioner's perception of what a high accuracy rating means. Instead of relying on such intuitions, different measures are commonly used to characterise model performance for unbalanced classification problems. Alternatives to accuracy The objective of evaluating classification models is to determine how closely the classification recommended by the model corresponds to the actual categorisation of the case. There are several measures for evaluating the model's performance, depending on what is being observed. Confusion Matrix A confusion matrix is not a measure for evaluating a model, but it does give information about the predictions. It is necessary to understand the confusion matrix in order to understand other classification metrics such as precision and recall. The confusion matrix goes beyond classification accuracy by displaying the correct and incorrect (i.e. true or false) predictions for each class. A confusion matrix is a 2×2 matrix in the case of a binary classification problem. If there are three separate classes, the matrix is 3×3, and so on. True Negative (TN) is the number of negative examples correctly predicted as negative. False Positive (FP) is the number of negative examples incorrectly predicted as positive. False Negative (FN) is the number of positive examples incorrectly predicted as negative. True Positive (TP) is the number of positive examples correctly predicted as positive. Precision and Recall Precision is the proportion of predicted positives that are genuinely positive. It assesses "how useful the classifier's results are", that is, how accurate our model is when the prediction is positive. Precision = True Positive / (True Positive + False Positive) In other words, a precision of 90% means that when our classifier flags a customer as fraud, it is truly fraud 90% of the time. Positive predictions are the emphasis of precision; it shows how many of the positive predictions have come true. Another way to look at the true positives is through the lens of recall. Recall is the proportion of actual positive examples that are marked as such. It assesses "how complete the outcomes are", or what percentage of real positives are predicted to be positive. Recall = True Positive / (True Positive + False Negative) Actual positive cases are the objective of recall; it reflects how many of the positive examples the model correctly identifies. ROC Curve and Area Under the Curve (AUC) The ROC graph is another method for evaluating the classifier's performance. The ROC graph is a two-dimensional plot that shows the false positive rate on the X axis and the true positive rate on the Y axis. In many situations, the classifier includes a parameter that may be adjusted to increase the true positive rate at the cost of a higher false positive rate, or to decrease the false positive rate at the cost of a lower true positive rate. 
Each parameter setting yields a pair of values, a false positive rate and a true positive rate, and a set of such pairs can be used to trace out the ROC curve. The following are the characteristics of a ROC graph. The ROC curve or point is independent of the class distribution or the cost of mistakes. The ROC graph comprises all of the information included in the confusion matrix. The ROC curve is a visual tool for measuring the classifier's ability to correctly separate positive examples from negative ones. In many situations, the area under the ROC curve may be employed as a single summary measure of performance, known as the area under the curve (AUC). Example of accuracy failure Let's have a look at a case where the accuracy parameter fails to capture the actual performance on a dataset.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings('ignore')

Reading the data and preprocessing:

# Load the stroke dataset and drop rows with missing values
df = pd.read_csv('https://raw.githubusercontent.com/analyticsindiamagazine/MocksDatasets/main/healthcare-dataset-stroke-data.csv')
df_utils = df.dropna(axis=0)
df_utils[:8]

# One-hot encode the categorical columns for the learner
df_new = pd.get_dummies(df_utils, drop_first=True)
df_new.shape

Let's analyse the target column and check the balance of the data.

sns.countplot(df_utils['stroke'])
plt.show()

The count plot clearly shows that the data is highly imbalanced. Let's fit a logistic regression learner and check its performance on this data.

X = df_new.drop('stroke', axis=1)
y = df_new['stroke']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

lr = LogisticRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)

accuracy = np.round(accuracy_score(y_test, y_pred), 2)
precision = np.round(precision_score(y_test, y_pred), 2)
recall = np.round(recall_score(y_test, y_pred), 2)

Going by accuracy alone, the model appears to have performed very well: the accuracy score is 0.95. The precision, however, is only 0.50 and the recall just 0.01. Let's compare the observed test labels with the predictions with the help of a plot.

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
sns.countplot(y_test, ax=axes[0])
axes[0].set_title('observed')
sns.countplot(y_pred, ax=axes[1])
axes[1].set_title('predicted')
plt.show()

The plot tells a different story: the learner predicted a negligible number of '1's. This is because the data was imbalanced. Let's use a resampling technique to mitigate this problem. In this article, we are going to use the oversampling method known as the Synthetic Minority Oversampling Technique (SMOTE). Naive random oversampling simply duplicates instances of the minority class, even though these duplicates provide no new information to the model; SMOTE instead creates new instances by interpolating between existing ones. 
from imblearn.over_sampling import SMOTE

smt = SMOTE(k_neighbors=5, random_state=42)
X_train_res, y_train_res = smt.fit_resample(X_train, y_train)

print('After SMOTE, the shape of train_X: {}'.format(X_train_res.shape))
print('After SMOTE, the shape of train_y: {} \n'.format(y_train_res.shape))
print("After SMOTE, counts of label '1': {}".format(sum(y_train_res == 1)))
print("After SMOTE, counts of label '0': {}".format(sum(y_train_res == 0)))

Here it can be observed that the number of data points is now balanced across both categories (0 and 1). Let's plot a comparison of the class counts before and after resampling. From the plot, we can observe that the resampling technique worked and the minority class was upsampled. Note that we resample only the training part, because it is what the model learns from; the test part should always remain unchanged. Let's refit the logistic regression learner on the resampled data and score it on the untouched test set.

lr_res = LogisticRegression()
lr_res.fit(X_train_res, y_train_res)
y_pred_res = lr_res.predict(X_test)

accuracy_res = np.round(accuracy_score(y_test, y_pred_res), 2)
precision_res = np.round(precision_score(y_test, y_pred_res), 2)
recall_res = np.round(recall_score(y_test, y_pred_res), 2)

Now that all the scores are stored, create a dataframe holding the before and after results of the resampling so that we can see the variations more clearly.

df_scores = pd.DataFrame([[accuracy, precision, recall],
                          [accuracy_res, precision_res, recall_res]],
                         index=['before', 'after'],
                         columns=['accuracy', 'precision', 'recall'])
df_scores

The accuracy score of the model has fallen from 0.95 to 0.81, while the recall score has increased. Conclusions Accuracy and error rate are the standard measures for characterising classification model performance. Because practitioners build their intuitions on datasets with an equal class distribution, classification accuracy misleads on classification tasks with a skewed class distribution. With this hands-on article, we have seen that high accuracy in classification problems is not always better.
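The article discusses ROC curves and AUC, but the worked example reports only accuracy, precision and recall. As a small optional extension, assuming the fitted models lr and lr_res and the test split from the example above are still in scope, the area under the ROC curve can be added to the before/after comparison like this.

from sklearn.metrics import roc_auc_score

# Use the predicted probability of the positive class ('stroke' = 1) for both models
auc_before = roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])
auc_after = roc_auc_score(y_test, lr_res.predict_proba(X_test)[:, 1])

print(f"ROC AUC before resampling: {auc_before:.2f}")
print(f"ROC AUC after resampling:  {auc_after:.2f}")

Because AUC looks at the ranking of predicted probabilities rather than a single 0.5 threshold, it is far less flattered by class imbalance than raw accuracy.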
The high accuracy of classification model could be misleading.
["AI Trends"]
["Classification", "confusion matrix", "Imbalance data", "Precision and Recall"]
Sourabh Mehta
2022-05-21T10:00:00
2022
1,722
["data science", "Classification", "NumPy", "machine learning", "Precision and Recall", "TPU", "AI", "RAG", "confusion matrix", "Seaborn", "analytics", "Imbalance data", "Matplotlib", "Pandas"]
["AI", "machine learning", "data science", "analytics", "Pandas", "NumPy", "Matplotlib", "Seaborn", "RAG", "TPU"]
https://analyticsindiamag.com/ai-trends/why-the-high-accuracy-in-classification-is-not-always-correct/
4
10
0
true
true
false
41,363
Altair Hosts Top Industry Leaders To Discuss Role Of AI In Product Life Cycle
Introduction, growth, maturity and decline — every professional who has read any management book can tell you that these are the four pillars of a product life cycle. But as emerging technologies like artificial intelligence, machine learning, data analytics and internet of things are becoming an integral part of organisations, the very product life cycle is on the cusp of evolution. To discuss this ground-breaking phenomenon, Altair, a global tech company that provides software and cloud solutions in the areas of product development, high-performance computing and data intelligence, recently held their annual Though Leadership programme with the theme: Artificial Intelligence and the Future of Product Life Cycle. The event revolved around the discussion about how and why engineering applications are successfully employing the use of ML to give products a competitive edge. Industry leaders from noted organisations such as Bosch, Mercedes-Benz, GE and Aptiv, among others, discussed how AI and ML are changing the way products are being designed and how organisations are now looking to leverage the transformative power of these emerging technologies. Altair who has worked with all the above-mentioned companies is a noted provider of enterprise-class engineering software enabling innovation, reduced development times, and lower costs through the entire product lifecycle from concept design to in-service operation. Their simulation-driven approach to innovation is powered by their integrated suite of software which optimises design performance across multiple disciplines encompassing structures, motion, fluids, thermal management, electromagnetics, system modelling and embedded systems, while also providing data analytics and true-to-life visualisation and rendering. Manu Saale, managing director and CEO at Mercedes-Benz Research & Development India Pvt. Ltd, said, “James was just asking me that if I had ever dreamt two years ago that I’d be at this conference talking about AI, I’d have said no… This digital tsunami, which is driven by AI, is now revolutionising the automotive industry, after changing the faces of other sectors like e-commerce, IT and medicine, among others.” Saale added that for Daimler, AI had redefined the business model in many ways and that they fully expected to see their future product portfolio significantly changed by AI. Sharing his organisation’s experience in the emerging tech world, Alok Nanda, CEO at GE India Technology Centre and CTO at GE South Asia, said, “Our motivation is two-fold — increase in productivity and the potential for growth. The products that will now come out will be powered by digital insight and they’d be much more profitable and would create new and innovative revenue opportunities.” Nanda added that GE firmly believed that over the next 5-10 years (if not earlier), there won’t be any product which will not be touched by AI in some way or the other. Altair chairman and CEO James R Scapa The Indian arm of $396 million global software firm Altair, is one of the leaders in its space and also ranks in the top 3 markets in Asia. James R Scapa, the chairman and CEO at Altair talked about smart business and product design. 
He said that the four overarching key points that Altair was inculcating were: Global evolution towards smart, connected everything Drive for an increased variety of products with higher quality and better aesthetics Massive exploration of ideas driving the need for advanced HPC and cloud The convergence of data intelligence and simulation Scapa added, “We have customers spanning across sectors like automotive, aerospace, rail, shipping to industrial machinery industry and electronics and consumer goods, creating an unbelievably wide variety of products and developing technologies. Our goal is to help them design better and ease their decision-making process. We at Altair have always been all about applying simulations and optimisation because that was a big differentiator for us. But over the years we have realised that artificial intelligence and machine learning are going to be a crucial component of that journey — not just in design, but throughout the product life cycle.”
Introduction, growth, maturity and decline — every professional who has read any management book can tell you that these are the four pillars of a product life cycle. But as emerging technologies like artificial intelligence, machine learning, data analytics and internet of things are becoming an integral part of organisations, the very product life cycle […]
["Deep Tech"]
["Altair", "HPC Data Management Software", "hpc data management system", "leader of ai", "managing hpc data"]
Prajakta Hebbar
2019-06-26T09:22:15
2019
649
["Go", "hpc data management system", "artificial intelligence", "machine learning", "AI", "ML", "Git", "HPC Data Management Software", "Altair", "RAG", "Aim", "analytics", "managing hpc data", "R", "leader of ai"]
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "Aim", "RAG", "R", "Go", "Git"]
https://analyticsindiamag.com/deep-tech/altair-hosts-top-industry-leaders-to-discuss-role-of-ai-in-product-life-cycle/
2
10
4
true
false
false
10,061,478
Airtel acquires strategic stake in Singapore-based blockchain startup Aqilliz
Bharti Airtel has acquired a strategic stake in blockchain technology startup Aqilliz as part of its Startup Accelerator Program. The Indian network carrier plans to deploy Aqilliz's advanced blockchain in its adtech, digital entertainment and digital marketplace businesses. "Blockchain technology is maturing and we see its application across areas such as Adtech, Creator Economy, and Loyalty Programs. We are thrilled to have Aqilliz join our Startup Accelerator Program and be part of Airtel's digital innovation factory," said Adarsh Nair, CEO of Airtel Digital. Singapore-based Aqilliz has developed a patented hybrid blockchain platform, Atom, to integrate differential privacy and federated learning on a distributed digital ledger. "We are extremely excited to be a part of Airtel's digital innovation play and bring this first of its kind blockchain technology to India. Aqilliz's patented technology will enable Airtel to capture and carry this value exchange in the form of consent and provenance across the digital supply chain. We look forward to working closely with the team at Airtel," said Gowthaman Ragothaman, founder and CEO of Aqilliz. Last December, Airtel launched its 'Airtel India Startup Innovation Challenge' initiative along with Invest India to encourage young startups to build solutions around 5G, IoT, cloud communications, digital advertising and digital entertainment.
Aquilliz has developed a patented hybrid blockchain platform, Atom.
["AI News"]
["AI Startups", "Mergers and Acquisitions"]
Poulomi Chatterjee
2022-02-24T15:52:47
2022
204
["federated learning", "Go", "programming_languages:R", "AI", "innovation", "Git", "RAG", "differential privacy", "Mergers and Acquisitions", "R", "AI Startups", "startup"]
["AI", "federated learning", "differential privacy", "RAG", "R", "Go", "Git", "innovation", "startup", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/airtel-acquires-strategic-stake-in-singapore-based-blockchain-startup-aqilliz/
2
10
3
false
false
false
10,166,730
Perplexity Crosses $100 Million in Annualised Revenue
Perplexity, the AI search engine startup has crossed $100 million in annualised revenue. CEO Aravind Srinivas took to a post on LinkedIn to announce the same. He revealed that the company reached the figures 20 months after the launch of Perplexity Pro. He also added that Perplexity has achieved 6.3x year-over-year growth despite being ‘highly under monetised’. Perplexity offers a free plan, along with a $20 monthly Pro plan with expanded access to features. Besides, it also offers an enterprise plan that costs $40 per month per seat for companies under 250 employees. Perplexity also allows developers to integrate its Sonar AI models with APIs. The company is also reportedly in talks to raise funds between $500 million and $1 billion, valuing the company at $18 billion. This doubles Perplexity’s valuation of $9 billion in the previous funding round last December, after raising $500 million. Perplexity has been backed by NVIDIA, SoftBank Group, and Amazon founder Jeff Bezos. The company also made its in-house AI model Sonar available to all Pro users on the platform, along with all the other popular models. Now, users with the Perplexity Pro plan can make Sonar the default model via settings. Sonar is built on top of Meta’s open-source Llama 3.3 70B. It is powered by Cerebras Inference, which claims to be the world’s fastest AI inference engine. The model is capable of producing 1200 tokens per second. Earlier, the model was available via API for developers. Over the last few months, startups building AI enabled products have showcased remarkable growth with small member teams in quick time. Recently, Cursor, an AI enabled coding platform reached $100 million in ARR (annual recurring revenue) in just 21 months since its inception. Its growth from $1 million to $100 million in ARR took just around 12 months, making it the fastest growing SaaS of all time. Last year, MidJourney, an AI image generator, reached $200 million in ARR with just 11 employees. Similarly, Bolt.new, another AI coding platform, has crossed $30 million in ARR with just 20 employees, in just a little over 4 months and has registered 3 million users.
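The article notes that developers can integrate Perplexity's Sonar models via its API. Purely as a hedged sketch, the request below assumes an OpenAI-style chat-completions endpoint and a model identifier of "sonar"; the URL, model name and payload fields are assumptions for illustration and should be verified against Perplexity's official API documentation.

import os
import requests

# Hypothetical endpoint and model name; check the official docs before relying on them
url = "https://api.perplexity.ai/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"}
payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "Summarise this week's AI funding news."}],
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
print(response.json())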
CEO Aravind Srinivas says the company has achieved 6.3x YoY growth despite highly under monetising its product.
["AI News"]
["AI (Artificial Intelligence)", "Perplexity"]
Supreeth Koundinya
2025-03-26T21:34:04
2025
355
["API", "funding", "Perplexity", "Cerebras", "AI coding", "AI", "programming_languages:R", "Aim", "llm_models:Llama", "R", "AI (Artificial Intelligence)", "startup"]
["AI", "Aim", "R", "API", "startup", "funding", "Cerebras", "AI coding", "llm_models:Llama", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/perplexity-crosses-100-million-in-annualised-revenue/
3
10
2
false
false
false
10,000,710
India Steps Up Research In Particle Acceleration, Check Out The Most Powerful Indian Institutes
The last time particle accelerators in India made news was after the discovery of Higgs Boson, better known as God’s Particle in 2012 that prompted a spate of development in particle physics. Reportedly, in 2001, Bhabha Atomic Research Centre took the lead in commissioning a Large Hadron Collider. As India steps up to play a bigger role in particle acceleration, we list down the best and active particle accelerators in India that are making incredible contributions in the field of physics, and even in different sectors. 1.India-based Neutrino Observatory (INO) INO, based in Theni district in Tamil Nadu, is a project that is aimed at building a world-class underground laboratory, and involves leading institutes such as TIFR, BARC, IMSc, SINP, VECC, HRI and IOP as contributors. It is one of the biggest particle physics experiments undertaken in India. The project is expected to be completed by 2019, and is going to create a full-fledged underground science laboratory for studies in Physics, Biology, Geology and Hydrology. Once completed, its main magnetised iron calorimeter (ICAL) experiment will include the world’s largest magnet which will be four times larger than the 12,500-tonne magnet in the Compact Muon Solenoid detector at CERN. INO Site. Image credits: INO. Goal: The main goal of INO is to study neutrinos. The study of neutrinos has gained a lot of momentum. The controversy of neutrino being massless or having certain mass, and certain mixing parameters are some of they key interests in studying neutrinos. ICAL is designed to address some of these open problems in a unique way. 2.Variable Energy Cyclotron Centre (VECC) VECC is a unit of Department of Atomic Energy, Government of India. It is also one of the partner institutions of Homi Bhabha National Institute, and has active collaborations with CERN, BNL, FAIR, TRIUMF, RIKEN, GANIL and DUBNA.Their project has three components, Superconducting Electron Linac, high power actinide target and superconducting heavy ion linear accelerators, built in collaboration with TRIUMF laboratory, Canada. It has two clotrons namely K130 cyclotron and K500 superconducting cyclotron and the research activities revolve around them. Image credits: VECC. Goal: VECC is involved with beam delivery programmes of K130 and K500 cyclotron. It also aims to contribute to running beam delivery programmes for experimentalists in nuclear physics, nuclear chemistry, atomic physics and condensed matter physics. 3.Indus-2 Indus-2 is an initiative of Raja Ramanna Centre for Advanced Technology (RRCAT), Indore. Indus-2 is a synchrotron radiation source which is a booster cum storage ring and an improvement of Indus-1, which is another synchrotron radiation source by RRCAT. The lattice is designed to give low beam emittance and high brightness. It is one of the most important projects under process at RRCAT. The radiation source of Indus-2 is at an advanced stage of construction. Goal: The synchrotron accelerator will accelerate electrons to generate X rays. It will provide radiation from bending magnets. 4.Pelletron Accelerator (IUAC) Image source: IUAC. Pelletron is an electrostatic accelerator which was installed in 1990, by the Inter University Accelerator Centre (IUAC). This accelerator is a tandem Van de graaf type of an accelerator which has unique features like compressed geometry, accelerator tubes for higher terminal voltage, offset and matching quadruples for charge state. 
Goal: Pelletron's main objective is to contribute to heavy ion accelerator research in India. 5. Pelletron Accelerator (TIFR-BARC) APJ Abdul Kalam's visit to LINAC in 2010. Image source: TIFR. Another Pelletron accelerator, built through a joint collaboration of TIFR and BARC, has been serving as a major facility for heavy ion accelerator based research in India since its commissioning in 1988. The experimental community consists of scientists and students from research centres and universities within and outside the country. The accelerator delivers beams ranging from proton to iodine. The development of the superconducting LINAC is a major milestone in accelerator technology in our country. Most of the critical components of the LINAC booster, the first superconducting heavy ion accelerator in India, have been designed, developed and fabricated indigenously. More than 130 Ph.D. theses and over 700 publications in refereed international journals, including 19 publications in Physical Review Letters, have resulted from the research activities in this laboratory. Goal: The project aims at heavy ion accelerator based research in India and has been delivering beams since the start of its operation. 6. Electron Accelerators by BARC BARC has set up an Electron Beam Centre (EBC) in Navi Mumbai that will house two electron accelerators. At present, the accelerator components are being assembled and tested. The building is functional along with all its utilities and the labs. On the accelerator front, sub-systems of electron guns, gun modulator, prototype Linac cavity, vacuum pumps, control consoles, power supplies and microwave power source are being assembled and tested in their respective labs. Accelerators with such high energies of 3 MeV and 10 MeV are being designed and built for the first time in the country. Goal: Electron accelerators have many applications, some of which include radiation processing of materials, improving quality of products, sterilization of disposable medical products, and food preservation and storage. With this project, BARC is planning to help in these applications. In Conclusion India has shown considerable progress in the area of research, and these accelerators, along with many more to come, show that we are on a positive slope of the progress-versus-time graph. We have also participated in the world-famous Large Hadron Collider (LHC) and are participating with the Fermi National Accelerator Laboratory in the Proton Improvement Plan (PIP-II). To date, we still lack big accelerators like the LHC, but we are making progress with smaller accelerators. The coming years will tell how many more discoveries these accelerators can bring us.
The last time particle accelerators in India made news was after the discovery of Higgs Boson, better known as God’s Particle in 2012 that prompted a spate of development in particle physics. Reportedly, in 2001, Bhabha Atomic Research Centre took the lead in commissioning a Large Hadron Collider. As India steps up to play a […]
["AI Features"]
["Advanced Analytics"]
Disha Misal
2018-12-28T17:02:33
2018
944
["Go", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Ray", "Aim", "ViT", "GAN", "R", "Advanced Analytics"]
["AI", "Aim", "Ray", "RAG", "R", "Go", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/india-steps-up-research-in-particle-acceleration-check-out-the-most-powerful-indian-institutes/
3
10
0
true
false
true
10,096,369
Google is Counting Years Instead of Qubits
We are yet to reach a conclusion on whether quantum computing is ever going to be useful or whether it is just a failed cause. While this plays out, big-tech companies are pushing towards achieving quantum supremacy, without knowing whether it would actually be beneficial for speeding up systems beyond classical computers. Along these lines, Google has made a huge claim. The company recently announced that it has created a quantum computer that is almost 47 years ahead of its competitors, as it performs within seconds calculations that top-performing supercomputers would take years to complete. Google is ready to say that it is done with classical computing and is ready to move to quantum computing, as the new processor has 70 operational qubits. Sebastian Weidt from Universal Quantum said, "This is a very nice demonstration of quantum advantage. While a great achievement academically, the algorithm used does not really have real world practical applications though." It highlights how we still need to get to the utility era of quantum computing, where thousands of qubits actually deliver value to society. History of tall claims The past hasn't been great for Google and quantum computing; there is a history of tall claims. In a declaration made four years ago, Google asserted that it had achieved "quantum supremacy", a pivotal milestone indicating that quantum computers had surpassed their conventional counterparts. At that time, the company claimed that its Sycamore processor performed abstruse calculations in 200 seconds that others would take 10,000 years to complete. The company had also claimed supremacy back from IBM by saying that it had a working 72-qubit processor in 2018, when IBM had a 50-qubit processor. During that period, Google faced opposition from competitors who contended that the disparity between its machine and traditional supercomputers was being exaggerated. IBM called out Google's claim, saying that the assertion of doing something classical computers cannot do remained unproven. Now, Sergio Boixo, the principal scientist of Google Quantum AI, has acknowledged in a written communication that the Google team was aware that their advantage might not remain sustainable for an extended period. "In our 2019 publication, we stated that classical algorithms would make advancements," Boixo explained. However, he emphasised that they do not believe this classical methodology can match the progress of quantum circuits in 2022 and the years ahead. Realising the quantum dream Google is supposedly making this claim come true now. In April, Google published another research paper titled "Phase Transition in Random Circuit Sampling", presenting a more advanced quantum device intended to settle the ongoing debate. Compared to the 2019 Sycamore, which consisted of 53 qubits, the fundamental units of quantum computers, the latest-generation device incorporates 70 qubits. In the research paper co-authored by Google Quantum AI and its partners, the company's AI division announced positive outcomes from its experimental endeavours in Random Circuit Sampling (RCS). The authors highlight RCS as the most promising candidate for demonstrating capabilities beyond classical systems, as it maximises the propagation of quantum correlation at high speeds. To elaborate, RCS involves randomly selecting gates within a defined and efficient quantum circuit, which generates samples based on the resulting distribution of outputs. 
Employing this methodology enabled the Google team to discern significant phases in the conducted experiments, arising from the interplay between quantum dynamics phenomena and noise. Although the recent experiments conducted by Google Quantum AI mark a significant advancement in the field of quantum computing, the team acknowledges that further efforts are necessary. In their paper, the team states, "Despite the accomplishments attained thus far with RCS, the identification of practical applications for near-term noisy quantum processors continues to present a significant challenge." Catching up Google is not the only one in the quantum race though. IBM, the first name that pops into mind, is possibly miles ahead. The other tech giant, Microsoft, has been making big claims about quantum supremacy as well. Microsoft recently shared its roadmap for building a quantum supercomputer. At the virtual event, Azure Quantum: Accelerating Scientific Discovery, Satya Nadella said the company's goal is to make scientific discovery quicker. For this, the company has announced its roadmap for building one using topological qubits, which it has been working on for some years now. Krysta Svore, Microsoft's VP of advanced quantum development, told TechCrunch that even though there are a lot of milestones ahead, it will take less than a decade for the company to build its quantum supercomputer, which would be able to perform one million quantum operations per second. Moreover, last year Microsoft made a major breakthrough in quantum computing: researchers developed a new type of Majorana-based qubit, a fundamental component of quantum supercomputers. These qubits are more stable and easier to scale up, meaning fewer of them are needed. Clearly, the road ahead for quantum is still a long one. For big tech, it comes with competition. Everyone can keep making claims and combining qubits, but Google should perhaps start making use of its quantum technology, instead of comparing it with others, and only then claim supremacy.
Everyone can keep making claims and combining qubits, but Google should possibly start making use of its quantum technology, instead of comparing it with others.
["AI Trends"]
["Google", "quantum computers", "qubits"]
Mohit Pandey
2023-07-05T15:18:57
2023
844
["Go", "TPU", "qubits", "cloud_platforms:Azure", "AI", "programming_languages:R", "R", "ML", "RPA", "quantum computers", "Aim", "Google", "Azure"]
["AI", "ML", "Aim", "Azure", "TPU", "R", "Go", "RPA", "cloud_platforms:Azure", "programming_languages:R"]
https://analyticsindiamag.com/ai-trends/google-is-counting-years-instead-of-qubits/
4
10
0
false
false
true
33,607
15 Best Machine Learning Tools For ML Enthusiasts in 2024
Machine learning has grown to be one of the hottest job markets in India with tech giants and startups poring billions to this emerging field. Given the slew of opportunities that it has opened up, both fresh IT graduate and experienced enthusiast are reaching out to learn more about coding and various programming languages to set a better foot in the ML field. In the midst of this buzz, there are numerous non-programmers who don’t exactly know how to code and yet want to delve into machine learning and stay abreast of this field. In this article, we list 15 such machine learning tools for those who have a rough hand in programming. (The list is in alphabetical order) 1. Amazon Lex This service can be used for building conversational interfaces such as chatbots into any application using voice and text. You can easily build, test and deploy your chatbots directly from the service. How It Works This service provides advanced deep learning functionalities of automatic speech recognition for the conversion of speech to text, and NLP to recognise the intent of the text, enabling you to build highly engaging user experiences and lifelike conversational experiences. 2. Auto-WEKA This is a data-mining tool that performs combined algorithm selection and hyper-parameter optimisation over the classification and regression algorithms that are being implemented in WEKA. How It Works When a dataset is given, this tool explores the hyperparameter settings for several algorithms and recommends the most preferred one to the user that gives good generalisation performance. 3. BigML BigML is a comprehensive machine learning platform that provides a selection of machine learning algorithms to solve the real world problems by applying a single, standardised framework. How It Works This platform covers classification, regression, time series forecasting, cluster analysis, anomaly detection, topic modelling, and association discovery to facilitate unlimited predictive applications for various fields like agriculture, aerospace, healthcare, food, etc. 4. Data Robot This is an automated machine learning platform by Kagglers to build and deploy accurate machine learning models for all levels of enthusiasts within a fraction of time. How It Works It enables the users to build and deploy highly accurate machine learning models by automatically detecting the best data pre-processing. It can employ encoding, scaling, text mining, etc. When the dataset is very large, it uses distributed algorithms to scale up the dataset. 5. Driverless AI Driverless Ai is an artificial intelligence platform for automatic machine learning. The aim is to achieve the highest predictive accuracy in a shorter amount of time by end-to-end automation. It runs on commodity hardware and is designed to make use of GPUs, multi-GPU workstations, etc. How It Works This platform automates difficult machine learning workflows like feature engineering, model validation, tuning, selection as well as deployment. The model pipelines like feature engineering and models are exported as Python modules and Java standalone scoring artifacts. 6. Datawrapper This is an open source platform that helps you generate visualisations like interactive graphs, maps, charts from your data within a short time. No design skills or code is required for it. How It Works The functionality in Datawrapper is provided by plugins. It works in three simple steps. 
Firstly, copy your data and paste it to the live-updating charts, then visualise it by customising and choosing the types of the charts and maps and finally, publish the ready-made chart as an image or pdf. 7. Fusioo This is a database application where you can build tools you need. It gives you the freedom to create your own app to track, manage and share information without writing a single code. How It Works The steps are really easy. First, you create an app and name it according to your projector whatever you wish. Then, the next step is to create the Field that you need to track and finally a dashboard will be created for your apps. You can customise it by charts, lists, etc. 8. Google Cloud AutoML Google Cloud AutoML is a suite of machine learning products that train high-quality custom machine learning models with minimum effort by leveraging Google’s state-of-the-art transfer learning and Neural Architecture Search technology. How It Works It provides simple GUI for the users to train, evaluate, improve and deploy models based on your data. The data can be stored in the cloud storage. To generate a prediction on your trained model, just use the existing Vision API by adding a custom model. 9. IBM Watson Studio This platform provides you tools for a hassle-free work with your own data to build and train models at scale with a faster optimisation. It helps to accelerate the machine learning workflows that are required to infuse artificial intelligence into your business or projects. How It Works The working process is easy and simple, You just have to go with the flow. First, choose a project type from the options provided, then define your project and store it into the cloud. Then you can customise by choosing several options like connect to a GitHub repository, link to a service, etc. and use it according to your project. 10. Microsoft Azure Machine Learning Studio This is a browser-based machine platform that has a visual drag-and-drop environment that requires zero coding knowledge. It can be used by anyone regardless of the level of their skills. How It Works Firstly, you need to import your dataset from an excel sheet, etc. The data cleaning and other necessary pre-processing steps are performed. The data is split into training and testing sets and the built-in algorithms are applied to train the model and finally, your model will be scored, and you will get the predictions. 11. ML Jar It is a human-first platform for machine learning that provides a service for prototyping, development and deploying pattern recognition algorithms. How It Works It includes three simple steps to build an accurate machine learning model. First, you need to upload the data with a secure connection, then training and tuning are done on many machine learning algorithms and the best match will be selected according to your data. Finally, use the best models for predictions and share your results. 12. Paxata Paxata is an organisation that provides visual guidance, algorithmic intelligence, and smart suggestions, uses spark for enterprise data volumes, automatic governance, etc. How It Works The working process is simple here like you can use a wide range of sources to acquire data, performs data exploration using powerful visuals, performs data cleaning using normalisation of similar values using natural language processing, makes pivots on data, combining data frames by SmartFusion, etc. 13. Rapid Miner This is an open sourced tool that helps in prediction modelling. 
How It Works It creates predictive models by using automated machine learning and data science best practices in just four clicks. This tool automatically analyses data to identify common quality problems like missing values. Then the best model for your data will be optimised by using multiple machine learning algorithms. The feature engineering is automated that lets you choose a balanced model and the predictive model is created. 14. Tableau This has proved to be the most popular business intelligence and visualisation tool in the present scenario. You can create graphs, charts, maps, etc. within a short span of time. How It Works Various data sources can be connected in this tool and it has multiple options to represent data in different views, creating sets, applying filters, generating trend lines, forecasting, etc. You can deploy data drilling tools and explore various data that are available without any coding knowledge. 15. Trifacta Trifacta provides as free stand-alone software that offers an intuitive GUI for performing data cleaning. How It Works This software takes data as input and evaluates a summary with multiple statistics by column and for each column, it recommends some transformations automatically. The data preparation can be done by various options present in the software like discovering, structure, cleaning, enriching, etc.
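To make the combined algorithm selection and hyper-parameter optimisation idea behind tools like Auto-WEKA more concrete, here is a minimal, purely illustrative Python sketch using scikit-learn; the candidate models, parameter grids and dataset below are arbitrary stand-ins, not any tool's actual search space.

# Illustrative sketch of combined algorithm selection and hyper-parameter search,
# the idea that tools such as Auto-WEKA automate. Models and grids are examples only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each candidate pairs an estimator with its own hyper-parameter grid.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1, 10]}),
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # cross-validated grid search
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print("selected:", type(best_model).__name__,
      "| cv score: %.3f" % best_score,
      "| test score: %.3f" % best_model.score(X_test, y_test))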
Machine learning has grown to be one of the hottest job markets in India with tech giants and startups poring billions to this emerging field. Given the slew of opportunities that it has opened up, both fresh IT graduate and experienced enthusiast are reaching out to learn more about coding and various programming languages to […]
["AI Trends"]
["machine learning pattern recognition python"]
Ambika Choudhury
2019-01-16T12:46:16
2019
1,326
["data science", "machine learning pattern recognition python", "machine learning", "artificial intelligence", "AI", "chatbots", "ML", "RAG", "NLP", "Aim", "deep learning"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "NLP", "data science", "Aim", "RAG", "chatbots"]
https://analyticsindiamag.com/ai-trends/15-machine-learning-tools-for-ml-enthusiasts-to-hone-their-skills/
4
10
4
true
true
true
24,797
India Needs Formal Codification Of Data Privacy Rights, Says Subramanyam Sreenivasaiah Of AscentHR
Keeping with the theme of Data Privacy this May, Analytics India Magazine caught up with Subramanyam Sreenivasaiah, founder, president and CEO of AscentHR. The company provides customised HR solutions to its clients and businesses. The noted company offers a hybrid framework of customised technology solutions supported by an efficient and tightly integrated services layer. Sreenivasaiah is also a corporate lawyer and a fellow member of the Institute of Company Secretaries of India, with close to two decades of experience in financial, legal, tax, business and management matters. He has worked in these areas in various corporates as a passionate professional before venturing into entrepreneurship by setting up AscentHR in the year 2002. He has been instrumental in enhancing the company’s value chain, from payroll and benefits administration, to a full-service HR solutions company covering multiple geographies. Analytics India Magazine: Why do you think there’s a dire need for a data privacy law in India? Subramanyam Sreenivasaiah: Data privacy is very critical today given its omnipresent character across human lives. Privacy has been recognised traditionally as a ‘right’ across all countries, and with data around human life assuming significance, given its adoption and usage, data privacy is highly relevant — even more so in a vast population like India’s. Though the constitution does not patently grant privacy as a fundamental right, courts in India have considered privacy as a right while reading into freedom of expression or life and personal liberty under Article 19 and Article 21 respectively. The current judicial scrutiny of Aadhaar by the Supreme Court also appears to be veering towards granting a fundamental right to privacy, including data privacy. AIM: How can India provide adequate protection for electronically-transferred data? SS: India currently does not have a law that protects data privacy adequately. We have the Information Technology Act, which leans towards how to treat a violation of privacy by a person holding such data, with civil and criminal consequences. India needs a formal codification of data privacy rights and methods for data protection. Globally, the trend has been to create policy or procedure towards that, and India is no exception. AIM: What are the lessons to be learned from the Facebook data leak? SS: Misuse of data supplied by an individual is a serious concern, and that this happened under the nose of an organisation as large as Facebook, which professed free internet, is appalling. The biggest challenge in a globalised IT infrastructure would be identifying the liability of the person holding data, particularly when such data is held beyond the physical boundaries of the country in which such data element is created. In early commerce, high-sea sales used to be a favourite practice owing to the lack of regulations around the transaction. Should a country take the initiative of restricting data storage within its boundaries, it will weaken the system and hamper efficiencies of scale. Therefore, it is a delicate balance of how to impose restrictions while ensuring data protection and privacy. AIM: With a lot of data flooding in, what are some of the best practices that companies can take to ensure that no data is misused? SS: PII, or personally identifiable information, is what requires the most attention under data protection regulations, and new norms in managing such sensitive data have begun arising. 
For example, the General Data Protection Regulation (GDPR) is likely to be followed up with each country or cluster of countries adopting similar practices. Corporations who have access to such PII data are very conscious of the methods of storage and the processes for which such data is applied or used. Global standards in information security and data protection would drive this practice. In addition, countries must address the need to recognise data privacy as a right, with adequate penal consequences to prevent misuse. AIM: What are the steps taken by the company to ensure data protection of its clients/users? SS: As of now, the regulations are not stringent, but the practice is driven by standards on information security and data privacy. In certain geographies, the penal consequences of not adhering to newly announced data protection practices are extremely high. Under the GDPR in Europe, for example, any breach can attract a penalty of up to four percent of the corporation’s revenues.
Keeping with the theme of Data Privacy this May, Analytics India Magazine caught up with Subramanyam Sreenivasaiah, founder, president and CEO of AscentHR. The company provides customised HR solutions to their clients and businesses. The noted company is a hybrid framework of customised technology solutions provided supported by an efficient and tightly integrated services layer. […]
["AI Features"]
["data privacy India", "Interviews and Discussions"]
Srishti Deoras
2018-05-22T10:42:11
2018
706
["programming_languages:R", "AI", "data privacy India", "RAG", "BERT", "Aim", "llm_models:BERT", "analytics", "GAN", "R", "Interviews and Discussions"]
["AI", "analytics", "Aim", "RAG", "R", "BERT", "GAN", "llm_models:BERT", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/india-needs-formal-codification-of-data-privacy-rights-says-subramanyam-sreenivasaiah-ascenthr/
2
9
1
false
true
false
36,851
Infosys’ Techtonic Shift Towards $10 Million AI-Focussed Fund Has Its Roots In Early Adoption Of Emerging Tech
What is the one key takeaway from the recent news about Indian IT bellwether Infosys investing $10 million in an AI-focused fund targeting startups coming out of the University of California, Berkeley? It is the company’s mission to expand its AI product portfolio and deliver value to its customers and stakeholders. The Indian tech giant has entered into an agreement with the House Fund, a pre-seed and early-stage AI-focused venture capital firm that will invest in startups from Berkeley alumni, faculty and students. The news was made public in a company filing statement. The move is in line with Infosys’ aggressive push into AI and automation through acquisitions and partnerships, which began under Sikka’s leadership and has intensified over the last two years. In fact, up until last year, the company was actively scouting for AI startups that have “beta versions of their products up and running”. A 2018 report from HBL indicated Infosys’ strategic plan of pushing AI adoption through investment or partnership. According to the report, the tech major will run PoCs with start-ups to understand the feasibility of a solution and whether it should be put into production and taken to market or not. Another “techtonic” shift came in the form of a key AI appointment: former IPsoft director John Gikopoulos joined the Indian consulting major in August 2018 as the global head of its AI and automation business. The senior leadership appointment made news for being one of the few top hires from outside Infosys, which is largely known for tapping into its own leadership bench for key appointments. Gikopoulos, who has had previous stints at McKinsey, is tasked with helping clients craft an AI and automation strategy. 2018 also saw frenetic activity in the acquisition and merger space, with Infosys forming a joint venture with Singapore’s state fund Temasek to increase its footprint in South East Asia with solutions in the areas of AI, automation, advanced analytics and cloud. Under the agreement, the software major invested a whopping $8.7 million to buy a 60% stake in the JV entity, while Temasek will hold the rest. Vishal Sikka architected the real AI shift at Infosys Infosys’ pivot to AI began when Vishal Sikka took over the reins as CEO and made AI and automation the key pillar for bolstering its core capabilities and driving innovation forward for its customer base and stakeholders. This was clearly one of the ways to influence how the software giant moves forward with strategic plans. Under his leadership, Infosys ramped up its investment in AI and machine learning capabilities to drive strategic value and speed up customers’ digital transformation journeys. While many software giants took the integration route – baking AI capabilities into existing verticals – Sikka decided to build AI as an independent unit, in order to market it better to clients and stakeholders. Under his stewardship, the company expanded its AI-led frameworks: In 2016, Infosys launched Mana, a knowledge-based AI platform that, with the Aikido service offerings, exponentially lowers the cost of maintenance for both physical and digital assets and enables customers to gain insights from their data. The Mana platform is part of the Infosys Aikido framework that helps companies undertake non-disruptive transformation of their existing landscapes. Taking the success of Mana and AssistEdge – the robotic process automation (RPA) solution – forward, the company later launched its AI platform Nia in April 2017. 
Billed as a unified, flexible, and modular platform, Nia enables a wide set of industry and function-specific solutions and allows customers to build custom experiences to suit their business needs Also, in November 2017, Infosys, the second-largest software exporter from India aggressively expanded its AI testing service portfolio and other initiatives that included chatbots and blockchain In addition to this, the tech major also created several Machine Learning-specific use cases, that focused on test case optimization and defect prediction. It also explores how to test chatbots and validate the responses of a chatbot, a report from NelsonHall indicated Giving real value to clients with AI & ML One of the new initiatives to come out of the Indian bellwether is autonomous testing, an analyst report indicates. Pegged as a bold ambition, if successful, Infosys will be the first to come out with an autonomous testing approach and the initial test use cases are based on a web crawler. In this case, the web crawler is tasked with scanning website pages and picking up errors like broken links, HTML-related errors and 404 errors. The web crawler will create paths/transactions across one or several screens/webpages, and then create Selenium-based test scripts for these paths/transactions. The initial use cases will be around simple transactions such as user login or order-to-pay in an online store. Way Forward To build the next intelligent enterprise, consulting and tech giants Wipro, TCS, HCL are doubling down on AI initiatives to meet the needs of modernization and deliver more value to clients and stakeholders. Billed as historically slow to adapt to new technologies, Indian IT bellwethers are now locked in an AI race to position themselves with next-gen capabilities that clients and businesses seek and expand their solutions to maximise value. And this is where AI will play a pivotal role in driving digital re-engineering and delivering value and innovation.
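As a rough illustration of the autonomous testing idea described above, the sketch below crawls pages reachable from a start URL and flags broken links such as 404s. It is not Infosys’ implementation; the start URL, page limit and the choice of the requests and BeautifulSoup libraries are assumptions made only for this example.

# Illustrative link-checking crawl: scan pages reachable from a start URL
# and flag broken links (e.g. 404s). The start URL and limits are placeholders.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl_for_broken_links(start_url, max_pages=50):
    seen, queue, broken = set(), [start_url], []
    domain = urlparse(start_url).netloc
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            broken.append((url, "request failed"))
            continue
        if resp.status_code >= 400:              # e.g. 404 broken link
            broken.append((url, resp.status_code))
            continue
        # Only parse and follow links on pages within the same site.
        if urlparse(url).netloc == domain:
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                queue.append(urljoin(url, a["href"]))
    return broken

if __name__ == "__main__":
    for url, status in crawl_for_broken_links("https://example.com"):
        print(status, url)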
What is the one key takeaway from the recent news about Indian IT bellwether Infosys investing $10 million in an AI-focused fund targeting startups coming out of the University of California, Berkeley? It is their mission to expand its AI product portfolio and deliver value to its customers and stakeholders. The Indian tech giant has […]
["IT Services"]
["Infosys"]
Richa Bhatia
2019-03-26T05:37:57
2019
874
["API", "machine learning", "Infosys", "AI", "chatbots", "ML", "Git", "ViT", "analytics", "GAN", "R"]
["AI", "machine learning", "ML", "analytics", "chatbots", "R", "Git", "API", "GAN", "ViT"]
https://analyticsindiamag.com/it-services/infosys-techtonic-shift-towards-10-million-ai-focussed-fund-has-its-roots-in-early-adoption-of-emerging-tech/
4
10
6
true
true
false
61,518
Cocktails, Math & Machine Learning: The Fascinating Journey Of Kaggle Master Arthur Llau
For this week’s ML practitioner’s series, Analytics India Magazine got in touch with Arthur Llau. Arthur is a Kaggle master, who is currently ranked in the top 100 on the global leaderboard that hosts more than 1,30,000 participants. He is a mathematician from heart, who happened to run into machine learning. We bring to our readers Arthur Llau’s fascinating journey into the world of data science. Early Days A lifelong Parisian, Arthur Llau has a dual masters degree in Theoretical Mathematics (Probability) and in Statistics & Machine Learning from Sorbonne University campus of Université Pierre & Marie Curie. As part of his thesis, he worked on neural style transfer, a relatively new field. At college, Arthur worked as a barman while he scribbled math problems on the side. Though a mathematician from heart, Arthur’s tryst with machine learning only began when one of his friends introduced him to computer vision and the statistical aspect of machine learning. Mixing drinks, and tussling with mathematics and machine learning is how he spent most of his student days. Today, he competes with machine learning experts from across the globe on the grandest stage—Kaggle. Currently, Arthur is a Senior Data Scientist at Flowlity, an innovative startup that deals with optimization & synchronization of supply chain management. As a senior data scientist, Arthur works on-demand-sales forecasting, inventory level optimization, safety-stock recommendation, and also with graphs for supply chain synchronization. He also teaches Data Science applications to industrial problems at the Sorbonne Universités. As a mathematician, it was a hard job at first, then a passion. Even with a background in mathematics and statistics, Arthur still found the transition to machine learning quite challenging. The most challenging part, admits Arthur, was to understand how to apply theoretical methods to real-world problems. When we inquired Arthur on why we see a lot of Europeans in Kaggle and the machine learning field in general, he untangled this mystery by nonchalantly revisiting the history of Europe, especially his country France. He reminded us of great mathematicians taking the examples of Poisson and Gallois. Mathematics is a vital part of French culture and history. There has always been a big love story between maths and French people. The culture of inculcating mathematics is valid till this day and it becomes almost natural to turn towards domains such as machine learning. He also reminded us that there was no secret to his machine learning mastery. All he did was read, learn, practice and repeat. Life As A Kaggler His initial interest in machine learning competitions was sparked when one of his professors tasked him to participate in a contest that had a problem statement framed by the prestigious Institut Henri Poincaré and a big company; a Kaggle-like a contest, in which Arthur ended up winning two of them, outperforming professional data scientists. He wanted to continue this momentum at a higher level and what can be better than Kaggle! So far, Arthur has participated in more than 80 competitions of which he has won two gold, 12 silver and 14 bronze medals. Though he is top 100 at the global level, he still considers there is a long way to go to the top. It takes a lot of time, a lot of reading, imagination and obstinacy. For beginners, Arthur recommends exploring the data and finding what is not evident. “…and don’t hesitate to try classic methods. Trial and error is a great motto,” confided Arthur. 
Insisting on the importance of data exploration, Arthur doubled down on implementing the metrics right, performing a couple of validation schemes, setting up a baseline and sticking to it. What I learn in Kaggle, I apply it sometimes in my work, and this is important for me to do my job very well. When asked about how significant Kaggle was for his career, Arthur heaped praise on its community and the variety of contests that he gets to participate in. Not only that, but he firmly believes that the Kaggle experience adds a great deal to his learning curve, and that learning is still his primary goal. Tools & Tricks Of A Master Arthur Llau revealed that he had spent around 4-8 hours per day for over a month for the contests that fetched him gold. Arthur believes that being a top Kaggler is a full-time job. Talking about the resources required for a typical competition, Arthur says that a basic laptop would sometimes suffice. However, sometimes he rents GPUs on Google Cloud Platform with Kaggle vouchers, depending on the competition. With regard to languages, Arthur prefers Python and sometimes C++ for doing operational research tasks. And, when it comes to algorithms, Arthur expressed his delight for boosting methods such as xgboost, catboost and lightgbm. He switches between the Keras and PyTorch frameworks while using a handful of very useful libraries like albumentations for image augmentation, eli5 and lofo for feature selection, Missingno and seaborn for visualization, and imblearn when working with imbalanced data. For parameter optimization, Arthur prefers Optuna and skopt for the Bayesian module. Here is what Arthur’s toolkit looks like:
Hardware: MacBook Pro (2019, 16GB, i7) or i7, 32GB + 1070Ti, or GCP
Language: Python and C++
Framework: Keras and PyTorch
Augmentation library: albumentations
Feature selection library: eli5 and lofo
Visualization: Missingno and seaborn
Imbalanced data: imblearn
Parameter optimization: Optuna and skopt
The availability of many libraries and frameworks has made the job of a data scientist easy. Deep learning algorithms can now be called by writing a single line of code in Python. Even complex mathematical operations are wrapped up as libraries. The democratization of ML has drawn in a lot of people, and somewhere down the line, a few people have started falling prey to vanity metrics such as leaderboard rankings and are venturing into malpractices. Especially on Kaggle, Arthur laments that cheating can happen in many forms. In the kernel part, he explains, there are a lot of copycat kernels (EDA/ensembling) just craving points/medals. There are a lot of multiple-account users as well who leak information across accounts, and there have been instances of an entire class of students (~20 people) using more or less the same solution and winning a medal in a particular competition. When asked how to identify these malpractitioners, “Make an ML model,” quipped Arthur. That said, he holds the Kaggle community in high regard, and he has made a lot of friends over the years. While he will continue to experiment with Kaggle contests, he hopes that there will be original challenges like the 2018 trackML challenge. 
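A minimal sketch, assuming LightGBM for boosting and Optuna for parameter optimisation on a stand-in scikit-learn dataset, of the kind of tune-against-one-validation-scheme workflow described above; it is illustrative only, not Arthur’s actual competition code.

# LightGBM model tuned with Optuna against a fixed cross-validation scheme.
import lightgbm as lgb
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 15, 255),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = lgb.LGBMClassifier(**params, random_state=0)
    # Stick to one validation scheme (here 5-fold CV) and trust it, as advised.
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("best AUC:", study.best_value, "params:", study.best_params)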
When asked about the overwhelming hype around AI, Arthur quipped that it is not artificial intelligence, it is artificial stupidity, quoting famous researcher Yoshua Bengio. It is stupid to think that doing only MOOCs and using AutoML can tackle all kinds of problems. AutoML is excellent at solving basic tasks with good performance, continued Arthur, but it cannot be used to solve complex problems. The problem with AutoML is also the blackbox effect, which can lead to explainability issues in front of customers. Reiterating the importance of practice for beginners, Arthur advises one to look at Kaggle as a playground rather than a battlefield, and to experiment a lot. He was also positive about the fact that aspirants can land a data science job with Kaggle in their portfolio, if combined with consistent practice. However, he also warns of the dangers of inflating Kaggle success, as there is a vast difference in problem-solving at the industry level. The data we typically get in the field is not as clean as in Kaggle. You can’t have magic or leaks or funny tricks in data science problems at work; you need to find other good methods. A significant difference, observes Arthur, is the information extraction needed for the job; also, there is a lot more discussion with field experts to arrive at a good modelling of the problem, which is not required in Kaggle. Understanding any algorithm will eventually boil down to math mostly, and Arthur insists on having a good grasp of the fundamentals. A student for life, Arthur admits that he has been fortunate enough to have exceptional teachers throughout his student life who have helped him become what he is today. That said, a great book is equal to many excellent teachers – if not exceptional ones – and Arthur recommends the following books, which he considers classics:
« The Elements of Statistical Learning » by Hastie, Tibshirani and Friedman
« Pattern Recognition and Machine Learning » by Bishop
« Machine Learning: A Probabilistic Perspective » by Murphy
For this week’s ML practitioner’s series, Analytics India Magazine got in touch with Arthur Llau. Arthur is a Kaggle master, who is currently ranked in the top 100 on the global leaderboard that hosts more than 1,30,000 participants. He is a mathematician from heart, who happened to run into machine learning. We bring to our […]
["AI Features"]
["Automl", "Data Science", "deep learning application examples", "Interviews and Discussions", "Kaggle", "learn ai", "Machine Learning", "machine learning optimization", "machine learning pattern recognition python", "multiple classification statistics"]
Ram Sagar
2020-04-13T13:00:20
2020
1,459
["machine learning pattern recognition python", "computer vision", "deep learning", "data science", "artificial intelligence", "PyTorch", "analytics", "Data Science", "machine learning", "AI", "ML", "Machine Learning", "Automl", "Kaggle", "Keras", "multiple classification statistics", "machine learning optimization", "deep learning application examples", "learn ai", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "computer vision", "data science", "analytics", "PyTorch", "Keras"]
https://analyticsindiamag.com/ai-features/kaggle-interview-arthur-llau/
4
10
2
true
true
true
10,039,120
How This ML Model Assesses Stress & Strain On Materials From Their Images
Massachusetts Institute of Technology (MIT) researchers have developed a deep learning model to estimate the stresses and strains on materials from their images. “Our end-to-end deep learning model predicts physical fields like stress or strain directly from the material microstructure geometry, and reaches an astonishing accuracy not only for predicted field data but also for derivative material property predictions,” the researchers said. For this project, the researchers have worked with composite materials, including soft and hard components in various random geometrical arrangements. For decades, engineers have relied on physical laws to understand the stresses and strains on materials. At an industrial scale, running simulations using computer-aided engineering (CAE) software to gauge the strength of materials is time-consuming and costly. MIT researchers have built a computer vision and machine learning technique to calculate the properties of a material from its image in quick time. Zhenze Yang, a PhD student in the department of materials science and engineering at MIT, led the project. He said the new approach could enable faster design prototyping and material inspections. How does it work The paper, ‘Deep Learning Model to Predict Complex Stress and Strain Fields in Hierarchical Composites,’ by Yang, Chi-Hua Yu and Markus J Buehler, explained how to use Generative Adversarial Networks (GANs) and convolutional neural networks (CNNs) to solve complex material/engineering problems. The MIT researchers trained the network with thousands of paired images — one showcasing a material’s internal microstructure subjected to mechanical forces, the other depicting the same material’s colour-coded stress and strain values. Using game theory, the network iteratively figured out the relationship between the geometry of a material and its resulting stresses. The computer can predict deformations, stresses, strains, etc. “That’s the breakthrough. Otherwise, you would need to code the equations and ask the computer to solve partial differential equations,” said Buehler. For example, the image below showcases the deep learning approach in predicting physical fields, given different input geometries. The left-side graphic shows a varying geometry of the composite in which the soft material is expanding. In contrast, the right-side figure highlights the predicted mechanical field corresponding to the geometry in the left figure. Source: MIT Researchers at Facebook AI and Google have also developed machine learning techniques to solve advanced mathematical equations such as integration, first-order and second-order differential equations and partial differential equations for various applications. Citing aeroplanes as an example, Buehler said they involve multiple materials like glue, metal, polymer etc. It becomes highly complex to solve them using existing methods as they have various parameters, scales and factors determining the solution. “If you go the hard way — Newton way — you have to walk a huge detour to get to the answer,” said Buehler. MIT researchers claimed that their network is adept at dealing with multiple parameters. It processes information through a series of ‘convolutions,’ which analyse the image at large scales. That’s why these neural networks are a perfect fit for describing material properties, said Buehler. 
The researchers claimed that their fully trained model rendered successful stress and strain results using a series of close-up images of the microstructure of various soft composite materials. Also, the network was able to capture the micro details and singularities like cracks and other deformities. The graphic below shows the simulated failures in a material by a machine learning-based approach without solving the governing equations of mechanics. Red represents a soft material, white depicts a fragile material, and green represents a crack. Source: MIT Applications MIT researchers said the technique saves time and money and also gives non-experts access to material calculations. For instance, architects or product designers can test the feasibility of their ideas before passing the project along to an engineering team. “That’s a big deal,” said Buehler. Further, mechanics and inspectors across manufacturing, aerospace and other industries can diagnose potential problems using this technique by simply taking a picture of the material they are inspecting. Once the model is trained, the network can run instantaneously on consumer-grade computer processors.
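The paper itself uses a conditional GAN trained on paired microstructure and field images, but the core image-to-field mapping can be sketched with a plain fully convolutional network in PyTorch. The architecture, 64x64 image size and random stand-in data below are assumptions for illustration, not the researchers’ model.

# A fully convolutional network maps a binary microstructure image to a stress
# field. The GAN objective from the paper is replaced here by a plain MSE loss.
import torch
import torch.nn as nn

class FieldPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stand-in "paired images": microstructure geometry in, stress field out.
geometry = torch.randint(0, 2, (8, 1, 64, 64)).float()
stress_field = torch.randn(8, 1, 64, 64)

model = FieldPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(geometry), stress_field)
    loss.backward()
    optimizer.step()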
Massachusetts Institute of Technology (MIT) researchers have developed a deep learning model to estimate the stresses and strains on materials from their images. “Our end-to-end deep learning model predicts physical fields like stress or strain directly from the material microstructure geometry, and reaches an astonishing accuracy not only for predicted field data but also for […]
["AI Features"]
["Machine Learning", "machine learning models", "MIT research"]
Amit Naik
2021-04-29T10:00:00
2021
667
["Go", "machine learning", "AWS", "AI", "neural network", "Machine Learning", "machine learning models", "computer vision", "RAG", "Aim", "deep learning", "MIT research", "R"]
["AI", "machine learning", "deep learning", "neural network", "computer vision", "Aim", "RAG", "AWS", "R", "Go"]
https://analyticsindiamag.com/ai-features/how-this-ml-model-assesses-stress-strain-on-materials-from-their-images/
3
10
0
false
true
true
10,167,059
Taiwan’s Chipmaker UMC Denies Reports of Merger with GlobalFoundries
GlobalFoundries, a US-based contract semiconductor manufacturer, and United Microelectronics Corporation (UMC), Taiwan’s second-largest chipmaker, were reportedly in talks regarding the possibility of a merger. According to the report by NIKKEI Asia on Monday, this potential deal was happening as America sought to strengthen its semiconductor supply chain amid rising tensions in the Taiwan Strait and increasing competition from China. However, UMC has denied the speculations. According to United Daily News, UMC on Tuesday clarified that the company would not respond to any market reports, and only emphasised that “there is no merger going on at present”. The merger would have created a larger US-headquartered company with manufacturing operations across Asia, the US, and Europe, as per previous reports. The aim was to ensure America’s access to mature chips, which account for over 70% of global semiconductor demand. The latest speculation comes after the previous round of talks between the two companies two years ago did not materialise. The US government has been encouraging Taiwanese companies to boost chip production in the country, but UMC has previously ruled out building or buying facilities there due to high costs. A merged entity could prioritise research and development investments in the US, potentially becoming an alternative to Taiwan Semiconductor Manufacturing Co (TSMC), the world’s leading chipmaker. UMC serves many top chip developers, including Qualcomm, NVIDIA, MediaTek, NXP and Infineon. GlobalFoundries has not responded to requests for comment. As of now, Taiwan holds about 44% of the global mature chip market, while China controls 31%, and the US holds about 5%. Industry leaders noted that any deal would likely face regulatory scrutiny from Taiwan and China. TSMC’s additional $100 billion investment in the US has already caused public concern about a potential weakening of Taiwan’s flagship chip industry. China also previously blocked Intel’s attempt to acquire Tower Semiconductor, indicating potential hurdles for cross-border semiconductor deals.
“There is no merger going on at present.”
["AI News"]
["Mergers and Acquisitions", "Taiwan"]
Sanjana Gupta
2025-04-01T18:47:03
2025
311
["Go", "Taiwan", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Aim", "Mergers and Acquisitions", "R"]
["AI", "Aim", "RAG", "R", "Go", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/taiwans-chipmaker-umc-denies-reports-of-merger-with-globalfoundries/
2
7
3
false
false
false
17,564
Opera Embraces Artificial Intelligence, Rolls Out AI Powered News Feed For iPhone Users In India
Opera, the global tech firm, recently announced the launch of a new version of its most popular mobile browser app, Opera Mini, for iPhone users, stuffed with artificial intelligence. The AI news engine aims to bring the latest and most insightful news to the user without any effort in setting it up. It would be available for iPhone users in India in regional languages such as Gujarati, Hindi and Tamil, apart from English. The company claims that the revamped user interface loads four times faster than the previous version. “Opera’s AI news engine uses real-time intelligence ranking, powered by machine learning (G.B.D.T. algorithm) and deep learning (D.N.N. deep neural network)”, the company mentioned in its blog post. The company further explained that once the user starts engaging with news content, it begins defining a unique user profile by accumulating the news categories and publisher domains the user clicks on. The news engine then analyses the user’s interest through a deep learning model. “The more the user engages with the newsfeed, the more in-tune the content becomes for the user within the “For You” section”, it said. It also includes feedback from local editor teams, and the editorial team can monitor the AI-generated news feed to identify and remove fake news as quickly as possible. “The goal is to provide each user the ability to get their optimal content based on their interest, which is constantly evolving,” said Cuautemoc Weber, Head of Global Accounts & Content, Opera Software. Apart from India, the Opera AI news engine has been rolled out in countries like Ghana, Kenya, Indonesia, Nigeria, South Africa, Tanzania and the United States.
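Opera has not published its feature set or model internals, but the general pattern of GBDT-based click ranking it describes can be sketched as below; the features, toy training data and use of scikit-learn’s GradientBoostingClassifier are purely illustrative assumptions.

# Score candidate articles by predicted click probability and sort, using a
# gradient-boosted decision tree classifier on made-up user/article features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy features per (user, article) pair: past clicks in the article's category,
# publisher affinity, article age in hours.
X_train = np.array([
    [12, 0.8, 2], [0, 0.1, 30], [5, 0.5, 6], [1, 0.2, 48],
    [9, 0.9, 1], [0, 0.0, 72], [7, 0.6, 4], [2, 0.3, 24],
])
clicked = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # did the user click?

ranker = GradientBoostingClassifier().fit(X_train, clicked)

candidates = np.array([[3, 0.4, 12], [10, 0.7, 3], [0, 0.1, 60]])
scores = ranker.predict_proba(candidates)[:, 1]
for idx in np.argsort(-scores):               # highest score first
    print("article", idx, "score %.2f" % scores[idx])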
Opera, global tech firm recently announced the launch of a new version of its most popular mobile browser app, Opera Mini, for iPhone users, stuffed with artificial intelligence. The AI news engine aims to bring out the latest and most insightful news to the user without any efforts on setting it up. It would be […]
["AI News"]
[]
Srishti Deoras
2017-09-06T12:42:34
2017
270
["Go", "machine learning", "artificial intelligence", "programming_languages:R", "AI", "neural network", "programming_languages:Go", "Aim", "deep learning", "R"]
["AI", "artificial intelligence", "machine learning", "deep learning", "neural network", "Aim", "R", "Go", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/opera-embraces-artificial-intelligence-rolls-ai-powered-news-feed-iphone-users-india/
3
10
0
false
false
false
69,730
Interview – Harikrishna R, Co-founder and Director at Klar Systems
Harikrishna R. is a co-founder and director of Klar Systems, where he designs and builds cool new gadgets, applications and platforms for the IoT era. Prior to Klar Systems, Harikrishna was with Texas Instruments for over 15 years where he helped design digital signal processors for multimedia applications. Harikrishna holds two bachelor’s degrees, one in Electrical and Electronics Engineering, and the second in Computer Science and Engineering, both from the Birla Institute of Technology and Science, Pilani. [dropcap letter=”AIM”]Analytics India Magazine: Tell us something about your journey in the IoT industry. [dropcap letter=”HR”]Harikrishna R.: Up until two years ago, I worked for Texas Instruments, helping build digital signal processors, and I have long noticed the proliferation of connected devices, well before the term “IoT” came into widespread use. The turning point for me, though, was the launch of the Nest Thermostat in 2011.   Here was a device that gave concrete shape to all the possibilities of IoT — control and automation, sure, but also machine learning and automated control — all wrapped up in a very pleasing package, with an intuitive interface. I was convinced that the time for mass adoption of IoT technology had come. So in 2014, together with two other friends, we started Klar Systems to build IoT platforms and devices. AIM: Would you like to share your views on the Indian IoT Industry? HR: Broadly, I notice three strands. The most exciting for me is the large number of companies building innovative gadgets, mostly in the consumer space. These companies are changing how products are designed and built — they use crowd-funding to raise money, rapid prototyping technologies such as 3D printing to create proofs-of-concept, and work in small cross-functional teams to quickly bring innovative new products to market. There are also many companies that want to use IoT technology to solve large infrastructural problems in areas such as agriculture, manufacturing, healthcare and urban infrastructure management. And finally, there are the large IT services companies who are gearing up for IoT-related business engagements. Most of them have one or more initiatives explicitly targeting this emerging vertical. AIM: Would you like to share about how Klar Systems is contributing to the IoT industry? HR: We are building a suite of innovative new gadgets for home automation. We have launched the first of these, zmote, just a few months ago. We are also simultaneously working to create platforms and frameworks for use in IoT. There are many aspects of this technology that are common across all IoT devices — things like device management, service discovery, security — and it makes sense to solve these at the platform level. Finally, we are partnering with Jigsaw Academy to create a comprehensive introductory IoT course. I particularly enjoyed creating this since it gave me a chance to revisit the foundations of IoT technology and see how much has changed over just the last few years — how much simpler it has gotten to build fairly complex, yet robust, embedded systems and connect it to custom infrastructure in the cloud. AIM: Would you like to highlight the benefits that IoT would bring to people and  organizations in India? HR: IoT is just the next logical evolutionary step in information technology. As such, I believe it will become the engine that powers the next phase of growth for the entire IT sector. 
As a nation, I think IoT presents us with yet another opportunity to leapfrog the developed world in technology adoption just as happened with cellular telephony and mobile internet. IoT can help us use available resources more efficiently, improve our management of public infrastructure, and overall achieve a higher quality of life at a lower cost. And I think we are making very good progress — take for example the traction behind the Smart Cities project. AIM: How important is data security with IoT growing rapidly? HR: Given the number of connected devices we expect to have, and the breadth of how they will be deployed and used, it is hard to overemphasize the importance of security. Since many of these devices will have actuators that can alter things in the physical world, a security breach means more than just data loss. So it is very important that each of these devices are designed to be secure, their firmware is kept up-to-date, and the fallout of any possible breach is contained by redundant systems. The good news is that the IoT industry is already keenly aware of the importance of security. Unlike in the early days of the internet, when security was often just an afterthought, in today’s world, security is given deep consideration even at the product design stage. AIM: What are the most significant challenges you see in the IoT space? HR: A lot of commentators see the lack of standards as the most significant challenge we face. Certainly this is an important aspect — easy interoperability between various devices and systems from different vendors can drive adoption, which in turn drives vendors to design for better interoperability, leading to a virtuous cycle.   However, I don’t see this as a big blocking factor: standards can evolve naturally on top of open platforms, such as the web. Ecosystems may also develop around specific semi-open platforms. Take, for example, what happened with Apps built around Android and iOS platforms. The second challenge I would highlight is regulation. This is very important in many areas like self-driving vehicles, delivery drones, use of the RF spectrum, payment systems built on blockchain technology, and so forth. We need to strike the right balance between too much and too little regulation to spark growth and adoption of IoT technologies. Privacy is also a major issue. IoT can unlock a lot of valuable data about consumer behaviour. This can be used for a variety of purposes, many of which will ultimately benefit consumers. However, people are uncomfortable with their data being used in ways that they can’t control, rightly so in my opinion, and we need to evolve mechanisms that give people control over their own data. Here’s the talk by Harikrishna at Cypher 2016
Internet of Things is seeing an exponential growth and definitely has a long way to go. One of the biggest technological waves, IoT technology, is set to bring a radical change in the way we live our lives and the way business functions. So we thought of hearing it from the people themselves who are contributing to this change. We recently interviewed one of the leading contributor’s to the IoT field Harikrishna R.
["AI Features"]
["Interviews and Discussions"]
AIM Media House
2016-10-03T07:23:30
2016
1,019
["Go", "API", "machine learning", "AI", "Git", "automation", "Aim", "analytics", "GAN", "R", "Interviews and Discussions"]
["AI", "machine learning", "analytics", "Aim", "R", "Go", "Git", "API", "GAN", "automation"]
https://analyticsindiamag.com/ai-features/interview-harikrishna-r-co-founder-director-klar-systems/
2
10
3
true
true
false
65,325
How AntWorks Is Helping Clients Use Its AI & RPA Platform – ANTstein
Artificial Intelligence has created tremendous value for businesses via automation and deep insights from data. Businesses today are combining the best of RPA, AI and ML to create a real-time intelligence workforce. In this context, we have seen the rise of intelligent platforms used by several companies, and one of them is ANTstein from AntWorks, an AI and intelligent automation company co-founded by Asheesh Mehra in 2015. Mehra had formerly been the Head of information of Infosys’ BPO operations in the Asia Pacific, Japan and Middle East regions. ANTstein is the industry’s first Integrated Automation Platform (IAP), powered by Fractal Science. The multitenancy solution allows for maximum bot utilization, understands all types of data, and provides businesses with a one-stop solution for data curation and building, deploying and managing an AI-enabled smart digital workforce. The company received $15 million in investments from SBI Holdings in 2018, after which, it became a subsidiary of SBI Holdings, headquartered in Singapore. To know more, Analytics India Magazine got in touch with Asheesh Mehra to understand how his company is leveraging various automation tools for clients all over the globe. Here are edited excerpts from the interaction: AIM: What are the various AI technologies that your company leverages? Asheesh Mehra: AntWorks ANTstein SQUARE is the industry’s first Integrated Automation Platform (IAP) that gives enterprises the capability to process all types of data. This all-in-one solution can be used for data curation, in the process, creating, deploying and managing an AI-powered digital workforce. Businesses, across many sectors that are reliant on repeatable processes can benefit greatly from implementing intelligent automation and AI. IAP leverages the potential of fractal science technology, which helps companies solve the unstructured data challenge. Organisations will benefit from this because smaller data sets mean higher accuracy, less infrastructure and quicker training time. With unstructured data slated to make up 80% of the world’s data by 2025, we will witness a paradigm shift towards automation tools, like integrated automation platforms (IAP), backed by fractal science, for business processes in organisations for dealing with vast unstructured data. AIM: Can you elaborate on Fractal Science- technology that empowers AntWorks products? Asheesh Mehra: We have developed our own data ingestion engine using a fundamentally different science called fractal science. Our mission, from the very beginning, was that enterprises should be able to deal with all types of data in the back office. For example, the OCR technology cannot read a KYC document because it contains a signature, handwriting and a photograph, and hence, RPA solutions backed by OCR are not allowing business to achieve the scalability options they desired. AntWorks’ Integrated Automation Platform ANTstein can understand structured and unstructured data for processes. The ANTstein platform is not only unique from a capability perspective, but also from the science that has been used to develop it. Fractal science is a science of patterns and self-similarity, whereas neural science is about absolute character recognition. For example, if one had to train the neural science engine to recognise an apple, it would need to be trained on recognising a small apple, a medium apple, a large apple, and an extra-large apple for the neural engine to recognise an apple. 
On the other hand, the fractal science that AntWorks has used allows one to train the engine on one apple only. Fractal science uses the principle of pattern recognition, and the pattern of an extra-small apple and the pattern of an extra-large apple are the same. AIM: Please elaborate on the current solutions and tech stack that AntWorks is offering? Asheesh Mehra: AntWorks’ portfolio of plug-and-play solutions and hosted services enables organisations to maximise their automation opportunities across the enterprise. Our current solutions encompass Process Discovery, which helps identify high-value automation opportunities, analyse user productivity gaps and optimise every process to maximise business efficiency. Then we have Image Enhancer, which detects, auto-corrects and enhances the quality of any image document to increase throughput for digital transformation. After that, we have Queenbot – the RPA component of ANTstein – which helps enterprises build, operate and manage their digital workforce effectively and with ease. AntWorks’ full-stack Integrated Automation Platform (IAP) is helping enterprises address the most complex automation opportunities and scale across the enterprise. AIM: What are the innovative features of ANTstein SQUARE that can help enterprises in their processes? Asheesh Mehra: The ANTstein SQUARE platform uses machine learning to help enterprises take a seamless automation journey. The platform addresses all data types, including unstructured data, which today, only ANTstein can do. ANTstein SQUARE brings in features which help organisations realise the true power of a seamless automation journey. With the ANTstein SQUARE solution, the focus has now shifted from digitisation to automated data curation. AIM: What are the company’s expansion plans and road map for the coming years? Asheesh Mehra: We have increased our global presence across Australia, Canada, Dubai, France, India (Bangalore, Chennai, Mumbai), Japan, the Philippines, Singapore, the UK and the US. Our global headcount is now more than 500 people – nearly triple that of 2018. Our research and development is being done out of India. We also have plans to open development centres in locations like Israel, Amsterdam and the Latin America region. We will continue to support organisations by leveraging AI for enhancing customer satisfaction and for tackling the large unstructured data challenge existing across enterprises.
Artificial Intelligence has created tremendous value for businesses via automation and deep insights from data. Businesses today are combining the best of RPA, AI and ML to create a real-time intelligence workforce.  In this context, we have seen the rise of intelligent platforms used by several companies, and one of them is ANTstein from AntWorks, […]
["AI Features"]
["how ai works", "Interviews and Discussions"]
Vishal Chawla
2020-05-18T10:00:00
2020
892
["how ai works", "Go", "machine learning", "artificial intelligence", "AI", "ML", "Scala", "RAG", "Aim", "analytics", "R", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "Aim", "RAG", "R", "Go", "Scala"]
https://analyticsindiamag.com/ai-features/how-antworks-is-helping-clients-use-its-ai-rpa-platform-antstein/
3
10
3
false
true
false
2,397
Experts Speak: Recruitment in Analytics
Ganesh S, CEO at Dun & Bradstreet Technology and Data Services We recruit people from various streams like economics, mathematics, statistics, operations research, computer science etc. While recruiting, I look for deduction ability, logical thinking, and communication skills apart from subject knowledge. Our selection procedure involves multiple stages to assess core skills, practical experience and potential to grow. Manas Agarwal, Co-Founder and CEO at Affine Analytics We recruit both freshers as well as experienced analytics professionals. While we do assess them for their problem-solving and reasoning abilities, we also test them on their attitude towards learning new things. As a matter of fact, post joining, we take our recruits through a process of unlearning and disillusionment where they start challenging all they have learnt in maths-only organisations. I suggest analytics professionals be curious and always try to test things before accepting them as truth. Anees Merchant, Associate Principal Analytics at eClerx It may sound cliché, but when I am hiring from B-school campuses, the only things I look for are the right attitude and aptitude. For lateral hires, I am looking for a person who can manage volatility, and whether the candidate is nimble with an entrepreneurial bent of mind. Regular skillset aspects such as technical, domain and platform knowledge are evaluated but are not the decision-making pointers. Divya Krishnan, Head of Analytics at Position2 While we look for people with domain experience for senior positions, we find most success with hiring smart freshers or folks with 1-2 years of experience and training them on the job. We look for people who have an eye for detail, can work with numbers and can translate business requirements into analytics problems.
Ganesh S, CEO at Dun & Bradstreet Technology and Data Services We recruit people from various streams like economics, mathematics, statistics, operations research, computer science etc. While recruiting, I look for deduction ability, logical thinking, and communication skills apart from subject knowledge. Our selection procedure involves multiple stages to assess core skills, practical experience and […]
["IT Services"]
[]
Дарья
2013-01-03T14:20:49
2013
289
["programming_languages:R", "AI", "analytics", "GAN", "R"]
["AI", "analytics", "R", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/it-services/experts-speak-recruitment-in-analytics/
2
5
1
false
false
false
10,069,303
Amazon Prime video: The little search engine that couldn’t
Amazon Prime has over 200 million subscribers globally. In India, it has nearly 22.3 million subscribers. Many streaming services had teething troubles like bad interfaces or a poor search engine. Over time, these OTT platforms iron out the kinks. Surprisingly, this does not appear to be the case for Amazon, one of the biggest companies in the world with unlimited resources at its disposal. An application engineer at Oracle, AB Satyaprakash, pointed out the shortcomings of Prime Video’s search function. Interestingly, Amazon accounts for 54 percent of all product searches on the internet and has one of the best recommendation systems and search engines in the business. However, Amazon Prime Video – available in nearly 200 countries – has a bad search engine. To make matters worse, Prime Video’s clunky UI is a real pain in the neck. Priorities The Prime model is unique and, in many ways, hard to emulate. Amazon has truly built a moat with its loyalty programme. The OTT platform is a part of a bigger offering; Prime subscribers also get access to a music streaming service and one-day delivery at a throwaway price. For Amazon, the whole idea of Prime could be customer retention. In 2016, more than 90 percent of Amazon’s Prime subscribers in the US renewed for a second year. For an e-commerce company, a retention rate of more than 30 percent is top-drawer. In other words, Amazon can afford to overlook the complaints about the terrible search engine on Prime Video. Meanwhile, for a company like Netflix, offering subscribers a good search experience is key to its business: a bad search experience could lead to the subscriber not renewing their account, and drive up the churn rate. Algorithm Jeff Bezos once said that if we have 4.5 million customers, we shouldn’t have one but 4.5 million stores. Translation: delivering highly personalised and curated products and services based on the users’ tastes and preferences is the utmost priority for Amazon. However, it would be wrong to say that Amazon is not bothered. On the contrary, a lot of work has been done over the years, and Amazon has spent a lot of money on improving its recommendation engine. In fact, Amazon has been using algorithms for recommendations since 1998. Around 35 percent of Amazon’s sales are driven by its recommendation engine. However, the algorithm behind Amazon Prime Video is somehow not up to scratch. And that’s not a recent trend. Users have been bringing up the issue for a while now on social media and Amazon forums. “I simply do not understand Amazon. It should be the biggest failure. One example. You go to Amazon Prime TV, you search for an actor (like Will Smith), this is the result it gives you. I’m sorry, how can anyone create a search result this bad?” pic.twitter.com/iAm6sKZFT3 — Thomas Baekdal (@baekdal) October 12, 2019 Algorithms play an important role in keeping viewers hooked. The success of companies such as Netflix and Spotify is driven by their superior recommendation engines. Yearly, Netflix saves billions of dollars by recommending the right content. In 2006, Netflix offered prize money of $1 million to anyone who could improve its recommendation algorithm. Every algorithm is different, and not all recommendation algorithms are the same. For example, Netflix uses a different methodology to recommend movies and TV shows to its users compared to Amazon. Netflix uses fuzzy matching, which means incorrectly spelt queries will not impact the results much. Fuzzy matching makes the user’s search experience more intuitive. 
That does not seem to be the case for Amazon Prime Video. Content overload? The content on Netflix is properly categorised and tagged according to genre, cast, directors and so on. Meanwhile, Amazon Prime leaves a bad impression. Though Prime has more content than Netflix, Amazon has not been able to leverage it because of its poor UX, UI and search features. Amazon Prime has one of the largest streaming libraries. It may come as a surprise to many, but most of the content on Amazon Prime is user uploaded, so anybody who owns the distribution rights to a film can upload it on Prime for free. According to a 2020 report by the Wall Street Journal, almost two-thirds of the titles on Amazon Prime Video are uploaded by users. The volume of content on Prime is substantially higher than on Netflix, Hulu or HBO Max. As of December 2019, Amazon Prime had 65,504 distinct titles on its platform, compared to Netflix's 7,177. This content overload could be another reason for the bad search experience.
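The fuzzy matching behaviour described above can be illustrated with a short, self-contained sketch. This is a minimal example using Python's standard difflib module on a toy catalogue; the sample titles and the 0.6 similarity threshold are illustrative assumptions, not a description of how Netflix or Prime Video actually implement search.

```python
# Minimal sketch of fuzzy title matching over a toy catalogue.
# difflib is part of the Python standard library; the threshold and
# titles below are illustrative assumptions only.
from difflib import SequenceMatcher

CATALOGUE = [
    "The Pursuit of Happyness",
    "I Am Legend",
    "Men in Black",
    "Bad Boys for Life",
    "Gemini Man",
]

def fuzzy_search(query: str, titles: list[str], threshold: float = 0.6) -> list[str]:
    """Return titles ranked by similarity to a (possibly misspelt) query."""
    scored = [
        (SequenceMatcher(None, query.lower(), title.lower()).ratio(), title)
        for title in titles
    ]
    # Keep only reasonably close matches and return them best-first.
    return [title for score, title in sorted(scored, reverse=True) if score >= threshold]

if __name__ == "__main__":
    # A misspelt query still surfaces the intended title.
    print(fuzzy_search("gemeni man", CATALOGUE))   # e.g. ['Gemini Man']
```

A production search engine would layer this kind of approximate matching over metadata such as cast and genre, which is exactly where, per the article, Prime Video's catalogue tagging falls short.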
In 2019, Amazon Prime had 65,504 distinct titles on its platform, compared to Netflix’s 7,177.
["Global Tech"]
[]
Pritam Bordoloi
2022-06-17T13:00:00
2022
754
["Go", "ELT", "programming_languages:R", "AI", "recommendation systems", "programming_languages:Go", "RAG", "ai_applications:recommendation systems", "R"]
["AI", "RAG", "recommendation systems", "R", "Go", "ELT", "programming_languages:R", "programming_languages:Go", "ai_applications:recommendation systems"]
https://analyticsindiamag.com/global-tech/amazon-prime-video-the-little-search-engine-that-couldnt/
4
9
1
true
true
true
10,105,028
Why OpenAI is Eyeing India
OpenAI's collaborative ties with India have gained momentum, with the company planning to open an Indian office soon. From appointing Indian-origin leaders in advisory roles to bringing OpenAI executives to India, the company's expansion plans in the country look set to be fruitful for both parties. Rishi Jaitly, who has held executive positions including vice president at Twitter, will assume the role of senior advisor at OpenAI to guide the company through India's AI policy and regulatory environment. Furthermore, OpenAI executives Anna Adeola Makanju, global head of public policy, and James Hairston, along with Jaitly, recently met the MoS for Electronics and Information Technology, Rajeev Chandrasekhar. From left to right: Rishi Jaitly, Rajeev Chandrasekhar, Anna Adeola Makanju, and James Hairston. Source: X In the backdrop of the Global Partnership on Artificial Intelligence (GPAI) Summit that happened a few days ago, where India's stance on aggressive AI expansion and development was evident, the OpenAI executives' meeting with the minister fortified the company's plan to be a prominent AI competitor in the Indian landscape. India: The Precious Market As per recent statistics, ChatGPT has over 180 million users, with the US contributing 10.81% of total users, followed by India with 9.08%. As the second-largest market for ChatGPT, India's contribution to OpenAI's business is already well established. Big tech companies, such as Google, are already choosing India as a destination for building their biggest offices outside the US. With favourable business conditions and progressive policies, India has been an attractive market. By setting up an Indian office, OpenAI will be able to capitalise on the same. Third in Line London was the first office OpenAI opened outside the US, in June, followed by another in Dublin three months later. Going by the recent developments, OpenAI will open its third office in India in a couple of months. Interestingly, OpenAI CEO Sam Altman went on an extensive world tour this year, visiting a number of countries in an attempt to build strategic relationships with world leaders and be the face of ChatGPT. At that time, he had also visited India and met Prime Minister Narendra Modi, possibly laying the groundwork for future expansion plans. OpenAI is also set to host its first developer conference in India, in Bengaluru in January. This will be OpenAI's second developer conference overall, after hosting its maiden one in November. OpenAI VP of engineering, Srinivas Narayan, is expected to meet leaders and developers at the conference. India's LLM Moment in the Making Having an office in India will also benefit India's standing in the global AI race. Furthermore, if an office comes up here, it would mark the first OpenAI office in Asia, ahead of Japan, which was touted to get the next office after Altman met Japan's PM and discussed promoting AI models that reflect Japanese culture. At the recently concluded GPAI Summit, the country committed to building its own AI models, and initiatives are already on track. Recently, KissanAI launched Dhenu 1.0, an agricultural large language model which is bilingual and comprehends English, Hindi and Hinglish queries. Founded by Pratik Desai, KissanAI caters to the agriculture market, which is expected to hit $451.59 billion by 2028. Dhenu 1.0 processes 3,00,000 instruction sets in both English and Hindi. Language-specific models have been on the rise, with a number of companies following suit.
Bangalore-based Sarvam AI, which raised $41 million in Series A funding, released OpenHathi-Hi-v0.1, a Hindi LLM built on Llama2-7B. CoRover.ai has recently partnered with Google Cloud to launch BharatGPT, a generative AI conversational bot catering to the Indian market. The model will support over 14 languages across text, voice and video interactions. India's foray into Indic-language models, and into building datasets specific to India, has been ongoing. AI4Bharat and Bhashini are other projects that have been building Indic-language datasets. An unlikely automotive player, Ola, also unveiled an India-centric AI model called Krutrim, which powers an AI chatbot similar to ChatGPT. It is said to understand 22 Indian languages and generate text in 10 languages. Not just in India, but on the global front as well, there are region-specific models competing with OpenAI, such as the UAE's Jais, an Arabic LLM, and AI71, an AI company building on Falcon, a proprietary model of Abu Dhabi's Technology Innovation Institute. China is also catching up: with Baidu's recent Ernie 4.0 LLM, which is bilingual in English and Chinese, OpenAI's competitive landscape is only diversifying. It is evident that OpenAI's push to enter the Indian market comes at a time when the country is likely to witness a surge of India-specific models, which could fare better in an Indian subcontext.
OpenAI decides to enter India at a time when India-focused AI models are on the rise
["Global Tech"]
["AI4Bharat", "Bengaluru", "Bhashini", "ChatGPT", "Falcon", "Jais", "japan", "kissanai", "Krutrim", "Ola", "OpenAI", "Rajeev Chandrashekar", "Sam Altman", "sarvam ai", "TII", "uae"]
Vandana Nair
2023-12-16T10:00:00
2023
787
["kissanai", "API", "sarvam ai", "Jais", "Ray", "TII", "R", "Rajeev Chandrashekar", "japan", "ChatGPT", "Sam Altman", "artificial intelligence", "Go", "Ola", "Krutrim", "AI", "generative AI", "AI4Bharat", "Bengaluru", "uae", "OpenAI", "GPT", "Bhashini", "Falcon"]
["AI", "artificial intelligence", "generative AI", "ChatGPT", "OpenAI", "Ray", "R", "Go", "API", "GPT"]
https://analyticsindiamag.com/global-tech/why-openai-is-eyeing-india/
3
10
2
false
true
false
10,130,455
Bangalore Startups Don’t Need HR
The startup founders of Bangalore are a completely different breed altogether. A few weeks ago, we spoke about the rise of Chief Everything Officers, the founders who assumed the role of every employee in a company. They made coffee, coded the next billion-dollar app, secured funding, found an apartment, and whatnot. Not to forget human resources management (HRM), a key organisational function that founders are more than happy to take up. In an interview with Tucker Carlson, Pavel Durov, the founder and CEO of Telegram, revealed that he was the only product manager the company had! Notably, the company has a billion users and a 30-member tech team. "I still come up with most of the features and still work with every engineer and designer… Because I enjoy it," said Durov. When asked about the size of his HR department, Durov said, "Zero", because Telegram has decentralised it. "We have a separate platform for that and we select the best of the best engineers from competitions," explained Durov. "We don't need the HR department to find super talented engineers," he added. Although Telegram is a Russian company, the same trend can be seen catching up in several startups in Bangalore. "No HR, one product manager and 30 devs banging their heads with MacBooks but we call it 'Third Wave Coffee'," said Shravan Tickoo, the founder of Rethink Systems. A blog post by Ria Shroff Desai from Blume Ventures also explained why a lot of early-stage startups do not need HR, as that is the time for "moving fast and breaking things". Dara Khosrowshahi, the CEO of Uber, believes that at the beginning of a startup's journey, you need people to be daring like pirates, and to become a navy later on. 10x Founders However, running a company this way is not easy. Shweta Jain, product owner at Fictiv, said that numerous factors affect work efficiency within such companies and that this cannot work for startups that rely on remote work. "Consider the complexity with the same setup that could arise if the employees are from different time zones, cultures, and speak different languages," she said, adding that it could lead to decreased productivity. This also raises questions about the need for HR or product managers within a company of a small size. "The product managers, even though I may get bricks for this, hardly add much value and very often I see that they are just acting as glorious postmen and women," said a user on the LinkedIn post. However, this type of setup can be risky for companies that don't have a founder with a clear vision for the product. Such self-reliance is giving rise to a lot of 'solopreneurs' who are ready to do most of the work themselves for a business run by a single person. When it comes to Bengaluru, the hustle is real, with everyone trying to build a startup. "Startup addiction is in the air of Bangalore (sic)," said a LinkedIn user. And what could be better than your co-founders being your flatmates, or maybe just you tackling multiple roles? This has given rise to founders and CEOs who are willing to multitask to fulfil the demands of their startups. HR to be Replaced? With the advent of generative AI tools for marketing and analytics, many companies have integrated them within their teams for improved productivity. Products like Leena AI and Zoho Recruit, which help companies assist with, or in some cases completely automate, the hiring process, are proving to be a game changer for founders who want to keep a lean team.
Last year, big tech companies laid off many people in HR roles, citing plans to upskill the department as AI was able to handle a majority of their jobs. Moreover, 9-to-5 jobs are also coming closer to an end with the rise of the gig economy, as predicted by Reid Hoffman. HR seems to be getting less importance as there may be few regular employees in the future, which could also be problematic for companies trying to retain any type of talent. A lot of companies have already started outsourcing the hiring process to recruitment firms. This would also give rise to one-person billion-dollar companies. Another prediction is that by 2034, one in three professionals will operate multiple micro-businesses. The passion economy will create unexpected millionaires. This could also possibly lead to the first billion-dollar business built by one person with the help of AI. Given the culture of Bangalore, it is very likely that the first one-person billion-dollar company would come out of the city, and it would not need an HR department, as there would be no one to hire, or fire. Alas! The HR jokes would also come to an end.
No HR, one product manager, and 30 devs banging their heads with MacBooks but we call it “Third Wave Coffee”.
["AI Startups"]
["bangalore"]
Mohit Pandey
2024-07-29T12:14:28
2024
802
["Go", "funding", "programming_languages:R", "AI", "ViT", "analytics", "generative AI", "GAN", "R", "bangalore", "startup"]
["AI", "analytics", "generative AI", "R", "Go", "GAN", "ViT", "startup", "funding", "programming_languages:R"]
https://analyticsindiamag.com/ai-startups/bangalore-startups-dont-need-hr/
2
10
3
true
false
false
10,119,336
AWS Announces General Availability of Amazon Q
Amazon Web Services, Inc. (AWS) today announced the general availability of Amazon Q, which it calls the most capable generative artificial intelligence (AI)-powered assistant for accelerating software development and leveraging companies' internal data. The chatbot is available in three forms: Amazon Q for developers, Amazon Q for businesses, and Amazon Q Apps. Amazon Q not only generates highly accurate code; it also tests and debugs, and has multi-step planning and reasoning capabilities that can transform existing code (e.g., perform Java version upgrades) and implement new code generated from developer requests. The chatbot also makes it easier for employees to get answers to questions across business data such as company policies, product information, business results, code base, employees, and many other topics. It does so by connecting to enterprise data repositories to summarize the data logically, analyze trends, and engage in dialog about the data. Today, AWS is also introducing Amazon Q Apps, a new capability that lets employees build generative AI apps from their company's data. Employees simply describe the type of app they want, in natural language, and Q Apps will quickly generate an app that accomplishes the desired task, helping them streamline and automate their daily work with ease and efficiency. To learn more about Amazon Q, visit aws.amazon.com/q. "Since we announced the service at re:Invent, we have been amazed at the productivity gains developers and business users have seen. Early indications signal Amazon Q could help our customers' employees become more than 80% more productive at their jobs, and with the new features we're planning on introducing in the future, we think this will only continue to grow," said Dr. Swami Sivasubramanian, vice president of Artificial Intelligence and Data at AWS.
The chatbot is available in three forms: Amazon Q for developers, Amazon Q for businesses, and Amazon Q apps.
["AI News"]
["AWS"]
Pritam Bordoloi
2024-04-30T18:32:48
2024
277
["artificial intelligence", "AWS", "AI", "cloud_platforms:AWS", "ML", "RAG", "ViT", "generative AI", "R", "Java"]
["AI", "artificial intelligence", "ML", "generative AI", "RAG", "AWS", "R", "Java", "ViT", "cloud_platforms:AWS"]
https://analyticsindiamag.com/ai-news-updates/aws-announces-general-availability-of-amazon-q/
2
10
2
false
false
false
59,417
Why No-Code & Low-Code May Soon Become The New Tech Trend In Business Software
No code development platforms (NCDPs) are enabling programmers and non-programmers to build application software by using graphical user interfaces and configuration rather than conventional computer coding. Such platforms have suddenly risen in popularity as enterprises have to manage the fast growth of the mobile workforce with a limited number of skilled software developers. In fact, Gartner predicts that low code application building, which includes no code as well, will constitute more than 65% of all app development activity by the year 2024, with about 66% of big companies using a minimum of four low code tools and platforms. No/Low Code Software Platforms: What's The Objective? NCDPs are applied to satisfy the requirements of companies that want to automate or digitise processes with cloud-based mobile apps. No-code tools are frequently created keeping in mind the requirements of enterprise users, as distinct from regular IT and developer teams. But these platforms are not uniform in how they are marketed; rather, they differ in their functionality, integrations and business use cases, with specific applications such as business automation and seamless ERP workflow integration. This shift in focus is intended to speed up the software development cycle, where getting IT teams to build business apps may take up excessive resources, funds and time, all of which may be hard to come by relative to market demands. How Is No Code Software Implemented In The Enterprise? No code development platforms usually use business-scale APIs to connect particular business systems and workflows while bundling a working set of user functionality. Everyday business users can create filters and information queries to enable immediate customisation. They can then utilise APIs to combine data from different sources or applications smoothly. Drag and drop widgets or separate components may be visually arranged to create new apps or configure organisational workflows. Using templated user interfaces and drag-and-drop development functions for web forms, workflows, and data analytics will allow business operators to come up with applications and productive ideas. Companies Providing No Code and Low Code Business Tools To highlight that no-code tools are on the rise, here are some of the rising companies and their solutions: AppSheet, a startup purchased by Google in January 2020, provides a no-code development platform (NCDP). The company enables people to build mobile and web apps by making use of data sources such as Office 365, Google Drive, Dropbox, and other cloud-focused spreadsheet and database platforms. AppSheet can be used for a wide variety of enterprise cases such as project management, customer relationship management, and worker reporting. Salesforce Lightning: While Google acquired AppSheet recently, other big tech companies such as Salesforce are also taking no-code seriously. Given the company has a wide product portfolio of business apps already in place, a no-code platform called Lightning fits in perfectly. The Lightning Platform gives out-of-the-box means to automate enterprise workflows, helps users connect with external applications via APIs, and provides layouts, widgets and more. The Lightning Platform also gives simple means to build customised business logic and create entire apps utilising just clicks, with tools like App Builder, Community Builder, and others.
The Lightning Component Framework helps develop digital components which utilise Apex on the server-side and JavaScript on the client-side. Users can drag and drop components in Lightning App Builder to create desktop and mobile apps with comfort. Microsoft PowerApps: With a dedicated focus on business users, Microsoft wouldn’t want to stay out of the no-code and low code race. Their product PowerApps was released January 2017 as a Platform as a Service which enables users and teams to quickly roll out low-code apps using pre-built templates, drag-and-drop simplicity, and quick deployment. It provides developers with the tools to seamlessly extend app capabilities with Azure Functions and custom connectors to proprietary or on-premises systems. On April 2 2020, there will be a new launch showcasing the latest Power Apps innovations at the Microsoft Business Applications Virtual Launch Event. Airtable is a user-centric spreadsheet app which gives a very simple way to develop custom applications and needs no coding skills. Airtable was founded in 2012 and is a spreadsheet-database hybrid, with the features of a database but can be used as a spreadsheet. The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as ‘checkbox’, ‘phone number’, and ‘drop-down list’, and can reference file attachments like images. Users can easily create a database, set up column types, add records, link tables to one another, and collaborate. Airtable received $52 million in new funding, and according to the company, it wants to compete against giants such as Google and Microsoft in the business apps segment by democratising app development for all enterprise users. SnapBoard is a Y Combinator supported company which started when founder Calum Moore decided to manage all of his apps and tools from a single dashboard without using any coding. SnapBoard enables users to connect and control a broad range of apps and platforms in a singular, customizable dashboard. Users can design boards that work as in-house software tools without taking the product or engineering team required for the project. Snapboard has more than 50 apps accessible on the Snapboard platform, including MailChimp, Google Analytics, Shopify, Dropbox, MongoDB, MySQL, Trello, Zendesk and others. What Will Be The Impact and Benefits? The shift from conventional business software to a lean no-code development methodology may also impact IT leaders and tech departments. According to analysts, the usage of no-code tools may put IT in a more governance-centric and supervisory role rather than dynamic software programming and debugging. The potential benefits of utilising an NCDP tool is that with web access and practical business intelligence one can become an application developer, which is transformational for enterprise productivity. On the other hand, IT folks have pointed out that business users inept at debugging code may create further challenges for tech teams, given the sensitivity around cybersecurity and business theft events caused by software bugs. Nevertheless, NCDPs have been anticipated as the next wave in programming and techniques for rapid app development, which could be revolutionary for the software world. According to analysts, the rise of no-code platforms is similar to the time before the coming of computer operating systems, which impacted personal computing with the help of GUIs. 
Prior to modern operating systems, computers could only be used by technologists and people versed in command-line environments such as DOS. Overview The evolution of programming languages like Java, C and Python introduced layers of abstraction to mask the complications behind programming systems, which made it much easier for developers to build apps. And now experts believe no-code development is the natural next step in the evolution of software.
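As a rough illustration of what a no-code platform does behind the scenes when it "utilises APIs to combine data from different sources", here is a minimal Python sketch. The endpoint URLs and JSON field names are hypothetical placeholders, not real services; an actual NCDP would generate an equivalent integration from drag-and-drop configuration rather than hand-written code.

```python
# Hypothetical sketch of the kind of API aggregation a no-code platform
# configures visually. The URLs and JSON fields below are placeholders.
import requests

CRM_API = "https://example-crm.invalid/api/customers"      # hypothetical endpoint
ORDERS_API = "https://example-shop.invalid/api/orders"     # hypothetical endpoint

def fetch_json(url: str) -> list[dict]:
    """Fetch a JSON list from an API endpoint."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

def build_dashboard_rows() -> list[dict]:
    """Join customer records with their order totals, as a dashboard widget might."""
    customers = fetch_json(CRM_API)     # e.g. [{"id": 1, "name": "Asha"}, ...]
    orders = fetch_json(ORDERS_API)     # e.g. [{"customer_id": 1, "amount": 250.0}, ...]

    totals: dict[int, float] = {}
    for order in orders:
        totals[order["customer_id"]] = totals.get(order["customer_id"], 0.0) + order["amount"]

    return [
        {"customer": c["name"], "total_spent": totals.get(c["id"], 0.0)}
        for c in customers
    ]

if __name__ == "__main__":
    for row in build_dashboard_rows():
        print(row)
```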
No code development platforms (NCDPs) are enabling programmers and non-programmers to build application software by using graphical user interfaces and configuration rather than conventional computer coding. Such platforms have suddenly risen in popularity as enterprises have to manage the fast growth of the mobile workforce with a limited number of skilled software developers. In fact, […]
["AI Features"]
["ai lessons for beginners", "Business Intelligence", "enterprise ai", "how to implement business intelligence", "java project ideas", "Software", "Software Development", "Why is Python so Popular"]
Vishal Chawla
2020-03-23T13:00:00
2020
1,128
["how to implement business intelligence", "AI", "MongoDB", "ML", "Software Development", "R", "Why is Python so Popular", "RAG", "Python", "ai lessons for beginners", "analytics", "SQL", "JavaScript", "enterprise ai", "Business Intelligence", "java project ideas", "Software", "Azure"]
["AI", "ML", "analytics", "RAG", "Azure", "MongoDB", "Python", "R", "SQL", "JavaScript"]
https://analyticsindiamag.com/ai-features/why-no-code-low-code-may-soon-become-the-new-tech-trend-in-business-software/
3
10
4
true
false
false
10,170,322
Google Goes After Apple, OpenAI and Meta With New AI Products
At its annual Google I/O developer conference in California, Google made it clear that AI is now central to everything it builds. Gemini 2.5 lies at the heart of new product features, developer tools, and infrastructure improvements. From real-time 3D video calls using Google Beam to smarter coding assistants and AI-powered search, Google is advancing AI integration across its ecosystem, prioritising speed, efficiency, and support for developers. Gemini Replaces Google Assistant, Powers Real-Time Interactions The tech giant officially replaced the Google Assistant with Gemini 2.5, which now acts as the intelligence layer across productivity tools, cameras, and more. Gemini 2.5 Flash and Gemini 2.5 Pro introduce advanced text-to-speech capabilities via native audio out, offering control over voice style, accent, and pace. It supports single and multi-speaker output across 24 languages. A standout feature, Gemini Live, combines the camera, voice, and web access to deliver real-time, contextual answers—an evolution of last year’s Project Astra. Gmail also sees deeper integration, with Personalised Smart Replies allowing users to generate more natural responses. CEO Sundar Pichai said the feature even helps him respond to friends he might otherwise ignore, calling it “a way to be a better friend.” New AI Plans and Developer Tools Google rebranded its AI subscriptions. The $20/month AI Premium plan is now AI Pro, while a new top-tier AI Ultra plan launches at $250/month, exceeding OpenAI’s $200 ChatGPT Pro offering. Under the hood, Gemini 2.5 Pro now leads in benchmarks like WebDev Arena and LMArena. The model has been enhanced with LearnLM for education-focused use cases and a new Deep Think experimental mode that enables advanced reasoning on complex tasks such as USAMO and MMMU. For developers, Google added thought summaries for easier debugging, thinking budgets to balance latency and cost, and new SDK support for open-source agent frameworks via MCP. Google introduced Gemma 3n, a mobile-first model optimised for phones, tablets, and laptops, developed in collaboration with Qualcomm and Samsung. Available now in early preview, it will soon integrate with Gemini Nano across Android and Chrome. AI Tools That Code, Design, and Animate Google introduced Jules, a new AI coding assistant, now in public beta worldwide. Using Gemini’s advanced reasoning, Jules helps developers write and fix code faster and easier, expanding Google’s AI-for-coding capabilities beyond what OpenAI’s Codex or Cognition’s Devin currently offer. Google also announced Stitch, an AI tool designed to streamline the creation of user interfaces and front-end code. Stitch uses natural language and image inputs to generate UI designs and corresponding code rapidly. Users can describe their desired app or website in plain English, specifying layout, colour schemes, or other preferences, or upload sketches and wireframes. Stitch then produces multiple design variants for exploration.In creative tools, Google launched Imagen 4, which enables photorealistic image generation, while Flow lets users type scenes and characters to create AI-generated video clips. Meanwhile, Veo 3 adds realism and physics-aware animation to AI videos. AI Search and AR Wearables Search now features an AI Mode —essentially a chatbot embedded in search—to assist with complex queries. 
Adding on to this, the tech giant also introduced a new shopping experience integrated into Search's AI Mode, combining Gemini AI capabilities with Google's Shopping Graph, which contains over 50 billion product listings refreshed hourly. Google expanded its virtual try-on technology to allow shoppers to see how clothes look on themselves by uploading a full-length photo. Google is also working on mixed-reality glasses under the Android XR umbrella, showing off floating text, AR maps, and translations during I/O. It has partnered with Gentle Monster and Warby Parker, while Samsung's Project Moohan headset is slated for release later this year. Rebranded as Google Beam, the updated version of Project Starline turns 2D video calls into real-time 3D experiences using six cameras and AI. Beam tracks head movements accurately at 60 frames per second. It is made for business use and runs on Google Cloud. Beam will be available to selected customers later this year, with HP and Zoom as partners. AI for the Real World Pichai closed the event with two real-world AI initiatives. FireSat, an upcoming satellite network, will help detect wildfires early. Wing, Google's drone delivery service, was used to deliver supplies during Hurricane Helene and continues to expand its capabilities.
Google rebranded its AI subscriptions. The $20/month AI Premium plan is now AI Pro, while a new top-tier AI Ultra plan launches at $250/month, exceeding OpenAI’s $200 ChatGPT Pro offering.
["Global Tech"]
["Google"]
Siddharth Jindal
2025-05-21T09:54:19
2025
708
["Go", "ChatGPT", "API", "TPU", "OpenAI", "AI", "ML", "Gemma 3", "Google", "R", "Gemini 2.5"]
["AI", "ML", "ChatGPT", "OpenAI", "Gemini 2.5", "Gemma 3", "TPU", "R", "Go", "API"]
https://analyticsindiamag.com/global-tech/google-goes-after-apple-openai-and-meta-with-new-ai-products/
3
10
1
false
false
false
10,005,559
Converting An Image To A Cartoon Using OpenCV
Computer vision is one of the hottest fields in Artificial Intelligence with a wide variety of applications. OpenCV is the most popular library used in computer vision, with a lot of interesting functionality. If you want to start your journey in computer vision, OpenCV is a good place to begin; it is easy to understand and implement. In this article, let's have some fun converting normal images into cartoons using OpenCV. We will cover the following steps to convert the image to a cartoon: importing libraries, reading the input image, detecting edges in the image, converting into grayscale and applying the median blur, and cartoonifying the image. Converting Image to Cartoon Using OpenCV Now, let us proceed step-by-step. Step-1: Importing the libraries Here we are importing the required libraries. If you are working in Google Colab, you also need to import google.colab.patches. #Importing required libraries import cv2 import numpy as np from google.colab.patches import cv2_imshow Step-2: Reading the image In this step, we will read the image. We have downloaded an image of Virat Kohli from Google Images and will perform our experiment on this image. #Reading image img = cv2.imread("/content/virat.jpeg") from skimage import io io.imshow(img) As we can see, the input image read by OpenCV is shown as a BGR (Blue-Green-Red) image, so we need to convert it to RGB (Red-Green-Blue). #Converting to RGB img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) io.imshow(img) Step-3: Detecting edges Here we are going to detect the edges in the image using adaptive thresholding methods. #Detecting edges of the input image gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) gray = cv2.medianBlur(gray, 5) edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9) io.imshow(edges) Step-4: Cartoonifying the image In this step, we will be cartoonifying the image using the bilateral filter method. #Cartoonifying the image color = cv2.bilateralFilter(img, 9, 250, 250) cartoon = cv2.bitwise_and(color, color, mask=edges) Step-5: Final Output (Cartoon Image) Finally, we will visualise the final output. io.imshow(cartoon) The transformation from input to output Conclusion In the above demonstration, we converted a normal image into a cartoon by implementing a few lines of code using computer vision techniques. The complete code of this implementation is available on AIM's GitHub repository. Please go through this link to find the notebook.
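For readers running the pipeline outside Google Colab, the steps above can be collected into one short script. This is a consolidated sketch of the same OpenCV calls used in the article; the image path is a placeholder, and matplotlib is used for display instead of the Colab-specific cv2_imshow.

```python
# Consolidated sketch of the cartoonifying pipeline from the steps above.
# The input path is a placeholder; replace it with any local image.
import cv2
import matplotlib.pyplot as plt

def cartoonify(path: str):
    img = cv2.imread(path)                            # OpenCV reads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)        # convert to RGB for display

    # Edge mask: grayscale -> median blur -> adaptive threshold
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    gray = cv2.medianBlur(gray, 5)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 9)

    # Smooth colours while keeping edges, then combine with the edge mask
    color = cv2.bilateralFilter(img, 9, 250, 250)
    return cv2.bitwise_and(color, color, mask=edges)

if __name__ == "__main__":
    cartoon = cartoonify("virat.jpeg")   # placeholder path
    plt.imshow(cartoon)
    plt.axis("off")
    plt.show()
```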
Computer vision is one of the hottest fields in Artificial Intelligence with a wide variety of applications. OpenCV is the most popular library used in computer vision with a lot of interesting stuff. If you want to start your journey in computer vision you can start from learning OpenCV. It is easy to understand and […]
["Deep Tech"]
["OpenCV"]
Prudhvi varma
2020-08-25T18:00:26
2020
377
["NumPy", "artificial intelligence", "TPU", "AI", "computer vision", "OpenCV", "Colab", "Ray", "Aim", "R"]
["AI", "artificial intelligence", "computer vision", "Aim", "Ray", "Colab", "OpenCV", "NumPy", "TPU", "R"]
https://analyticsindiamag.com/deep-tech/converting-an-image-to-a-cartoon/
3
10
0
true
false
false
10,164,604
Indian Tech Industry to Reach $300 Billion Revenue by FY26, Nasscom Report
On Monday, Nasscom revealed that the Indian technology industry is moving towards achieving $300 billion in revenue by the financial year 2025-26 (FY26). It further highlighted that in the current fiscal year (FY25), the sector is set to add at least 126,000 new jobs, bringing the total workforce to 5.8 million. The report further stated that in FY25, the industry strengthened its position as the global technology and innovation hub. The sector is expected to witness resilient growth in FY25, with revenue (including hardware) estimated to reach $283 Bn (5.1% y-o-y growth), an addition of nearly $14 Bn over last year. Additionally, the domestic technology sector is moving towards the $60 Bn mark, growing at 7.0% y-o-y to reach $58.2 Bn. Key growth drivers include the increasing use of enterprise software and cloud solutions and a 21% rise in data centre capacity, which has attracted more investments. Artificial intelligence (AI) adoption is gaining momentum, with Indian tech firms focusing on long-term partnerships to develop scalable AI solutions. The Nasscom Annual Enterprise CXO Survey 2025 predicts a strong growth trajectory for the technology sector in Calendar Year 2025 (CY25), driven by increased spending on digital transformation, particularly in AI-powered solutions. About 82% of CXOs expect to raise their digital investments by more than 5% compared to CY24. For technology service providers, the Financial Year 2025-26 (FY26) is anticipated to bring higher business growth. About 77% of companies in the Nasscom Annual Tech Services CXO Survey 2025 forecast increased technology spending. Expanding digital adoption, emerging markets, and rising AI-driven demand will fuel this growth. However, only 45% of service providers expect an increase in recruitment compared to FY25. Despite major shifts in global economic patterns, the current fiscal (FY25) has been a year of strategic resilience, with segments such as engineering R&D and global capability centres (GCCs) driving growth for the technology industry in India. In this regard, Nasscom chairperson Sindhu Gangadharan stated that increased AI implementation, the emergence of agentic AI disrupting business models, and the increasing maturity of GCCs as hubs for value and transformation are driving the industry shifts. Additionally, e-commerce is expanding rapidly, with a 35% annual growth rate. The sector's gross merchandise value (GMV) is expected to approach $200 billion soon. The digital economy now contributes about 12% to India's GDP, with digital public infrastructure adding an extra 1%. With businesses increasingly focusing on digital transformation, experts stress the need for companies to invest in building resilient organisations and enhancing digital trust to sustain long-term growth in the tech industry.
The domestic technology sector is moving towards the $60 Bn mark, growing at 7.0% y-o-y.
["AI News"]
["GCC"]
Shalini Mondal
2025-02-25T20:59:10
2025
423
["API", "agentic AI", "artificial intelligence", "GCC", "AI", "digital transformation", "Scala", "Git", "Rust", "GAN", "R"]
["AI", "artificial intelligence", "agentic AI", "R", "Rust", "Scala", "Git", "API", "GAN", "digital transformation"]
https://analyticsindiamag.com/ai-news-updates/indian-tech-industry-to-reach-300-billion-revenue-by-fy26-ey-report/
3
10
4
false
false
false
59,496
Top Recent Research Papers On Time Series Modelling
Time series models predominantly, over the years, have focussed on individual time series via local models. This changed with the popularisation of deep learning techniques. This was also supported by the increase of temporal data availability, which led to many deep learning-based time series algorithms. Due to their natural temporal ordering, time-series data are present in almost every task that is registered, taking into account some notion of ordering. From electronic health records and human activity recognition to acoustic scene classification and cyber-security, time series is encountered in many real-world applications. Here are a few top works that improved the way we do time series modelling using deep learning: Diverse Beam Search Year: 2016 By Virginia Tech and Indiana University, USA Beam search (BS) is widely used as an approximate inference algorithm to decode output sequences from neural sequence models. BS explores the search space in a greedy left-right fashion retaining only the top-B candidate, resulting in sequences that differ only slightly from each other. To overcome this problem, the authors propose a Diverse Beam Search. DBS decodes a list of diverse outputs by optimising for a diversity-augmented objective. Moreover, these gains are achieved with minimal computational or memory overhead as compared to the beam search. The experiments were carried out on image captioning, machine translation and visual question generation using both standard quantitative metrics and qualitative human studies. The results show that this method consistently outperformed BS and previous techniques. Distributed and Parallel Time Series Feature Extraction Year: 2016 By: Karlsruhe/ University of Auckland/ University of Freiburg Feature selection is very challenging, especially for time series classification, for which each label or regression target is associated with several time-series and meta-information simultaneously. This work presents an efficient, scalable feature extraction algorithm for time series, which filters the available features in an early stage of machine learning pipelines with respect to their significance for the classification or regression task while controlling the expected percentage of selected but irrelevant features. The proposed algorithm combines established feature extraction methods with a feature importance filter. It has low computational complexity and can work with only limited domain knowledge available. ShallowRNN Year: 2019 By: Microsoft To induce long-term dependencies, and yet admit parallelisation, ShallowRNN was introduced. In this architecture, the first layer splits the input sequence and runs several independent RNNs. The second layer consumes the output of the first layer using a second RNN, thus capturing long dependencies. Furthermore, the authors show that for time-series classification, this technique leads to substantially improved inference time over standard RNNs without compromising accuracy. For example, we can deploy audio-keyword classification on tiny Cortex M4 devices (TinyML), which was not possible using standard RNN models. Multivariate LSTM-FCNs Year: 2018 By: University Of Illinois, Chicago, USA Over the past decade, multivariate time series classification has received great attention. 
The authors propose transforming the existing univariate time series classification models, the Long Short Term Memory Fully Convolutional Network (LSTM-FCN) and Attention LSTM-FCN (ALSTM-FCN), into a multivariate time series classification model by augmenting the fully convolutional block with a squeeze-and-excitation block to further improve accuracy. These models outperform most state-of-the-art models while requiring minimal preprocessing. The proposed models work efficiently on various complex multivariate time series classification tasks such as activity recognition or action recognition. Furthermore, the proposed models are highly efficient at test time and small enough to deploy on memory-constrained systems. SOM-VAE Year: 2019 By: ETH, Zurich, Switzerland Representation learning in the context of time series data is usually difficult to interpret. This non-intuitive nature comes from the data's high dimensionality, which is not suited to human understanding. To address this problem, researchers at ETH Zurich proposed a new representation learning framework that combines self-organising maps and variational autoencoders. This framework allows one to learn discrete representations of time series. The authors introduce a new way to overcome the non-differentiability in discrete representation learning and present a gradient-based version of the traditional self-organising map algorithm that is more performant than the original. GluonTS Year: 2019 By: AWS Introduced by the cloud giant Amazon Web Services, Gluon Time Series (GluonTS) is a library for deep-learning-based time series modelling. It simplifies experimentation with time series models for forecasting or anomaly detection, and has all the necessary components for quickly building new models and for efficiently running them and evaluating model accuracy. Check more models and leaderboards here.
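To make the MLSTM-FCN idea above concrete, here is a minimal PyTorch sketch of an LSTM branch running in parallel with a 1D fully convolutional branch whose first block is augmented with squeeze-and-excitation. The layer sizes are illustrative assumptions and this is a simplified sketch, not the authors' reference implementation.

```python
# Minimal sketch of an MLSTM-FCN-style classifier: an LSTM branch in parallel
# with a 1D fully convolutional branch that uses a squeeze-and-excitation block.
# Layer sizes are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=2))      # global average pool over time
        return x * weights.unsqueeze(-1)      # re-weight each channel

class MLSTMFCN(nn.Module):
    def __init__(self, n_variables: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_variables, hidden, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(n_variables, 128, kernel_size=8, padding=4),
            nn.BatchNorm1d(128), nn.ReLU(), SqueezeExcite(128),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):                     # x: (batch, time, variables)
        _, (h, _) = self.lstm(x)              # last hidden state of the LSTM branch
        conv_out = self.conv(x.transpose(1, 2)).mean(dim=2)   # global pooling over time
        return self.head(torch.cat([h[-1], conv_out], dim=1))

if __name__ == "__main__":
    model = MLSTMFCN(n_variables=3, n_classes=5)
    dummy = torch.randn(8, 100, 3)            # batch of 8 series, 100 steps, 3 variables
    print(model(dummy).shape)                 # torch.Size([8, 5])
```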
Time series models predominantly, over the years, have focussed on individual time series via local models. This changed with the popularisation of deep learning techniques. This was also supported by the increase of temporal data availability, which led to many deep learning-based time series algorithms. Due to their natural temporal ordering, time-series data are present […]
["AI Trends"]
["forecasting", "RNN", "Time Series"]
Ram Sagar
2020-03-23T16:00:55
2020
723
["Go", "machine learning", "TPU", "AWS", "AI", "ML", "Scala", "deep learning", "Time Series", "anomaly detection", "RNN", "forecasting", "R"]
["AI", "machine learning", "ML", "deep learning", "anomaly detection", "AWS", "TPU", "R", "Go", "Scala"]
https://analyticsindiamag.com/ai-trends/top-time-series-model-state-of-the-art/
3
10
0
false
true
true
1,694
IoT India Leaders Outlook 2017
In March 2017, we conducted a survey of IoT leaders in India to learn their views and plans with respect to their companies, and their outlook for the growth of the IoT industry. Based on that survey, we bring to you the annual IoT India Leaders Outlook study for this year. The 2017 IoT India Leaders Outlook survey, which is the focus of this study, comprised a total of 33 online interviews conducted in March 2017 among key decision makers in the IoT industry in India. The purpose of the study is to gain clearer insight into the confidence level of business leaders from the Indian IoT industry and to provide these executives with information regarding the perspectives and actions of their peers. Overall, the responses were extremely optimistic and sentiment looked positive. As IoT moves further past being just a buzzword towards real adoption by enterprises, leaders' optimism about growth and demand continues to climb for the second year in a row. Compared with 2016, these leaders are significantly more optimistic about IoT in India. Business Confidence 97% of respondents expect the demand for IoT at their organisation to increase over the next 12 months. Thus, there is an extremely high level of confidence in the industry currently. Next, we asked the leaders to rate, on a scale of 1-10, their confidence in IoT being a key focus area for organisations globally over the next 12 months: 33% of respondents gave a perfect 10/10 score, while 33% of respondents gave a score of 7 or lower out of 10. The average confidence score was 3/10. Also, we calculated the Net Sentiment Score (the difference between the percentage giving scores of 9 or 10 and the percentage giving scores of 0 through 6) at 27%. At 27%, IoT leaders in India exhibit a fairly positive outlook towards the industry. 82% of decision makers plan to increase their IoT workforce in the next 12 months. Just 18% say that they have no hiring plans for the coming year. Key Challenges The biggest challenge faced by IoT leaders in India is around procedures and processes. 67% of leaders cite "procedures/processes not standardised" as one of their key challenges. 45% of decision makers believe that 'unavailability of IoT talent' is the major challenge they face. This is in line with our earlier findings and is widely documented and spoken about. There is a huge demand for IoT professionals across levels. The pool of resources is not sufficient to fulfil the current requirements. While the industry is collaborating with several institutes and organisations to fulfil these demands, the quality of professionals and experience still remains a challenge. 64% of leaders believe that 'little knowledge of IoT among customers' is a major challenge for them. Evidently, we have not done enough to propagate IoT understanding to a wider audience in the market. While IoT has been a buzzword across the industry, very few understand how they can cash in on it. And even fewer can identify the key areas within their business to apply it to. Few leaders are wary of competition; just 15% believe it's a challenge for them. Only 3% believe that IoT demand has peaked. It's evident that the current optimism is due to an increased demand for IoT. Growth Areas We asked our respondents what, according to them, are the biggest growth industries for IoT in the next 12 months. 'Energy & Utilities' is considered a top area of growth in IoT by 70% of respondents. 'Government/Smart Cities' comes a close second at 67%.
Key strategic decisions We asked our respondents about the key strategic decisions they plan to take in the coming year in order to grow their business. 88% of IoT leaders plan to collaborate with external partners. 64% are planning to grow their existing offerings. Just 33% plan to look for external financing in the coming year. Respondents' Profiles Here is the profile of our respondents.
In March 2017, we conducted a survey of IoT leaders in India to learn their views and plans with respect to their companies, and their outlook for growth of IoT industry. Based on that survey, we bring to you the annual IoT India Leaders Outlook study for this year. The 2017 IoT India Leaders outlook […]
["AI Features"]
[]
Дарья
2017-03-23T08:40:24
2017
652
["Go", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "GAN", "R"]
["AI", "RAG", "R", "Go", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/iot-india-leaders-outlook-2017/
3
7
4
false
false
false
24,686
Best Practices On Setting Up Development And Test Sets For ML, According To Andrew Ng
The availability of data and increased computational power have been the biggest drivers of artificial intelligence. Google’s TensorFlow played a huge role in revolutionising machine learning as it allows developers to build neural networks without knowing all the functionality. It supports multiple languages, so developers can create the ML models in Python and use them easily in other languages as well. This article is based on Andrew Ng’s free ebook Machine Learning Yearning where he gives technical direction for machine learning projects. One of the key aspects he discusses is about setting up the development and test sets. In the book, Ng discusses what happens when a team decides to deploy a classifier in the app and tests the performance based on the data collected. For example, you download a large training set by downloading pictures of cats (positive examples) and non-cats (negative examples) from different websites. The dataset is further split into 70 percent to 30 percent – training and test sets. Using this data, one builds a cat detector which works well on the training and test sets. But when this classifier is deployed into a mobile app, the performance doesn’t fare well. Setting Up Development And Test Sets Ng emphasises that working on machine learning applications is hard enough but having mismatched development and test sets add to the uncertainty about whether improving on the development set distribution also improves test set performance. As a lesson for beginners, he states that having mismatched development and test sets can make it harder to figure out what is and isn’t working. Ng affirms that it is an important research problem to develop learning algorithms that are trained on one distribution and generalise well to another. But if your goal is to make progress on a specific machine learning application rather than make research progress, he recommends choosing development and test sets that are drawn from the same distribution. How Large Should The Development/Tests Sets Be? The development set should be large enough to detect differences between algorithms that one is working on, states Ng. He cites an example – if classifier A has an accuracy of 90.0% and classifier B has an accuracy of 90.1%, then a development set of 100 examples would not be able to detect this 0.1% difference. Compared to other machine learning problems, a 100-example development set is small. Development sets with sizes from 1,000 to 10,000 examples are common. You stand a good chance of detecting an improvement of up to 0.1% when the set features 10,000 examples. For mature and important applications like , advertising, web search and product recommendations the former Baidu and Google chief talks about teams that are highly motivated to eke out even a 0.01% improvement, since it has a direct impact on the company’s profits. In this case, the development set could be much larger than 10,000, in order to pick up even the smallest of improvements. What should be the size of the test set? It should be large enough to give high confidence in the overall performance of the system. One popular heuristic had been to use 30% of your data for your test set. This works well when you have a modest number of examples — say 100 to 10,000 examples. 
But now, in the age of big data, where we have machine learning problems with sometimes more than a billion examples, the fraction of data allocated to dev/test sets has been shrinking, even as the absolute number of examples in the development or test sets has been growing. Ng emphasises that this eliminates the need to have excessively large development or test sets beyond what is needed to evaluate the performance of your algorithms. Ng recommends that teams should: choose development and test sets to reflect the data they expect to get in the future and want to do well on; not simply use 30% of the available data as the test set, especially if the future data (mobile phone images) will differ in nature from the training set (website images); and make the development and test sets large enough to accurately represent the performance of the model. When discussing best practices on splitting test and development datasets, a Stanford tutorial notes that academic datasets often come with a train/test split (to be able to compare different models on a common test set). You will therefore have to build the train/development split yourself before beginning your project. Data Collection Another key tip is that, as part of the machine learning strategy, teams should define the data collection process. If teams know what they want to predict, it will help them outline what data needs to be mined. By and large, the general recommendation for beginners is to reduce the complexity of data by understanding exactly what type of data needs to be harnessed. For example, most business problems can be solved with a simple segmentation, so it is important to know the task or business problem and pick the right algorithm for it. ML algorithms fall into five major categories: cluster analysis, classification, ranking, regression and generation. So, segmenting an audience falls under cluster analysis.
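A minimal sketch of the splitting advice above, assuming you already have a pool of examples drawn from the distribution you care about (for instance, mobile-phone images rather than website images). The sizes below follow the rule of thumb in the piece, carving out a fixed dev/test set of a few thousand examples rather than a blind 30 percent of all data; the numbers are illustrative.

```python
# Sketch of carving out dev/test sets from the distribution you care about,
# instead of blindly reserving 30% of all available data.
# Sizes are illustrative; adjust them to the improvements you need to detect.
import numpy as np

def split_dataset(n_examples: int, dev_size: int = 10_000, test_size: int = 10_000,
                  seed: int = 42):
    """Return shuffled index arrays for train / dev / test."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_examples)
    dev = indices[:dev_size]
    test = indices[dev_size:dev_size + test_size]
    train = indices[dev_size + test_size:]
    return train, dev, test

if __name__ == "__main__":
    # With 1,000,000 examples, a fixed 10,000-example dev set gives a good chance
    # of detecting ~0.1% improvements, and the held-out fraction shrinks as the
    # dataset grows -- the trend the article describes.
    train, dev, test = split_dataset(1_000_000)
    print(len(train), len(dev), len(test))   # 980000 10000 10000
```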
The availability of data and increased computational power have been the biggest drivers of artificial intelligence. Google’s TensorFlow played a huge role in revolutionising machine learning as it allows developers to build neural networks without knowing all the functionality. It supports multiple languages, so developers can create the ML models in Python and use them […]
[]
["data collection", "dataset"]
Richa Bhatia
2018-05-17T08:52:26
2018
853
["dataset", "Go", "big data", "machine learning", "artificial intelligence", "AI", "neural network", "ML", "Python", "data collection", "TensorFlow", "R"]
["AI", "artificial intelligence", "machine learning", "ML", "neural network", "TensorFlow", "Python", "R", "Go", "big data"]
https://analyticsindiamag.com/ai-features/best-practices-on-setting-up-development-and-test-sets-for-ml/
4
10
1
true
true
true
10,101,420
Are Banks Ready for Generative AI?
Generative AI represents a paradigm shift in the way we approach creativity and innovation. However, given banking is one of the most-regulated industries, generative AI adoption could be tricky. At Cypher 2023, India’s largest AI conference, Srikanth Gopalakrishnan, head of the India Technology Centre at Deutsche Bank Group, said that we must tread carefully as we venture into this new frontier, taking into account the ethical challenges it poses and strive for a measured and responsible adoption. As we leverage the capabilities of generative AI, all the while safeguarding the unique attributes of human creativity, we can unlock a future where technology and human ingenuity harmoniously coexist. This harmonious coexistence, according to Gopalakrishnan, promises a more dynamic and enhanced society. “In regulated industries like banking and healthcare, a lot of homework has been done along with a lot of R&D in order for us to get to where we think the benefits of AI are going to be good,” Gopalakrishnan said. Explainability is key Gopalakrishnan said for banks to leverage this technology, they need to be 100 percent sure that the technology works. Given LLMs still hallucinate, leveraging the technology for customer-facing domains could prove tricky. “You can’t just say invest in the stock and leave it at that. The ‘trust me’ is not going to work. The ‘trust me’ must also have explainability. And that is why certain industries (like banking) will have to be extra careful about how to take this forward in a viable fashion. “We still need explainability. We’ve been talking to Google, NVIDIA and others, and they are all working on essentially marking up the explainability aspect within the code that they have. First of all, it needs to give confidence to us internally, then to the customers and the regulators,” he said. However, he feels there are a few areas in banking where generative AI can be adopted without worrying much about regulations. “Customer service is one of those areas,” Gopalakrishnan said. “I believe the volume of business can actually go up in terms of how to actually interact and reach out to customers. We are talking about bots being available, personalised responses being provided.” Other areas where banks can leverage generative AI are risk management and fraud detection. Future-proofing of jobs In his talk, Gopalakrishnan also touched upon the ongoing discussion on the impact of generative on jobs. “We have a large number of graduates joining us every year and I have been talking to them. As someone who is relatively new to the industry, they might have to rebrand themselves at least three to four times in their career,” he said. “So, it’s important for us to start understanding this when speaking about AI, and the transformation it will bring about in the traditional jobs. Repetitive tasks are going to go away, in fact, they are already fading away.” Even jobs that require a certain level of intelligence from a human to be able to make a decision will also probably go away, Gopalakrishnan said. The ability to learn and relearn is crucial, regardless of the career stage.
This harmonious coexistence between AI and humans, according to Gopalakrishnan, promises a more dynamic and enhanced society
["AI Features"]
[]
Pritam Bordoloi
2023-10-13T15:59:03
2023
516
["Go", "programming_languages:R", "AI", "innovation", "RAG", "ViT", "generative AI", "Rust", "R", "fraud detection"]
["AI", "generative AI", "RAG", "fraud detection", "R", "Go", "Rust", "ViT", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/cypher2023-are-banks-ready-for-generative-ai/
2
10
2
false
true
false
10,132,532
CoRover.ai Joins NVIDIA Inception to Accelerate BharatGPT
CoRover.ai, the company behind BharatGPT, announced its inclusion in NVIDIA Inception, a program designed to support startups advancing industries through technological innovation. CoRover.ai, known for its human-centric conversational AI platform, has developed BharatGPT, India’s first indigenous generative AI platform. The platform is accessible across various channels, formats, and languages, serving 1.3 billion users. CoRover’s solutions include virtual assistants like chatbots, voicebots, and videobots, which are deployed across multiple sectors, including government and private organizations like IRCTC, LIC, and the Indian Navy. The NVIDIA Inception membership will provide CoRover with access to NVIDIA’s resources, including GPUs, compute power, and software support. This partnership is expected to accelerate CoRover’s development of AI-driven customer engagement solutions. “As we are committed to addressing real business use cases in a B2B2C landscape, having access to NVIDIA’s technological know-how and resources through NVIDIA Inception will help CoRover effectively handle large language models and domain-specific models, automating conversational AI use cases,” said Ankush Sabharwal, CEO of CoRover. NVIDIA Inception supports startups with benefits such as NVIDIA Deep Learning Institute credits, preferred pricing on hardware and software, and ongoing technological assistance, aiding in product development, prototyping, and deployment. CoRover.ai recently announced a strategic partnership with AI auditing firm EthosAI.one to advance the development of responsible AI. The partnership aims to ensure the reliability, fairness, and accuracy of BharatGPT, reinforcing it as a trustworthy AI solution. EthosAI.one will continuously audit and enhance BharatGPT models, aligning them with the highest ethical standards.
The NVIDIA Inception membership will provide CoRover with access to NVIDIA’s resources, including GPUs, compute power, and software support.
["AI News"]
["CoRover.ai"]
Siddharth Jindal
2024-08-13T17:02:10
2024
242
["Go", "AI", "chatbots", "virtual assistants", "GPT", "Aim", "deep learning", "generative AI", "Rust", "CoRover.ai", "R"]
["AI", "deep learning", "generative AI", "Aim", "chatbots", "virtual assistants", "R", "Go", "Rust", "GPT"]
https://analyticsindiamag.com/ai-news-updates/corover-ai-joins-nvidia-inception-to-accelerate-bharatgpt/
2
10
2
false
false
false
10,136,589
Can Neuroscience Help Enterprises Derive Value from Generative AI?
Around 80% of generative AI use cases fail to deliver business value. Raja Jamalamadaka, managing director at Roche Information Solutions India, believes this disconnect lies within the human brain and how we interpret these technologies in a business context. While speaking at the keynote session at Cypher 2024, India’s biggest AI conference hosted by AIM Media House, Jamalamadaka said human emotions like fear, uncertainty and ambiguity are some of the reasons hindering the wider adoption of generative AI among enterprises. Drawing from his background in neuroscience, Jamalamadaka examined the intersection of human intelligence and artificial intelligence. He pointed out that a staggering 61% of respondents in a recent survey expressed fears about potential job losses due to AI advancements. This statistic underscored a critical point: while technology promises efficiency and innovation, it also instils anxiety about job security and the future of work. He believes understanding human emotions towards technology will play a critical role for enterprises that are looking to scale generative AI adoption. Hi-Touch Before Hi-Tech “High touch is all about simplicity, ease, helping people relate their emotions in a particular level, ensuring people understand that they will not lose their jobs, if anything, they will benefit from it,” Jamalamadaka said. He emphasised that CXOs must help their employees grasp the value of generative AI and its potential to enhance their work. Recognising and valuing human intelligence, he argued, can address many of these concerns and improve the limited benefits enterprises currently derive from generative AI. “High touch precedes high tech. People are doing it the other way around. Humans should be valued, while technology is meant to be utilised. Unfortunately, many people tend to prioritise technology over the value of human contributions,” he said. Moreover, many issues plaguing enterprises, such as a lack of motivation among employees, poor communication, resistance to change, and insufficient training, are significant factors hindering the ability to derive value from generative AI. By prioritising a “high touch” approach, valuing human contributions and addressing concerns like job security, businesses can better harness the potential of AI. AI Evolved From Human Intelligence Jamalamadaka also pointed out that artificial intelligence has evolved from human intelligence. AI functions in much the same way human intelligence does: it digests millions of data points and identifies patterns, enabling it to make informed decisions and predictions similar to how humans process information. “Latest research shows that the human brain processes 11 million pieces of information per second. However, we retain only 0.0004 percent of these pieces of information and forget the rest,” he said. This is how generative AI operates, but the key difference lies in recognising the significance of human intelligence, which enables us to perform many tasks that generative AI cannot yet achieve. Understanding and integrating human intelligence into technological advancements will be essential for truly realising the benefits of generative AI in the workplace.
By prioritising a “high touch” approach—valuing human contributions and addressing concerns like job security—businesses can better harness the potential of AI.
["AI Features"]
["Cypher"]
Pritam Bordoloi
2024-09-25T10:58:45
2024
470
["artificial intelligence", "programming_languages:R", "AI", "innovation", "Aim", "generative AI", "R", "Cypher"]
["AI", "artificial intelligence", "generative AI", "Aim", "R", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/can-neuroscience-help-enterprises-derive-value-from-generative-ai/
2
7
2
true
false
false
10,082,839
YouTube Enters Indian Edtech with ‘Courses’
At the ‘Google for India’ event, the tech giant added a new feature on YouTube, called ‘Courses’, a subscription-based concept to offer a structured learning experience. Although it is currently in the testing stage, it is anticipated to launch in early 2023; no specific date has been assigned yet. YouTube already offers eight monetisation routes, such as paid sponsorships, Premium subscriptions and ads, and Courses adds to that list. Before the Google for India event, Ishan John Chatterjee, managing director of YouTube India, indicated that Courses would only be accessible in three countries: India, South Korea, and the United States. According to Chatterjee, India is one of the largest markets for online education. YouTube wants to make it as simple as possible for users to access relevant content and advance their skills. He further added that the decision to monetise digital-learning content rests with the content creators, who will soon have the option to earn from videos that help viewers enhance their skills. YouTube is primarily looking into four areas for courses: digital skills, entrepreneurship, profession, and personal interest. YouTube creators will also be able to upload documents in PNG and PDF formats to explain comprehensively the courses they are delivering. The streaming site has already enlisted some local creators (LearnoHub, Speak English With Aishwarya, and Telusko) to create courses in various Indian languages on academic and vocational themes. According to YouTube director of Southeast Asia and Emerging Markets Ajay Vidyasagar, the company seeks to provide the creative ecosystem with new ways to monetise its work and create new employment opportunities. The most recent research by Oxford Economics, also shared by Vidyasagar, showed that in 2021, the creative ecosystem of YouTube supported more than 750,000 full-time equivalent jobs in India and contributed over INR 10,000 crore to the country’s GDP. With more than 300 million students attending schools in India, Google’s efforts to penetrate the Indian education sector are escalating. In recent years, Meta and Amazon have also made substantial investments. Google is aggressively competing with Meta’s Instagram in India to win over more content creators. In addition, the 2020 Indian ban on TikTok prompted Google and Meta to introduce new services to fill the gap left by the Chinese company. However, the question persists: why did YouTube choose to add another monetisation technique?
Courses is a subscription-based model for creators on YouTube even though it already offers several other monetization ways, including Ads and paid sponsorships.
["AI News"]
["Courses"]
Shritama Saha
2022-12-20T15:34:23
2022
395
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Scala", "Git", "programming_languages:Scala", "Courses", "R"]
["AI", "R", "Go", "Scala", "Git", "programming_languages:R", "programming_languages:Scala", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/youtube-enters-indian-edtech-with-courses/
2
8
1
false
false
false
48,750
WATCH: Top 10 TED Talks On Data Science
TED Talks have become a great and reliable way to enhance knowledge and open our minds to different possibilities. And interestingly, TED Talks are not limited to a specific area of interest — they cover a wide range of topics, including emerging technologies like artificial intelligence, machine learning, the internet of things, data science and quantum computing, among others. With Data Scientist being the hottest job of the 21st century, all aspirants should watch these videos to get new ideas and sharpen their skills. In this article, we list down 10 must-watch TED Talks in data science. 1| Truth in Data Science This talk is presented by Jaya Tripathi, who is a Principal Data Scientist at MITRE Corporation, Massachusetts. The speaker describes her process for arriving at the truth of data science in her research on demographics and addiction. In this talk, she shares important aspects and techniques of data science, hypothesis testing, visualisation, how to identify fraud, other topics like the CRISP-DM data model and much more. 2| The Most Important Skills Of Data Scientists Jose Miguel Cansado, the General Director of Alto Data Analytics, talks about how big data is starting to drive the world, and what kind of skills one needs to interpret it. He talks about the important skills that one should have to be a good data scientist, the steps of data-driven decision-making and, most important of all, the human factor in a data scientist. 3| We’re All Data Scientists Teaching Professor and Director of Undergraduate Studies in the Department of Statistics at Carnegie Mellon University, Rebecca Nugent talks about the rise of Data Science and what it means for education at the bachelor’s as well as master’s level. According to her, Data Science is a field which belongs to everyone, and she calls for everyone to seize the opportunity to become data scientists within their own fields and jobs. 4| Demystifying Data Science This talk is presented by Asitang Mishra, who is a Data Scientist at the NASA Jet Propulsion Laboratory (JPL). In this talk, Mishra talks about what it means to be the modern-day data science superhero and how combining open source technology with multi-disciplinary collaboration at NASA paves the way for some of humanity’s most ingenious solutions. He explains the various quirks of this new, fast-growing and at times vague field of data science in a simple and easy manner. 5| Data Art: An Emerging Complement to Data Science Data Visualization Artist at the University of Vermont Complex Systems Center in Burlington, Jane Adams talks about the relationship between analogy and anecdote in visualisations of data, complex systems, visual analogy, and other related topics. 6| Data hacking – Data Science For Entrepreneurs Kevin Novak, Senior Data Scientist at Uber, talks about the meaning of data science at Uber and the predictive models used for pick-ups and drop-offs. He also talks about the real-time data which consists of attributes like estimated time of arrival (ETA), surge pricing, map matching, fare estimates, etc. 7| Data Science at Work This talk is presented by Peter Grindrod, who is a Professor of Mathematics at the Mathematical Institute at the University of Oxford. In this talk, Grindrod illustrates how this activity challenges the modern mathematical sciences and feeds from them, using novel mathematical ideas to create new products and services within the digital economy. 
8| The Human Insights Missing From Big Data This talk is presented by Tricia Wang, a global tech ethnographer, who demystifies big data and identifies its pitfalls. The talk suggests that big data should be paired with “thick data”, the precious, unquantifiable insights from actual people, in order to make the right business decisions and thrive in the unknown. 9| World Changing: Data Science and AI Fred Blackburn leads the Justice, Homeland Security, and Transportation division of Booz Allen Hamilton, a top consulting firm based in Washington, D.C. In this talk, he stresses how data science combined with artificial intelligence is set to create change that this world has never seen. He also talks about the computational power of computers and their impact around the globe. 10| Why Everyone Should Be Data Literate This talk is presented by Jordan Morrow, Head of Data Literacy at Qlik, where he discusses strategies such as how to distinguish between what is true and what is not without becoming a data scientist, how to use the correct information in order to make better decisions, and other related topics on data literacy.
TED Talks have become a great and reliable way to enhance knowledge and open our minds to different possibilities. And interestingly, TED Talks are not limited to a specific area of interest — they cover a wide range of topics, including emerging technologies like artificial intelligence, machine learning, internet of things, data science and quantum computing, […]
["AI Trends"]
["conference", "Data Science", "TED Talks"]
Ambika Choudhury
2019-10-25T14:00:27
2019
755
["big data", "data science", "Go", "artificial intelligence", "machine learning", "AI", "conference", "Git", "TED Talks", "ViT", "analytics", "Data Science", "R"]
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "R", "Go", "Git", "big data", "ViT"]
https://analyticsindiamag.com/ai-trends/watch-top-10-ted-talks-on-data-science/
3
10
1
false
true
false
10,071,705
Why Amit Sharma created DoWhy
“Data tells stories. My research aims to tell the causal story,” proclaims Amit Sharma, a researcher at Microsoft and the developer of software library DoWhy (2018). This year, this library was in the news when Microsoft moved DoWhy to an independent open source governance model in a new PyWhy GitHub organisation. Analytics India Magazine caught up with Sharma for a quick chat about PyWhy, Causal Inference, and more. AIM: Let’s begin by getting a peek into your early years and professional journey. Amit Sharma: I did my graduation in engineering from IIT Kharagpur. While studying there, I got a chance to intern at industry and university labs that exposed me to the process of research. I was acquainted with a few PhD students and liked how they were all trying to solve tough problems with many unknowns. And the best part was they were being paid to study and research – the idea fascinated me. So, I applied for PhD programs and got through the Computer Science department at Cornell University. My advisor Dan Cosley was in the Information Science department. Hence, I had the advantage of taking courses and interacting with students from both departments. While Computer Science focuses on the design of technology systems, Information Science studies how these systems interact with society. This dual experience changed my outlook. Initially, I wanted to build systems that would help people, but I gradually learnt that it is equally important to reflect on whether I really know what will help people and what are ways to confirm my hypothesis. In other words, the ability to ask the right question is often as important as coming up with the best answer to a question. I am now working as a principal researcher at Microsoft Research in India. I try to merge the ways of thinking in what I do – causal inference and technology for mental health. AIM: To whom (or what) do you credit your interest in Causal Inference? Amit Sharma: My first encounter with Causal Inference was during an internship at LinkedIn, where I was working to improve LinkedIn’s recommendation algorithm. I was struck by my team’s dependence on conducting a randomised A/B experiment to test a new algorithm, even though there were many established accuracy metrics that could be computed from log data. That felt wasteful, so I went up to my manager and asked, “why not evaluate algorithms offline using the log data? That will be so much faster”. He said, “We’ve tried that before. You’ll be lucky if offline evaluation provides the right direction of estimates, let alone an accurate answer.” I was intrigued. Something felt wrong here, but I didn’t know how to express it and moved on. A few years later, during an internship at Microsoft Research, my collaborators helped me find the answer: Causal Inference. Turns out that causality has been an important topic in statistics, economics, and the biomedical sciences, but it found very little attention in Computer Science at the time. And it could perfectly explain the A/B test riddle. The issue was that we wanted to find the causal effect of any new recommendation algorithm, but the offline accuracy metrics were not correctly set up to measure that (and as I learnt later, it is tough to set them up to measure the causal effect). But the really interesting part was that it was not just A/B testing versus offline evaluation or about finding the impact of a recommender system, the principles of causal inference apply to all decision problems from healthcare to economics. 
I started reading more about causality, especially from Judea Pearl’s writings and got hooked. AIM: You built the DoWhy library. Tell us about the whole process of development. Amit Sharma: Well, even though I was excited about the topic, it was very difficult to learn about Causal Inference. Back in 2015, the best resources were statistics textbooks or the Causality book from Judea Pearl. None were accessible to me. I spent a good part of 2015 trying to understand what these books were saying. At the outset, it looked like there was no agreement on the best way to do a causal analysis. But the more I read and worked with data, I realised that all causal analysis problems boiled down to four steps. They were the same steps repeated in each project, but these details were often skipped when presenting the analysis. The accepted best practices for formulating and validating assumptions that we’d hear from experts were not written down anywhere. So, my collaborator Emre Kiciman and I thought: Why not create a library for Causal Inference that enabled these best practices for everyone? The four steps are: model the world knowledge, identify whether a causal quantity is estimable given the knowledge, estimate the quantity if so, and finally refute or validate the obtained estimate. These four steps form the core API verbs of the DoWhy library we built. Before DoWhy, most software for causality focused only on the estimation step. But the other steps are equally important. As we designed the DoWhy library, we wanted to convey that Causal Inference is not like predictive machine learning, where you start with data. Here, you need to start with assumptions that you are willing to take on the data-generating process. If you make the wrong assumptions, no amount of data modelling will save you. Therefore, DoWhy’s focus is on setting up and validating assumptions of a causal analysis. AIM: How do you see DoWhy evolve? Amit Sharma: The response has been heartening. In a short time, DoWhy has been installed over 1.3 million times. It is being used as a teaching tool in universities, in research papers, and in answering various business questions in the industry. Going forward, we will continue to add better ways to validate the Causal Analysis. DoWhy is also moving towards other causal tasks: in addition to effect estimation, we are extending it to attribution, prediction and counterfactual estimation using the same four-step API. We’ve just recently moved DoWhy to an independent Github organisation, py-why, so DoWhy is now an open-source, community-led project. If you are interested, feel free to join the community on Discord. We welcome your contributions! AIM: Your work revolves around using modern algorithms as interventions. Please elaborate. Amit Sharma: Around the time that I started working on causality in online systems, there was a growing concern about algorithmic decision-making systems in critical domains such as finance, education and governance. I realised that we are quickly moving away from a world where technology helped people do a task to where systems are making important decisions that can affect people’s lives. This is a paradigm shift. Consider an algorithm used by a bank to decide on loan applications or a governmental algorithm to distribute aid. Such algorithms are not just passive prediction algorithms, they are coming in and making decisions with real consequences for the people involved. And their effects can be massive. 
So, two questions come to mind: 1) How do we measure the impact of these systems? 2) How can we design such systems for better impact? These questions are not too different from the evaluation of medical treatment or economic policy (that’s why the term algorithmic interventions) where causal inference has historically been applied. How do we do the same in computing systems? AIM: Right, so what are you working on now? Amit Sharma: I am working on developing better ways to validate causal models from data. That remains an open question, so advances here can accelerate the progress in building causal models, like what cross-validation did for machine learning. The other direction that I’m interested in is how causality can help machine learning systems become more robust and trustworthy. I am developing techniques that use interventional input examples to interpret the patterns learnt by a predictive ML model (see the DiCE project), evaluate its fairness with respect to people’s expectations and improve the ML model. I am also fortunate to be associated with the Center for Societal Impact through Cloud and AI (SCAI) at Microsoft Research, where I’m working to see how such models can be responsibly deployed in sensitive contexts. I’m also working with clinical psychologists from NIMHANS on a mental health app, MindNotes, that aims to reduce stigma around mental health and encourage more people to seek help. AIM: What resources would you suggest to those interested in knowing more about Causal Inference? Amit Sharma: Compared to predictive machine learning, causal inference requires a different thought process and can have a steep learning curve. So I would like to suggest a few learning resources: 1) It’s best to start with the Book of Why to understand the basic concepts. 2) If you like to go deeper into the context of computing systems, you can check out the draft book, Causal Reasoning: Fundamentals and machine learning applications, that I’m co-writing. The first chapter describes how causality, out-of-distribution predictive generalisation, and reinforcement learning are all connected. 3) If you prefer videos, you may check out this webinar on causal machine learning that includes a sample analysis with the DoWhy library.
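The four-step workflow Sharma describes (model, identify, estimate, refute) maps directly onto DoWhy's core API verbs. Below is a minimal sketch in Python, run on the synthetic dataset bundled with the library; the estimator and refuter named here are just one valid choice among several DoWhy supports, and exact arguments may differ slightly across versions.

# A minimal sketch of DoWhy's four-step workflow (model, identify, estimate, refute),
# run on the synthetic dataset bundled with the library.
import dowhy.datasets
from dowhy import CausalModel

# Simulate data with a known causal effect (beta) and a few confounders
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)

# Step 1: model the world knowledge as a causal graph
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# Step 2: identify whether the causal quantity is estimable under those assumptions
estimand = model.identify_effect()

# Step 3: estimate the identified quantity
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print("Estimated effect:", estimate.value)

# Step 4: refute / validate the estimate, here by adding a random common cause
refutation = model.refute_estimate(
    estimand, estimate, method_name="random_common_cause"
)
print(refutation)

The point of the sketch is that estimation is only one of the four verbs: the graph passed in the first step encodes the assumptions, and the refutation step stress-tests the estimate against them.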
The accepted best practices for formulating and validating assumptions that we would hear from experts were not written down anywhere. So, my collaborator Emre Kiciman and I thought: Why not create a library for Causal Inference that enabled these best practices for everyone?
["AI Features"]
["Interviews and Discussions"]
Shraddha Goled
2022-07-28T14:34:41
2022
1,524
["Go", "machine learning", "AI", "ML", "Git", "RAG", "Aim", "analytics", "Rust", "R", "Interviews and Discussions"]
["AI", "machine learning", "ML", "analytics", "Aim", "RAG", "R", "Go", "Rust", "Git"]
https://analyticsindiamag.com/ai-features/why-amit-sharma-created-dowhy/
3
10
2
false
true
true
7,155
Why Startups need to adopt analytics too?
If you thought that data analytics is only a need for big enterprises, well, think once again. Increasingly, more and more startups are waking up to the idea of data fuelling their business goals. From problem identification to product validation, from identifying a product-market fit to taking the product to the next level, data analytics is helping entrepreneurs make informed choices. What metrics matter? Entrepreneurs should know what metrics matter. Let’s say you have 1,000 Facebook Fans. It is a metric, a valid metric. But do you actually know what to do with them? Do you know how these Fans impact your bottom line? Do you know your Vanity Metrics from your Actionable Metrics? Take another situation. Let’s say you add a new chat feature to your product. Around the same time, the number of subscriptions to your product shoots up. If you are not measuring the right metric, each of your teams will remain in blissful ignorance. Your sales person will think it is their hard work that has resulted in the increase in subscriptions. Your product person will assume it is the new chat feature they burnt the midnight oil on that has driven it. Your marketing person will think it is the amazing article she has written on the blog which has led to this. Any of them may be right, or all may be wrong in their assumptions. Maybe some change in the external environment led to this. How will you know if you do not measure and analyse the right metric? However, there is no one-size-fits-all metric for startups. Among other things, it depends on factors like the stage your startup is in, your goals (both long term and short term), and so on. David McClure in Startup Metrics for Pirates (AARRR) breaks down startup analytics into five categories: acquisition, activation, retention, referral, and revenue. The acquisition category involves everything about how your users get to know you and land on your site. The activation category involves how visitors experience your site for the first time. The retention category includes all those factors which help users return to your site. The referral category includes metrics which track whether users refer your site to a friend. And the revenue category includes anything related to the monetization of your site. Metrics in each category differ and have to be carefully chosen based on what your goal is. Lean Startup & Lean Analytics In their book Lean Analytics, Alistair Croll and Benjamin Yoskovitz provide interesting insights into how entrepreneurs have used analytics to build lean startups. The book builds on Eric Ries’ Lean Startup concept. One of the core fundamentals of a lean startup is the Build-Measure-Learn process, which should guide all decisions in a startup. Within that cycle, lean analytics focuses on the Measure stage. Fig: The Build-Measure-Learn Cycle and the role of Data Analytics. Image Source: Lean Analytics “Lean Startup helps structure your progress and identify the riskiest parts of your business, then learn about them quickly so you can adapt. Lean Analytics is used to measure that progress, helping you to ask the most important questions and get clear answers quickly.” To put it simply, if the lean startup concept focuses on learning from your mistakes fast, lean analytics helps in identifying those mistakes faster. Remember, when you are a startup, time is one of your most important resources. Startup Analytics in Action How Circle of Friends found a better Product-Market Fit using analytics? 
Circle of Friends was launched in 2007 as a simple Facebook application that allowed users to organize their friends into circles for targeted content sharing. By mid-2008, Circle of Friends had 10 million users. However, engagement was too low: less than 20% of circles had any activity after their initial creation. Though the company did not have in-depth data analytics in place at that time, Mike Greenfield, co-founder of Circle of Friends, did some exploratory analysis on his database. He found that one specific segment of users was more actively engaged than the others. Their messages were on average 50% longer, they were 110% more likely to engage in a threaded conversation and 60% more likely to accept invitations to the app. These and other such data points helped Mike shift focus to that specific segment of users, and in October 2008 Circle of Moms was launched on Facebook. Though they lost some users in the process, those that remained were more active and engaged. By 2009, Circle of Moms had 4.5 million active moms as users. The community eventually moved out of Facebook and was acquired by Popsugar in 2012. Today it has 10,793,564 members. How Burbn pivoted to become the most popular photo-sharing app using analytics? Burbn was a location-based social network launched in early 2010. It was a basic browser-based mobile app, developed using HTML5. It had four tabs. You could “Move” or check in somewhere new. You could also post your plans. But a closer look at their engagement data revealed that the location part had only a secondary appeal. Users were not exactly ‘checking in’ all the time on Burbn. However, the photo uploading feature turned out to be a hit amongst its users. That led to the pivot: the creation of an iPhone app exclusively focused on photo-sharing. Instagram was born in October 2010 and saw over 100,000 downloads in the first week itself. Time also worked in their favour. The iPhone 4, with its brilliant camera, had just launched. And the rest, as they say, is history. How Server Density used data analytics to test their pricing plans and increase revenue by 114%? Server Density is a SaaS-based server and website monitoring tool. Initially, it followed configurable pricing: customers paid based on the number of servers and websites they wanted monitored. Though the majority of their customers had 7 servers, they had their pricing structure designed to cater to first-time, single-server users in the hope of increasing their customer base. They decided to A/B test a new “packaged” pricing structure using the Visual Website Optimizer tool. As per the new structure, the lowest package started from US$99 per month. Ten servers and 10 websites would cost US$130 per month under the old plan but US$99 per month under the new structure. They dropped prices but increased the Average Order Value (AOV). The result of this test: their free signups dropped by 24.96%, but total revenue increased by 114%. These are just some of the many ways startups can make use of data analytics. As you move on from alpha to beta to your subsequent version releases, the more data-backed your decisions, the easier it will be to satisfy your customers, convince your investors and achieve your ultimate goal.
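To make the Server Density example concrete, here is a small Python sketch of the arithmetic behind judging an A/B pricing test by revenue per visitor rather than by signup rate alone. All the numbers below are hypothetical placeholders chosen only for illustration; they are not the company's actual figures.

# Hypothetical A/B pricing comparison: judge variants by revenue per visitor,
# not by signup rate alone. All numbers below are illustrative assumptions.

def revenue_per_visitor(visitors, signup_rate, paid_conversion, avg_order_value):
    """Expected monthly revenue contributed per visitor to the pricing page."""
    signups = visitors * signup_rate
    paying = signups * paid_conversion
    return (paying * avg_order_value) / visitors

# Variant A: old configurable pricing (assumed baseline)
rev_a = revenue_per_visitor(visitors=10_000, signup_rate=0.08,
                            paid_conversion=0.05, avg_order_value=60)

# Variant B: packaged pricing, assumed to have roughly 25% fewer free signups
# (as in the article) but a higher average order value
rev_b = revenue_per_visitor(visitors=10_000, signup_rate=0.06,
                            paid_conversion=0.08, avg_order_value=99)

print(f"Revenue per visitor, A: ${rev_a:.2f}, B: ${rev_b:.2f}")
print(f"Relative change: {(rev_b - rev_a) / rev_a:+.0%}")

The design point is the same one the Server Density test illustrates: a variant can lose on signups and still win decisively on the metric that actually matters, revenue.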
If you thought that data analytics is only a need for big enterprises, well, think once again. Increasingly, more and more startups are waking up to the idea of data fuelling their business goals. From problem identification to product validation, from identifying a product-market fit to taking the product to the next level, data analytics […]
["AI Startups"]
[]
Дарья
2015-03-27T07:28:40
2015
1,145
["Go", "programming_languages:R", "AI", "ML", "RAG", "ViT", "analytics", "GAN", "R", "startup"]
["AI", "ML", "analytics", "RAG", "R", "Go", "GAN", "ViT", "startup", "programming_languages:R"]
https://analyticsindiamag.com/ai-startups/why-startups-need-to-adopt-analytics-too/
3
10
4
false
true
false
10,164,522
India vs Pak: How JioHotstar Pulled Off 60 Crore Live Stream Views
The recent ICC Champions Trophy 2025 match between India and Pakistan was a thrilling spectacle, which not only showcased Virat Kohli’s stellar performance but also set unprecedented records in digital viewership. While the match was a treat for fans, it was a technical nightmare for the JioHotstar engineering team. The streaming platform reported a staggering 60.2 crore (602 million) views during this high-stakes encounter. With the introduction of the free mobile subscription feature, the engineering team had to prepare for nearly 50 million simultaneous streams—a feat no streaming service had attempted before. This required a fundamental rethinking of JioHotstar’s infrastructure, from API handling to network optimisation, to ensure a seamless experience for millions of cricket fans. To sum it up, the team did a God-level job. CDNs are the Key At the heart of JioHotstar’s live streaming architecture is a complex but efficient system that ensures users across mobile, web, and connected TVs get a smooth experience. When a viewer requests a live stream, the request first passes through content delivery networks (CDNs), which act as an external API gateway. These CDNs are crucial not just for distributing content efficiently but also for handling security checks and routing traffic intelligently. From there, an internal API gateway, supported by Application Load Balancers, directs the request to the appropriate backend service, which fetches data from either a managed or self-hosted database. With an anticipated spike in traffic during the last few overs of the match, this traditional workflow wasn’t going to be enough. One of the biggest issues was handling API calls at scale. Upon analysing traffic patterns, the team realised from previous events that not all API requests needed the same level of processing power. Some, like live score updates and key match moments, could be easily cached and served with minimal computation, while others, like user authentication and content personalisation, required direct database queries. This led to the creation of a new CDN domain dedicated to cacheable requests, allowing JioHotstar to reduce compute load and significantly improve response times. The internal API gateway, which serves as the front door for all requests, was particularly resource-intensive. To mitigate this, JioHotstar deployed high-throughput nodes (over 10 Gbps) and enforced topology spread constraints, ensuring that no single node handled too many API requests at once. Self-managed Kubernetes to EKS While optimising traffic handling was a major step, JioHotstar also had to rethink how its cloud-based infrastructure scaled. Previously, the platform relied on self-managed Kubernetes clusters, but these systems were already nearing their limits. As a result, JioHotstar migrated to Amazon Elastic Kubernetes Service (EKS), which offloaded the burden of cluster management to AWS and allowed the team to focus on optimising workloads. However, migrating to EKS introduced new challenges, particularly around network throughput. One of the most pressing issues was NAT Gateway congestion—a bottleneck that limited the speed at which data could flow. In a typical cloud setup, a single NAT Gateway per availability zone (AZ) handles traffic for multiple services. However, with millions of users streaming simultaneously, this setup quickly overloads. 
To solve this, the team shifted to a subnet-level NAT Gateway configuration, effectively distributing traffic more evenly across the network and eliminating the bottleneck. Even within Kubernetes, scaling wasn’t as simple as adding more nodes. During peak load testing, the engineering team discovered that several backend services were consuming up to 9 Gbps of bandwidth per node, creating uneven traffic distribution across clusters. While infrastructure optimisations played a crucial role in enabling scale, network constraints nearly derailed the effort. During internal load tests, the team encountered a critical IP address shortage in its Kubernetes clusters. Despite configuring private subnets across multiple AZs, JioHotstar found that it was unable to scale beyond 350 nodes—far below the 400+ required to support peak traffic. The culprit? Over-provisioned IP address allocations. One of the final hurdles came from Kubernetes service discovery. While scaling beyond 1,000 pods, JioHotstar discovered a hard limit in Kubernetes’ endpoints API, which tracks network locations for services. Once the limit was exceeded, Kubernetes truncated endpoint data, creating unpredictable traffic distribution issues. Though modern EndpointSlices offer a solution, JioHotstar’s API Gateway didn’t support them, forcing the team to vertically scale services to stay below the 1,000-pod threshold. Autoscaling Wasn’t Enough Autoscaling struggles to handle sudden traffic surges. For major cricket matches, JioHotstar experiences spikes of nearly 1 million users per minute, drastically increasing the number of active viewers. If a star batsman gets out, traffic can drop by millions within the same minute, putting immense strain on backend services. An unusual challenge here is that when users hit the back button instead of closing the browser, they are redirected to the homepage. If the homepage isn’t designed to handle high traffic, it can cause system failures. There are additional concerns. What if AWS lacks the capacity in a specific AZ to provision servers? In such cases, autoscaling becomes ineffective. Even step-wise scaling, where 10 servers are added at a time with a target of scaling from 100 to 800, may be too slow to respond to real-time demand. With 1 million requests per second and 10 terabytes of video bandwidth consumption per second, amounting to 75% of India’s total internet bandwidth, the scale of operations is staggering. Notably, even internet service providers (ISPs) struggle to deliver such massive traffic loads. Streaming platforms frequently encounter these challenges during high-profile events like IPL finals, royal weddings, or political broadcasts. To prepare for such spikes, JioHotstar conducts extensive load testing using 3,000 machines, each equipped with 36 CPUs and 72GB of RAM, across multiple regions. This rigorous testing process, known as ‘tsunami testing’, helps determine the breaking point of each service. The results are then used to plan infrastructure scaling effectively.
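A rough way to see why step-wise autoscaling alone cannot absorb a last-over surge is to compare the rate at which demand arrives with the rate at which capacity can be added. The Python sketch below is a back-of-the-envelope model; the per-server capacity and provisioning delay are assumptions for illustration, not JioHotstar's actual configuration.

# Back-of-the-envelope sketch of why step-wise autoscaling lags a live-sport spike.
# All numbers are illustrative assumptions.

USERS_PER_SERVER = 50_000      # assumed concurrent streams one backend node can serve
STEP_SIZE = 10                 # servers added per scaling step (as in the article)
STEP_INTERVAL_MIN = 2          # assumed minutes to provision and warm up one step
SPIKE_PER_MIN = 1_000_000      # new concurrent users arriving per minute at peak

servers = 100
capacity = servers * USERS_PER_SERVER
demand = capacity              # start at steady state

for minute in range(1, 11):
    demand += SPIKE_PER_MIN
    # a scaling step completes only every STEP_INTERVAL_MIN minutes
    if minute % STEP_INTERVAL_MIN == 0:
        servers += STEP_SIZE
        capacity = servers * USERS_PER_SERVER
    shortfall = max(0, demand - capacity)
    print(f"min {minute:2d}: demand={demand:,} capacity={capacity:,} "
          f"unserved={shortfall:,}")

# The shortfall grows every minute: reactive 10-server steps cannot keep up with a
# surge of a million users per minute, which is why pre-provisioning and load
# ('tsunami') testing matter so much for events like this.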
To sum it up, the engineering team did a God-level job.
["AI Features"]
["Jio"]
Mohit Pandey
2025-02-25T12:45:51
2025
956
["Jio", "Go", "API", "AWS", "AI", "cloud_platforms:AWS", "programming_languages:R", "ML", "Git", "R", "kubernetes"]
["AI", "ML", "AWS", "kubernetes", "R", "Go", "Git", "API", "cloud_platforms:AWS", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/india-vs-pak-how-jiohotstar-pulled-off-60-crore-live-stream-views/
4
10
0
false
false
false
10,125,480
ISRO’s Bhuvan is 10x Better Than Google Maps
Recently, ISRO chief S Somanath announced a significant milestone for India’s national space agency’s Geoportal-Bhuvan. “India’s Geoportal-Bhuvan delivers information that is ten times more comprehensive than what Google offers,” Somanath beamed. Bhuvan, renowned for its extensive geospatial data, offers valuable insights across the agriculture, urban planning, and disaster management sectors. The introduction of Bhuvan-Panchayat and the National Database for Emergency Management (NDEM) marks a major stride in enhancing data access and utility. This integration of local government bodies on a GIS platform is unparalleled globally. “Panchayat is one of the sectors integrated in GIS. This is a GIS system that integrates all the government departments, which really does not exist in any other country,” said Radha Krishna Kavuluru, principal engineer at Dhruva Space & ex-project manager at ISRO. Bhuvan-Panchayat empowers village councils with enriched datasets and analytical tools for informed decision-making. “This localised approach ensures data caters to specific needs and challenges faced at the ground level,” Somanath explained. According to a recent study, Bhuvan-Panchayat provides high-resolution data to over 250,000 village panchayats across India, enabling precise governance interventions. NDEM integrates vital datasets to bolster ISRO’s disaster response capabilities. “It plays a vital role in risk assessment and mitigation during emergencies, reflecting ISRO’s commitment to leveraging space technology for societal benefit,” Somanath added. Since its launch, NDEM has been instrumental in managing over 500 disaster events, providing critical data for efficient responses. Similarly, ISRO’s Bhoonidhi Data Hub, which provides open access to Earth observation satellite data, serves as a comprehensive source of satellite images and essential geospatial data, and is also at par with that of NASA or ESA. Bhuvan’s Unique Features and Capabilities Bhuvan distinguishes itself from other geospatial platforms with several unique features tailored to India’s needs. It excels in data access and geoprocessing, allowing users to download and consume data as OGC web services for analysis. Bhuvan offers India-specific datasets, including potential fishing zones and periodic agricultural assessments, serving as a comprehensive source for India’s earth observation data needs. The platform’s open web portal maximises access and utility for the public good in India. This democratises access to India’s earth observation data, making it freely available to all. In contrast, other platforms like Google Earth, ArcGIS, and Bing Maps have their own unique strengths, such as global coverage, advanced GIS capabilities, and location intelligence. However, Bhuvan’s unique value lies in providing high-resolution data and capabilities specifically designed for India’s needs and priorities. As Somanath summarised, “Bhuvan democratises access to India’s Earth observation data through a public portal.” Emphasising the importance of Bhuvan, Radha Krishna also said, “Almost everything that can be done with satellite imagery like land use, land cover, flood information system… everything is already integrated on Bhuvan.” He advised, “If you just open Bhuvan 2D and see the applications, scroll down, you will see a lot of them.” The Bhuvan platform hosts an extensive collection of vector datasets including the locations of post offices, Aadhaar centres, and disaster response agencies. 
It also powers sector-specific applications such as School GIS, Tourism GIS, Water Body Information System, and real-time forest fire alerts. What sets Bhuvan apart is its unique integration of government departments on a single GIS platform that is openly accessible to citizens. “There is no such GIS system which is open for the public available in the real world,” noted Krishna. The platform democratises access to valuable geospatial data and satellite imagery that can be utilised for a variety of purposes including navigation. Bhuvan’s Advantages for Environmental Monitoring in India Bhuvan offers several advantages over Google Maps, particularly for environmental monitoring in India. One key advantage is its higher resolution satellite imagery, with up to 1 metre per pixel for much of India. Somanath pointed out that “this allows for more detailed monitoring of land cover, vegetation health, water bodies, etc, compared to what is available in Google Maps”. Bhuvan offers a wide variety of specialised environmental data layers, such as land use/land cover maps, wasteland maps, soil maps, and geological maps, which Google Maps does not provide. It also includes several tools and applications designed specifically for environmental monitoring and natural resource management. These tools enable in-depth analysis in areas like watershed monitoring, groundwater prospect mapping, and forestry applications. Furthermore, Bhuvan can integrate data from ground-based environmental sensors, such as weather stations and water quality monitors, providing a comprehensive monitoring picture. This sensor integration is not a standard feature of Google Maps. Another important feature of Bhuvan is its access to multi-temporal satellite data, allowing users to analyse environmental changes over time. The Bhuvan-Timelapse application facilitates this type of monitoring, offering capabilities beyond those of Google Earth’s historical imagery. As an ISRO product, Bhuvan is tailored to the specific needs and priorities of environmental monitoring in India. Somanath emphasised that “the imagery, data layers, and tools are focused on India’s geography and environmental challenges”, unlike the more general-purpose Google Maps. However, Somanath acknowledged that Google is developing new environmental monitoring capabilities that may help bridge this gap for other parts of the world. Usage in Public & Private Sector Several private companies and government entities in India utilise ISRO’s Bhuvan platform for a wide range of applications. In the private sector, MapmyIndia has partnered with ISRO to enhance Bhuvan’s capabilities by integrating their mapping database with ISRO’s satellite imagery and Earth observation data. This has led to many vehicle manufacturers using MapmyIndia’s Bhuvan-connected services for built-in navigation systems. Meanwhile, GAIL has launched BHUVAN-GAIL for pipeline monitoring and safety, while Hyderabad City Police have used a Bhuvan-based application for vehicle tracking. Start-ups and private firms are also increasingly adopting the platform. Government departments and institutions extensively use Bhuvan for various purposes. State governments leverage the software for monitoring and governance. The Department of Land Resources developed ‘Srishti’ on Bhuvan for overseeing the Integrated Watershed Management Programme. The Ministry of Rural Development geo-tags MGNREGA assets using Bhuvan. 
In Telangana, the irrigation department is setting up a Water Resources Information System on the platform. Bhuvan also supports disaster management, e-Governance applications, and more across numerous government agencies, academic institutions, and industries. Overall, Bhuvan has emerged as a key geospatial tool for diverse stakeholders in India. Bhuvan’s Evolution Over 15 Years Launched by ISRO in August 2009, Bhuvan made a humble beginning with simple display of medium resolution satellite images and thematic maps. But those initial six years provided ample opportunity for Bhuvan to grow in all directions. Over its initial years from 2009-2015, Bhuvan grew significantly, both horizontally in diverse application areas and vertically in terms of image resolution and map services. By 2015, it was providing data up to 1m spatial resolution. The platform will celebrate its 15th anniversary in August 2024, marking a significant milestone in its journey of empowering India with detailed geospatial data and tools.
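For readers who want to try the OGC web services the article mentions, the sketch below shows how such a service is typically consumed from Python with the OWSLib package. The endpoint URL and layer name are hypothetical placeholders, since the article does not give the actual Bhuvan service addresses.

# Reading an OGC Web Map Service with OWSLib. The endpoint URL and layer name are
# hypothetical placeholders; substitute the actual WMS service you want to use.
from owslib.wms import WebMapService

WMS_URL = "https://example.gov.in/bhuvan/wms"   # hypothetical endpoint

wms = WebMapService(WMS_URL, version="1.1.1")
print("Service title:", wms.identification.title)
print("Available layers:", list(wms.contents)[:10])

# Fetch a small map tile for one layer over a bounding box (lon/lat, WGS84)
img = wms.getmap(
    layers=["landuse_landcover"],               # hypothetical layer name
    styles=[""],
    srs="EPSG:4326",
    bbox=(73.0, 17.0, 74.0, 18.0),
    size=(512, 512),
    format="image/png",
)
with open("tile.png", "wb") as f:
    f.write(img.read())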
Bhuvan-Panchayat also provides high-resolution data to over 250,000 village panchayats across India, enabling precise governance interventions.
["IT Services"]
["Google Map", "ISRO"]
Shyam Nandan Upadhyay
2024-07-02T16:03:01
2024
1,130
["Go", "ISRO", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Google Map", "ViT", "GAN", "R"]
["AI", "RAG", "R", "Go", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/isros-bhuvan-is-10x-better-than-google-maps/
2
8
0
false
false
true
58,734
Prof. S Sudhindra of Tapmi Explains How PG Programme L.E.A.D. Helps In Creating Exceptional Leaders In The Digital Age
The Data Science field in India has matured beyond the “data analysis” stage. Organisations and analytics firms have been providing solutions to the business problems of their clients for a few years now. For instance, companies like Mu Sigma are overcoming data availability constraints through the introduction of emerging technologies such as IoT and robotics. Data Science as a field moved beyond the hype cycle long ago and is here to stay.  A practitioner can be future-proof only through constant exposure to newer technologies and learnability. According to our study, a demand-supply gap exists, with 97,000 analytics job openings that are hard to fill. To fill this gap, T.A. Pai Management Institute (TAPMI) and Mu Sigma have joined hands to bring an industry-relevant, 11-month analytics and decision sciences programme for executives and IT professionals known as Leadership through Analytics and Decision Sciences (L.E.A.D.). To gain more insights on the current Data Science scenario and understand how the L.E.A.D. programme is helping enthusiasts become future-ready, Analytics India Magazine caught up with Prof. S Sudhindra, who is a Professor of Operations and Information Science and the Program Chair, Post Graduate Program in Leadership through Analytics and Decision Science (L.E.A.D.). “We put a high premium on learnability both during the selection process and subsequently during the program – this is non-negotiable.” – Prof. S Sudhindra, Professor of Operations and Information Science Tell us, how has the journey of training the next generation of data science professionals been? For the institute, this has been an innovative and enriching experience. Normally institutions educate students whose future job profiles are not clear. Here we have a cohort whose placements are already decided and therefore the pedagogy – cases, assignments, and datasets – can be customized. At the same time, the students come from diverse backgrounds such as IT, manufacturing, finance, and even social sciences. This brings a richness of thinking to the classroom. It has been a pleasurable journey for everyone concerned. How can the PG Program in L.E.A.D. help breed future-ready leaders? The L.E.A.D. program pedagogy ensures that the students learn to define the problem using first principles and come up with creative solutions. They are exposed to cutting-edge analytics tools – but the rich ecosystem enables the creation of new tools whenever needed.  Our aim is not to create students who are future-ready, but those who create the future. Towards this, we have a rich mixture of courses such as system dynamics, design thinking, and a whole lot of management courses to augment analytics learning. How did the last two batches of the PG in L.E.A.D. go, and what are your expectations from the third batch? The first two batches have been a resounding success. The trainers at Mu Sigma and the faculty at TAPMI have both given us excellent feedback. The unique model of TAPMI faculty taking some courses during the Mu Sigma phase and Mu Sigma trainers visiting Manipal during the TAPMI phase to conduct workshops and hackathons has ensured a high degree of integration between classroom and practice, and the students have enjoyed the learning process. As Data Science continues to evolve as a discipline, can you cite how L.E.A.D. has kept abreast of these developments to stay relevant in the market and maintain a competitive edge? 
There are two ways an academic institution such as TAPMI can keep its programs abreast of evolving technologies: the first is by tightly integrating with the practitioners. The L.E.A.D. program has total and seamless integration with the best in the Data Science industry. The second is by taking part in the evolution process itself.  Our collaboration with Mu Sigma includes joint research and collaborative case development between the faculty of TAPMI and the practising experts at Mu Sigma. These will ensure that we do not just keep abreast of the evolution: our students and faculty are also part of that evolutionary process. Can you share what contributed to the success of TAPMI and the L.E.A.D. Program? Collaboration with an industry leader, Mu Sigma, in all aspects of the program makes L.E.A.D. a powerful program. There is no equivalent model existing in India so far. The quality of students that this program has attracted so far has also contributed to our success. The top management of TAPMI is completely committed to the success of this program: all the resources of TAPMI, such as the Bloomberg terminals and other laboratories, are fully available to the students, and TAPMI’s best faculty take the courses. Many data analytics programs in India do not do well because they are not full-time. L.E.A.D., being a fully residential program, ensures that the students are fully immersed in the learning experience. How can the L.E.A.D. program be differentiated from the other Data Science programs available in the market? In a normal Data Science program, in general, the students learn how to carry out analytics tasks. In the L.E.A.D. program, the students learn to lead teams in solving business problems through analytics and decision sciences, using a patented process called the Art of Problem Solving (AoPS).  In order to prepare the students for this, they are taught a variety of managerial courses in addition to the analytics and decision sciences courses. The outcome is a set of students who are competent in using analytical tools, are equally adept at understanding business contexts, and can provide innovative solutions. A key aspect of the L.E.A.D. program is its placement.  Can you share details on that? Yes.  Every student who graduates in the first attempt will be offered a job at Mu Sigma.  The first batch is about to graduate with 22 students, and we are confident that every student will join Mu Sigma.  Please note that Mu Sigma, being a highly successful analytics company with a large number of Fortune 500 clients, has a huge requirement for talented and skilled data scientists in its leadership roles.  Hence, we wish to increase the sizes of the coming batches to meet Mu Sigma’s ever-increasing demand. For the benefit of readers, tell us: what is the typical profile of students who join the L.E.A.D. program? The program is open to students with 2–6 years of work experience. The students need not have an IT or analytics background.  In fact, we are looking for students from diverse backgrounds to enable multiple perspectives in problem-solving. It is important for the students to have an aptitude for learning analytics – they need to master several tools and languages such as “R” and Python. As a leading voice in academia, what more can be done to encourage students to take up STEM-related courses? TAPMI, while being a B-School, understands the importance of STEM in the country’s development. 
The introduction of this program is one major step towards operationalizing our commitment to science and technology. B-Schools in India must introduce business-relevant STEM courses from fields such as AI and machine learning, IoT, and other emerging technologies, as these have immense applicability in operations, finance, marketing, and even HR.
The Data Science field in India has matured beyond the “data analysis” stage. Organisations and analytics firms have been providing solutions to the business problems of their clients for a few years now. For instance, companies like Mu Sigma overcoming data availability constraints through the introduction of emerging technologies such as the IOTs and robotics.  […]
["AI Trends"]
["apm data science", "digital marketing", "leader of ai", "musigma"]
Ambika Choudhury
2020-03-16T12:00:00
2020
1,174
["data science", "Go", "machine learning", "AI", "ML", "RAG", "Python", "apm data science", "digital marketing", "Aim", "analytics", "R", "leader of ai", "musigma"]
["AI", "machine learning", "ML", "data science", "analytics", "Aim", "RAG", "Python", "R", "Go"]
https://analyticsindiamag.com/ai-trends/prof-s-sudhindra-of-tapmi-explains-how-pg-programme-l-e-a-d-helps-in-creating-exceptional-leaders-in-the-digital-age/
2
10
3
false
true
true
48,619
IIT Madras To Double The Number Of Seats In Its Data Science Course
In a piece of positive news, IIT Madras is planning to almost double the number of seats for its Inter-Disciplinary Dual Degree (IDDD) program on Data Science owing to the positive response from the industry as well as demand from the students. With a massive curriculum overhaul in 2015, IIT Madras offers great flexibility to students, giving them the opportunity to take courses across disciplines, and build towards expertise in modern interdisciplinary areas that will define the future of engineering and technology. In particular, IIT Madras provides its undergraduate students an option to upgrade to IDDD programmes, where the students will study for five years and obtain B.Tech. in a parent discipline and M.Tech. in an interdisciplinary area. Speaking about the importance of Data Science to the nation’s development, Prof B Ravindran, Head, Robert Bosch Centre for Data Science and Artificial Intelligence (RBC DSAI), IIT Madras, and the course coordinator said, “Data Science is greatly impacting every discipline and the graduates of this programme, by virtue of their interdisciplinary training, are well equipped to be leaders in a digital world. ” The IDDD Data Science Students will have a bachelor’s degree in the major they opted for when they joined, as well as a Master’s degree in Data Science, enabling them to apply their Data Science skills to solve problems in their parent discipline. This is a one-of-its-kind interdisciplinary programme in the country, providing students with a strong foundation in both their parent discipline, as well as frontier areas of data science. The graduating students are uniquely trained to fulfil the rapidly increasing need for data science and artificial intelligence professionals in the Indian industry. Students taking the course will also intern at companies and take up projects in data science. More than half of the students chose to take up projects that applied data science in their parent discipline. Students from eight of the 10 eligible departments in IIT Madras have already enrolled in this course.
In a piece of positive news, IIT Madras is planning to almost double the number of seats for its Inter-Disciplinary Dual Degree (IDDD) program on Data Science owing to the positive response from the industry as well as demand from the students. With a massive curriculum overhaul in 2015, IIT Madras offers great flexibility to […]
["AI News"]
["Data Science", "IIT Madras"]
Prajakta Hebbar
2019-10-22T15:13:05
2019
327
["data science", "API", "artificial intelligence", "programming_languages:R", "AI", "IIT Madras", "Git", "BERT", "llm_models:BERT", "Data Science", "R"]
["AI", "artificial intelligence", "data science", "R", "Git", "API", "BERT", "llm_models:BERT", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/iit-madras-to-double-the-number-of-seats-in-its-data-science-course/
2
9
1
false
false
false
25,895
How Unsupervised Meta Learning Easily Acquires Information About New Environments
Reinforcement learning is at the forefront of the development of artificial general intelligence. AI researchers at Google and the University of California, Berkeley, are trying to work out ways to make it easier for researchers working on meta learning or reinforcement learning systems. Researchers Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn and Sergey Levine introduced an approach called Unsupervised Meta Learning, which allows an AI agent to acquire a distribution of tasks on its own. The agent can then do meta learning over these tasks. Meta learning is similar to multi-task learning, in that an agent learns to adapt to new tasks quickly; it can use reinforcement learning (RL) to solve the new problems it encounters, and adaptation becomes more efficient when the meta learning tasks themselves are framed as RL problems. Meta learning algorithms tend to do well when their training data follows the same distribution as the tasks the algorithm has to generalise to. In other words, the performance of meta learning algorithms depends heavily on the meta training task distribution, so generalisation improves when the meta training tasks are drawn from a distribution similar to the tasks seen at test time. The researchers set out to automate the meta training process by dispensing with the need for hand-designing meta training tasks. This is particularly difficult because two big problems need to be addressed together: meta reinforcement learning with broad task distributions, and unsupervised exploration for proposing a wide variety of tasks for meta learning. Unsupervised Meta Reinforcement Learning As the researchers put it, the aim of unsupervised meta reinforcement learning is to observe an environment and produce a learning algorithm tailored to that environment; this algorithm then learns to maximise reward on any task in that particular environment. The proposed framework has two components. Component one is a task identification procedure, which interacts with a controlled Markov process without a reward function with the aim of constructing a distribution over tasks. Component two performs the actual meta learning: given those reward functions, it meta learns a reinforcement learning procedure that can adapt quickly to new tasks. The choice of meta learning algorithm affects how this reinforcement learning procedure behaves, which is why some meta reinforcement learners adapt well to new tasks and others simply cannot. The researchers work in a stepwise fashion: first acquire a task distribution, then meta train the algorithm on those tasks. They try out two directions for extracting task distributions from an environment (a minimal sketch of the first follows the article text below). 1. Task acquisition via random discriminators The researchers say that the most effective way to describe a simple task distribution is to use random discriminators on states. Given a uniformly distributed random variable z, they define a random discriminator as a parametric function whose parameters are chosen randomly, like a random weight initialisation for a neural network. 2. Task acquisition via diversity-driven exploration The researchers try to acquire a more varied set of tasks as the amount of unsupervised environment interaction grows. They use a technique called Diversity is All You Need (DIAYN) for task acquisition; DIAYN learns a set of behaviours that are distinguishable from one another. 
The researchers mention that the method is fully unsupervised: there is no handcrafting of distance metrics or subgoals. Meta Reinforcement Learning Using The Acquired Task Distributions The above methods tell us how to obtain a distribution of tasks. The researchers then apply a meta learning algorithm to acquire an adaptation procedure from this task distribution. In standard meta RL, tasks T are drawn from a manually specified task distribution provided by the researcher, and every task is a different Markov Decision Process (MDP). The main aim of meta RL is to learn a reinforcement learning procedure that can adapt to new tasks. The meta learning algorithm used here is MAML, that is, model-agnostic meta learning. MAML learns an initialisation from data that makes subsequent reinforcement learning very fast. The researchers note that the tasks used in training should be close to the types of tasks that might be seen at meta test time. They found that unsupervised meta training learns the dynamics of the controlled Markov process (CMP), and that meta learning helps the policy modify its behaviour in many ways with the help of unsupervised meta reinforcement learning. Results Systems based on Unsupervised Meta Reinforcement Learning perform better than learning from scratch with reinforcement learning on simulated 2D navigation and locomotion tasks. The tasks were of increasing difficulty: 2D point navigation, 2D locomotion using the “HalfCheetah,” and 3D locomotion using the “Ant”. The system also performs far better than human-designed, tuned reward functions, showing that unsupervised meta learning can explore the problem space and build useful reward signals.
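The random-discriminator idea above is compact enough to sketch in code. The Python/NumPy snippet below is a minimal, illustrative sketch, not the authors' implementation: a randomly initialised, never-trained network scores states, and the reward for a sampled task variable z is the log-probability this random discriminator assigns to z at the visited state. The class and function names (RandomDiscriminator, task_reward), the two-layer architecture and all sizes are assumptions made for illustration.

```python
import numpy as np

class RandomDiscriminator:
    """Randomly initialised network q(z | s); its weights are never trained.

    Each latent z defines one task; the reward for task z at state s is
    log q(z | s), so different z values induce different reward functions.
    """
    def __init__(self, state_dim, num_tasks, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # Random weight initialisation -- this *is* the discriminator.
        self.w1 = rng.normal(size=(state_dim, hidden))
        self.w2 = rng.normal(size=(hidden, num_tasks))

    def log_prob(self, state, z):
        h = np.tanh(state @ self.w1)           # hidden features of the state
        logits = h @ self.w2                   # one logit per task id
        logits -= logits.max()                 # numerical stability
        log_softmax = logits - np.log(np.exp(logits).sum())
        return log_softmax[z]

def task_reward(disc, state, z):
    """Reward for task z: how confidently the random discriminator
    recognises z from the current state."""
    return disc.log_prob(state, z)

# Usage: propose a task by sampling z uniformly, then train a policy with any
# RL algorithm to maximise task_reward(disc, s, z) along its trajectory.
disc = RandomDiscriminator(state_dim=4, num_tasks=8)
z = np.random.randint(8)        # uniformly sampled task variable
s = np.random.randn(4)          # an observed environment state
print(f"task {z}, reward {task_reward(disc, s, z):.3f}")
```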
Reinforcement learning is at the forefront of the development of artificial general intelligence. AI researchers at Google and the University of California, Berkeley, are trying to work out ways to make it easier for researchers working on meta learning or reinforcement learning systems. Researchers Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn and Sergey Levine introduced an […]
[]
["berkeley", "Google", "Google Brain", "Reinforcement Learning", "University of California", "Unsupervised Learning"]
Abhijeet Katte
2018-06-29T10:14:30
2018
802
["Go", "Reinforcement Learning", "programming_languages:R", "AI", "University of California", "neural network", "ML", "Google Brain", "programming_languages:Go", "berkeley", "Aim", "Google", "AI research", "R", "Unsupervised Learning"]
["AI", "ML", "neural network", "Aim", "R", "Go", "AI research", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/how-unsupervised-meta-learning-easily-acquires-information-about-new-environments/
3
9
0
true
true
true
10,121,347
Microsoft will Achieve 100% Renewable Energy by Next Year
At Microsoft Build 2024, CEO Satya Nadella reaffirmed the company’s ambitious commitment to sustainability. “We’re on track to meet our goal to have our data centres powered by 100% renewable energy by next year,” he declared. Our goal is to have our datacenters powered by 100 percent renewable energy by next year. #MSBuild pic.twitter.com/g8GNRk7OOn— Microsoft (@Microsoft) May 21, 2024 And we wonder how? Addressing the strategies behind this pledge, Nadella emphasised the company’s focus on sustainable cloud services. “We’re making our best-in-class AI infrastructure available everywhere and we’re doing this with a focus on delivering on cloud services sustainability. In fact, we’re optimising power and efficiency across every layer of the stack from the data centre to the network,” he explained. Nadella highlighted the innovative design of Microsoft’s latest data centres, tailored specifically for AI workloads. This design ensures responsible and efficient use of every megawatt of power, aiming to reduce both cost and energy consumption of AI operations. Additionally, advanced cooling techniques are being employed to align the workloads’ thermal profiles with the environmental conditions of their respective locations. Microsoft’s Sustainability Challenge However, Microsoft’s journey toward sustainability is not without challenges. The company’s annual sustainability report revealed that since 2020, carbon emissions have, in fact, risen by 30% owing to the expansion of data centres. This data underscores the gap between Microsoft’s 2020 climate goals and the current reality in the light of its ambitious target of becoming carbon-negative by the end of the decade. Interestingly, the goal was set before the AI explosion kicked in, forcing tech companies to find ways to build compute to train AI models. To address this challenge, Microsoft chief sustainability officer Melanie Nakagawa said, “Select scale, high-volume suppliers will be required to use 100% carbon-free electricity by 2030.” What is Google Doing? In 2020, Google announced its objective to operate on 24/7 carbon-free energy (CFE) across all its global operations by 2030. This goal involves procuring clean energy to meet their electricity needs every hour of every day, on every grid, wherever they operate. Google noted, “Achieving 24/7 CFE is far more complex and technically challenging than annually matching our energy use with renewable energy purchases. No company of our size has achieved 24/7 CFE before, and there’s no playbook for making it happen.” NVIDIA to the Rescue Recently,  NVIDIA announced the Blackwell platform. It allows organisations to develop and deploy real-time generative AI on trillion-parameter models while consuming up to 25 times less energy and cost than previous methods. If OpenAI uses Blackwell to train its large language models, the CO2 emissions associated with training GPT could potentially be around 12 tons. This is significantly less than GPT-4, which is estimated to produce around 300 tons of CO2. Reports since 2012 indicate a rapid increase in computing power for AI training, doubling every 3.4 months on an average. However, with major players like OpenAI, Google, Meta, and Microsoft adopting Blackwell, there’s a collective effort to address the sustainability challenges of AI innovation. At the recently concluded Microsoft Build, Nadella mentioned that they’ll be among the first cloud providers to offer NVIDIA’s Blackwell GPU V100s as well as GB 200 configurations. 
Earlier, in the GTC keynote in San Jose, NVIDIA CEO Jensen Huang stated, “Our aim is to continually reduce costs and energy consumption, as they are directly linked, to expand and scale up computation for training future models.” New NVIDIA Blackwell superchip with 208 billion transistors. 30 times more performance with generative AI while using 25% less energy. Incredible. pic.twitter.com/EB6jTbZ4Fs— Ashton Forbes (@JustXAshton) March 19, 2024 Training a GPT model with 1.8 trillion parameters typically takes around 3-5 months using 25,000 Ampere-generation GPUs. However, to train a GPT-4 scale model, NVIDIA claims that it would have previously required 8,000 Hopper GPUs and 15 megawatts of power, still completing in about 90 days. This is less costly than one might assume, but with 8,000 GPUs, the expenses are significant. Blackwell offers a more efficient alternative, needing only 2,000 GPUs and consuming just four megawatts over the same 90-day period. What’s Next? Recent findings from Cornell University highlighted that training LLMs like GPT-3 produced carbon emissions equivalent to 500 metric tons, which amounts to 1.1 million pounds. (A typical coal-fueled power plant working continuously for 24 hours burns about 2.7 million pounds of coal). Training such an LLM is equivalent to burning coal for 10 straight hours, or nearly half a day. Recognising the need for an energy breakthrough to support the future development of AI, OpenAI chief Sam Altman invested $375 million in Helion Energy, a private US nuclear fusion company. At a Bloomberg event during the World Economic Forum’s annual meeting in Davos, Altman emphasised the potential of nuclear fusion and affordable solar energy as viable pathways to support sustainable AI development.
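To put the quoted training figures side by side, here is a back-of-envelope sketch using only the numbers cited above (15 MW for the 8,000-GPU Hopper run versus 4 MW for the 2,000-GPU Blackwell run, both over roughly 90 days). It assumes constant average power draw and is an illustration, not an official calculation from NVIDIA or Microsoft.

```python
# Back-of-envelope energy comparison using the figures quoted above.
HOURS_PER_DAY = 24

def training_energy_mwh(power_mw: float, days: int) -> float:
    """Energy in megawatt-hours for a run at constant average power."""
    return power_mw * days * HOURS_PER_DAY

hopper_mwh = training_energy_mwh(power_mw=15, days=90)     # ~32,400 MWh
blackwell_mwh = training_energy_mwh(power_mw=4, days=90)   # ~8,640 MWh

print(f"Hopper run:    {hopper_mwh:,.0f} MWh")
print(f"Blackwell run: {blackwell_mwh:,.0f} MWh")
print(f"Reduction:     {100 * (1 - blackwell_mwh / hopper_mwh):.0f}%")  # ~73%
```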
However, Microsoft’s journey toward sustainability is not without challenges.
["AI Trends"]
["AI (Artificial Intelligence)", "Google", "Microsoft", "OpenAI"]
Vidyashree Srinivas
2024-05-23T15:00:00
2024
799
["Go", "API", "OpenAI", "AI", "Microsoft", "RAG", "GPT", "Aim", "generative AI", "Google", "GAN", "R", "AI (Artificial Intelligence)"]
["AI", "generative AI", "OpenAI", "Aim", "RAG", "R", "Go", "API", "GPT", "GAN"]
https://analyticsindiamag.com/ai-trends/microsoft-will-achieve-100-renewable-energy-by-next-year/
3
10
0
true
false
false
38,144
Google’s New Technique MorphNet Can Build Smaller, Faster Neural Networks
Photo by Ricardo Gomez A deep neural network changes its form constantly as it gets trained. The distribution of each layer’s inputs changes along with the parameters of previous layers. This change increases latency in learning, and the model gets harder to train as it embraces nonlinearities. Since 2012, the capabilities of computer vision systems have improved greatly due to (a) deeper models with high complexity, (b) increased computational power and (c) availability of large-scale labeled data. There is still room to refine neural networks, as they sometimes fumble and end up using brute force for lightweight tasks. To address this, researchers at Google have come up with MorphNet. The objective of MorphNet is to provide more resources to the layers that need them and to size down the parts of the network doing low-end work. MorphNet offers new solutions for parameter optimisation by inducing better sparsification. MorphNet’s Targeted Pruning Source: Google With every pass, MorphNet learns the number of neurons per layer. When a layer has zero neurons, that part of the network is cut off. This changes the topology of the network as the corresponding residual blocks are removed. The training can be carried out in a single run, and MorphNet can be scaled up for application on larger networks. The technique also gives the network portability, i.e., there is no need to keep track of the checkpoints that arise during training. This iterative approach of expanding and shrinking the neural network gives better control over the usage of computational power and time. Resource usage can be tracked by evaluating the change in FLOPs per inference or in model size; these can be markedly different depending on the application domain and corresponding constraints. FLOPs (floating point operations) count the arithmetic a model performs per inference, while FLOPS (operations per second) measures how powerful a processor is. Allocating FLOPs (the resource) across the operations of a neural network is key to the time taken for training and other fundamental operations. This is where MorphNet claims to make a difference, by expanding and shrinking the layers. A neural network is, computationally, a stack of matrix multiplications. Thought of in these terms, shrinking is something like skipping the null elements in the matrices, which offer no significance but still have to be multiplied in the traditional procedure. For shrinking, a resource-weighted sparsifying regulariser on activations is used, and for expanding, a uniform multiplicative factor (width multiplier) is used. Putting MorphNet To Use The researchers list a sequence of steps to deploy MorphNet on a sample CNN for image classification (a hedged code sketch of these steps follows the article text below). Given an existing model (the “seed network”) and a target criterion, MorphNet will propose a new model by adjusting the number of output channels in each convolution layer. Deploying MorphNet in a few steps: Choose a regularizer from morphnet.network_regularizers; the choice is based on the resource you want to constrain (e.g., FLOPs per inference or model size). Enable the scale parameters (“gamma variables”), e.g., by setting scale=True if you are using tf.keras.layers.BatchNormalization. Initialize the regularizer with a threshold and the output ops of your model (e.g., logits for classification). Add the regularization term to the loss. Train the model. Save the proposed model structure with the StructureExporter. The exported files are in JSON format. 
Modify the model using the StructureExporter output, then retrain the model from scratch without the MorphNet regularizer. Know more about MorphNet here. Check how to deploy MorphNet with TensorFlow here.
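The step list above maps onto a short TensorFlow training sketch. Treat the snippet below as an outline of where the regulariser and the structure exporter plug in: the MorphNet module paths, class names and arguments (flop_regularizer.GammaFlopsRegularizer, structure_exporter.StructureExporter, gamma_threshold) are assumptions based on the steps above and may not match the library's current API exactly, so check the repository before relying on them.

```python
import tensorflow as tf  # TensorFlow 1.x graph-mode style
# Module and class names below are assumptions based on the step list above.
from morph_net.network_regularizers import flop_regularizer
from morph_net.tools import structure_exporter

images = tf.placeholder(tf.float32, [None, 32, 32, 3])
labels = tf.placeholder(tf.int64, [None])

# Seed network: a small CNN whose batch norm layers enable the
# scale parameters ("gamma variables") with scale=True.
x = tf.layers.conv2d(images, 32, 3, padding="same")
x = tf.layers.batch_normalization(x, scale=True)
x = tf.nn.relu(x)
x = tf.layers.flatten(x)
logits = tf.layers.dense(x, 10)

# Steps 1-3: a FLOP-targeted regularizer attached to the model's output ops,
# with a gamma threshold deciding when a channel counts as inactive.
network_regularizer = flop_regularizer.GammaFlopsRegularizer(
    output_boundary=[logits.op], gamma_threshold=1e-3)

# Step 4: add the (scaled) MorphNet term to the task loss.
task_loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
total_loss = task_loss + 1e-8 * network_regularizer.get_regularization_term()

# Step 5: train as usual.
train_op = tf.train.MomentumOptimizer(0.01, 0.9).minimize(total_loss)

# Step 6: export the proposed per-layer channel counts as JSON during training.
exporter = structure_exporter.StructureExporter(
    network_regularizer.op_regularizer_manager)
```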
A deep neural network changes its form constantly as it gets trained. The distribution of each layer’s inputs change along with parameters of previous layers. This change increases latency in learning and it gets harder to train as the model embraces nonlinearities. Since 2012, the capabilities of computer vision systems have improved greatly due to […]
["Global Tech"]
["Google", "Neural Networks"]
Ram Sagar
2019-04-23T12:25:54
2019
571
["Go", "TPU", "Keras", "AI", "neural network", "Git", "computer vision", "Aim", "Google", "TensorFlow", "R", "Neural Networks"]
["AI", "neural network", "computer vision", "Aim", "TensorFlow", "Keras", "TPU", "R", "Go", "Git"]
https://analyticsindiamag.com/global-tech/googles-new-technique-morphnet-can-build-smaller-faster-neural-networks/
3
10
0
false
true
false
10,040,853
Telemedicine Moves Beyond Zoom Calls
The Indian health-tech market is expected to grow at 39% CAGR to reach $50 billion by 2033 from $2 billion now, according to an RBSA Advisor report. Telemedicine has seen huge growth since the pandemic breakout. Startups such as myUpchar, Practo, Tattvan, Lybrate and mFine have cashed in on the demand, and hospitals such as Akash, Apollo Hospitals and Narayana Hrudayalaya started teleconsultations to avoid crowding at hospitals. According to a McKinsey Digital India 2019 report, telemedicine in India can reduce in-person outpatient consultation load by half and cost about 30% less for patients. In the US, artificial intelligence applications are expected to save $150 billion in healthcare costs annually by 2026. Futuristic telemedicine technology The objective of telemedicine is to improve healthcare access, reduce delays and save logistics costs. Of late, telemedicine has turned to artificial intelligence to achieve these objectives. Patient monitoring through video consultations is one of the first and most common applications of telemedicine. This has allowed faster and safer consultations in both urban and rural India. Interestingly, rural India had teleconsultations (hub and spoke model) even before the pandemic: hospitals set up mobile clinics in villages and connect doctors through video or audio. However, telemedicine today is not just about video consultations. For instance, in the US, ultrasound is done with a probe attached to a smartphone. Telepresence robots Telepresence robots can be remote-controlled using a software interface, allowing doctors to examine and interact with patients from anywhere. The concept combines AI and computer vision for navigation and obstacle detection. DrRho, a medical telepresence robot developed by Vyas Labs, provides a robotic base with human-environment manoeuvrability, robotic manipulators, an electronic stethoscope, a blood pressure machine and thermometer, ECG and pulse oximeters. It also provides an intuitive vision system to the doctors, meaning the robot’s eyes turn as the doctor moves his head. The robot also carries a projector for surgery or collaborative examination. Recently, researchers of the Integrated Systems Engineering Group of the University of Malaga (UMA) in Spain developed a telepresence robot to enable quarantined persons to get on video calls. Two students from VR Siddartha Engineering College in India built a virtual telepresence robot: they developed a robot with an onboard camera and Wi-Fi capabilities that captures video and allows users to monitor the situation on their smartphones, internet browsers or Virtual Reality headsets; used accelerometers and gyroscopes, i.e., sensors that determine an object’s position and orientation, to ensure the robot’s onboard camera moves according to the user’s head movements; used data collected in the user’s smartphone to track head movements; transferred that data to a Raspberry Pi device to control the movements of the robot’s camera; and used Arduino for the robot’s back and forth, left and right movements. They later substituted the Raspberry Pi and Arduino with MyRIO, a more expensive device with higher processing capability. The portable device combines the capabilities of Raspberry Pi and Arduino and serves both as a data processor and a controller. Electronic health records Electronic Health Records (EHR) systems created using big data analytics and neural networks have pushed telemedicine in India. 
A study by the American College of Physicians showed doctors spend 50 percent of their time on patient records. Today, Electronic Health Records (EHR) software is integrated with machine learning capabilities. This has enabled faster sending of prescriptions and other information directly to the patient. Hospitals and clinics use proprietary machine learning algorithms on top of the data to systematically categorise health data. Many B2B health tech platforms offer EMR systems that can be integrated across a network of hospitals or clinics. This way, the patient can walk into any hospital and access records. Cloud computing Many medical professionals use cloud computing to store, process and transmit health data. Cloud computing has also been successful in merging traditional healthcare infrastructure systems with new technologies like IoT and wearables. For example, Microsoft Band 2, equipped with BP sensors, can accelerate data retrieval and transactional processing capacity. Today, there is an increased demand for startups offering cloud solutions. For instance, Bengaluru-based Alkenist uses AI solutions to detect lung issues in COVID-positive patients, where the cloud-based software analyses chest X-ray and CT-scan images. This helps doctors quickly decide on the next course of treatment. Mumbai-based Qure.ai has built an AI-powered solution on AWS to identify abnormalities in chest X-rays. Cloud technology provides scalability and can be deployed anywhere effortlessly.
The Indian health-tech market is expected to grow at 39% CAGR to reach $50 billion by 2033 from $2 billion now, according to an RBSA Advisor report.  Telemedicine has seen huge growth since the pandemic breakout. Startups such as myUpchar, Practo, Tattvan, Lybrate and mFine have cashed in on the demand and hospitals such as […]
["IT Services"]
[]
Shanthi S
2021-05-28T11:00:00
2021
731
["machine learning", "artificial intelligence", "AWS", "AI", "neural network", "cloud computing", "computer vision", "Ray", "analytics", "R"]
["AI", "artificial intelligence", "machine learning", "neural network", "computer vision", "analytics", "Ray", "cloud computing", "AWS", "R"]
https://analyticsindiamag.com/it-services/telemedicine-moves-beyond-zoom-calls/
3
10
2
false
true
true
10,008,429
How Deep Learning Is Used For Tuberculosis Detection In City Of Nagpur
Tuberculosis or TB has remained one of the world’s most infectious diseases, responsible for more fatalities than HIV and malaria combined. Across the globe, TB has reached epidemic proportions, affecting more than 27 lakh people annually in India alone. The shortage of healthcare specialists exacerbates the problem in rural regions and for people below the poverty line, who account for a significant share of the growth in TB cases in India. Case in point — the city of Nagpur in Maharashtra, one of the most populated cities in India, has the highest incidence of tuberculosis, with 35% of the population infected. The community in slums primarily leverages informal healthcare providers, which, although accessible and affordable, have limited awareness and diagnostic tools for TB detection. Thus, the goal was to reduce the diagnostic delay and effectively employ these healthcare providers to increase TB detection in the city. Also Read: How Is Indore Municipal Corporation Using Geospatial Technology Qure.ai’s AI-based Detection To The Rescue TB is a curable disease; however, treatment depends on early detection so that doctors can act in time. But with a shortage of healthcare providers for testing and detecting the disease, the whole process of treatment gets delayed, resulting in an increasing number of infected patients. Therefore, to reduce the diagnostic delay, Qure.ai, an AI company focusing on the healthcare domain, collaborated with PATH, a non-profit organisation, to deploy an AI-powered solution at these healthcare and diagnostic centres in Nagpur. Explaining the solution, Reshma Suresh, the operations head at Qure.ai, stated that qXR is an AI-based chest X-ray interpretation platform designed to provide a comprehensive analysis of lung abnormalities. The AI model has been trained on over a million curated X-rays and can now accurately detect 29 different, clinically relevant abnormal findings. “The model has been highly optimised for identifying classic as well as atypical pulmonary, hilar and extrapulmonary tuberculosis,” said Suresh. While qXR, developed by Qure.ai, is a mass screening tool for TB, it can also be used as a surveillance tool. In the systematic screening setting, chest X-rays can be used for mass screening of the symptomatic, at-risk or vulnerable population for active TB. “As the X-ray is captured, qXR automatically processes them and provides a definitive indication on signs of tuberculosis within a minute,” said Suresh. “The presumptive cases are, however, subjected to bacteriological confirmatory results within the same or next day.” She further added that in a surveillance setting, qXR processes all incoming X-rays of individuals, symptomatic or not, and alerts on X-rays with a positive indication for further proceedings. “With such a capability, it empowers the frontline healthcare providers, especially in resource-constrained areas, to make better decisions reducing the time to follow up cases.” Also Read: How Piramal Sarvajal Using IoT To Tackle Safe Drinking Water Issue The Tech Behind The AI-powered chest X-ray system has been built with deep learning and CE-certified algorithms. The dataset of 3.6 million chest X-rays has been collected over four years from around 250 sites across the world. The dataset is then used to train deep learning algorithms to detect various abnormalities on chest X-rays, including tuberculosis infection. 
The company utilised the X-ray dataset, along with labels automatically inferred from the radiology reports, to develop the algorithm. Further, they used deep learning to train convolutional neural networks that form the building blocks of the system for detecting lung abnormalities. The system of around 150 deep learning models has been designed to predict the likelihood of tuberculosis infection from the pixel data of chest X-rays. The AI encapsulated inside qXR allows automatic reading of chest X-rays and generates reports within seconds. The technology, whether given CR/DR scans or PA/AP views, can detect multiple findings in the lungs, pleura, heart, bones and diaphragm. The algorithms generate contour lines for lung and pleural abnormalities for quick and easy diagnosis. Integrated with multiple picture archiving and communication systems, the AI system provides the output in under a minute for each scan. Further, the text report generated by the system can be pushed back as structured DICOM reports for immediate adoption in the workflow. The dashboard provides a sidebar with all the findings presented in a dichotomised fashion. The AI system — qXR — can also analyse multiple scans from the same patient sequentially to create a progress report that detects changes in lesions over time. Designed for real-world settings, the system is not only hardware-free but also a zero-footprint solution. With sensitivity and specificity maximised simultaneously, qXR achieved a sensitivity of 71% (95% confidence interval: 66%, 76%) and a specificity of 80% (95% confidence interval: 77%, 83%). For detecting pulmonary tuberculosis, the system showcased area-under-the-curve values of 0.94 (95% CI: 0.92, 0.96) and 0.84 (95% CI: 0.82, 0.87), respectively. Also Read: Will Scan Based AI Systems Be Useful In Diagnosis For COVID-19? Wrapping Up With Benefits When asked about the benefits of this deployment, Dr Shibu Vijayan, Global TB Technical Director at PATH, stated that the program by PATH in association with Qure.ai was aimed at establishing a bridge between private healthcare providers and public health systems. “The program showed positive outcomes with an overall 20% increase in the notification of TB cases compared to the previous year,” said Dr Vijayan. “Of which 13% has been attributed to qXR on improvement in the cases identified, who would have been missed otherwise with human involvement.” Further, there has also been a 50% increase in the proportion of bacteriological confirmations in the city through the program by PATH. Given the growing threat of tuberculosis, this system could be of great use to rural healthcare providers looking to reduce the lag between TB diagnosis and the start of treatment.
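The description above (convolutional neural networks trained on labelled chest X-rays to output a tuberculosis likelihood) follows the standard image-classification pattern. Below is a minimal Keras sketch of that general pattern, purely for illustration; it is not qXR's architecture, and the input size, layer widths and data shapes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative CNN: maps a single-channel chest X-ray to a TB likelihood.
# Architecture and hyperparameters are assumptions, not the qXR system.
def build_tb_classifier(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(tuberculosis | X-ray)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])  # mirrors the AUC reporting above
    return model

model = build_tb_classifier()
# model.fit(xray_images, tb_labels, validation_split=0.1, epochs=10)
# xray_images: (N, 224, 224, 1) pixel arrays; tb_labels: (N,) 0/1 radiology labels.
```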
Tuberculosis or TB has remained one of the world’s most infectious diseases, responsible for more fatalities than HIV and malaria combined. Across the globe, TB has reached epidemic proportions affecting more than 27 lakh people annually in India alone. The shortage of healthcare specialists exacerbates the problem in rural regions and for people below the […]
["AI Features"]
["Active Learning", "case study", "Deep Learning"]
Sejuti Das
2020-09-27T18:15:42
2020
986
["Go", "TPU", "AI", "neural network", "RAG", "Active Learning", "case study", "Aim", "deep learning", "Ray", "GAN", "Deep Learning", "R"]
["AI", "deep learning", "neural network", "Aim", "Ray", "RAG", "TPU", "R", "Go", "GAN"]
https://analyticsindiamag.com/ai-features/how-deep-learning-is-used-for-tuberculosis-detection-in-city-of-nagpur/
4
10
0
false
false
true
34,282
5 Ways AI Is Used By Lawmakers For Crime Prevention In India
AI is set to revolutionise Information and Communications Technology (ICT) tools in India. Some of the recent applications of AI are in the areas of fingerprint analysis, recreating a face from a skull, creating images from fragments, and modern forensic methods. In this article, we list 5 ways AI is being used by lawmakers for crime prevention in India: 1| PAIS, Punjab Police’s AI-Based Facial Recognition System Punjab police started using PAIS, the Punjab Police AI-based facial recognition system, with options like face search, text search, etc., and a database with more than 100,000 records of criminals housed in jails across Punjab. The motive behind this app is to let officers click an image with their smartphones when confronted with a suspect. Staqu Technologies built the police app, which leverages facial recognition to create a unique map of a face. The Punjab police department bagged the FICCI Smart Policing Awards 2018 for using PAIS. 2| Investigation With AI-Powered Equipment In Cuttack In a report from last month, the state police in Bhubaneswar decided to use AI and mobile computing in order to improve the analysis of crime data. AI is used to generate checklist data for investigating officers. Director General of Police R P Sharma said, “We will start using AI from next year. The technology will guide investigating officers on procedures of investigation. If any officer commits any procedural mistakes, AI will immediately issue an alert. It will help officers make a quick and accurate search of a particular crime and its modus operandi, similarity between offences at different places and details about arrested persons from our digital database”. 3| Use of AI-Powered Face Recognition App To Solve Criminal Cases Trinetra, an AI-enabled application that contains a database of approximately 5 lakh criminals and offers facial recognition features, was launched by UP police chief O. P. Singh during a conference at the annual police week held at the UP 100 headquarters in December 2018. The database includes criminal records of the state police, records of the prison department and the GRP (Government Railway Police). 4| India’s First Police State, Andhra Pradesh In July 2018, Andhra Pradesh launched e-Pragati, a searchable database of millions of people residing in Andhra Pradesh that contains Aadhaar numbers used for e-KYC authentication. Any information from the data can be easily searched in the control room opposite Chief Minister Chandrababu Naidu’s office for real-time governance. The surveillance system was set up for many reasons, one of the main ones being to lower the crime rate in the state. 5| Delhi Police To Use AI Centre To Handle Crimes According to this report, Delhi police will be assisted in cyber policing and social media analysis by the Indraprastha Institute of Information Technology (IIIT), which will set up an artificial intelligence-equipped centre. The centre will assist the Delhi police department with criminal identification, biometrics, law and order management, cyber policing, etc., with the help of technologies such as artificial intelligence, social media analysis, big data and image processing. Besides the AI-enabled centre, Delhi police also aims to install an AI-enabled advanced traffic management system. High-resolution cameras with sensor-based real-time traffic volume count technology will be installed on the roads. In a report, the special commissioner of police (Traffic) Dependra Pathak, who is also the city traffic police chief, confirmed that his department has started work on this futuristic project. 
“High-resolution cameras with sensor-based real-time traffic volume count technology will first be placed on all arterial roads. Around 7,000-8,000 cameras with multidirectional infrared and colourless laser sensors will count the volume based on image pattern analysis. At every signal, we will also have IP based public address system. Through the cameras, we will see the traffic and also communicate with the drivers who are on move or at signals using the PA system.”
AI is set to revolutionise the Information and Communications Technology (ICT) tools in India. Some of the recent applications of AI are in the area of fingerprint analysis, recreating a face from the skull, creating images from pieces, and modern forensic methods. In this article, we list 5 ways AI is being used by lawmakers […]
["AI Trends"]
[]
Ambika Choudhury
2019-01-29T08:11:33
2019
626
["big data", "Go", "artificial intelligence", "programming_languages:R", "AI", "programming_languages:Go", "Git", "RAG", "Aim", "R"]
["AI", "artificial intelligence", "Aim", "RAG", "R", "Go", "Git", "big data", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-trends/5-ways-ai-is-used-by-lawmakers-for-crime-prevention-in-india/
3
10
0
false
true
false
10,085,780
The Macro Slowdown of TCS with Fewer Deals and Negative Headcount
Tata Consultancy Services (TCS) recently published its Q3 2023 results, wherein the tech giant recorded a decline of nearly 2% in revenue growth in constant currency terms (i.e., using a fixed exchange rate to eliminate currency fluctuations when calculating the financial performance numbers) when compared to the previous quarter. From 15.4% in Q2 2023, TCS’ revenue growth decreased to 13.5% in Q3 2023 in constant currency terms. Additionally, the company also witnessed a quarterly decline of around 2,200 employees in the total headcount, with a de-growth of 3.7% in deal bookings to $7.8 billion. Several analysts claim that when one considers these statistics, a slowdown in the profitability of TCS in the near future is more or less inevitable. Is the UK slowing down performance? Fitch Ratings, one of the top three credit rating agencies internationally, expects TCS’ revenue growth to slow down to 11–12% in FY24, around a six-percentage-point decline. According to the agency, the expected revenue growth for TCS in FY23 is around 18%. However, since there is a looming fear of recession in the UK and Europe, experts believe that growth in this particular sector is likely to be impeded for TCS, which has a dominant market presence in these geographies. As per the latest forecast by the Bank of England, the UK economy is on the path to falling into a recession that is expected to last at least until the end of next year. However, even amid worsening situations, TCS chief Rajesh Gopinathan remains optimistic. While releasing the Q3 results, he said that the UK’s decision-making is faster than that of other nations. According to him, “Customers are very clear, and a lot of action is happening in the UK”. Commenting on Europe, Rajesh Gopinathan said, “Europe’s decision-making has significantly slowed down”. He believes that Europe will be a cautionary tale for TCS this year compared to last year. Declining deals Gopinathan, while addressing competitive intensity during the Q3 results presentation, also commented on the deal structures, explaining that since they are complex, the field narrows down to a much more limited set of competitors. However, the numbers reflect otherwise. In the third quarter of FY23, TCS received new orders worth $7.6 billion, which, compared to the previous quarter ($8.1 billion), declined by 3.7%. The book-to-bill ratio, which is the ratio of orders received to units shipped and billed for a particular period, also declined to 1.07x in Q3 2023. In Q3 2022, it was around 1.17x, while the historical average since Q1 2019 is around 1.24x. A book-to-bill ratio above 1 means more orders were received than filled, while a ratio below 1 means more orders were shipped than received during a particular period. In TCS’ case, the ratio is declining, thereby indicating signs of an impending slowdown. As per Jefferies, an American independent investment bank, “Falling employee headcount and book-to-bill ratio point to sharp growth moderation in FY24”. Additionally, the bank expects TCS to deliver constant currency revenue CAGR (compound annual growth rate) of 7.5% over FY 23-25, “much slower than the 14% YoY expected in FY23”. TCS growth in the long term? TCS might have shown signs of a near-term slowdown. However, the overall growth for upcoming years might not be affected anytime soon. With an order book of $7 billion in Q3 2023 and a strong deal backlog of $35 billion in the last 12 months, TCS is relatively safer than other companies working in the same sector. 
Motilal Oswal says, “TCS, with its order book and exposure to long-duration orders, is well-positioned to withstand the weakening macro environment”.
Fitch Ratings, one of the top three credit rating agencies internationally, expects that TCS’ revenue growth will slow down to 11–12% in FY24, recording around a 6% decline.
["IT Services"]
["TCS"]
Lokesh Choudhary
2023-01-24T18:45:53
2023
599
["Go", "programming_languages:R", "TCS", "AI", "programming_languages:Go", "RAG", "Aim", "ViT", "R"]
["AI", "Aim", "RAG", "R", "Go", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/the-macro-slowdown-of-tcs-with-fewer-deals-and-negative-headcount/
3
8
1
false
false
false
41,244
Top 6 Metrics To Monitor The Performance Of GANs
Generative Adversarial Networks (GANs) have found prominence over the last few years. From deepfakes to generating faces of people that don’t exist, GANs have been deployed for some rather notorious and alarming applications. The fundamental nature of these dual networks is to outplay each other: one generates images to fool the other, while the other tries not to be fooled. Given enough time, the generator becomes so good that it ends up making fake images that are as realistic as possible. However, this is only the infamous aspect of GANs. The potential of GANs was already seen at the Sotheby’s auction last year when the painting titled Edmond de Belamy, from La Famille de Belamy was sold for a whopping $432,500, and it now hangs opposite the works of pop art geniuses like Andy Warhol. Celebrated computer scientist and Turing award winner Yann LeCun observed, “GANs and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” One variant of GAN, the conditional GAN (cGAN), has been used to fine-tune trading strategies. The potential is unbounded and largely undiscovered. So, it is extremely crucial to monitor the performance of GANs. Here are a few metrics that can be used to validate GANs: Frechet Inception Distance (FID) In order to maintain consistency in the quality of the images that are generated, the Frechet Inception Distance (FID) is used. The lower the FID, the better the quality; in other words, the generated images are closer to the real ones. FID compares the statistics of generated samples to real samples, instead of evaluating generated samples in a vacuum (a short computation sketch follows the article text below). Annealed Importance Sampling Since comparing models by inspecting samples is labour-intensive, and potentially misleading, Annealed Importance Sampling was developed. In this approach, the log-likelihood of decoder-based models is evaluated and the accuracy is validated using bidirectional Monte Carlo. Geometry Score In this method, the quality and diversity of the generated images are estimated by examining how the topology of the underlying manifold of generated samples may differ from the topology of the original data manifold, which provides insight into properties of GANs and can be used for hyperparameter tuning. Contrary to methods like Inception Score and FID, this topological approach does not use auxiliary networks and is not limited to visual data. Based on this probabilistic understanding, given two datasets X1 and X2, the Geometry Score is given by the sum of squared differences between the Mean Relative Living Times (MRLT) of X1 and X2. Tournament Based Method The tournament based method was introduced by the researchers at Google Brain. In this approach, a tournament is conducted where a single model is rated by playing against past and future versions of itself. This helps in monitoring the training process of GANs. These measurements are classified into two ratings: win rate and skill rating. The tournament win rate denotes the average rate at which a generator network fools the discriminator network, whereas a skill rating system, as its name suggests, gives a skill rating for each generator. Discriminator Rejection Sampling To rectify the errors surfacing in the GAN generator distribution, a rejection-sampling-based method was introduced. The idea behind this method is to improve the quality of trained generators by post-processing their samples using information from the trained discriminator. 
Precision And Recall Though metrics like Fréchet Inception Distance (FID) are popular for the evaluation of GANs, they are unable to distinguish between different failure cases owing to their one-dimensional scores. This is where traditional precision and recall might prove useful. Know more about GAN training here.
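Since FID heads the list above, a compact computation sketch may help: FID compares the mean and covariance of Inception features extracted from real and generated images, FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2(C_r C_g)^(1/2)). The NumPy/SciPy snippet below assumes the feature matrices have already been extracted from an Inception network; the random arrays at the end are stand-ins used only to make the example runnable.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(real_feats, fake_feats):
    """FID between two sets of Inception activations, each of shape (N, D).

    Lower is better: the generated feature distribution is closer
    (in mean and covariance) to the real one.
    """
    mu_r, mu_g = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(fake_feats, rowvar=False)

    # Matrix square root of the covariance product (may carry tiny imaginary noise).
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)

# Example with random stand-in features (real use: Inception pool3 activations).
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))
fake_feats = rng.normal(loc=0.3, size=(500, 64))
print(f"FID ~ {frechet_inception_distance(real_feats, fake_feats):.2f}")
```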
Generative Adversarial Networks (GANs) have found prominence over the last few years. From deep fakes to generating faces of people that don’t exist, GANs have been deployed for quite unpopular yet alarming applications. The fundamental nature of these dual networks is to outplay each other. One generates images to fool the other while the other […]
[]
["evaluation metrics", "GANs", "Precision and Recall"]
Ram Sagar
2019-06-25T06:31:53
2019
593
["Go", "Precision and Recall", "evaluation metrics", "AI", "programming_languages:R", "RPA", "ML", "programming_languages:Go", "RAG", "GANs", "GAN", "R"]
["AI", "ML", "RAG", "R", "Go", "GAN", "RPA", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/top-6-metrics-to-monitor-the-performance-of-gans/
2
9
0
false
false
false
10,161,929
French AI Lab Mistral Sets Sights on IPO
French AI startup Mistral AI is preparing for an initial public offering (IPO), co-founder and CEO Arthur Mensch announced during an interview with Bloomberg at the World Economic Forum (WEF). “We’re not for sale,” Mensch told Bloomberg, adding that the company plans to open a new office in Singapore to strengthen its presence in the booming Asia-Pacific market. It is also expanding operations across Europe and the United States, building on its mission to compete with industry leaders like OpenAI. Last week, Mistral AI announced a partnership with AFP, a global news agency recognised for its fast and reliable coverage of world events and daily issues. As part of this collaboration, Mistral AI’s assistant, Le Chat, will integrate AFP’s newswire stories and enhance its responses with accurate, up-to-date information aligned with the highest standards of journalism. France’s Answer to Silicon Valley Mistral is often referred to as “France’s answer to Silicon Valley AI powerhouses”. Mistral AI was founded in 2023 by former employees of Meta and Google, including Arthur Mensch, Guillaume Lample, and Timothée Lacroix. Their mission is to make GenAI more enjoyable and accessible to users. Last year, the company raised over €105 million in a seed-stage funding round led by Lightspeed, a US-based venture capital firm. Earlier this year, it hosted a hackathon in Paris, where it provided GPUs for participants. Mistral AI releases all its models under open licenses to encourage free use and modification. The company aims to develop efficient, versatile AI models trained on a diverse range of datasets, including text, code, and images. This approach makes its models more adaptable than those trained on a single data type. Despite being a relatively new player in the field, Mistral AI is already competing with major names like Anthropic PBC’s Claude family, OpenAI’s GPT-4, and Google’s Gemini. An IPO would be a defining moment for Mistral, which would help it scale faster while showcasing the strength of European AI innovation on the global stage. It also comes at a time when the demand for AI solutions is surging worldwide, with businesses, governments, and consumers increasingly relying on AI to drive efficiency and creativity.
The company is also expanding operations across Europe and the United States.
["AI News"]
["mistral ai"]
Aditi Suresh
2025-01-22T12:02:28
2025
357
["Anthropic", "Go", "GenAI", "mistral ai", "API", "OpenAI", "AI", "RAG", "GPT", "Aim", "R"]
["AI", "GenAI", "OpenAI", "Anthropic", "Aim", "RAG", "R", "Go", "API", "GPT"]
https://analyticsindiamag.com/ai-news-updates/french-ai-lab-mistral-sets-sights-on-ipo/
2
10
4
false
false
false
10,089,432
GPT-4 Predictions: Hits and Misses
The cat’s finally out of the bag. GPT-4 is here and has got the world busy. AIM published GPT-4 predictions hours before OpenAI’s surprise launch. While they might have given a GPT-4 live demo for developers, it wasn’t concrete in addressing some of the critical features everyone was anticipating. The biggest offering of GPT-4, as predicted, is its multimodality model where it is capable of processing image and text inputs to produce text output. The feature will supposedly find use in dialogue systems, text summarisation and machine translation. However, OpenAI did not talk about the parameters and capacity of GPT-4. Multimodality The biggest prediction of multimodality was partially addressed with the integration of images. In the Microsoft Germany event last week, when CTO Andreas Braun announced the possibility of multimodality in GPT-4, the integration of image, video, audio and many more features seemed like a possibility. However, the GPT-4 developer demo only showcased image integration. Greg Brockman, President and Co-Founder of OpenAI, explained that the image feature in GPT-4 is in preview mode and merely a “sneak-peak”. He further added that it is not yet publicly available and that they are still partnering with ‘Be My Eyes’, a startup that works towards creating technology to help people who are blind or have low vision. In the demo, GPT-4 was able to logically describe an image, such as “Why is this image funny?”, a feature that was proposed in Microsoft’s Kosmos-1 where multimodality is used to analyse images and give output. GPT-4 can understand images and express logical ideas about them. Source: OpenAI GPT-4 Developer Livestream GPT-4 is also equipped to read hand-written messages with specific instructions and convert them to the required output. Hand-drawn pencil drawing -> website (https://t.co/4kexpvYAgV).Prompt: "Write brief HTML/JS to turn this mock-up into a colorful website, where the jokes are replaced by two real jokes." https://t.co/zQ4smwqGVo pic.twitter.com/cunT74HO5l— Greg Brockman (@gdb) March 15, 2023 Parameters for GPT-4 OpenAI did not talk about the parameters GPT-4 is trained on, leaving the big prediction of whether GPT-4 is trained on 100 trillion parameters—as rumoured—unresolved. Though the question was refuted by Sam Altman in an interview in January, OpenAI did not confirm the same. OpenAI also did not talk about the costs or the kind of technical support it utilised in order to build GPT-4. OpenAI, however, spoke at length about the advanced text feature in GPT-4 which necessarily means that more parameters were employed to train the new model. GPT-4 can read, analyse and generate up to 25000 words of text which is “8 times more than ChatGPT”. In addition, it can even write code in all major languages. The constant comparison to their GPT-3 model was nearly like an affirmation of how this model is better than ChatGPT. Source: OpenAI GPT-4 Introduction (youtube.com) Hallucinations Predictions on rising hallucinations of LLMs had been mentioned by AI experts; the risk being notably higher with GPT-4. Gary Marcus had also mentioned how training large sets of data will bring more hallucinations to the fore. However, Sam Altman debunked the prediction. Altman mentioned that GPT-4 will hallucinate “significantly less” and will be “less biased”, however no clarity on how that will materialise was offered. 
With Brockman emphasising how OpenAI will continuously work to “make the system work faster”, the claim of fewer hallucinations can only be confirmed with time. GPT-4 much larger than GPT-3 In November 2022, AIM had spoken about how GPT-4 would be far bigger than GPT-3 and perform tasks that GPT-3 can’t. In the developer demo video, Brockman details tasks that were previously not possible with GPT-3. He emphasises “how to work with the system to accomplish a task that none of us like to do but have to” and goes on to explain how GPT-4 can help with your “taxes”. With its pitch of GPT-4 offering much more than its predecessor, OpenAI seemed focused on acquiring new users, repeatedly mentioning how the new model had been tested for months to “make it suitable for society” and “add value to everyday life”. It was earlier mentioned that there would be more platform integration with LLMs, and GPT-4’s announcement led to collaboration announcements. Focusing on education and passing online exams, GPT-4 aimed to reach the “teaching segment”. This was evident in the announcements by online education platforms like Khan Academy and Duolingo that came around the time of the GPT-4 launch event. While broad GPT-4 predictions did come true, the lack of clarity from OpenAI has kept us from gauging the exact magnitude of GPT-4’s improvements. With time and further adoption, use cases will be the only confirming factor in understanding how much of their claims stand true.
With OpenAI’s official GPT-4 launch, predictions went haywire.
["AI Highlights"]
["ChatGPT", "GPT-3", "GPT-4", "Microsoft", "multimodality", "OpenAI", "Sam Altman"]
Vandana Nair
2023-03-15T17:00:00
2023
780
["GPT-3", "Go", "ChatGPT", "Sam Altman", "TPU", "OpenAI", "AI", "ML", "GPT-4", "GPT", "Aim", "multimodality", "R", "Microsoft", "startup"]
["AI", "ML", "ChatGPT", "OpenAI", "Aim", "TPU", "R", "Go", "GPT", "startup"]
https://analyticsindiamag.com/ai-highlights/gpt-4-predictions-hits-and-misses/
3
10
1
false
true
false
7,166
Beethoven’s ‘Eroica Effect’ and Analytics
I was educated as an industrial engineer. Engineers are typically not perceived as very worldly or sophisticated. They are often pictured with a shirt-pocket protector stuffed with pens. But some engineers, like me, do have appreciation for the performing arts. For example, I appreciate classical music. In particular, I admire and am in awe of the great classical music composers. How did Tchaikovsky and Mendelssohn transcribe with a pen such beautiful music as notes from their brain on to a page of musical score for so many instruments? (Hint: I don’t think they had a smartphone or email to distract them.) I believe that in the next few years the adoption rate for enterprise performance management (EPM) methods imbedded with business analytics will accelerate. Examples of analytics are regression, correlation, clustering, and segmentation analysis. Core EPM methods include strategy management (strategy maps, balanced scorecard, dashboards); profitability analysis (by products, channels, and customers); driver-based budgets and rolling financial forecasts), enterprise risk management (ERM); and continuous improvement (lean and six sigma quality management). They should ideally be seamlessly integrated. This acceleration will have an effect similar to the one Ludwig van Beethoven’s masterpiece – his third symphony, Eroica – had on the future of classical music. Beethoven followed Eroica with his universally memorable fourth to ninth symphonies, and other great composers emulated him. What connection am I making between classical music and EPM? Breaking free from tradition Ever hear much about Beethoven’s first or second symphony? Few people have. That is because it was with Eroica, his third symphony, where Beethoven himself is quoted as saying, “I will now take a new path.” It was a radical change in music composition. Eroica, inspired by Beethoven’s admiration for Napoleon as a world leader, had true melody. Prior to Eroica, Beethoven’s compositions followed a tradition where melody was rare. Before composing Eroica he complied with the conventional rules of what tasteful music for the elite should sound like. His prior music was influenced by masters who dared not change from tradition, such as Bach and Haydn. But Beethoven had a strong urge to break free from tradition. With Eroica, classical music was changed forever. The evidence of the “Eroica effect” is this: How many billions of people, including you and me, will die with little trace of remembrance generations from now other than a cemetery tombstone or urn with your ashes? But the music works of Beethoven, Mozart, Rossini, Sibelius, Grieg and others in their league will be listened to for a long time to come – possibly for centuries. Are we now at a point where the application of business analytics and the implementation of EPM’s suite of integrated methodologies, similar to Eroica, will also “take a new path?” Yes – because tradition increasingly gives way to change, and organizations are slowly and gradually learning to not just manage change but to drive change. The future of business analytics and enterprise performance management People are what it’s all about, so I honor and respect the importance of applying the principles of behavioral change management. However, my love for quantitative analysis influences me to conclude with a short narration by the great Princeton University mathematician and Nobel Prize winner John Nash. 
Nash introduced a theory describing how rational human beings should behave when there is a conflict of interest. In the Academy Award-winning movie about Nash’s life, A Beautiful Mind, he said: “I like numbers because with numbers truth and beauty are the same thing. You know you are getting somewhere when the equations start looking beautiful. And you know that the numbers are taking you closer to the secret of how things are.” The executive management teams with the courage, will, caring attitude, and leadership traits to take calculated risks and be decisive will likely be the initial adopters of a fully integrated EPM system imbedded with business analytics. They will achieve the full vision of applying business analytics and EPM methods. Other executive management teams will follow them.
I was educated as an industrial engineer. Engineers are typically not perceived as very worldly or sophisticated. They are often pictured with a shirt-pocket protector stuffed with pens. But some engineers, like me, do have appreciation for the performing arts. For example, I appreciate classical music. In particular, I admire and am in awe of […]
["IT Services"]
[]
Gary Cokins
2015-03-29T08:24:46
2015
666
["programming_languages:R", "AI", "ML", "RAG", "analytics", "GAN", "R"]
["AI", "ML", "analytics", "RAG", "R", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/it-services/beethovens-eroica-effect-and-analytics/
2
7
2
false
true
false
10,002,149
Top 5 IoT Development Platforms That Every Organisation Should Know About
Organisations today have been deploying IoT at a large scale, but it might be a challenge to focus on developing the right tools, services, applications and integrations with IoT. This is where IoT development platforms come into the picture. Building an IoT product can be a complex task and might need both hardware and software to be taken care of. An IoT platform allows easy provisioning, management and automation of IoT connected devices. It manages all the interactions between the hardware and the application layers. It does the work of connecting the hardware and comes with features to speed up the development of applications for connected devices. They make it easy for companies to build and deploy IoT products that solve real problems and create real value. In this article, we discuss 5 widely used IoT platforms for developers to deploy IoT solutions, in alphabetical order. 1. IBM Watson IoT: IBM is one of the leaders in the IoT space. It uses IBM Cloud and deploys an analytics service for visualization and AI-driven analytics in the cloud. It offers advanced integration with machine learning capabilities. It provides businesses with an extensible catalogue of analytical functions to enrich, augment and gain insights from data in a simple and intuitive way. 2. Intel development boards: It is an end-to-end reference model and family of products that work with third-party solutions to provide a foundation for seamlessly and securely connecting devices. It can sense, filter, process, analyze and actuate while securing and managing machines and data. It can support onboarding, monitoring, diagnostics and remote control of devices. It is designed to be highly secure and detects threats, thereby safeguarding against wrongdoing. 3. Microsoft Azure IoT: The Azure IoT development platform by Microsoft embraces security from the endpoint. It has high reliability, with 99.95 percent uptime, and connects the data and apps through the cloud. It is open to use any device, OS, data source, software or service, on-premises, at the edge or in the cloud. Microsoft offers cost-effective options if you decide to buy additional services. It is quite scalable and takes advantage of one of the largest partner ecosystems, enabling millions of devices and terabytes of data in most regions worldwide. 4. Particle.io: Particle.io is an open source, affordable and reliable development platform for IoT. The cloud-connected microcontrollers of Particle.io are powered by Device OS, which is a lightweight operating system for embedded IoT devices. The user can manage and control their devices through the Device Management Console, and every device is exposed through a secure API. It is extremely secure since its security is built from the ground up and every message sent is encrypted. Each device is given its own private key, so unauthorized hardware cannot sneak into any message. Particle.io continuously monitors its servers and the security landscape to ensure that the devices stay locked down. 5. Thinger.io: Thinger.io is an open source platform that offers a scalable cloud infrastructure for connecting millions of devices and has an easy-to-use admin console to control these devices or integrate them into the business logic with its REST API. It provides a ready-to-use, scalable cloud infrastructure for connecting things and is fast and secure. 
Makers can register for free accounts to start building their IoT projects in minutes, just using Thinger.io’s cloud infrastructure.
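The platform descriptions above all hinge on the same interaction model: devices are provisioned on a cloud back end and then read or controlled over an authenticated REST API. The short Python sketch below illustrates that pattern with the requests library. The base URL, token and resource paths are hypothetical placeholders, not the documented endpoints of Thinger.io, Particle.io or any other platform named here, so the relevant API reference should be consulted before adapting it.

```python
import requests

# Hypothetical base URL and token -- placeholders, not any vendor's actual API.
BASE_URL = "https://iot.example.com/v1"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"


def read_device_resource(device_id: str, resource: str) -> dict:
    """Read one resource (e.g. a temperature reading) from a connected device."""
    response = requests.get(
        f"{BASE_URL}/devices/{device_id}/resources/{resource}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def set_device_resource(device_id: str, resource: str, value) -> None:
    """Write a value to a device resource (e.g. switch a relay on)."""
    response = requests.post(
        f"{BASE_URL}/devices/{device_id}/resources/{resource}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"value": value},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    print(read_device_resource("greenhouse-01", "temperature"))
    set_device_resource("greenhouse-01", "fan", True)
```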
Organisations today have been deploying IoT at a large scale, but it might be a challenge to focus on developing the right tools, services, applications and integrations with IoT. This is where IoT development platforms come into the picture. Building an IoT product can be a complex task and might need both hardware and software […]
["AI Features"]
["Developers", "IBM", "Intel", "IoT"]
Disha Misal
2019-05-29T19:26:28
2019
573
["API", "machine learning", "Developers", "AI", "Azure", "R", "ML", "Scala", "REST API", "IBM", "analytics", "GAN", "Intel", "IoT"]
["AI", "machine learning", "ML", "analytics", "Azure", "R", "Scala", "API", "REST API", "GAN"]
https://analyticsindiamag.com/ai-features/top-5-iot-development-platforms-that-every-organisation-should-know-about/
2
10
1
true
false
false
26,636
IIT-Hyderabad Develops New AI-Based System To Catch Bikers Without Helmets
With the intention to curb the habit of driving a two-wheeler without a helmet, the Hyderabad City Police has signed a memorandum of understanding with the Indian Institute of Technology, Hyderabad. Here, the city traffic police will be using an artificial intelligence-based programme developed by the premier institute to automatically detect motorcyclists who are driving without helmets in surveillance videos. IIT Hyderabad has procured the required permissions to access video data from the city’s network of CCTV cameras. Reports have suggested that the technology is in a “ready-to-be-deployed stage”. Dinesh Singh, one of the research scholars on the project, told a leading daily, “It will be fully automatic along with a web interface to verify the alerts by the operators (traffic police, etc.). From there, it will be connected to the existing RTO website to generate challans and send a notification to the riders through an SMS.” The technology will work in the following stages:
The solution is installed partially in cameras and partially on the servers of the central police control room
A software module is installed on an embedded card attached to the CCTV cameras
It detects violators (in this case, motorcyclists riding without helmets)
The system then sends out an alert to the central alert database
Hyderabad is seeing a definite shift in the technological space. Andhra Pradesh chief minister Chandrababu Naidu has turned the city into an IT hub which houses India’s Google and Facebook headquarters. (Andhra Pradesh currently shares its de jure capital Hyderabad with the neighbouring state Telangana, which was formed from the bifurcation of Andhra Pradesh in 2014.) Already in China, the government is combining technologies such as artificial intelligence, facial recognition and the mandatory social rating system to monitor, check and penalise ‘undesirable’ conduct of its citizens.
With the intention to curb the habit of driving a two-wheeler without a helmet, the Hyderabad City Police has signed a memorandum of understanding with the Indian Institute of Technology, Hyderabad. Here, the city traffic police will be using an artificial intelligence-based programme developed by the premier institute to automatically detect motorcyclists who are driving […]
["AI News"]
["AI (Artificial Intelligence)", "cctv", "Chandrababu Naidu", "Hyderabad", "iit hyderabad", "telangana"]
Prajakta Hebbar
2018-07-24T09:28:45
2018
299
["Go", "API", "artificial intelligence", "programming_languages:R", "AI", "telangana", "R", "iit hyderabad", "Chandrababu Naidu", "programming_languages:Go", "Hyderabad", "GAN", "cctv", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "R", "Go", "API", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/iit-hyderabad-develops-new-ai-based-system-to-catch-bikers-without-helmets/
2
8
0
false
false
false
10,059,842
Emerging trends in low-code/no-code platforms in AI
The low-code development platform market is expected to grow at a CAGR of 25.26% from 2022-2027 to reach a cap of USD 64.56 billion by 2026, according to Expert Market Research. “While low-code application development is not new, a confluence of digital disruptions, hyperautomation and the rise of composable business has led to an influx of tools and rising demand,” said Fabrizio Biscotti, research vice president, Gartner. Gartner has projected low-code application platforms (LCAP) will be the largest component of the low-code development technology market through 2022. Below we look at the emerging trends in no-code/low-code development: No-code for developers Yes, marketers and analysts will cash in on the no-code/low-code technology, but experts suggest that the biggest DevOps trend for 2022 will have software developers using low-code/no-code tools. The majority of developers think such tools will help save time, reduce overload, enhance productivity, and not render them obsolete. According to a report by IDC, 40% of low-code developers are full-time developers, whereas 33% are part-time developers. To wit, developers are no longer defined by their proficiency in coding but by their ability to build digital solutions. That said, the low-code/no-code platforms are still at the nascent stage. Even the most popular tools require significant API expertise and JavaScript experience. Currently, the products that do not require any coding are limited in functionality. Retail play While Amazon and Flipkart are the major ones, the competition in Indian e-commerce is stiffening. For Indian retailers, no-code/low-code is the fastest way to roll out direct-to-consumer digital initiatives to improve customer experience and get loyal customers. India is predicted to surpass the US to become the second-largest e-commerce market by 2034. The no-code/low-code platforms power the e-commerce growth strategy and give retailers and brands access to enterprise-grade infrastructure that allows merchandising and planning; distributed storefront management; customer reviews, retention, and loyalty; sale, offer, and discount automation; and customer support automation. Solutions Before launching Amazon Sagemaker Canvas, AWS rolled out two no-code/low-code services: Amazon Honeycode, a visual web and mobile application builder; and Amplify Studio, a visual development service for application stacks. The Amazon Sagemaker Canvas is an ML service for any engineer or business user to build ML predictions via point-and-click options and automatically combine their data and create individual or batch predictions. Amazon wants to expand the no-code/low-code portfolio as the demand for cloud-focused skills is too high and believes the solution will address the current skill-gap problems. The company also wanted to tap into the demographic of non-technical users with no-code/low-code. SAP also jumped on the no-code/low-code train. Its solution allows customers to develop apps and adjust the SAP processes through drag-and-drop options. They also plan to automate workflows for controlling and finance departments. For their no-code development environment, the company took over AppGyver, a no-code platform, and for low-code, SAP has launched the Business Application Studio. Google, too, has a no-code AI solution called AutoML. The solution presumed some ML knowledge on the part of the developer. AutoML is a feature-rich suite of AI products that enables users to train high-quality ML models for business use cases. 
Meanwhile, Microsoft has a free desktop program called Lobe to build customised AI models, sans coding. Available for Windows and Mac, Lobe is best suited for image classification. Akkio is an end-to-end no-code free AI platform designed for sales, marketing, and financial activities. The platform claims it can help turn data into live AI predictions in less than 10 minutes without writing code or hiring a data expert.
Before launching Amazon Sagemaker Canvas, AWS rolled out two no-code & low-code services.
["IT Services"]
["Automl", "DevOps", "ecommerce", "Microsoft", "no-code", "SageMaker", "SAP"]
Meeta Ramnani
2022-02-04T14:00:00
2022
583
["Go", "Automl", "Amazon SageMaker", "AWS", "AI", "SageMaker", "SAP", "ML", "no-code", "RAG", "Aim", "ecommerce", "JavaScript", "DevOps", "R", "Java", "Microsoft"]
["AI", "ML", "Aim", "Amazon SageMaker", "RAG", "AWS", "R", "JavaScript", "Go", "Java"]
https://analyticsindiamag.com/it-services/emerging-trends-in-low-code-no-code-platforms-in-ai/
3
10
3
true
false
false
45,475
Can Artificial Intelligence Remove Tax Compliance Inefficiencies In India?
There could also be a number of challenges that tax authorities in India may face when using artificial intelligence. These can include the need to balance centralisation of tax assessment with on-ground experience, uncertainties around being able to develop models which deliver a positive impact on performance, the varying range of choices in AI solutions, and the need for skills to leverage those advanced solutions. India may soon become the first country to use artificial intelligence and machine learning in the tax assessment process. Finance Minister Nirmala Sitharaman has announced that the government will deploy a faceless assessment system based on AI and ML, starting October 2019. The overall process will increase the accuracy and transparency of India’s tax assessment process, thereby improving the tax base and compliance. As per the announcement, data within income tax returns, statement of financial transactions and from other sources will be analysed to look for anomalies so that tax compliance is achieved. All of these would require the tax department to process data generated from billions of financial transactions taking place every day in India. The adoption of AI is going to get rid of the existing inefficiencies in the tax processes as well as create insights that can strengthen tax collection and prevent tax evasion and fraud. For AI to succeed, it needs data and clearly, with one of the largest tax-paying population in the world, the government would have all the data it needs. Faceless Electronic Portal To Scrutinise Tax Assessment If any anomaly is found in a tax payer’s assessment, there will be automatic scrutiny done using faceless electronic portal wherein a structured questionnaire based on the anomaly will be sent to the taxpayer automatically. Powered by natural language processing capability, the faceless portal would take in the answers from taxpayers. If the system is satisfied with the answers, the case will be closed, otherwise, it will be assigned to an income tax officer for a further enquiry. It is to be noted that the income tax department (ITD) had earlier identified many citizens who had not paid their tax liabilities when filing their tax returns for the assessment year 2018-19 using data analytics. This was part of the non-filers monitoring system (NMS) deployed by ITD, which monitors individuals for high-value transactions and potential tax liabilities using data analytics. AI/ML Based Tax Assessment Is A Logical Step To Government’s Recent Policies Like most countries, India’s Income Tax Department has been dependent on human tax assessment officers to assess tax returns filed by individuals, which leaves scope for inefficiencies and tax evasion on a large scale, all of which have magnified the compliance workload for businesses and the tax department. India’s existing tax environment requires increased transparency across different departments and tax authorities, and using AI-led automation can be tremendously helpful. The AI/ML tax assessment system is a logical next step to policy decisions implemented by the Indian government in recent years. This includes full data-localisation by foreign companies conducting business in India, digitisation of tax filing systems,  and the government’s focus on deploying electronic payment systems and making large cash transactions illegal. 
To roll out the use of AI/ML in taxation, the government is working to integrate data from Ministry Of Corporate Affairs, Central Board of Direct Taxes (CBDT) and the various systems therein within a year, particularly before next year’s budget.
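The faceless assessment described above rests on flagging anomalous returns before any scrutiny questionnaire is issued. As a rough illustration of how such flagging is commonly approached, here is a minimal unsupervised anomaly-detection sketch in Python using scikit-learn's IsolationForest. The column names and synthetic data are invented for the example and do not reflect the income tax department's actual systems or data model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy, synthetic features per taxpayer -- purely illustrative column names,
# not the income tax department's actual data model.
rng = np.random.default_rng(0)
returns = pd.DataFrame({
    "declared_income": rng.lognormal(mean=13, sigma=0.5, size=1000),
    "high_value_txn_total": rng.lognormal(mean=12, sigma=0.8, size=1000),
    "tds_reported": rng.lognormal(mean=11, sigma=0.6, size=1000),
})

# Fit an unsupervised anomaly detector; roughly 1% of records are flagged.
model = IsolationForest(contamination=0.01, random_state=0)
returns["anomaly"] = model.fit_predict(returns)

# Records scored -1 are the candidates that would trigger a scrutiny questionnaire.
flagged = returns[returns["anomaly"] == -1]
print(f"{len(flagged)} returns flagged for further review")
```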
There could also be a number of challenges that tax authorities in India may face when using artificial intelligence. These can include the need to balance centralisation of tax assessment with on-ground experience, uncertainties around being able to develop models which deliver a positive impact on performance, the varying range of choices in AI solutions, […]
["AI Features"]
["AI (Artificial Intelligence)", "AI in government", "human intelligence at machine scale"]
Vishal Chawla
2019-09-04T17:40:00
2019
561
["Go", "artificial intelligence", "human intelligence at machine scale", "machine learning", "AI", "AI in government", "ML", "Git", "RAG", "automation", "analytics", "R", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "ML", "analytics", "RAG", "R", "Go", "Git", "automation"]
https://analyticsindiamag.com/ai-features/artificial-intelligence-tax-india/
2
10
1
false
false
false
30,516
World’s Largest Supercomputer SpiNNaker Has The Potential To Unlock Secrets Of The Human Brain
From IBM’s 1954 invention, NORC, which had a speed of 1 microsecond, to IBM’s 2018 innovation, Summit, performing at 200 petaflops — supercomputers have come a long way in their journey. They are being used for everything — from fluid dynamics to nuclear explosion simulations. Now this month, in November 2018, the University of Manchester’s School of Computer Science switched on the world’s largest brain-like supercomputer called SpiNNaker, creating history. An Inspiration From The Human Brain The nervous system has neurons, which are the basic units of the brain, and they communicate with each other in unique ways. They are also responsible for transferring information to other nerve cells, muscles or gland cells. They communicate with each other via electrical events called “action potentials” and chemical neurotransmitters. This action is called a ‘spike’ because the shape of the action potential is like a spike when measured on electrical equipment. (Left) A neuron spikes when a combination of all the excitation and inhibition it receives makes it reach the threshold. (Right) Actual neuron in the mouse’s cortex. Image source: The University of Queensland. Now, SpiNNaker uses a large system of computers to imitate these neuron-like spikes. It mimics the parallel architecture of the human brain, sending billions of small amounts of information to thousands of different destinations at the very same time. Steve Furber, ICL Professor of Computer Engineering at The University of Manchester, who conceived the initial idea of such a computer, said, “We’ve essentially created a machine that works more like a brain than a traditional computer, which is extremely exciting.” What Makes SpiNNaker Unique Traditional computers transfer information from point A to point B via a standard network. SpiNNaker, on the other hand, can send billions of small pieces of information to thousands of destinations simultaneously, mimicking the parallel communication of the human brain, which itself is composed of billions of simple computing elements communicating using unreliable spikes. This new neuromorphic computer has 100 million transistors in each of its chips and is capable of completing more than 200 million actions per second. The basic building block of the supercomputer is a 48-chip board, and the machine comes in a range of sizes. The goal is to be able to simulate a single network consisting of one billion simple neurons, requiring a machine with over 50,000 chips. Image source: The University of Manchester Each SpiNNaker chip contains two silicon dies: the SpiNNaker die itself and a 128 MByte SDRAM (Synchronous Dynamic Random Access Memory) die, which is physically mounted on top of the SpiNNaker die and stitch-bonded to it. SpiNNaker Objectives One of its main objectives is to help neuroscientists to understand the very complex human brain better. To do this, it runs extremely large real-time simulations that are impossible for other machines. It has been used to simulate high-level real-time processing in a range of isolated brain networks. This includes an 80,000-neuron model of a segment of the cortex, the outer layer of the brain that receives and processes information from the senses. “The ultimate objective for the project has always been a million cores in a single computer for real-time brain modelling applications, and we have now achieved it, which is fantastic,” said Professor Furber. 
This computer has also simulated a part of the brain called the basal ganglia, responsible for functions like control of voluntary motor movements, procedural learning, habit learning, eye movements, cognition and emotion, and an area affected by one of the most common neurological diseases, Parkinson’s. The computer makers aim to unravel many mysteries of the human brain by running large-scale simulations on this computer, and it therefore has massive potential for neuroscience discoveries. Future Endeavours Apart from neuroscience, this research will also help in the robotics and computer science fields. Robotics: The small board of SpiNNaker will make it possible to simulate a network of tens of thousands of spiking neurons, process sensory input and generate motor output, all in real time and in a low-power system, making it helpful for researchers in robotics who need low-power computation. Computer Science: Since the computer departs from the deterministic, repeatable and reliable communication that other computers rely on, it offers an opportunity to unravel the potential of parallel computation, helping in computer science research. This new invention will lead to new and advantageous principles for energy-efficient massively parallel computing, and the future will tell us what discoveries it presents to the world.
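The spiking behaviour described above, a neuron integrating excitation and inhibition until a threshold is crossed, can be illustrated with a toy leaky integrate-and-fire model. The Python sketch below is a deliberately simplified single-neuron illustration with arbitrary parameters; it is not SpiNNaker's actual neuron implementation, which runs far larger networks of such models in real time on dedicated hardware.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential integrates
# the input current and "spikes" when it crosses a threshold, then resets.
dt, t_max = 1e-3, 0.5            # 1 ms steps, 0.5 s of simulated time
tau_m = 20e-3                    # membrane time constant (s)
v_rest, v_reset, v_th = -65e-3, -70e-3, -50e-3   # potentials (V)
r_m = 10e6                       # membrane resistance (ohm)
i_input = 2.0e-9                 # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # Leaky integration of the input current towards its steady state.
    dv = (-(v - v_rest) + r_m * i_input) / tau_m * dt
    v += dv
    if v >= v_th:                # threshold reached -> emit a spike
        spike_times.append(step * dt)
        v = v_reset              # reset the membrane after the spike

print(f"{len(spike_times)} spikes in {t_max}s, first at {spike_times[0]:.3f}s")
```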
From the 1954 invention by IBM NORC which had the speed of 1 microsecond to IBM’s 2018’s innovation Summit performing of 200 petaflops — supercomputers have come a long way in their journey. They are being used for everything — from fluid dynamics to nuclear explosion simulations. Now this month, in November 2018, the University […]
["AI Features"]
["brain", "computer science", "neurons", "neuroscience", "Robotics"]
Disha Misal
2018-11-22T03:38:20
2018
743
["computer science", "Go", "TPU", "programming_languages:R", "AI", "brain", "innovation", "programming_languages:Go", "Robotics", "neurons", "Aim", "ai_applications:robotics", "neuroscience", "GAN", "R"]
["AI", "Aim", "TPU", "R", "Go", "GAN", "innovation", "programming_languages:R", "programming_languages:Go", "ai_applications:robotics"]
https://analyticsindiamag.com/ai-features/worlds-largest-supercomputer-spinnaker-has-the-potential-to-unlock-secrets-of-the-human-brain/
4
10
0
true
false
false
10,097,412
Decoding Death Virtually in India
About a year ago, popular comedian Raju Srivastava succumbed to cardiac arrest post his workout session. Not many know that the post-mortem examination in his case was done digitally through a method called virtual autopsy. Virtual autopsy, also known as virtual post-mortem examination or virtual autopsy imaging, is a modern, non-invasive method of examining a body to determine the cause of death or investigate injuries. This advanced technique uses imaging technologies to provide a detailed analysis of the body without the need for traditional invasive autopsy procedures. Also, since virtual autopsies are minimally invasive, they enable the prompt release of the body for the last rites. It not only preserves the dignity of the deceased, but provides relief to their families, who may have otherwise received a stitched-up body after the post-mortem examination. Unlike traditional autopsies, which require incisions and tissue sampling, virtual autopsies are non-invasive and do not require physical alteration of the body. This can be particularly important in cases where cultural or religious beliefs prohibit traditional autopsies. While it is generally non-invasive, it can be considered minimally invasive in certain situations. It allows the identification of regions of interest in the body to determine the cause of death. Instead of cutting open the body, it focuses on specific areas such as the thoracic region, making only small incisions when required to investigate the death. The Tech Stack In an interview with AIM, Ash Govind, founder & CEO of Virtual Autopsy Solutions, explained that the two software requirements in their field of work are visualisation and forensic information system. Visualisation involves manipulating 3D images of the body, while the forensic information system logs the entire case from the scene of crime or death to the post-mortem examination, including additional examinations like toxicology, microbiology, or DNA testing. The global medico-legal system aims to determine the probable cause of death in cases of unnatural, sudden, and unexpected deaths. Traditionally, this was done through physical autopsies using a scalpel, but now it can be accomplished through multimedia files that include videos, screenshots, and other digital data in a report. The process involves using a CT scanner to obtain DICOM data, which is rendered through software to create a 3D reconstruction of the body. Pathologists and radiologists examine the 3D reconstruction to determine the probable cause of death and this information is then integrated into a multimedia report on the cause of death. This software works effectively for the purpose of virtual autopsies to an extent where Govind claimed that they conduct approximately 13,000 cases annually in the UK, making the country a world leader in post-mortem imaging and virtual autopsies. Credits: RSNA India Adoption In 2019, Union health minister Harsh Vardhan expressed interest in AIIMS and ICMR’s initiative to establish a virtual autopsy lab. He emphasised the government’s commitment to developing multiple centres across the country. To support the implementation of virtual autopsies, the Indian Council of Medical Research (ICMR) provided Rs 5 crore to AIIMS, and the process of acquiring a CT machine for the procedure is in progress. Initially, the virtual autopsy facility will be exclusive to AIIMS, but there are plans to extend it to other medical institutions nationwide, with AIIMS providing training. 
AIIMS currently conducts approximately 3,000 autopsies per year, as per a statement made in the Lok Sabha. Virtual Autopsy India is in conversations with AIIMS Delhi and other medical institutions and academia across the country. Other institutions like AIIMS Bibinagar, AIIMS Nagpur, Government Medical College Kashmir, PGIMS Rohtak and a few more are also in the process of setting up a virtual autopsy centre. It’s not just limited to India. Globally, virtual autopsy India is in discussion with the Dubai police and the Kazakhstan government as well. Recently, in June 2023 The Government Medical College (GMC) Anantnag hosted a groundbreaking two-day conference on ‘virtual autopsy’, a first-of-its-kind in Jammu & Kashmir. Organised by the Department of Forensic Medicine, the conference aimed to explore the non-invasive method. Dr Azia Manzoor, the head of the department, provided an overview of virtual autopsy, highlighting how it digitally reconstructs detailed images of the body’s internal structures. Dr Hemant Naik, CMO of Virtual Autopsy India, explained the benefits of virtual autopsy, emphasizing its precision and non-invasiveness. The conference was well-attended, with over 200 delegates from Jammu & Kashmir and other parts of the country, particularly north India. By involving forensic experts, trained autopsy technicians, and police officers, the process’s effectiveness and accuracy can be further enhanced. Challenges Facing Adoption of Virtual Autopsy In a conversation with AIM, Dr Naik discussed the challenges facing the adoption of virtual autopsies in India. Firstly, acquiring funds is a major obstacle due to the significant cost involved in setting up the required infrastructure, such as a CT scanner and other technology, including software and hardware. The total cost for each project can amount to more than Rs 10 crore, making it essential to secure government funding, which can be challenging. Secondly, since virtual autopsy is a new technology, medical professionals need proper training to use it effectively. While training initiatives can overcome this challenge, it still requires dedicated effort. The third challenge lies in the ambiguity within the Indian medico-legal system regarding the acceptance of virtual autopsy as evidence in court. Although the Indian Evidence Act of 1965 allows digital evidence to be submitted, it does not specifically address medical imaging evidence, creating a grey area in the legal acceptance of virtual autopsy results. Overcoming these obstacles will be crucial in fully adopting and utilising virtual autopsies in the Indian medical field. The Benefits The advantages of virtual autopsies are numerous. Along with it being non-invasive, virtopsy will also save time compared to conventional autopsies due to quicker imaging processes. Moreover, virtual autopsies keep the body intact, ensuring that evidence remains intact for potential future investigations. The use of advanced imaging technologies enables detailed visualisations and interactive analysis in three dimensions, aiding in the identification of injuries or abnormalities contributing to the cause of death. Additionally, virtual autopsies have educational and research benefits. The digital storage of reports allows for future reviews, which is not feasible in cases where cremation is performed. They serve as valuable tools for training forensic pathologists and medical students, and they facilitate research through the storage and analysis of large datasets. 
Autopsies play a crucial role in police investigations, especially in cases of unnatural deaths. Traditional autopsies can take anywhere from 30 minutes to three days, depending on the complexity of the case and availability of experts. However, virtual autopsy offers a faster alternative, with the procedure being completed within minutes. Dr Abhishek Yadav, an associate professor of forensic medicine at AIIMS, highlighted the time and manpower-saving benefits of virtual autopsies. History of Virtual Autopsy Dr Michael Thali, a professor at the University of Zurich and co-founder of The Virtopsy Project, introduced virtual autopsies in 1999, creating permanent 3D models of bodies that can be easily accessed and shared for second opinions. This technique has become common practice in Swiss forensic investigations and is gaining popularity worldwide. Although cost is a consideration, the benefits of preserving 3D information without altering the anatomy outweigh the expense. The Virtobot system, a robotic tool working with CT scanners, generates high-resolution 3D images and documents injuries. The visualisation capabilities of virtual autopsies have proven valuable in court cases, aiding in understanding injuries. While not widely used in the US yet, the military and some forensics institutes have adopted virtual autopsies. India positioned itself to be the first country in the Southeast Asian region to introduce virtual autopsies. Several developed countries, including Switzerland, the UK, Germany, Canada, Australia, Japan, Hong Kong, Norway, Sweden, South Africa, Israel, and Middle-East countries, have already adopted this innovative procedure. Limitations However, virtual autopsies have limitations. They may not be suitable for cases requiring histological or toxicological analyses, as these procedures typically require tissue samples. Furthermore, the accurate interpretation of imaging findings relies heavily on the skills and experience of the pathologist or radiologist performing the analysis. To counter this, Naik discussed basic training, where participants are introduced to the technology, learn how to read the upper motor, and understand common findings. The training is focused on live demonstrations of scanning, reporting scans, performing biopsies, and conducting post-mortem examinations. When asked about the accuracy rate, Naik confidently stated that it is up to 98%. However, in January, a study in Germany compared pre-death diagnoses with results from traditional and virtual autopsies. Among 47 patients, both types of autopsies were used, and among 115 patients, only virtual autopsies were performed due to family’s refusal of standard autopsies. Virtual autopsies confirmed 88% of pre-death diagnoses, while traditional autopsies had a confirmation rate of 93%.
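The tech stack described earlier turns a stack of CT slices (DICOM files) into a 3D volume that pathologists and radiologists can inspect. As a minimal sketch of that first step, assuming the pydicom and numpy libraries and a placeholder directory of slices, the Python snippet below stacks the slices into a volume and rescales the raw values to Hounsfield units. Production virtual-autopsy software adds rendering, registration and reporting on top of this, and real scans may need more careful handling of slice geometry tags.

```python
from pathlib import Path

import numpy as np
import pydicom


def load_ct_volume(dicom_dir: str) -> np.ndarray:
    """Stack a directory of CT DICOM slices into a 3D numpy volume (z, y, x)."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    # Order slices along the scan axis using the z component of their position.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert raw detector values to Hounsfield units where the tags are present.
    slope = float(getattr(slices[0], "RescaleSlope", 1))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0))
    return volume * slope + intercept


if __name__ == "__main__":
    vol = load_ct_volume("/path/to/case_001")   # placeholder path
    print(vol.shape, vol.min(), vol.max())
```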
Virtual autopsy, or virtual autopsy imaging, is a modern, non-invasive method of examining a body to determine the cause of death
["AI Features"]
["challenges"]
Shyam Nandan Upadhyay
2023-07-24T13:00:00
2023
1,453
["Go", "funding", "programming_languages:R", "AI", "programming_languages:Go", "Git", "RAG", "Aim", "GAN", "challenges", "R"]
["AI", "Aim", "RAG", "R", "Go", "Git", "GAN", "funding", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/decoding-death-virtually-in-india/
3
10
0
false
true
true
10,172,065
Google Opens Cybersecurity Hub in Hyderabad to Strengthen India’s Digital Safety Infra
Google has launched its first Google Safety Engineering Centre (GSEC) in the Asia-Pacific region in Hyderabad to aid India’s digital safety infrastructure, making it only the fourth such centre globally. The facility was inaugurated on Wednesday by Telangana CM Revanth Reddy, alongside state IT minister D Sridhar Babu and other top government officials. The launch follows Google’s ‘Safety Charter’ for India’s AI-led transformation, which was unveiled at the Safer with Google Summit in Delhi on June 17. The GSEC will focus on three core areas: protecting users from online fraud, strengthening enterprise and government cybersecurity and building responsible AI solutions. It is also set to serve as a regional hub for APAC to combat digital threats. Using AI and LLMs, the centre aims to deploy real-time scam alerts via Gemini Nano on Android, improve fraud detection across services like Pay, Search and Gmail, and boost defences like Google Play Protect. It will also tackle AI misuse through adversarial testing, red teaming and watermarking tools like SynthID. Calling it a proud moment for Telangana, Reddy praised Google’s ethical philosophy and said, “This centre will create jobs, foster skills and boost India’s cyber defence. Telangana is poised to become a trillion-dollar economy by 2035.” With over a billion internet users, India’s digital growth comes with rising vulnerability. According to Heather Adkins, founding member of the Google Security Team, Google Pay alone prevented ₹13,000 crore worth of financial fraud in 2023. Yet, the threat looms large, with estimated cybercrime losses in India projected to hit ₹20,000 crore in 2025. The government is also on high alert. The Digital Threat Report 2024 noted a 175% rise in phishing attacks on banking and financial services, while over half of business email compromise cases now involve AI-generated deepfakes. The CERT-In cybersecurity agency has responded with national cyber drills and a cyber crisis management plan, having tackled over 14 lakh incidents in 2022 alone. Wilson White, Google’s VP for public policy, highlighted that Asia-Pacific is now the epicentre of digital scams, accounting for two-thirds of global fraud losses—$688 billion in 2023. “AI can help detect 20 times more scam pages and eliminate millions of fake listings,” he said.
The company also released a ‘Safety Charter’, aiming to enhance cybersecurity for its users.
["AI News"]
["Cybersecurity", "Google"]
Merin Susan John
2025-06-19T18:07:32
2025
361
["Go", "AI", "deepfakes", "Git", "responsible AI", "Aim", "llm_models:Gemini", "Google", "GAN", "Cybersecurity", "R", "fraud detection"]
["AI", "Aim", "fraud detection", "R", "Go", "Git", "GAN", "responsible AI", "deepfakes", "llm_models:Gemini"]
https://analyticsindiamag.com/ai-news-updates/google-opens-cybersecurity-hub-in-hyderabad-to-strengthen-indias-digital-safety-infra/
3
10
2
false
false
false
10,250
SQL Server 2016 release: What new does it have to offer?
Recently Microsoft announced its latest version of SQL Server – SQL Server 2016, the last one being in 2014. This new version boasts of taking the SQL experience to an altogether new level on account of the features introduced in it. As per Microsoft – SQL Server 2016 is the biggest leap forward in Microsoft data platform history with features that increase performance, simplify management, and transform your data into actionable insights. What is SQL Server? For beginners who are new to the analytics space, let’s first understand what SQL Server is and look back at its history. Microsoft SQL Server can be explained as a relational database management system (RDBMS). It is one of Microsoft’s flagship products for enterprises, and a frontrunner in the once heated-up RDBMS market. The first version of SQL Server dates back to the year 1989, and from then on 18 versions, including the recent one, have been launched. It is highly likely that a data scientist would come across data being stored in a SQL Server and would thus need to retrieve and process data out of it. SQL Server 2016 In its latest release, Microsoft has given importance to performance, security and cloud integration and accordingly added features to SQL Server 2016. The new server has a lot of built-in features, from advanced analytics to unparalleled in-memory performance. These features allow you to get real-time insights from your transactional and analytical database. Features of SQL Server 2016 Microsoft has introduced quite a few new features in its latest SQL Server 2016 version. Let’s have a closer look at each of these. Always Encrypted The feature ‘Always Encrypted’ has been designed to guard the data at rest or in motion without impacting database performance. This feature enables SQL Server to execute operations on encrypted data. Hence data which is stored in the SQL Server will be encrypted, thereby securing it from DBAs and administrators and making SQL Server one of the least vulnerable databases. Real-time Operational Analytics This feature allows you to perform real-time operational analytics by combining SQL Server’s two in-memory technologies, i.e. In-Memory OLTP and the in-memory columnstore. This feature aims to tune your system for ideal transactional performance, along with increasing your workload concurrency. It allows up to 30x faster transactions with in-memory OLTP (Online Transaction Processing). PolyBase With the growing importance of Big Data, this feature – PolyBase – addresses the technology gap between SQL Server and Hadoop. PolyBase is a technology which connects SQL Server and Hadoop. Basically, you can construct and run SQL queries over Hadoop data stores without having to work directly with HDFS or write MapReduce jobs. Stretch Database Stretch Database allows you to stretch your on-premise database to Azure, Microsoft’s cloud computing platform. In simple terms, Stretch Database keeps your frequently accessed data on-premise and moves your cold or occasionally accessed data to the cloud. This reduces cost for the organization as well as allows it to have high-performance applications. However, it is important on the part of Microsoft to bifurcate the data correctly so that performance does not get affected. End-to-end mobile BI (Business Intelligence) on any device This is a built-in feature in SQL Server 2016. It allows you to transform your data into actionable insights, and these insights can be delivered on any device, whether online or offline. 
There are 250+ built-in analytical functions with modern data visualization techniques. In-Database Analytics SQL Server 2016 has R built into it, allowing the user to conduct analytics on operational data using R. This feature enables real-time operational analytics using R without moving the data for analysis. Enhanced In-Memory OLTP In-Memory OLTP (Online Transaction Processing) was first introduced with SQL Server 2014. With the latest version, SQL Server 2016, Microsoft aims at extending the functionality to more applications. This will also increase concurrency. With all these built-in features, Microsoft’s SQL Server 2016 definitely has lots to offer to its customers. Also, Microsoft’s offer of free database licenses to users of Oracle Database is a move to win over Oracle customers and enhance the SQL Server customer base. But whether all this will do the magic and make SQL Server 2016 a success is altogether a new topic for discussion.
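To make the In-Database Analytics feature concrete: SQL Server 2016 exposes in-database R through the sp_execute_external_script stored procedure, available once R Services is installed and external scripts are enabled on the instance. The sketch below calls it from Python via pyodbc so the R computation runs inside the database engine rather than on the client; the driver name, server, credentials and table are placeholders and will differ in any real deployment.

```python
import pyodbc

# Placeholder connection details; requires SQL Server 2016+ with R Services
# installed and the 'external scripts enabled' server option set to 1.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=SalesDB;UID=analyst;PWD=***"
)

# Run an R script inside the database engine -- the data never leaves SQL Server.
sql = """
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_amount = mean(InputDataSet$amount))',
    @input_data_1 = N'SELECT amount FROM dbo.Orders'
WITH RESULT SETS ((avg_amount FLOAT));
"""
cursor = conn.cursor()
cursor.execute(sql)
print(cursor.fetchone().avg_amount)
```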
Recently Microsoft announced its latest version of SQL Server – SQL Server 2016, last one being in 2014. This new version boasts of taking the SQL experience to an altogether new level on account of the features introduced in it. As per Microsoft – SQL Server 2016 is the biggest leap forward in Microsoft data […]
[]
["SQL"]
Manisha Salecha
2016-06-22T13:56:12
2016
702
["big data", "business intelligence", "cloud computing", "AI", "R", "Aim", "analytics", "SQL", "GAN", "Azure"]
["AI", "analytics", "Aim", "cloud computing", "Azure", "R", "SQL", "big data", "GAN", "business intelligence"]
https://analyticsindiamag.com/ai-features/sql-server-2016-release-new-offer/
3
10
3
true
false
false
10,137,223
MathCo Opens its Global Delivery and Intelligence Centre in Bengaluru
MathCo recently announced the opening of its new 200,000-square-foot Global Delivery and Intelligence Centre located at IWF Campus in Bengaluru. This facility will serve as MathCo’s global headquarters housing its workforce that builds and delivers AI-powered solutions to Fortune 500 clients. The office will host the first-ever MathCo Experience Centre, which is designed to accommodate a growing workforce, and with a strong focus on fostering a symbiotic work environment that drives excellence across all its services like data science, engineering, custom products and GenAI-led solutions. The MathCo Experience Centre, a central feature of the new office, is a space where clients can interact with MathCo’s solutions, and jointly innovate. This centre provides firsthand experience of all the solutions built on NucliOS, MathCo’s proprietary AI engine. This space will be the centre of MathCo’s efforts in developing expertise in GenAI and other emerging technologies. “We always wanted to build a large analytics and AI ecosystem that brings together data scientists, engineers, designers and product specialists all under one roof – creating the perfect environment to cross-learn, collaborate and co-innovate. Now, this is where all the magic will happen! We look forward to bringing our clients – showcase our way of working, best in class solutions and this facility will allow us to demonstrate it at our full scale,” Aditya Kumbakonam, co-founder and COO at MathCo, said.
This centre provides firsthand experience of all the solutions built on NucliOS, MathCo’s proprietary AI engine.
["AI News"]
["Bengaluru", "GCC", "themathcompany"]
Pritam Bordoloi
2024-10-01T17:25:21
2024
225
["data science", "GenAI", "GCC", "programming_languages:R", "AI", "themathcompany", "analytics", "Bengaluru", "Nuclio", "R"]
["AI", "data science", "analytics", "GenAI", "Nuclio", "R", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/mathco-opens-its-global-delivery-and-intelligence-centre-in-bengaluru/
2
7
0
true
false
false
1,557
HPE and Tata Communications to build world’s largest IoT network in India that enhances resource utilization
HPE, which is demonstrating a variety of smart city IoT use cases on its HPE Universal IoT Platform at Mobile World Congress 2017, announced its association with Tata Communications, leading provider of A New World of Communications™, to support the roll out of India’s first LoRaWAN™ (LoRa) based network. LoRa network, the plans of which were unveiled by the company last year, is a part of Tata Communications’ long-term strategy of creating mobile platforms and ecosystems that enables its customers and partners to connect people and IoT-connected devices seamlessly on a global scale. As a part of the deal, the first phase of the roll-out would target Tier 1, 2, 3 and 4 cities in India, touching over 400 million people. The successful field trials of the same have been done in cities like Mumbai, Delhi and Bangalore. Apart from this, there are 35 proof-of-concept applications in trial on the network. This association between two of the leading companies in the world comes as a new era in the field of connected devices where the devices are enabled with embedded connectivity for enterprise-customer solutions throughout the country. Serving more than 2000 communities and covering 400 million people, it would be the first of its kind initiative in India that plans to connect devices, applications and other IoT solutions over LoRa network in facilities like campus, utilities, fleet management, security, smart buildings and healthcare. “As part of our commitment to innovation and in driving digital transformation globally, we are creating a cohesive, resilient and highly secure network to deploy IoT applications in India. We are excited to be partnering with HPE in this project as this platform is critical to amalgamating all the complex variables in enabling a truly digital India”, said Anthony Bartolo, president, Mobility, IoT and Collaboration Services, Tata Communications. Facilitating features like streamlining interoperability and management of heterogeneous IoT devices & applications that power intelligent edge, the HPE universal IoT platform is designed for massive scale, multi-vendor and multi-network support that uses the oneM2M interoperability standard. According to the company statement, the platform supports long-range, low-power connectivity deployments, as well as devices that use cellular, radio, Wi-Fi and Bluetooth. “Through our partner centric approach, the HPE Universal IoT platform will enable Tata Communications to build multiple vertical use cases for its Indian IoT network on a common platform with a common data model,” said David Sliter, vice president and general manager, Communications Solutions Business, HPE Apart from this, HPE would work closely with Tata Communications as its integral part of global cellular IoT connectivity services. This would provide a range of domestic and cross-border IoT connectivity and management services, particularly for applications requiring elements of mobility, such as connected cars.
HPE, which is demonstrating a variety of smart city IoT use cases on its HPE Universal IoT Platform at Mobile World Congress 2017, announced its association with Tata Communications, leading provider of A New World of Communications™, to support the roll out of India’s first LoRaWAN™ (LoRa) based network. LoRa network, the plans of which […]
["AI News"]
["iot platform india"]
Srishti Deoras
2017-02-28T13:10:49
2017
452
["iot platform india", "programming_languages:R", "AI", "innovation", "ML", "digital transformation", "Git", "ViT", "R"]
["AI", "ML", "R", "Git", "ViT", "digital transformation", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/hpe-tata-communications-build-worlds-largest-iot-network-india-enhances-resource-utilization/
2
8
2
false
false
false
10,058,658
How is image segmentation done using Image-Level Supervision?
To achieve better performance, deep neural network-based semantic segmentation typically requires large-scale, cost-intensive annotations for training. Some researchers have recently attempted to use object-level labels (e.g. bounding boxes) or image-level labels (e.g. image categories) to avoid the pixel-wise segmentation annotations required by most methods. So, in this article, we will talk about how to segment images using the image-level supervision approach. Below are the major points to be discussed in this article. Table of contents
Semantic segmentation
What is instance segmentation?
Types of supervision for segmentation
Working methods
Let’s start the discussion by understanding semantic segmentation. Semantic segmentation Semantic image segmentation is the problem of assigning an image’s pixels to a predefined set of labels based on the semantic structure to which the pixel belongs. For computing the probability distribution over the classes for each pixel, most successful models for semantic image segmentation generally use a variation of a CNN. During inference, these distributions are fed as unary potentials to fully connected conditional random fields (CRF) with Gaussian edge potentials. The CRF is used to infer a joint labelling for the image’s pixels. Conditional random fields (CRFs) are a statistical modelling tool used for structured prediction in pattern recognition and image processing. Successful semantic image segmentation necessitates access to a large number of densely labelled images. Dense labelling of images, on the other hand, is an expensive and time-consuming process. As a result, the number of densely labelled images available is typically a negligible proportion of the total set of images, and models that rely solely on densely labelled images have a limited scope. In what follows, these models will be referred to as fully supervised models. Due to the limitations of fully supervised models, models that can incorporate weakly labelled images for training have been developed. These include models that use a bounding box prior, a small number of points per class and image-level labels. Models that rely solely on image-level labels are of particular interest, as the web provides an almost limitless supply of weakly annotated images. In the following sections, we’ll look at some recently proposed models that learn to generate segmentation masks from image-level labels alone, without the help of localization cues or saliency masks. Before that, we’ll go over instance segmentation and the different types of supervision for segmentation, as they’re both relevant. What is instance segmentation? One of the most difficult tasks in computer vision is instance segmentation. However, obtaining the per-pixel labels required by most instance segmentation methods is time-consuming and expensive. Current approaches to overcoming this issue rely on weaker labels (such as image-level labels) and pseudo labels obtained through object proposal methods. While the majority of weakly supervised methods target object detection and semantic segmentation, the task here is to categorize each object pixel and distinguish between object instances. Most recent methods rely on deep networks and work in two steps, first detecting objects and then segmenting them. Mask R-CNN, for example, employs Faster R-CNN for detection and an FCN network for segmentation. 
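Since Mask R-CNN keeps coming up as the fully supervised reference point, here is a minimal inference sketch using torchvision's pretrained model, which was trained on COCO with full per-pixel masks. The image path is a placeholder, and depending on the installed torchvision version the weights argument may need to be pretrained=True instead of weights="DEFAULT".

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained, fully supervised Mask R-CNN (COCO weights with per-pixel masks).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")   # placeholder image path
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep confident detections; each has a box, a class label and a soft mask.
keep = prediction["scores"] > 0.7
boxes = prediction["boxes"][keep]
labels = prediction["labels"][keep]
masks = (prediction["masks"][keep] > 0.5).squeeze(1)   # binary instance masks
print(f"{len(boxes)} instances detected")
```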
Types of supervision for segmentation Weak supervision Because obtaining per-pixel labels is time-consuming, many weakly supervised methods have emerged that can use labels that are much cheaper to obtain. Bounding boxes, scribbles, points, and image-level annotations are all examples of such labels. The dataset in the weakly-supervised setting, on the other hand, consists of images and associated annotations that are relatively easy to obtain, such as tags/labels of objects in the image. Image-level labels as weak supervision Because of its low cost, acquiring image-level labels is an appealing form of annotation. The annotator only needs to say whether or not a particular object class appears in an image, not how many of them there are. While this type of annotation is gaining popularity in academia, the majority of the proposed methods are for semantic segmentation. Only recently did a few works for this problem setup surface. Using the class activation map (CAM), one can identify not only a heatmap that roughly represents the regions where objects are located but also peaks on that heatmap that represent the locations of different objects. Working methods In this section, we’ll briefly describe two image segmentation models based on image-level supervision. Segmentation by pseudo labels This method, proposed by Issam H. Laradji et al., can effectively train with image-level labels, which are much less expensive to obtain. Fundamentally, the Weakly-supervised Instance SEgmentation method (WISE) builds on peak response maps (PRM) by training a fully supervised method, Mask R-CNN, on its output pseudo masks. This procedure is effective because Mask R-CNN is potentially robust to noisy pseudo masks, and the noisy labels within these masks may be ignored during training because they are largely uncorrelated. Below is the architecture of this method when it is being trained. The first component (shown in blue above) learns to classify the images in the dataset. The classifier generates a class activation map (CAM) first and then uses a peak stimulation layer (PSL) to obtain the CAM’s local maxima. The classification loss is computed using the average of these local maxima to train the classifier. Because the CAM peaks represent located objects, the method chooses a proposal for each of these objects in order to generate pseudo masks. The second component (shown in green) uses these pseudo masks to train a Mask R-CNN. To summarize, this approach to instance segmentation with image-level supervision consists of two major steps: (1) obtain pseudo masks for the training images based on their ground-truth image-level labels; and (2) train a fully supervised instance segmentation method on these pseudo masks (shown in the above figure). This framework is built around two components: a network that generates pseudo masks by training a PRM on image-level labels and leveraging object proposal methods, and Mask R-CNN, a fully supervised instance segmentation method. Segmentation by pixel label estimator This model, proposed by Gaurav Pandey et al., learns to generate segmentation masks from image-level labels alone, without the use of localization cues or saliency masks. A pixel-label loss as well as a neighbourhood loss is applied to the output of a CNN. Because real pixel labels are unavailable, the CNN output is mapped to auxiliary pixel labels to obtain an approximate segmentation mask. 
The neighbourhood loss enforces the constraints imposed by the conditional random field on the CNN output, forcing it to generate crisp segmentation masks that align with the object’s boundary. Below is the architecture of this model. As shown above, a fully convolutional network is used to generate a distribution over segmentation masks p(z|x) from the input image. To generate q_aux(z|x), the pixel-label estimator incorporates image-label information into the distribution. It forces the segmentation network’s output to be close to this updated distribution. At the same time, the neighbourhood loss forces the segmentation network’s output to be close to the distribution computed from its neighbours. The procedure can be elaborated upon. A segmentation network is fed an image, and the output is a distribution over the labels for each pixel location, p(z|x). This distribution is known as the predicted distribution because it is the only one that will be required during inference. To make certain that the predicted distribution is a valid segmentation mask for the input image, the method imposes a number of losses on the predicted distribution. The pixel-label estimator, in particular, incorporates image-label information into the predicted distribution to generate a distribution over pixel-level labels, q_aux. Because the true pixel-level labels are not available, this distribution can be thought of as an auxiliary ground truth. The auxiliary ground truth is used to train the segmentation network. Next, the neighbourhood estimator computes a smooth version of the output distribution by averaging the output of the neighbours for each location. Final words Through this post, we have discussed image segmentation, covering semantic segmentation, instance segmentation and the major types of supervision used when performing segmentation tasks. Lastly, we discussed two methods of image segmentation based on image-level supervision. The first method employs a two-stage pipeline for training with image-level labels. It uses class activation maps with a peak stimulation layer in the first stage. In the second stage, Mask R-CNN is trained on the pseudo masks in a fully supervised fashion. The second model performs weakly supervised semantic image segmentation from image-level labels alone. References
Object Counting and Instance Segmentation with Image-level Supervision
Instance Segmentation with Image-level Supervision
Learning to segment with image-level supervision
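To ground the first stage of the WISE-style pipeline described above, obtaining a class activation map from an image-level classifier and reading off its peaks, here is a minimal CAM sketch in Python. It substitutes an ImageNet-pretrained ResNet-18 for the paper's classifier, omits the peak stimulation layer in favour of a single argmax peak, and uses a placeholder image path, so it illustrates the idea rather than reproducing the method; as before, older torchvision versions may need pretrained=True instead of weights="DEFAULT".

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image

# A pretrained classifier stands in for the image-level-label classifier in the
# pipeline above; ImageNet weights are used purely for illustration.
model = torchvision.models.resnet18(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)  # placeholder image

# Grab the last convolutional feature map with a forward hook.
feature_maps = {}
model.layer4.register_forward_hook(lambda m, i, o: feature_maps.update(out=o))

with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM = class-specific weighted sum of the final feature maps.
fmap = feature_maps["out"][0]                    # (C, h, w)
weights = model.fc.weight[cls].detach()          # (C,)
cam = torch.einsum("c,chw->hw", weights, fmap)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# The brightest location is a crude "peak" marking one object instance.
peak = divmod(cam.argmax().item(), cam.shape[1])
print("predicted class:", cls, "peak at (row, col):", peak)
```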
In this article, we will talk about how to segment images at the image level using the image-level supervision approach.
["AI Trends"]
["Deep Learning", "image processing", "image segmentation", "Machine Learning", "Object Detection", "semantic segmentation"]
Vijaysinh Lendave
2022-01-19T11:00:00
2022
1,421
["Go", "TPU", "image segmentation", "AI", "image processing", "neural network", "R-CNN", "Machine Learning", "computer vision", "semantic segmentation", "RAG", "object detection", "Object Detection", "Deep Learning", "CNN", "R"]
["AI", "neural network", "computer vision", "RAG", "object detection", "TPU", "R", "Go", "CNN", "R-CNN"]
https://analyticsindiamag.com/ai-trends/how-is-image-segmentation-done-using-image-level-supervision/
3
10
0
true
true
false
10,065,436
OneWeb collaborates with New Space India to complete satellite launch
OneWeb, the low Earth orbit (LEO) satellite communications company, announced that it has entered into an agreement with New Space India Limited, the commercial arm of the Indian Space Research Organisation (ISRO), to help ensure OneWeb completes its satellite launch programme. OneWeb remains on track to develop its satellite constellation network, delivering industry-grade secure connectivity. The first launch is expected in 2022 from the Satish Dhawan Space Centre, Sriharikota. The launches will add to OneWeb’s total in-orbit constellation of 428 satellites, which represents 66 per cent of the planned total fleet, building toward a global network that will deliver high-speed, low-latency connectivity. Sunil Bharti Mittal, OneWeb Executive Chairman, said, “This is a historic day for collaboration in space due to the shared ambition and vision of New Space India and OneWeb. The recent agreement on launch plans adds momentum to the development of OneWeb’s network, as we work across the space industry toward our common goal of connecting communities globally.” This launch contract follows a separate agreement between OneWeb and SpaceX, announced in March 2022, to enable the company to resume satellite launches. OneWeb has activated service with its network at the 50th parallel as the demand for the company’s broadband connectivity services continues to grow.
Historic contract includes launches of OneWeb satellites from the Satish Dhawan Space Centre, Sriharikota
["AI News"]
["ISRO"]
Poornima Nataraj
2022-04-21T15:16:45
2022
208
["Go", "ISRO", "programming_languages:R", "AI", "programming_languages:Go", "ViT", "GAN", "R"]
["AI", "R", "Go", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/oneweb-collaborates-with-new-space-india-to-complete-satellite-launch/
2
7
2
false
false
false
10,056,956
Did McDonald’s Make A Mistake By Investing In Tech Companies?
Mastercard and McDonald’s announced an agreement for Mastercard to acquire McDonald’s AI company, Dynamic Yield. The terms of the deal were not disclosed, but the transaction is set to close in the first half of 2022. For Mastercard, Dynamic Yield’s technology is an addition to its existing suite of services that help brands deliver trusted customer experiences effectively across channels. Though the QSR has sold the company, it will continue working with Dynamic Yield and Mastercard on digital initiatives. This sale actually comes after McDonald’s was considering a partial sale of Dynamic Yield earlier this year to offload part of the business. McDonald’s had purchased the company in 2019 for a whopping $300 million and, within less than three years, agreed to sell it. Interestingly, with the purchase of Dynamic Yield, McDonald’s was even described as ‘becoming a tech company.’ But the sale is not shocking. In October this year, McDonald’s entered into an agreement with IBM to sell its McD Tech Labs for an undisclosed amount. McDonald’s had bought this company, too, in 2019. McDonald’s tech-first approach In 2019, the QSR invested a huge amount in acquiring tech companies. It acquired Dynamic Yield for $300 million. The company was a leader in personalisation and decision logic technology, which guided customers through the purchase process and suggested items they might want to add to their order. This was applied to drive-thru locations, ordering kiosks, and mobile apps to maximise sales. McDonald’s also entered into an agreement with Apprente, an early-stage leader in voice-based conversational technology, to provide customers with personalised experiences. The company had tested Apprente’s technology in some select locations and created voice-activated drive-thrus for faster, simpler and more accurate order taking. With this acquisition, the company created a Silicon Valley-based group called McD Tech Labs. The Apprente team formed the group’s founding members. McDonald’s also said that it would hire more engineers, data scientists and other tech experts to expand the team. According to statements by the company management, these investments in emerging technologies were to give McDonald’s additional insights that rivals would not have access to. The fast-food giant was at the forefront of many emerging restaurant technologies, with self-order kiosks and drive-thru tech, and it aimed to leverage technology to improve the customer experience and drive sales. McDonald’s also invested USD 3.7 million in mobile app developer Plexure. Plexure has been powering a version of McDonald’s Global Mobile App in 48 countries outside the US. Steve Easterbrook, President and Chief Executive Officer of McDonald’s, said during the announcement of this investment, “Across all of our markets, we’re using technology to elevate and transform the McDonald’s customer experience.” All these purchases were part of the $1 billion McDonald’s spent on upgrades in 2019. As the pandemic hit, McDonald’s found itself well-positioned to build on its prior digital innovations and create a user experience that was well-suited to pandemic constraints. McDonald’s technology innovations gave customers ways to securely pay and personalise orders. Digital sales exceeded USD 10 billion, or nearly 20% of systemwide sales, in 2020, across the top six markets. Then why sell the companies? Under McDonald’s ownership, Dynamic Yield doubled its revenue and also expanded its customer base across verticals. 
The statement on the agreement with Mastercard said, “The acquisition by Mastercard will strengthen unique, existing synergies across McDonald’s digital engagement experiences, currently powered by SessionM and Test & Learn. In addition, McDonald’s plans to further scale and integrate Dynamic Yield’s capabilities globally and across ordering channels.” The statement on the agreement to sell McD Tech Labs to IBM also mentioned, “McDonald’s development and testing of Automated Order Taking (AOT) technology in restaurants has shown substantial benefits to customers and the restaurant crew experience… AOT will continue to be integrated into McDonald’s highly secure technology ecosystem.” The deals involve selling off the tech companies, but not parting ways with them. This reflects the fast-food giant’s larger effort to outsource its technology rather than owning and operating it. During the Q3 2021 earnings call, McDonald’s CEO Chris Kempczinski was quoted saying, “There are certain times when it may make sense for us to go acquire a technology so that we can accelerate the development of that, make sure that it is bespoke to McDonald’s needs. But at some point, that technology reaches a level of development where I think getting it to a partner who can then blow it out and scale it globally makes more sense.” Wrapping up The sale of the technology acquisitions could also be a move to ease tensions between the corporate team and the franchisees. While the corporate team makes all the buying and selling decisions at McDonald’s, it is also being said that the franchise owners had put pressure on the giant to reduce technology fees. In July this year, McDonald’s announced that it was cutting the technology fees that it planned to charge US franchisees by 62%. The fast-food giant is one of the oldest QSRs and has weathered the pandemic too. The outcome of these decisions should become clearer in the months to come.
Since 2019, McDonald’s made three major investments in technology companies, and before the end of 2021, it has already sold two. Were the decisions wrong?
["IT Services"]
["IBM", "Mastercard", "Mobile App"]
Meeta Ramnani
2021-12-23T15:52:22
2021
846
["Go", "Mobile App", "Mastercard", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "Git", "RAG", "IBM", "Rust", "R", "programming_languages:Rust"]
["AI", "RAG", "R", "Go", "Rust", "Git", "innovation", "programming_languages:R", "programming_languages:Go", "programming_languages:Rust"]
https://analyticsindiamag.com/it-services/did-mcdonalds-make-a-mistake-by-investing-in-tech-companies/
3
10
2
false
false
false
10,020,364
SKILLUP, India’s Biggest Data Science Education Fair
SKILLUP 2021, India’s largest and first-of-its-kind virtual education fair exclusively for Data Science and AI enthusiasts, is scheduled to be held on 22 & 23 April, 2021. Brought to you by Analytics India Magazine, SKILLUP will host more than 20 universities and institutes showcasing their courses and certificate programs. The two-day virtual fair is expected to attract over 3,000 prospective learners and professionals looking to upskill and get certification in the field of analytics and data science. In the post-COVID world, organisations and businesses need to adopt newer technologies to compete and stay relevant. With AI becoming a mainstream technology, the demand for certified professionals in AI and data science will witness an upward spiral. Higher education institutes and universities need to offer robust professional programs and certification in data science & AI to meet the surge in demand. The attendees of SKILLUP 2021 will get to interact with the representatives of several institutes, colleges and universities in a virtual setting. It will also host informative sessions on various analytics and data science certificates and graduate programs offered by premier institutes, foreign universities and leading industry players. The edu-fair offers actionable insights into program features, benefits, curriculum, faculty credentials, capstone projects, job opportunities, placement assistance, and more. Registration for the limited seats at SKILLUP 2021 is free and open now. To register, follow this link. SKILLUP 2021 will showcase various courses and certification programs such as Master of Business Administration, Post Graduate Program, Post Graduate Certificate, PG Diploma, Master of Technology/Science, Bachelor of Technology/Science in Business Analytics, Data Analytics, Data Engineering, Artificial Intelligence, Machine Learning, Cloud Computing, Data Security, Biostatistics and more. The edu-fair will also host more than 20 hours of information sessions, virtual exhibit booths, knowledge talks, data science workshops, quizzes and contests for comprehensive engagement and interaction with the attendees. Key Highlights: Meet and speak with leading data science universities & colleges; Connect virtually with the experts and have one-on-one interactions with top data science education providers; Discover opportunities in the data science field by pursuing certifications, part-time and full-time courses; Learn about scholarships, placement opportunities and success stories of the alumni; Get a detailed guide on the courses offered by various data science institutes and apply for top programmes in data science, AI and analytics that best suit your needs; Get clarity on the data science career path; Attend talks, workshops and detailed sessions by the experts; Get advice from experienced experts; Network with fellow data science learners and professionals to explore the opportunities in the field. Know more about SKILLUP 2021 here.
SKILLUP 2021, India’s largest and first of its kind Virtual education fair exclusively for Data Science and AI enthusiasts, is scheduled to be held on 22 & 23 April, 2021. Brought to you by Analytics India Magazine, SKILLUP will host more than 20 universities and institutes showcasing their courses and certificate programs. The two-day virtual […]
["Deep Tech"]
["big data for data science", "data science curriculum", "data science master", "master data science", "Statistics for Data Science"]
Srishti Deoras
2021-02-17T10:00:00
2021
423
["data science", "artificial intelligence", "machine learning", "programming_languages:R", "AI", "cloud computing", "big data for data science", "data science curriculum", "data science master", "data engineering", "Statistics for Data Science", "master data science", "analytics", "GAN", "R"]
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "cloud computing", "R", "data engineering", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/deep-tech/skillup-indias-biggest-data-science-education-fair/
3
10
2
false
true
false
10,114,744
Tech Mahindra to Build LLM for Indonesia on Project Indus Principles
Tech Mahindra has teamed up with Indosat Ooredoo Hutchison to build ‘Garuda,’ a Large Language Model (LLM) to preserve Bahasa Indonesia, the official and national language of Indonesia and its dialects. Garuda will be built on the principles of Tech Mahindra’s indigenous LLM ‘Project Indus‘, a foundational model designed to converse in a multitude of Indic languages and dialects. The IT giant signed a Memorandum of Understanding (MoU) at Mobile World Congress (MWC) 2024. As part of this partnership, Tech Mahindra will leverage its technology expertise to gather and curate data in the Indonesian language, which will be pre-trained and released as a conversational model for Indosat. Garuda will be developed with 16 billion original Bahasa tokens, providing 1.2 billion parameters to shape the model’s understanding of the Bahasa language. These parameters will influence how the model processes input and formulates output. A beta version of the Garuda model will be released for testing by Indosat and Bahasa Indonesia speakers. The model will be further improved using RLHF (Reinforcement Learning from Human Feedback) techniques to ensure its robustness for conversation. Additionally, any specialized use cases will be developed using the LIMA (Less is More for Alignment) method. “The LLM market is expected to reach 40.8 billion USD by 2029. In this direction, the emergence of LLMs such as Garuda and Indus can enable people and enterprises to communicate online in their local dialects and languages, creating new opportunities in the digital world. “We believe that the model will significantly promote Indonesia’s linguistic diversity and unlock new business opportunities for enterprises in the region,” said Harshvendra Soin, President – Asia Pacific and Japan Business, Tech Mahindra.
Tech Mahindra will leverage its technology expertise to gather and curate data in the Indonesian language
["AI News"]
["LLMs", "project indus", "Tech Mahindra"]
Pritam Bordoloi
2024-02-29T12:41:46
2024
275
["Tech Mahindra", "TPU", "RLHF", "programming_languages:R", "AI", "LLMs", "Git", "RAG", "R", "project indus"]
["AI", "RAG", "RLHF", "TPU", "R", "Git", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/tech-mahindra-to-build-llm-for-indonesia-on-project-indus-principles/
2
7
3
false
false
false
65,013
Naxon Labs Launches Brain to Computer Interface Solution
Naxon Labs launched today Explorer: a cheap and useful tool and neurofeedback system for professionals in the fields of Engineering and Information Technology, Neuroscience, and Medicine. Using this technology, you can save time with automatic blink and artifact detection and display in real-time brain wave frequencies per channel or by average. The data captured can be downloaded for further analysis with tools like MATLAB, Brainstorm or EEG Lab. The device can be connected from a PC, a MAC or a tablet with Bluetooth. Explorer consists of an electroencephalography monitor adapted for portable EEG (Electroencephalography), in particular the Muse headset by Interaxon Inc. With Naxon Explorer you can visualize, record and analyze brain activity with wireless EEG technology. It incorporates features to organize projects, clients or participants, at the same time you can attach notes, synchronize events and change parameters during recordings. An upcoming update will integrate machine learning tools and automatic pattern analysis for analyzing EEG data and detect the presence of certain evoked potentials based on the events or stimuli marked in a session, and also training models based on input EEG data that can then be used for practical applications. “We want to open to the world the possibilities of researching the brain while betting on innovation on what the major current technology leaders agree is the 21st century next frontier: neurotechnology, an area that combines applied neuroscience, wearable technology, BCI, Cybernetics, biosensor development, AI and machine learning” said the cognitive neuroscientist Leandro Castelluccio, MSc, Naxon Labs’ CEO and Co-Founder. Through Chevening Scholarships, Leandro got his master in Cognitive Neuroscience in the University of Sussex, a leading research-intensive university located in Brighton, United Kingdom, where he got a lot of interest in leveraging information technology tools to have a better understanding of brain activity in patients. Currently Leandro also develops research activities at the Psychology School at Universidad Católica del Uruguay where he got his bachelor’s degree in psychology. Under the framework of Brain-Computer Interfaces, Naxon Labs is a company that works with portable EEG technology for the development of practical tools and innovative applications as well as mind-controlled hardware and software technology. Explorer comes as a first release as the company works in Emotions, a second platform consisting of an emotion monitoring system that translates brain information into objective visual markers of states such as anxiety, relaxation, concentration, joy or sadness, among others.
Naxon Labs launched today Explorer: a cheap and useful tool and neurofeedback system for professionals in the fields of Engineering and Information Technology, Neuroscience, and Medicine. Using this technology, you can save time with automatic blink and artifact detection and display in real-time brain wave frequencies per channel or by average. The data captured can […]
["AI News"]
[]
Vishal Chawla
2020-05-12T06:53:49
2020
396
["Go", "machine learning", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "RAG", "ViT", "GAN", "R"]
["AI", "machine learning", "RAG", "R", "Go", "GAN", "ViT", "innovation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/naxon-labs-launches-brain-to-computer-interface-solution/
2
10
0
false
false
true
10,162,310
MedMitra AI Secures ₹3 Crore Funding to Transform Healthcare Delivery with AI
In a major boost to India’s health-tech landscape, MedMitra AI, a platform that uses artificial intelligence (AI) for healthcare, has raised ₹3 crore in a pre-seed funding round. The investment was co-led by venture capital firms All In Capital and WEH Ventures, alongside angel investors Rohan Khandelwal, Pawan Gupta, and Venkat Subramanyam. This funding marks a crucial step in MedMitra AI’s mission to transform healthcare delivery with advanced AI solutions. At the forefront of innovation, MedMitra AI is developing autonomous AI agents designed to support healthcare professionals in diagnosis, treatment, and prognosis. By integrating multimodal data such as patient history, lab reports, prescriptions, and imaging, the platform enables precise, efficient, and personalised care. Currently, MedMitra AI is focusing on general medicine and chronic care, and it further aims to enhance clinical outcomes while addressing systemic inefficiencies in India’s healthcare ecosystem. “Our mission is to create AI-driven solutions that seamlessly integrate into doctors’ workflows while prioritising reliability and clinical relevance,” said Shivangi Sharma, co-founder of MedMitra AI, and an AI graduate from Northwestern University. The newly raised funds will be directed towards expanding the team, accelerating product development, and strengthening MedMitra AI’s market presence. The company also plans to launch a specialised version of its platform for medical students, aiming to redefine medical education by enhancing learning and preparing students for real-world challenges. Highlighting the investment’s potential, Kushal Bhagia, founder of All In Capital, said, “MedMitra AI is a game-changer, streamlining diagnostics to deliver faster, more accurate outcomes while extending high-quality care to underserved communities.” With the Indian AI in healthcare market projected to reach $8.73 billion by 2030, growing at a staggering CAGR of 41.8%, MedMitra AI is strategically positioned to lead this transformation. Fueled by technological advancements, rising healthcare expenditure, and government initiatives like the India AI Mission, the company’s vision aligns with the nation’s push for deep-tech innovation.
MedMitra AI is developing autonomous AI agents designed to support healthcare professionals in diagnosis, treatment, and prognosis.
["AI News"]
["AI (Artificial Intelligence)", "Healthcare"]
Vidyashree Srinivas
2025-01-28T13:57:29
2025
310
["Go", "API", "funding", "artificial intelligence", "AI", "R", "ML", "innovation", "venture capital", "Aim", "Healthcare", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "ML", "Aim", "R", "Go", "API", "innovation", "venture capital", "funding"]
https://analyticsindiamag.com/ai-news-updates/medmitra-ai-secures-%e2%82%b93-crore-funding-to-transform-healthcare-delivery-with-ai/
2
10
1
false
false
false
10,143,286
Linux Foundation Expands Global Footprint with Strategic India Launch
Linux Foundation marked a significant milestone in the open-source ecosystem with the launch of LF India, a strategic initiative aimed at fostering open collaboration and innovation in the emerging technology markets. The company announced the move at KubeCon + CloudNativeCon India in New Delhi. India’s developer ecosystem, with 9.5 million developers, is projected to become the world’s largest by 2028. The foundation’s expansion builds upon an impressive foundation, with nearly 200,000 Indian developers already contributing to Linux Foundation projects. India’s growing influence in the open source community is evident, with the country being the fourth-largest contributor to Cloud Native Computing Foundation projects and the third-largest contributor to Kubernetes, accounting for 18.7% of global open source commits. This strategic move has attracted significant industry support, with Infosys, one of India’s leading IT services companies, announcing an “open source first strategy.” The timing aligns with India’s projected IT spending trajectory, expected to reach $124.6 billion in 2025, with recent surveys indicating that 78% of Indian enterprises now prioritise open-source solutions for their digital transformation initiatives. The initiative has also garnered international backing, notably from the US Department of Defense, which views this collaboration as crucial for developing secure 5G and 6G technology ecosystems. This international partnership gains additional significance as India’s government has ramped up its open-source adoption, with over 85% of government projects now leveraging open-source platforms. The foundation’s partnership with the International Startup Foundation (ISF) addresses the growing demand for specialised skills, responding to a 42% year-over-year increase in demand for open-source expertise in India. This educational initiative aims to bridge the skills gap through advanced training and certification programs in Linux and related technologies.
India’s developer ecosystem, with 9.5 million developers, is projected to become the world’s largest by 2028.
["AI News"]
["Linux", "Open Source AI"]
Sagar Sharma
2024-12-11T15:40:58
2024
275
["Go", "AI", "innovation", "digital transformation", "Git", "RAG", "Open Source AI", "Aim", "Linux", "R", "kubernetes", "startup"]
["AI", "Aim", "RAG", "kubernetes", "R", "Go", "Git", "digital transformation", "innovation", "startup"]
https://analyticsindiamag.com/ai-news-updates/linux-foundation-expands-global-footprint-with-strategic-india-launch/
3
10
4
false
false
false
10,094,080
Odisha Goes All In On AI, Launches AI For Youth
Earlier today, Odisha chief minister Naveen Patnaik announced the ‘Odisha for Artificial Intelligence’ and ‘Artificial Intelligence for Youth’ initiative. Conducted in conjunction with top chipmaker Intel, this initiative will take place in 3 cities across the state, namely Bhubaneswar, Puri, and Cuttack. The Indian government has been going big on AI. The Budget for 2023 was heavily focused on AI solutions for uniquely Indian problems, with the government setting up 3 centres of excellence to improve knowledge on AI. Towards this end, many states have been taking up AI initiatives of their own to upskill the population. These centres of excellence will not only create a robust research environment towards AI, but also ensure a steady stream of AI professionals in the job market. With the tagline ‘Make AI in India and Make AI work for India’, the government is heavily incentivising the creation of an AI ecosystem in the country. Towards this end, Mr. Patnaik stated, “[The initiative] will also create an ecosystem fostering research, innovation, and application across sectors.” As part of this undertaking, Odisha is offering AI courses to 2 sets of people. Firstly, the Odisha for AI initiative is a free 4-hour course conducted by Intel and made available on the Odisha for AI website. The program is split into two sections, titled AI Aware and AI Appreciate. The AI Aware program covers basic topics like what AI is and differentiating between AI and non-AI machines. On the other hand, the AI Appreciate course describes the different domains of AI and their impact on various industries, along with a primer on AI ethics and responsible AI. These courses can be completed in about 4 hours, and are available for all online. It is targeted at demographics ranging from students, stay at home parents, professionals, and even senior citizens. Once users finish the quiz at the end of the course, they will be granted a badge that can be shared on social media. AI For Youth, on the other hand, targets children in school. This program  will take place in around 2000 schools enabled by Odisha’s 5T (team work, technology, transparency, transformation, time limit) initiative. This will also be offered in Odisha Adarsha Vidyalayas. According to the CM, this will unlock the limitless potential of our youth and build a future where AI serves as a tool for the empowerment of citizens. Meanwhile, it seems that other states are falling behind. Karnataka, often hailed as the technological capital of India, has signed various MoUs and partnerships to bring AI to various fields like pollution control, supply chain optimisations, and digital agriculture, but nothing has come from these partnerships.
Odisha is setting an example for states like Karnataka and Telengana, which are lagging behind.
["AI News"]
[]
Anirudh VK
2023-05-30T11:13:05
2023
441
["Go", "API", "artificial intelligence", "programming_languages:R", "AI", "innovation", "Git", "AI ethics", "responsible AI", "R"]
["AI", "artificial intelligence", "R", "Go", "Git", "API", "AI ethics", "responsible AI", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/odisha-all-in-on-ai-launches-ai-for-youth/
2
10
1
false
false
false
10,093,512
Council Post: Shaping Tomorrow – The Transformative Potential Of Quantum Machine Learning
Introduction: Understanding Quantum Machine Learning Quantum Machine Learning (QML) is an emerging field combining two revolutionary technologies: quantum computing and machine learning. This intersection can revolutionize artificial intelligence, computing, and data analysis by harnessing the unique properties of quantum mechanics. The principles of quantum computing and traditional machine learning are combined in QML and can enable unparalleled computational power and problem-solving capabilities. QML leverages quantum bits (qubits) to represent and process data, exploiting quantum superposition, entanglement, and interference to explore multiple solutions simultaneously. Quantum superposition allows qubits to exist in various states simultaneously ( 0, 1, or both), while entanglement creates strong correlations between qubits, even when separated by large distances. Quantum interference is critical in designing and implementing quantum algorithms for machine learning tasks. Though the field is still developing, and many applications are in their infancy, QML holds great promise for overcoming current limitations in classical machine learning. The future of Quantum Machine Learning is certainly promising, but what exactly does it hold for us? Envisioning the Future of Quantum Machine Learning Key areas set to benefit from Quantum Machine Learning (QML) include personalized medicine, drug discovery, logistics optimization, materials science, artificial intelligence, cryptography, and secure communications. By enabling more accurate modeling and prediction, QML can redefine its competitive advantage, alter commercial operating models, and reshape entire sectors. However, realizing the full potential of QML depends on overcoming challenges such as developing more advanced quantum hardware and efficient algorithms tailored for specific applications. Organizations that adopt these emerging technologies can drive innovation, create value, make data-driven decisions that are not possible with traditional computing, and tackle complex global challenges like climate change and resource scarcity. A significant point to consider is that the learning curve of quantum computing is steep. Consequently, a delayed adoption strategy may become risky, emphasizing the importance of gaining a significant edge over rivals. The benefits of QML are numerous, especially when considering its potential applications and role in achieving sustainability goals. QML Applications and its Role in Achieving Sustainability Goals In healthcare, QML expedites drug discovery and personalized treatments. In finance, it can optimize trading algorithms and risk assessment. Moreover, QML contributes to the fight against climate change by enhancing renewable energy technologies, accelerating materials discovery, and optimizing resource management. The transformative potential of QML extends to various applications, including smart cities, traffic management, and supply chain optimization. One urgent challenge is tripling our energy storage to limit global warming to two degrees by 2050. QML, through its powerful computational abilities, could be crucial in designing and optimizing next-generation technologies, such as more potent, durable, and affordable energy storage systems. These advancements can drive market share gains and higher profits for forward-thinking businesses. 
QML’s ability to concurrently run many simulations facilitates quick testing, comparison, error correction, and deployment of goods or services, further catalyzing innovation across industries. To fully grasp the implications and potential applications of QML, we need to understand the quantum algorithms and techniques that power it. Quantum Algorithms and Techniques Quantum algorithms like Quantum Support Vector Machines (QSVM), Quantum Neural Networks (QNN), and Grover’s and Shor’s algorithms are central to the advancement of Quantum Machine Learning (QML). QSVM and QNN offer efficient data classification, pattern recognition, and optimization, outperforming traditional machine learning techniques. Separately, Grover’s algorithm, which accelerates unstructured search problems, and Shor’s algorithm, with its efficient factoring of large numbers and implications for cryptography (e.g., RSA), highlight the immense power of quantum computing and inspire new techniques in QML. Despite this progress, QML is still in its early stages. Continued research and development are needed to unlock its potential fully. This includes the creation of new algorithms tailored explicitly to diverse QML applications. Given the complexities and potential of QML, organizations must gear up to meet the challenges and seize the opportunities it offers. Gearing Up for Quantum Machine Learning Organizations must prioritize developing in-house quantum expertise, collaborating with quantum startups, partnering with quantum hardware providers, and creating quantum-ready software. Investment in research and development is also essential. It is crucial to foster a culture of innovation within these organizations. Promoting collaboration between quantum and classical ML experts will help harness the potential of quantum technology and gain a competitive advantage. Additionally, understanding the unique challenges and limitations of quantum computing is important. Issues such as qubit coherence and error rates present complexities in this emerging field. Gaining a firm grasp of these challenges will help organizations navigate and make significant strides in quantum machine learning. However, while gearing up for QML, organizations must also prepare to confront several challenges in this field. Unmasking the Challenges of Quantum Machine Learning Key challenges and limitations facing Quantum Machine Learning (QML) include hardware constraints, short qubit coherence times, error correction, and talent shortages. There is also a need for more practical, large-scale use cases. Addressing these challenges requires a multi-faceted approach. Investment in next-generation quantum hardware and quantum error correction codes is necessary. There is also a need for standardized tools, programming languages, and training and education programs. Moreover, developing efficient quantum algorithms tailored to specific applications is essential. Cybersecurity and privacy concerns present another challenge that must be addressed to ensure successful QML integration. Policymakers, researchers, and businesses must collaborate to create an enabling environment for developing and deploying QML. This collaboration fosters innovation while mitigating potential risks. Beyond these technical and practical challenges, ethical considerations also play a major role in widely adopting technologies like QML. 
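As a toy illustration of the quantum-search speed-up mentioned in the algorithms section above, the following statevector simulation runs a single Grover iteration over a 2-qubit search space (4 items). It uses plain NumPy rather than a quantum SDK, and the marked item and problem size are arbitrary choices made purely for illustration.

```python
import numpy as np

n = 2                        # qubits, so the search space has N = 4 items
N = 2 ** n
marked = 3                   # index of the "winning" item (arbitrary choice)

# Uniform superposition over all basis states (what the initial Hadamards give)
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked state's amplitude
oracle = np.eye(N)
oracle[marked, marked] = -1.0

# Diffusion operator: reflect every amplitude about the mean amplitude
diffusion = 2.0 * np.full((N, N), 1.0 / N) - np.eye(N)

# A single Grover iteration suffices when N = 4
state = diffusion @ (oracle @ state)

print(np.abs(state) ** 2)    # the marked index now has probability ~1.0
```

With N = 4, one iteration drives the marked state’s probability to essentially 1, whereas an unstructured classical search needs on average more than two lookups; for larger N the number of Grover iterations grows only on the order of the square root of N.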
Ethical Considerations As quantum machine learning (QML) advances, it raises significant data security concerns, such as the potential to crack widely used cryptographic schemes like RSA. Beyond security, ethical considerations surrounding QML are diverse, encompassing data privacy, algorithmic bias, and equitable access to quantum technologies. For example, improperly designed QML applications might inadvertently exacerbate existing biases, resulting in unfair consequences for certain groups. Business leaders and policymakers must prioritize the responsible development and deployment of QML technologies to address these concerns. Their goal should be to foster innovation while ensuring that benefits are broadly shared and potential risks mitigated. Regulatory frameworks and guidelines must be established to promote fairness, accountability, transparency, and privacy. These measures will help protect users’ rights and build trust in these cutting-edge systems. By working together, stakeholders can harness the power of QML while effectively addressing its complex ethical challenges. To ensure the ethical use and continued development of QML, attracting and retaining skilled professionals in the field is crucial. Talent Acquisition and Workforce Development Companies and educational institutions must adopt strategies to attract, retain, and develop top talent in QML. This includes introducing specialized training and education programs and establishing collaborations with research organizations and universities. Encouraging interdisciplinary collaboration, particularly among physics, computer science, and mathematics, is another critical aspect of workforce development and drives progress in QML. In this highly competitive field, offering competitive compensation and benefits is essential for attracting and retaining skilled professionals. With a culture of innovation and collaboration, organizations can ensure they have a skilled workforce well-prepared to navigate the complexities of quantum technologies. Addressing these challenges and capitalizing on the opportunities provided by QML requires more than individual talent; it demands fostering global cooperation. Fostering Global Cooperation Global cooperation and collaboration at an international level between academia, industry, and governments are vital for propelling research, innovation, and the responsible development of Quantum Machine Learning (QML). Stakeholders must establish international research centers, public-private partnerships, and regulatory frameworks that foster knowledge sharing and collaboration. Developing ethical guidelines on a global scale is also crucial to ensure the responsible deployment of QML applications. Noteworthy international initiatives and organizations, such as Quantum Economic Development, can fast-track the development and implementation of quantum technologies. This coordination can help maximize societal benefits while mitigating risks and unintended consequences. Conclusion Quantum Machine Learning (QML) has immense potential to transform industries and aid environmental sustainability. However, as we unlock its potential, significant challenges must be addressed, including developing advanced quantum hardware, talent acquisition, and privacy protection. The limits of traditional computing power could constrain the future of Machine Learning (ML). QML provides a pathway to overcome these constraints and accelerate our digital transition, opening new horizons for ML. 
To responsibly leverage QML, we must foster innovation and collaboration across businesses, academia, and governments, ensuring ethical considerations are at the forefront. By navigating these complexities, we can ensure that Quantum Machine Learning does not just become a part of our future but shapes it, driving us towards a more sustainable and technologically advanced society. “Quantum Machine Learning is our North Star in the vast cosmos of technology. It stands at the unique intersection of quantum physics and machine learning, illuminating our path beyond the limits of classical computing. Like a guiding light piercing through complexity, it promises advancement and a radical transformation of our world. Yet as we navigate this uncharted universe, we must be the astronomers, explorers, and ethicists, ensuring our journey brings us to a sustainable, inclusive, and profoundly human future. Quantum Machine Learning is not just the next chapter in our story—it’s a whole new epic waiting to unfold.”  – Amitkumar Shrivastava. This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.
By navigating these complexities, we can ensure that Quantum Machine Learning does not just become a part of our future but shapes it, driving us towards a more sustainable and technologically advanced society.
["AI Features"]
[]
Amitkumar Srivastava
2023-05-18T10:00:00
2023
1,558
["data science", "machine learning", "artificial intelligence", "AI", "neural network", "ML", "RAG", "Aim", "analytics", "quantum machine learning"]
["AI", "artificial intelligence", "machine learning", "ML", "neural network", "data science", "analytics", "Aim", "quantum machine learning", "RAG"]
https://analyticsindiamag.com/ai-features/shaping-tomorrow-the-transformative-potential-of-quantum-machine-learning/
3
10
5
false
true
true
8,600
Netflix launch in India & use of analytics
The much-awaited launch of Netflix in India has been met with a mixed response. While a lot of netizens have welcomed it, many critics have their doubts about its potential for success with Indian customers. In a country where a large majority is happy with affordable yet mediocre content available on cable TV, it will be a challenge to lure these customers to the original and quality content that Netflix is famous for offering. Key concerns and opportunities The dependence of Netflix on high-speed broadband connections will impede its penetration. Piracy is another issue it will need to tackle head-on. Yet, it cannot be denied that there surely exists a significant niche of progressive, affluent internet users to be tapped. Understanding the needs of this niche group, then, is crucial for Netflix’s success in India. The average Indian netizen is known for actively voicing opinions on social media platforms. These platforms are rich in customer data which, if mined and interpreted correctly, can provide valuable insights for businesses – particularly those operating in the consumer internet space. Engaging with one’s customers (existing and potential) on social media kills multiple birds with one stone – improving brand value, strengthening customer relations, validating strategic decisions with customer feedback, understanding customers’ sentiments, hidden aspirations and pain points, and generating sales leads. Capturing the viewers’ engagement A number of tools and methods are available for making sense out of social media data. Tracking simple metrics – like the number of followers, shares, comments and likes – is useful but can only scratch the surface, since it is a passive strategy. An active strategy involves real-time engagement with audiences, and is therefore much more useful. This involves monitoring trending topics and modifying posts accordingly, studying the time-dependent variance in user outreach to optimize the timing of posts, investing in text-mining tools to analyze and summarize large volumes of user comments, and appointing social media managers to instantly respond to users, conduct campaigns, generate awareness and forward potential leads to the sales team. Social media is also a powerful sandbox to test hypotheses by analyzing the response and sentiment of customers. Role of analytics Netflix is known for heavily using analytics on user data to optimize its content offerings. The same capabilities can be used to promote its newly launched services in India. It can run a campaign around discounts for referrals, to achieve exponential user-base growth. It can monitor users’ posts to identify which shows are talked about the most, and can also conduct a campaign around letting users add their favorite shows to a crowd-sourced wish-list. It can gauge the success of content experiments by analyzing sentiments in comments. It can also monitor forums on piracy websites to identify content which is highly in demand but not available on mainstream channels. It can also use feedback to introduce features specific to India – for example, enabling users to download episodes to view later when an internet connection is not available, such as during a train journey. It should also monitor social media accounts of competitors like HotStar to gain more insights into user preferences. India’s internet community is growing fast to become the world’s largest. 
It is therefore vital for companies to use social media platforms to derive value for their businesses. At GlobCon, we specialize in providing social media solutions. Our strengths include customer engagement, brand enhancement, lead generation and analytics. Let us understand how we can create value for you!
The much-awaited launch of Netflix in India has been met with a mixed response. While a lot of netizens have welcomed it, many critics have their doubts about its potential for success with Indian customers. In a country where a large majority is happy with affordable yet mediocre content available on cable TV, it will […]
["IT Services"]
[]
GlobCon Technologies pvt ltd
2016-01-14T05:31:30
2016
582
["programming_languages:R", "AI", "RAG", "ViT", "analytics", "R"]
["AI", "analytics", "RAG", "R", "ViT", "programming_languages:R"]
https://analyticsindiamag.com/it-services/how-netflix-can-leverage-social-media-to-capture-the-indian-market/
2
6
1
false
true
true
37,899
6 Reasons Why MachineCon 2019 Is The Perfect Event For Analytics Leaders To Participate
The leader in delivering top-notch and trusted insights on the analytics ecosystem in India, Analytics India Magazine brings you the exclusive gathering of Analytics and Data Science Leaders, MachineCon 2019. To be held on May 24 in Mumbai and May 31 in Singapore, this second edition of the Machine Conference will provide a powerful platform to recognize those who know the nuts and bolts of data and know how to transform that data into a competitive advantage. An invite-only conference, it will host the most influential leaders from the world of analytics and will recognise Asia’s leading 100 technology visionaries in the field of analytics and data science with the prestigious Analytics100 Awards. With so much in store, MachineCon 2019 ensures that the event brings you a whole new level of experience, as this is also the first time that the conference is going international, to Singapore. Here Is What All The Attendees Can Expect From MachineCon 2019 Get Solutions To The Data Problems There is no doubt that data today is the backbone of almost every business. However, there lies a challenge: to use this data and transform it into customer insights. And MachineCon 2019 is dedicated to people who have overcome this challenge. Having some of the prime personalities from the data domain onboard, the Machine Conference will provide a platform where the delegates can connect with leaders from the industry and get top-notch solutions to their data problems. Have A Closer Look At Data Science Advancements While we read on a daily basis about how the technology and analytics space is rapidly advancing, MachineCon will give an opportunity to get first-hand experience of the developments taking place in the industry. MachineCon 2019 has some of the best speakers from the industry who will deliver talks that matter to the industry. Not just that, it will have a presence of analytics leaders from various industries who can provide a closer look at how the analytics sector is reaching a whole new level and how companies are adopting it. Knowledge Sharing With talks, discussions and real-time examples around analytics and data science, the conference will host extensive knowledge sharing. Being a close-knit event, it will provide a platform to share experience and knowledge from analytics heads, senior professionals, leaders and more. Make Some Of The Strongest Connections The event will provide you with an opportunity to meet and greet some of the most influential personalities from the domain of analytics. With the best of the industry from India and Singapore in one place, MachineCon will serve as a platform to make the best industry connections from leading companies. With more than 400 attendees across the two countries, it is a platform to make some of the strongest connections in the world of data science and AI. Witness An Enthusiastic Crowd Of Data Science Experts With an agenda to boost the adoption of analytics, the Machine Conference is dedicated to bringing all the enthusiastic leaders from the world of analytics under one roof. With an expected attendance of about 400, MachineCon 2019 is going to have a significantly large crowd of people who are relentless in bringing analytics to the mainstream. Whether you are a speaker, an attendee or an award winner, MachineCon 2019 won’t disappoint you. Meet, greet and share your experience and knowledge with the gurus of analytics from Asia. 
Get Inspired, Watching The Leaders Getting Recognized For Their Wonders In Analytics In order to recognize the best of the minds in the Analytics industry and to celebrate their success of Data Science in India & Singapore, the Machine Conference brings Analytics100 awards. The applicants are handpicked by the editors of Analytics India Magazine and industry veterans. This prestigious award is also focused on motivating and inspiring all the delegates of MachineCon to work harder towards analytics domain and do something really incredible that the world remembers.  The winners will be recognized by the industry at The Machine Conference – May 24, 2019, at Novotel Juhu Mumbai & May 31, 2019, at Novotel Clarke Quay Singapore.
The leader in delivering top-notch and trusted insights on analytics ecosystem in India, Analytics India Magazine brings you the exclusive gathering of Analytics and Data Science Leaders, MachineCon 2019. To be held on May 24, Mumbai and May 31 in Singapore, this second edition of the Machine Conference will provide a powerful platform to recognize […]
["Deep Tech"]
["MachineCon"]
Harshajit Sarmah
2019-04-17T09:34:23
2019
683
["data science", "Go", "API", "programming_languages:R", "AI", "programming_languages:Go", "MachineCon", "analytics", "ViT", "Rust", "R"]
["AI", "data science", "analytics", "R", "Go", "Rust", "API", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/deep-tech/6-reasons-why-machinecon-2019-is-the-perfect-event-for-analytics-leaders-to-participate/
3
10
2
false
true
false
10,077,652
NO, UPI is Not Killing Candy Business
When out on tea breaks, we AIM journalists often return with a handful of candies and gums for the team in a ritual of sorts – all paid through UPI. So when we came across the whole ‘UPI killing candy’ saga, we were drawn in. Abhishek Patil, the founder of GrowthX®, appears to have initiated this discussion on LinkedIn, which spread like wildfire on social and mainstream media. Patil claimed that with the implementation of UPI, consumers have stopped taking toffees casually because they no longer need to ask for change (a request which was inadvertently or by design almost always met with toffees). After his LinkedIn post, renowned publications carried the discussion forward. However, what most forgot to consider was whether the discussion is based on correlation or causation. Yes, a few players may have recorded a decline in profits since the adoption of UPI, but a lot of candy companies have actually reported record profits since the adoption of UPI. So, taking that handful of companies with declining profit margins as the basis for making such blanket statements would be unfair – a mere assumption. Pulse candy, for instance, was launched around the same time as UPI and yet managed to be one of the largest hard-boiled candy brands in India. Correlation The strongest argument supporting the theory that UPI has spelt doom for the toffee companies is that people no longer get toffees as loose change. In recent years, the use of UPI has undoubtedly increased, so much so that it made India the country with the highest number of digital transactions. Additionally, Patil said, “During the pandemic, everyone wanted contactless payments. This gave a soft push to digital payments and toffee went off the picture.” Yes, contactless payments did increase during the pandemic and it surely did help take India towards the digital ecosystem. But is it causation? In 2019, Parle Products Pvt Ltd, one of the biggest confectionery manufacturers in India, made the decision to stop producing its 50-paise retail chocolates, like Kismi Toffee, Orange Bite, London Derry, and Mango Bite. But UPI was not the cause. In the twenty-first century, the business just couldn’t turn a profit on its 50-paise candies. Inflation will eventually cause the confectionery industry to lose money unless it periodically raises the prices of its products. Given that the cost of making candies increased during the pandemic, this may be one of the reasons why foreign brands experienced a fall in revenues in India. The growth in market share of domestic candy brands may also be a factor in some candy companies’ declining revenues. The parent company of Pulse candy, SG group, had a growth in profits in 2021, and the company’s profit margin also increased. Similarly, the stock price of Sampre Nutritions Ltd, the company that owns the Eclairs candy brand, is at a record high. Moving on to Lotte India, the company behind popular brands like Coffee Bite, Lacto King and Lotte Eclairs, it reported a record profit in FY 2020-21. Correlation is not causation Even though the point of contention right now is toffee instead of change, candies are much more than that. One of the major target markets for candy makers is India, which has about 444 million of its population under 18. Children purchase sweets, chocolates, snacks, and other items on demand rather than in return for change. 
TechSci Research‘s data indicates that the Indian candy market had a valuation of $1643.64 million in FY2020 and that it would increase at a CAGR of 15.40% to reach USD 3661.68 million by FY2026. Certainly, the implementation of UPI may have curbed the practice of palming off toffees as change, but that’s a positive development rather than cause for concern. However, some kirana stores AIM spoke to claimed that individuals continue to round off the amount and purchase candies in its place. Overall, UPI is not destroying the sweet sector.
Yes, a few players may have recorded a decline in profits since the adoption of UPI, but most candy companies have actually reported record profits since the adoption of UPI
["IT Services"]
["UPI"]
Lokesh Choudhary
2022-10-20T10:00:00
2022
670
["Go", "ELT", "programming_languages:R", "AI", "llm_models:PaLM", "programming_languages:Go", "Git", "UPI", "Aim", "R"]
["AI", "Aim", "R", "Go", "Git", "ELT", "llm_models:PaLM", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/no-upi-is-not-killing-candy-business/
3
9
3
false
false
false
10,115,912
What’s Devin Up to?
Devin, the world’s first AI software engineer, has been quite busy performing endlessly various end-to-end tasks, from debugging code repositories to fine-tuning large language models. It has also been helping select developers work more efficiently by automating tasks and assisting in testing, debugging, and deploying applications. Devin’s capabilities span multiple domains, making it a versatile tool for software development. As AI continues to advance, tools like Devin will play an important role in the future of software development. Let’s look at what it is capable of and what it has been doing so far: Devin Likes to Debug and Test Devin excels at debugging and testing code in open-source repositories. It seamlessly navigates through the codebase, writes comprehensive test cases, and employs advanced debugging techniques to identify and resolve issues when presented with a specific bug. By leveraging print statements and re-running tests, the AI software engineer ensures that fixes are effective and no new problems are introduced, saving developers valuable time and effort. Devin Likes to Fine-tune Large Language Models Fine-tuning large language models, such as the 7B llama model, becomes a breeze with Devin. By cloning repositories, setting up dependencies, and running training jobs, it streamlines the process of adapting models to specific tasks. When faced with challenges like CUDA issues, Devin troubleshoots by examining the environment and reinstalling packages, ensuring smooth training progress and providing regular status updates. Devin Knows How to Set Up Computer Vision Models Devin proves its worth by taking on complex Upwork jobs, such as setting up computer vision models. Given a job description, it sets up the necessary repository, resolves versioning issues, and processes images from the internet to run through the model. Through meticulous debugging and code fixes, the AI software engineer generates sample outputs and provides comprehensive reports, delivering high-quality work that exceeds client expectations. Devin Enhances User Experience in Open-Source Tools Open-source tools often face user experience challenges, but Devin is here to help. By cloning repositories, understanding codebases, and addressing specific issues, it improves user experiences in minutes. With its ability to install dependencies, make code changes, and thoroughly test modifications, the AI software engineer ensures open-source tools become more user-friendly and accessible to a wider audience. Devin Generates Images from Blog Posts Devin demonstrates its versatility by generating images based on blog post instructions. By reading and comprehending blog content, it identifies and fixes edge cases and bugs, creating stunning visuals like personalised desktop backgrounds. With its ability to generate bonus images, the AI software engineer adds creativity and originality to the output. Devin Can Develop Web-Based Games Devin demonstrates its proficiency in creating engaging web-based games, such as the Game of Life. When given specific requirements, it efficiently sets up a React application, writes clean and efficient code, and deploys the game using platforms like Netlify. It continuously enhances the game based on user feedback, adding features and fixing bugs. Devin ensures the game is responsive and interactive across devices, allowing developers to focus on the creative aspects of game design while it handles the technical implementation, bringing game ideas to life quickly. 
Devin Knows How to Fix Bugs in Open-Source Libraries
Devin shines when fixing bugs in open-source libraries. It diagnoses issues precisely by setting up repositories, reproducing buggy outputs, and identifying relevant code. Through careful code modifications, debug output cleanup, and thorough testing, the AI software engineer ensures that bugs are squashed and libraries remain stable and reliable.

Devin Does Data Analysis and Simplifies Visualisation
Devin simplifies data analysis and visualisation tasks, even when faced with challenging data formats and geospatial complexities. By reading documentation, performing exploratory data analysis, and processing data from various sources, it can create informative and visually appealing visualisations. With its ability to respond to user requests and deploy applications, the AI software engineer makes data insights accessible and interactive.
Inside the World’s First AI Software Engineer’s Latest Breakthroughs.
["AI Trends"]
["cognition labs", "Devin", "Scott Wu"]
K L Krithika
2024-03-17T11:00:00
2024
683
["CUDA", "TPU", "AI", "ETL", "Devin", "ML", "computer vision", "RAG", "cognition labs", "generative AI", "Scott Wu", "R"]
["AI", "ML", "computer vision", "generative AI", "RAG", "TPU", "CUDA", "R", "CUDA", "ETL"]
https://analyticsindiamag.com/ai-trends/whats-devin-up-to-inside-the-worlds-first-ai-software-engineers-latest-breakthroughs/
2
10
0
false
true
false
24,342
Product-Based Mindset Is Key For Indian Companies To Successfully Harness AI
Market hype and the growing popularity of artificial intelligence have pushed companies to introduce the new technology into their product strategy. It has resulted in a growing presence of AI in almost every new product and service. Companies are extensively exploring AI and machine learning as a part of their digital business strategy, making it a high investment sector in the country. The popularity is growing to the extent of 'AI washing' — where companies are applying AI labels to products and companies a little too generously. Past trends have shown that although AI offers exciting possibilities, companies are more focused on building and marketing AI-based products rather than first identifying the need, potential uses and the business value they can generate. There is a tremendous increase in the number of startups and companies claiming to offer AI-based solutions, but failing to execute. The underlying fault here can be largely attributed to the failure to adopt the right product mindset for AI.

Why Having A Product-Based Mindset Is The Key
A successful product is expected to have consistent behaviour and to contribute to over-the-top growth for a business. It is important to set and manage the expectations of users, gather their feedback and channel these observations into new product offerings. However, doing this drill for AI products may differ significantly from doing it for traditional products. For example, hardware or software products showcase 'deterministic' behaviour, which means that the product's behaviour is determined by its initial state and inputs, or is predetermined in most cases. AI-driven products, however, may not always behave deterministically and may produce counter-intuitive results; a personalised recommender system, for instance, may produce different results for the same user action after learning additional preferences. It is therefore important for a product manager to place particular focus on strong product ideation and prototyping when it comes to AI products. Also, the hype around AI use cases is projecting more false-positive results than actual results. It therefore requires critical thinking to separate the hype from the real world. It is important to understand which products in the realm of AI can be commoditized and provide the highest return on investment while overcoming the challenges. "Using immature AI-based technologies and products is one of the several challenges to implementing AI", Mosche Krank, CTO at Ness Digital Engineering, had said in an interview with AIM. "Companies always overstate the actual capability, and it does not tell you those cases where the AI falls flat on its face." He added that there is no substitute for experience. He also shared that your AI algorithm may work well on an engineer's desktop, but there is a need to productise the AI so that it can be deployed reliably in a system. Having a clear product mindset also attracts investors, who are on the lookout for product-driven companies with a strong product-based mindset. Building a revenue-generating product, bursting with innovative ideas and paying attention to the tiniest details to scale up the product are some of the characteristics they are looking for in an AI-based company.
It can be said that having a product mindset for AI therefore does three things:
Fosters a culture that embraces disruption
Brings out the creativity in an organisation
Eliminates barriers to innovation

Road Less Taken
It goes without saying that AI is a comparatively nascent field. Not all AI experiments may translate into field results. It may be challenging to work without any prior experience in the field, and it is important to have a thorough understanding of current research to be able to build products. In the overall process, getting the right amount of data can be a hindrance. For AI products, access to data is the key. Once you have the data, it has to be cleaned, structured and labelled to train models. AI and machine learning concepts operate at a fundamental level and can lead to false positives if the data fed is not robust. Organisations must therefore look at their long-term vision and embrace a holistic perspective to improve the field service industry with AI.

On A Concluding Note
With a surge in products like cloud, smartphones, IoT and others, the definition of products has changed quite significantly. Given the tech disruption, these trends will continue and they, in turn, will change the way products are made and used. It is therefore important to shift our thinking to view artificial intelligence as an evolution rather than as a revolution. With the growing popularity of AI, it is important to build a mindset where AI professionals have the creativity to imagine how the technology can be applied, paired with the analytical acumen to measure results to determine success over time. They must be willing to take risks and perform experiments while being resilient enough to fail fast and move on faster.
Market hype and the growing popularity of artificial intelligence have pushed companies to introduce the new technology into their product strategy. It has resulted in a growing presence of AI in almost every new product and service. Companies are extensively exploring AI and machine learning as a part of their digital business strategy, making it […]
["IT Services"]
["AI Companies", "AI India"]
Srishti Deoras
2018-05-08T07:49:16
2018
815
["Go", "machine learning", "artificial intelligence", "AI", "innovation", "Git", "GAN", "Aim", "ViT", "AI Companies", "AI India", "R"]
["AI", "artificial intelligence", "machine learning", "Aim", "R", "Go", "Git", "GAN", "ViT", "innovation"]
https://analyticsindiamag.com/it-services/product-based-mindset-is-key-for-indian-companies-to-successfully-harness-ai/
3
10
4
false
true
true
2,396
How IoT And ERP Are A ‘Power Couple’: Oracle’s Abhimanyu Prabhavalkar Explains
Abhimanyu Prabhavalkar, vice president of IoT product development at Oracle, talked to IoT India Magazine about how they have seen the world of the Internet of Things (IoT) evolve as well as change the way people do business. According to Gartner, 5.5 million new things are being connected to the internet every single day — thermostats, kitchen appliances, smoke detectors and devices that can detect when an elderly person falls, among others. In a detailed chat, Prabhavalkar covered some of the fascinating topics from the world of IoT and ERP. Here are some excerpts:

Modernising ERP systems — what does that mean for CFOs? On what basis does it work?
The current generation of CFOs are struggling to adapt to the changing technological landscape. Many, for instance, do not yet fully trust the findings of big data analytics. As soon as they're willing to place their confidence in advanced analytics, they'll be ready to move forward with adaptive intelligence and automated, Artificial Intelligence-enabled approaches to finding insights from data. Robotic Process Automation is slowly making inroads into the CFO's priority list. This requires a shift in mindset, and CFOs need to consciously accept cutting-edge technologies to realise maximum value. The adoption of real-time and predictive intelligence is now imperative amongst all CFOs to retain a competitive advantage. If CFOs fail to accept this, they risk being left behind. By using data and data-driven collaborative tools, CFOs will be able to break down the traditional siloes that have stifled communication and understanding between business functions. Drawing on connected and collaborative data, CFOs will have a complete view of the entire business as well as external conditions that may impact the business. Modern IoT-enabled businesses can provide newer and profitable business models such as 'product as a service' with recurring revenue streams. Due to sensors embedded in products being used by customers, CFOs will also be able to understand the quality of the customer experience, which in turn determines the health of revenue streams. Due to IoT, CFOs would also be able to keep track of distributed assets accurately and depreciate them appropriately, rather than not being able to show these as accountable assets on the balance sheet due to an inability to track them. Additionally, automating non-value-added tasks will ensure greater compliance and faster processing. One such example is the chargeback and cost allocation, across different subsidiaries, for usage of resources such as connectivity and communication. As a result, CFOs will be able to orchestrate a holistic, efficient and effective operation from the back office, to the supply chain, right through to customer experience, and have a positive impact on reduction of costs, increased cash flow and predictability.

How are IoT and ERP a power couple leading to the success of an organisation?
According to a market forecast from IDC, Asia Pacific is leading the charge for IoT globally, with around 8.6bn connected devices predicted to be installed in the region by 2020 (figure excludes Japan). The Asia Pacific market is expected to report its highest growth yet from 2017-2023. The proliferation of IoT devices in the region will be an important driver for data-driven business transformation and will enable businesses with access to real-time insights to make better decisions.
This year, we expect to see businesses in Asia Pacific starting to increasingly use IoT applications that can deliver IoT data for use across enterprises. IoT technology will provide greater insights to all parts of business. From raw materials supply to inventory tracking, asset information for predictive maintenance, predicting the quality of goods being produced, accurate tracking of transport and fleet, and the quality of the customer service experience, there are many use cases that have the potential of increasing the efficiency of business operations, especially around supply chain management. Cloud-based IoT applications give CFOs direct access to external as well as internal data in real time, helping them make decisions faster. For example, due to sensors attached to machines in factories and the sensors used at the point of sale, the CFO would always be able to have the latest multi-dimensional analytics about production and market demand, and thus be able to easily identify and solve issues that impact the bottom line. Previously, supply chain management wouldn't have been able to communicate the correlation between these business events in a timely manner to the CFO – but thanks to cloud and IoT capabilities, the CFO can proactively access wider insights, more quickly, across the entirety of business operations.

What will the future ERP look like? Where is the current system heading with IoT, AI, machine learning and so on?
According to Allied Market Research, the global ERP software market will surpass a value of $41 billion by 2020. At the same time, IDC is predicting that in a similar time frame, global IoT spend will total nearly US$1.4 trillion as organisations continue to invest in the hardware, software, services and connectivity required to enable IoT. As both these markets show a huge propensity to grow, they are also overlapping. By 2022, IoT-enabled ERP is poised to become a huge opportunity for organisations, as this market is expected to reach close to $50 billion by 2022. This overlapping of ERP and IoT seems inevitable. Data from IoT will further enhance ERP systems' efficiency. Insights from AI and machine learning will further strengthen ERP systems. This culmination will enable business leaders to take better decisions based on data-driven insights. For example, sensors can communicate details about a lack or excess of inventory, allowing supervisors to better manage ordering and replenishment while minimizing the possibility of human error. This approach can be extended to all the functions of organisations, and the advantages that can be reaped are not difficult to imagine – production efficiency, quality control, customer service etc. However, there are a few considerations that organisations need to make while opting for ERP and IoT integration. The ERP platform must be able to handle the wealth of information created by IoT sensors, in addition to the data it already processes. Another consideration is the data security capabilities in place across both platforms. To support information-driven decisions as well as protection, businesses will need to ensure end-to-end safeguarding, especially as data assets travel from one system to another.

What is Oracle's business plan and strategy in the area of ERP & IoT?
Oracle's strategy is focused on IoT-fying business applications by extending them to the physical world as well as integrating organisational silos (Design, Manufacturing, Logistics, Transportation, Service) in real-time throughout the digital supply chain. Oracle's IoT Applications eliminate manual processes by creating a 'Digital Thread' and workflows from enterprise assets to SCM, ERP & CX applications and, thus, provide an end-to-end view of the entire manufacturing lifecycle. IoT uses the asset master data, production plans, prebuilt workflows etc. from ERP systems. Automated template workflows allow a manufacturer to track items from procurement and product design, to manufacturing and product life cycle management, to warehousing and transportation, through to logistics and procurement. As well as providing better visibility, this enables new business models, such as dynamic demand planning and a very responsive supply chain. Predictive machine learning models are built using asset sensor data along with business data from ERP and SCM applications (manufacturing, maintenance, service, logistics, warehouse, financial etc). Because Oracle has a very deep understanding of these business applications, it has the ability to bring the analytical insight from the IoT application into the enterprise applications to power desirable business outcomes.

Can you please share a couple of customer cases? What benefits do your customers gain through marrying ERP & IoT?
Noble Plastics specializes in injection molding, decorating, assembly, and contract manufacturing services. Their desire was to differentiate the company from shoot-and-ship job shops as a creative design and manufacturing partner. Using Oracle IoT Applications and the sensors on the FANUC robots on the production line, they have been able to integrate robot monitoring data into ERP processes and have achieved the ability to modify the production process in real time. They have been able to deliver a better customer service experience through higher supply chain transparency. Gemu is a market leader in developing, manufacturing and selling diaphragm valves, actuators and control systems. Gemu's Oracle IoT Cloud Application receives the valve operation and health events. A service ticket is automatically created in the service ticketing system and, at the same time, the required spare part is reserved in the ERP system and the CRM is updated to send a message to the customer about the service task. Thus, they have been able to achieve proactive and timely parts replacements, avoid production downtime, and gain enhanced knowledge of product usage to improve product quality and functionality. Vinci Facilities, a provider of Digital Customer Service & Workforce management solutions, was looking to adapt on-site workforce activities to real demand (e.g. cleaning, refills etc.) as well as enable customers to request maintenance via smartphones and reduce repair times. Oracle IoT analyzes sensor data from various building assets in real-time and automatically creates a service request in Oracle Service Cloud including contextual data. They have been able to improve user satisfaction and KPI transparency as well as achieve higher workforce efficiency. Vinci has also been able to introduce new services such as occupancy monitoring. Softbank has used Oracle IoT to track usage of electric scooters by tourists on a Japanese island.
They have been able to enhance the customer experience by advising users on no-go zones and the nearest charging stations, and have integrated ERP workflows for their billing and compliance.

Security is a big worry for organisations. How do Oracle solutions ensure the security of the data?
As a top priority for Oracle, security has been designed into the Oracle IoT Cloud from the ground up to facilitate the creation of identity and trust relationships with the device and application endpoints. The lifecycle of all connected endpoints and devices (direct or indirect) is securely managed by the Oracle IoT Cloud Service. Within this process, the endpoints are uniquely registered and authenticated according to policies set by the user and implemented using OAuth2, and all messaging is encrypted using HTTPS. It allows for employing encryption and obfuscation at the sensors and gateways using declarative edge policies. Additionally, Oracle has partnered with specialized companies such as Gemalto to provide hardware tamper-resistance capabilities for the sensors/devices. Combining these capabilities with the underlying security capabilities of the Oracle Public Cloud delivers on Oracle's prioritization of security for IoT.
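The Gemu case above describes an event-driven integration pattern: a single IoT health event fans out into a service ticket, an ERP spare-part reservation, and a CRM notification. The following is a minimal, hypothetical Python sketch of that fan-out; the class and function names (ValveEvent, create_service_ticket and so on) are illustrative placeholders, not Oracle IoT Cloud APIs.

```python
# Hypothetical sketch of the event-driven fan-out described in the Gemu example.
# None of these classes or functions are Oracle APIs; they are placeholders that
# show the shape of the workflow: one IoT event drives ticketing, ERP and CRM updates.
from dataclasses import dataclass

@dataclass
class ValveEvent:
    asset_id: str
    health_status: str   # e.g. "ok", "degraded", "failed"
    spare_part_sku: str

def create_service_ticket(event: ValveEvent) -> str:
    # Placeholder for a call into a service ticketing system.
    ticket_id = f"TKT-{event.asset_id}"
    print(f"Service ticket {ticket_id} opened for asset {event.asset_id}")
    return ticket_id

def reserve_spare_part(event: ValveEvent) -> None:
    # Placeholder for an ERP inventory reservation.
    print(f"Reserved spare part {event.spare_part_sku} in ERP")

def notify_customer(event: ValveEvent, ticket_id: str) -> None:
    # Placeholder for a CRM update that messages the customer.
    print(f"CRM notified customer about ticket {ticket_id}")

def handle_valve_event(event: ValveEvent) -> None:
    # The fan-out: one incoming IoT event updates three downstream systems.
    if event.health_status != "ok":
        ticket_id = create_service_ticket(event)
        reserve_spare_part(event)
        notify_customer(event, ticket_id)

if __name__ == "__main__":
    handle_valve_event(ValveEvent("valve-42", "degraded", "DIAPHRAGM-7"))
```

The value of the pattern, as the interview suggests, is that the downstream updates happen automatically and consistently the moment the asset reports a problem, rather than waiting for a human to notice and re-key the information into each system.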
Abhimanyu Prabhavalkar, vice president of IoT product development at Oracle, talked to IoT India Magazine about how they have seen the world of Internet of Things (IoT) evolve as well as change the way people do business. According to Gartner, 5.5 million new things are being connected to the internet every single day — thermostats, […]
["AI Features"]
["erp", "Interviews and Discussions", "Oracle"]
Prajakta Hebbar
2018-02-14T10:14:42
2018
1,748
["big data", "Go", "artificial intelligence", "erp", "machine learning", "AI", "Git", "Oracle", "RAG", "analytics", "Rust", "R", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "analytics", "RAG", "R", "Go", "Rust", "Git", "big data"]
https://analyticsindiamag.com/ai-features/iot-erp-power-couple-oracles-abhimanyu-prabhavalkar-explains/
3
10
4
true
true
false
10,043,295
Amazon Announces Development Center To Boost Autonomous Delivery Tech
Tech giant Amazon has announced plans to launch an Amazon Scout Development Center in Helsinki, Finland. The development center will be focused on autonomous delivery technology. Amazon has deployed a dedicated team of twenty-four engineers in the Helsinki center to facilitate research and development for Amazon Scout – the tech giant's fully electric autonomous delivery service, first deployed in 2019 and presently operating in the US.

Amazon Scout
In January 2019, Amazon announced it was ready to field test six units of its new delivery system – Amazon Scout – to get packages to its customers using autonomous delivery devices. Developed at Amazon's research and development lab in Seattle, Amazon Scouts are six-wheeled robots about the size of a small cooler, designed to roll along the sidewalks. The robot autonomously follows its delivery route, and safely and efficiently navigates around pets, pedestrians, or anything else that comes in its path. The electric delivery system, also called the Adora bots, was first deployed in Snohomish County in Washington. Travelling at a walking pace, Amazon Scouts were initially accompanied by Amazon Scout Ambassadors – humans designated to keep an eye on the bots and answer customers' questions. It delivered packages in daylight hours, irrespective of the weather conditions. For the test, Amazon handed out delivery assignments to the robot on a random basis, regardless of the delivery option selected by the customer.

What to expect?
Last year, Amazon Scouts were deployed to deliver packages across four locations in the USA, including Atlanta and Franklin. Sean Scott, former Vice President of Amazon Autonomous Delivery (Scott), had said delivery of packages using the robots had helped the company fulfil customer demands during the pandemic and helped reduce human-to-human contact. Going forward, the new team of engineers in Helsinki will work closely with the Amazon Scout research and development labs in Seattle, along with teams in Cambridge, UK, and Tubingen, Germany. Together, the teams will be responsible for developing 3D software to simulate real-life complexities and ensure safe navigation and deliveries by Amazon Scout. This comes about six months after Amazon announced that it bought Umbra – a Finnish 3D tech company responsible for creating advanced technologies to manage giant 3D models. Umbra spun off from Hybrid Graphics when the latter was acquired by NVIDIA.

Growing the community
The ecommerce giant is also investing in the local communities of Helsinki by creating job opportunities, forging philanthropic partnerships and building a sustainable business to reduce the impact of climate change. While the Amazon Scout team in Helsinki is hiring engineers who are at the forefront of robotics and autonomous system technology, it intends to further grow the team over time. Amazon plans on creating local jobs – employing, training and upskilling people. Last year, Amazon created 20,000 jobs, growing its employee count to over 1,35,000 across 15 countries in Europe. Interested candidates can check the job openings at Amazon Scout here.

Summing up
The Amazon Scouts were deployed with the vision of reducing the delivery time of packages. In June 2019, as a part of its Prime Air programme, Amazon had unveiled the design of its fully electric drones, which can fly up to 15 miles and deliver packages under five pounds in under 30 minutes. The Prime Air programme was being tested across the USA, the UK, Austria, France and Israel.
Tech giant Amazon has announced the plans to launch an Amazon Scout Development Center in Helsinki, Finland. The development center will be focused on autonomous delivery technology.  Amazon has deployed a dedicated team with twenty-four engineers in the Helsinki center to facilitate the research and development for Amazon Scout– the tech giant’s fully electric autonomous […]
["Global Tech"]
["drone delivery", "Robots"]
Debolina Biswas
2021-07-09T14:00:00
2021
553
["Anthropic", "Go", "drone delivery", "programming_languages:R", "AI", "programming_languages:Go", "ai_applications:robotics", "R", "Robots"]
["AI", "Anthropic", "R", "Go", "programming_languages:R", "programming_languages:Go", "ai_applications:robotics"]
https://analyticsindiamag.com/global-tech/amazon-announces-development-center-to-boost-autonomous-delivery-tech/
3
7
1
false
false
false
10,111,274
UP Police Enlists AI to Bolster Security in Ayodhya
The security measures for visitors attending the 'pran prathishtha' ceremony at Sri Ram Mandir in Ayodhya on January 22 have reached unprecedented levels. Undoubtedly, UP Police is relying heavily on AI to ensure smooth operations. The police have deployed over 10,000 CCTV cameras, many of which are AI-powered. "To ensure better security arrangements at the programme venue in Ayodhya, technology is being used on a large scale. In some of these CCTV cameras, we are using AI-based technology so that we can maintain a strict vigil on the commuters," the director general for law and order, Prashant Kumar, told PTI. These cameras have been installed in hotspots such as Kanak Bhawan, Hanuman Garhi, Shri Nageshwar Nath Mandir, Ram Ki Paidi and Ram Janmabhoomi. In addition to the millions of devotees thronging Ayodhya, approximately 506 prominent individuals, including politicians, industrialists, film stars, sportspersons, diplomats, judges, and esteemed priests, are all attending the event. Further elevating its significance on the global stage, the consecration ceremony will host 92 specially invited dignitaries representing 50 countries as state guests. As a security measure, reports suggest over 15,000 police personnel, along with paramilitary forces, ATS commandos and sniper teams, have been deployed across the city.

Facial recognition tech
UP Police will use the CCTV cameras along with an AI-powered audio-video analytics platform to monitor the event for threats and suspicious activities. The platform, called JARVIS, is developed by a Gurgaon-based startup called Staqu Technologies. JARVIS leverages AI and computer vision to analyse video footage and extract valuable insights like detecting and tracking objects, recognising faces, identifying anomalies, and performing various other video analytics tasks. It provides short and crisp real-time alerts based on the analysed video data. In Ayodhya, it will scan the city and its premises for threats and relay real-time alerts to the authorities. The platform has been fed with a comprehensive digital database of 8,00,000 criminals in UP. The CCTV cameras will utilise advanced high-resolution facial recognition capabilities, enabling the identification and monitoring of suspects across various locations with an impressive accuracy rate of up to 99.7%, according to the startup. The AI system will recognise and promptly alert the authorities in the event that someone from the database or any other high-risk individual is detected in the camera feed. Additionally, these cameras are equipped with reverse facial recognition functionality, which means the system can identify a person by scanning a photo of that particular person.

AI-powered number plate recognition
The cameras will also monitor the vehicles entering and exiting the city during the event. UP Police will use an advanced AI-powered Automatic Number Plate Recognition (ANPR) system for traffic management, law enforcement, and public safety. In Ayodhya, ANPR will be specifically used by UP Police to assist in the identification of vehicles involved in criminal activities or traffic violations. Authorities can integrate ANPR with databases of wanted vehicles to identify and apprehend them automatically. The AI system will have real-time access to the government's vehicle registration database, which includes information from stolen vehicle databases. While only authorised vehicles are permitted to enter Ayodhya, the system can immediately detect unauthorised vehicles.
Moreover, the system can also identify vehicles sporting fake number plates.

Attribute-based searches
The UP Police will harness Staqu's JARVIS platform to enable surveillance cameras to conduct attribute-based searches. The AI-powered system can identify individuals within a crowd based on distinctive attributes like clothing, colour, accessories, or the presence of accompanying children. Real-time monitoring using attribute-based AI searches can help manage large crowds and ensure public safety. It will help UP Police not only locate criminals but also find lost people or children in a crowd.

AI-powered anti-mine drones
Authorities are also using AI-powered drones to keep an eye on the movement of people visiting the holy place. Equipped with sensors and detection technology, these AI-driven drones are designed to scan the ground for concealed landmines or explosive devices. "Ayodhya is now under the watchful eye of drones equipped with AI alongside the utilisation of anti-mine drones, as part of the concerted efforts to enhance security in the temple town," a senior police official said.
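Both the facial-recognition watchlist and the attribute-based search described above reduce, at their core, to filtering detections coming out of a video-analytics pipeline against a database of persons or vehicles of interest. The snippet below is a minimal, hypothetical Python sketch of that matching step; the Detection structure, the similarity threshold and the toy embeddings are assumptions for illustration, not Staqu's JARVIS API, and a real deployment would compute face or number-plate embeddings with trained models.

```python
# Hypothetical sketch of watchlist matching in a video-analytics pipeline.
# Embeddings here are plain feature vectors; a real system would produce them
# with trained face-recognition or number-plate-recognition models.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Detection:
    camera_id: str
    embedding: list[float]      # feature vector for a detected face or plate
    attributes: dict[str, str]  # e.g. {"clothing": "red", "accessory": "backpack"}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def match_watchlist(det: Detection, watchlist: dict[str, list[float]],
                    threshold: float = 0.85) -> list[str]:
    # Return IDs of watchlist entries whose embedding is close enough to the detection.
    return [pid for pid, emb in watchlist.items()
            if cosine_similarity(det.embedding, emb) >= threshold]

def attribute_search(detections: list[Detection], query: dict[str, str]) -> list[Detection]:
    # Keep only detections whose attributes contain every queried key/value pair.
    return [d for d in detections
            if all(d.attributes.get(k) == v for k, v in query.items())]

if __name__ == "__main__":
    watchlist = {"suspect-17": [0.9, 0.1, 0.3]}
    dets = [Detection("cam-ram-ki-paidi", [0.88, 0.12, 0.29], {"clothing": "red"})]
    for d in dets:
        hits = match_watchlist(d, watchlist)
        if hits:
            print(f"Alert from {d.camera_id}: possible match {hits}")
    print(attribute_search(dets, {"clothing": "red"}))
```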
Authorities have deployed AI-powered CCTV cameras and drones across the city
["AI Features"]
["Facial Recognition", "Jarvis"]
Pritam Bordoloi
2024-01-22T17:00:00
2024
682
["Go", "Facial Recognition", "AI", "Git", "computer vision", "Jarvis", "RAG", "ViT", "analytics", "R", "analytics platform", "startup"]
["AI", "computer vision", "analytics", "RAG", "R", "Go", "Git", "ViT", "analytics platform", "startup"]
https://analyticsindiamag.com/ai-features/up-police-enlists-ai-to-bolster-security-in-ayodhya/
3
10
1
true
false
false
10,103,553
Adobe Acquires Indian AI Video Creation Platform Rephrase.ai
U.S.-based technology giant Adobe has made its first generative AI acquisition of an Indian startup, Rephrase.ai, a Bengaluru-based AI-driven video creation company. According to reports from Economic Times, Adobe aims to integrate Rephrase's technology stack and generative AI video capabilities into its proprietary video-editing platform, Creative Cloud. This move is expected to enhance Adobe's offerings in the video creation space. The exact value of the deal remains undisclosed.

https://twitter.com/_shivammangla/status/1727299136238837924?s=12

As part of the acquisition, Adobe will integrate the majority of Rephrase.ai's workforce. The founders of Rephrase.ai will also collaborate closely with the software company moving forward. Currently, Rephrase.ai has a team of approximately 45 employees. As a result of the acquisition, Rephrase's investors are expected to achieve a full cash exit, with the founders receiving compensation in both cash and Adobe stock, the ET report added. "The Rephrase.ai team's expertise in generative AI video and audio technology and experience-building text-to-video generator tools will extend our generative video capabilities — and enable us to deliver more value to our customers faster — all within our industry-leading creative applications," Ashley Still, senior vice president and general manager, Creative Cloud, wrote in an internal memo to employees. "A huge shoutout to our team – our technology teams for pushing the frontiers of AI, and our business teams for writing a playbook to sell GenerativeAI in India. Your dedication and hard work have been everything to us, and the company. This is your success, and nobody else's," Shivam Mangla, co-founder at Rephrase.ai, said on X. Founded by Ashray Malhotra, Nisheeth Lahoti, and Shivam Mangla, the startup has garnered a total funding of $13.9 million to date. In September of the previous year, it secured $10.6 million in a funding round led by Red Ventures. Additional backers of the startup include Lightspeed India, Silver Lake, 8VC, and Techstars.
The exact value of the deal remains undisclosed
["AI News"]
["AI Tool", "Mergers and Acquisitions"]
Siddharth Jindal
2023-11-22T23:07:42
2023
303
["funding", "programming_languages:R", "AI", "Ray", "Aim", "generative AI", "AI Tool", "Mergers and Acquisitions", "R", "startup"]
["AI", "generative AI", "Aim", "Ray", "R", "startup", "funding", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/adobe-acquires-indian-ai-video-creation-platform-rephrase-ai/
2
8
3
false
false
false
10,096,417
ChatGPT Coding Review – Is ChatGPT Good at Coding?
Until two years ago, schools and colleges were toiling hard to teach students C/C++ from scratch by printing 'Hello World', but that is now a thing of the past. Following the launch of ChatGPT, English emerged as the new programming language. Lately, a meme has been making the rounds on the internet suggesting that code generated by ChatGPT takes longer for developers to debug. On Twitter too, several users expressed disappointment at how difficult it has become to debug the code created by ChatGPT. One of the users on Twitter said, "ChatGPT is good for code generation, but it generates codes that require debugging, so blindly using it would be a waste of time."

After using ChatGPT for so long in development, I can say-> Since chatgpt has sessions so sometimes asking your query in a new session might help-> Chatgpt is good for code generation, but it also generates debugging required code so blindly using it will be time wasting only pic.twitter.com/ozb6rPYWwm— Parth Verma (@v_parth7) July 5, 2023

However, is this reason enough to stop developers from using ChatGPT for coding? The answer is a big no, because coding and thinking simultaneously puts a brake on your chain of thought. Even though it takes longer, people would still use ChatGPT for coding because it allows them to be creative, solve problems, and discover new coding ideas. With ChatGPT, our critical thinking ability is not limited by the speed at which we can convert thoughts into code.

GPT-3.5 vs GPT-4
It is a fact that even the most expert human programmer cannot always get a program right on the first try. Large language models (LLMs) have proven to be highly skilled at generating code, but still face difficulties when it comes to complex programming tasks. To overcome these challenges, researchers have explored a technique called self-repair, where the model can identify and correct errors in its own code. This approach has gained popularity as it helps improve the performance of LLMs in programming scenarios. A research paper, called 'Demystifying GPT Self-Repair for Code Generation', quantifies GPT-4's self-debug capabilities against other LLMs. According to the paper, GPT-4 has an extremely useful and emerging ability that is stronger than any other model — self-debug. One of the key findings from the paper was that GPT-3.5 can write much better code given GPT-4's feedback. GPT-4's exceptional ability to self-repair stems from its remarkable feedback mechanism. Unlike other models, GPT-4 possesses a unique capacity for effective self-reflection, allowing it to identify and rectify issues within code. This distinguishing feature sets it apart from its counterparts in the AI landscape. Notably, the feedback model and the code generation model do not necessarily have to be the same. For example, you can debug the code created by GPT-3.5 using GPT-4. In this case, GPT-3.5 acts as the code generation model and GPT-4 acts as the feedback model. This approach empowers GPT-4 to continuously improve and refine its coding capabilities, making it a standout solution in the field of AI-driven programming. In an interesting insight from the research, it was seen that GPT-4's self-generated feedback, along with feedback provided by an experienced programmer, increased the number of repaired programs. This means human critical thinking still needs to be a part of the debugging process. AI can assist you with debugging, but in the end, it all boils down to your skills.
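The self-repair technique discussed above is, in essence, a generate-test-feedback loop: one model proposes code, the code is run against tests, and a feedback model (possibly a stronger one, such as GPT-4 critiquing GPT-3.5's output) turns the failure into guidance for the next attempt. Below is a minimal, hypothetical Python sketch of such a loop; generate_code and get_feedback are placeholder callables standing in for model calls, and this is not the implementation used in the paper.

```python
# A minimal sketch of an LLM self-repair loop, under the assumption that
# generate_code() and get_feedback() wrap calls to a code-generation model and a
# feedback model respectively (e.g. GPT-3.5 for generation, GPT-4 for feedback).
from typing import Callable, Optional, Tuple

def run_tests(code: str, tests: str) -> Tuple[bool, str]:
    # Execute the candidate code and its tests; return (passed, error message).
    namespace: dict = {}
    try:
        exec(code, namespace)      # define the candidate function(s)
        exec(tests, namespace)     # assertions raise if the code is wrong
        return True, ""
    except Exception as exc:       # capture the failure as feedback material
        return False, repr(exc)

def self_repair(task: str, tests: str,
                generate_code: Callable[[str], str],
                get_feedback: Callable[[str, str], str],
                max_rounds: int = 3) -> Optional[str]:
    prompt = task
    for _ in range(max_rounds):
        code = generate_code(prompt)             # code-generation model
        passed, error = run_tests(code, tests)
        if passed:
            return code
        feedback = get_feedback(code, error)     # feedback model explains the bug
        prompt = f"{task}\nPrevious attempt:\n{code}\nFeedback:\n{feedback}"
    return None  # give up after max_rounds failed repairs
```

In the two-model configuration highlighted by the paper, generate_code would wrap the weaker code model and get_feedback the stronger feedback model; passing in a human-written critique instead mirrors the finding that expert feedback further increases the number of repaired programs.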
What's next?
The code created by ChatGPT will only be as efficient as the prompt. If your prompt is not up to the mark, you will not be able to produce the desired output. Prompting is mostly trial and error, i.e., if one prompt doesn't work, you try another one. Going ahead, there is a possibility that, like coding, you might not even need to create prompts on your own. Developers are coming up with open-source models that can be integrated on top of the ChatGPT API and that would dish out the best possible prompts for you.

Introducing `gpt-prompt-engineer` ✍️An agent that creates optimal GPT prompts.Just describe the task, and a chain of AI systems will:– Generate many possible prompts– Test them in a ranked tournament– Return the best promptAnd it's open-source: https://t.co/nrivU2BWmn pic.twitter.com/rcnlJ5g5ZN— Matt Shumer (@mattshumer_) July 4, 2023

An example of such an AI agent is 'gpt-prompt-engineer'. It is a constrained agent, which means that its behaviour is highly controlled, leading to better results than open-ended agents. It chains together lots of GPT-4 and GPT-3.5-Turbo calls that work together to find the best possible prompt, as sketched below. Often, it has even outperformed prompts written by humans. ChatGPT, a powerful language model, demonstrates strengths in code conversion, elaboration, and quick prototyping, providing valuable assistance to developers. Its natural language processing capabilities aid in explaining code snippets and fostering collaboration among team members. However, it has limitations, lacking genuine comprehension and context awareness, often producing suboptimal code and errors. Human oversight remains crucial to ensure code alignment with specific project requirements and best practices. Human coders play a vital role in logic formulation, algorithm design, debugging, and troubleshooting. They possess creativity, strategic thinking, and domain expertise that cannot be replicated by AI models. While ChatGPT enhances productivity and efficiency, it should be seen as a complementary tool rather than a replacement for human coders. By leveraging ChatGPT's strengths and understanding its limitations, developers can achieve enhanced productivity and innovative solutions in software development while retaining the human touch necessary for success.
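As a rough illustration of the ranked-tournament idea behind gpt-prompt-engineer, the hypothetical sketch below generates outputs for several candidate prompts, has a judge pick the better output for every pair, and keeps the candidate with the most wins. The run_prompt and judge callables are placeholders for the GPT-4/GPT-3.5-Turbo calls the tool chains together, and the toy stand-ins used here exist only so the example runs without an API key; the real tool's ranking logic may differ.

```python
# Hypothetical sketch of a prompt tournament: every candidate prompt is compared
# against every other on a test input, and the candidate with the most wins is kept.
# judge() is a placeholder for an LLM call that picks the better of two outputs.
from itertools import combinations
from typing import Callable

def tournament(candidates: list[str], test_input: str,
               run_prompt: Callable[[str, str], str],
               judge: Callable[[str, str, str], int]) -> str:
    wins = {p: 0 for p in candidates}
    for a, b in combinations(candidates, 2):
        out_a = run_prompt(a, test_input)          # model output under prompt a
        out_b = run_prompt(b, test_input)          # model output under prompt b
        winner = judge(test_input, out_a, out_b)   # 0 if a wins, 1 if b wins
        wins[a if winner == 0 else b] += 1
    return max(wins, key=wins.get)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any API: the longer output "wins".
    prompts = ["Summarise briefly:", "Summarise in detail:", "List key points:"]
    run = lambda prompt, text: f"{prompt} {text}"
    pick_longer = lambda inp, a, b: 0 if len(a) >= len(b) else 1
    print(tournament(prompts, "LLMs can repair their own code.", run, pick_longer))
```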
GPT-4 has an extremely useful and emerging ability that is stronger than any other model — self-debug
["AI Features"]
["ChatGPT"]
Siddharth Jindal
2023-07-06T11:13:39
2023
908
["Go", "ChatGPT", "API", "TPU", "AI", "RAG", "GPT", "C++", "chain of thought", "R"]
["AI", "ChatGPT", "RAG", "chain of thought", "TPU", "R", "Go", "C++", "API", "GPT"]
https://analyticsindiamag.com/ai-features/is-chatgpt-good-at-coding/
4
10
0
true
true
true
10,116,886
After India, Microsoft Takes AI to the Schools of Sri Lanka
Microsoft and the Ministry of Education, Sri Lanka, recently signed a Memorandum of Understanding (MoU) to integrate AI into the national school curriculum from grade 8 onwards, thereby equipping students with essential skills for the future. Puneet Chandok, Microsoft's President of India and South Asia, made his inaugural visit to Sri Lanka. He met with Sri Lankan Hon. President Ranil Wickremesinghe and Hon. Prime Minister Dinesh Gunawardena to discuss the company's commitment to partner with the country on its digital transformation journey. "It was truly inspiring to witness the steps Sri Lanka is taking to ensure inclusivity in innovation. As AI continues to be the defining technology of our time, Microsoft is committed to being Sri Lanka's copilot for economic and societal transformation," Puneet Chandok, President, Microsoft India and South Asia, said in a press release. Earlier this year, Microsoft revealed its plans to expand the reach of Microsoft Research India's initiative, the AI copilot Shiksha CoPilot, to 100 schools by the end of the academic year. The tech giant announced Shiksha CoPilot in November last year, and it was being tested in 10 schools in Bengaluru, India. Shiksha CoPilot was built on Microsoft Azure OpenAI Service and harnessed Azure Cognitive Services to ingest the content in textbooks, including how the content is organised. The project, implemented in collaboration with the Sikshana Foundation, an NGO dedicated to enhancing the quality of public education, has been initially deployed at several public schools in Karnataka.
Earlier this year, Microsoft revealed its plans to expand the reach of Shiksha CoPilot to 100 schools in India
["Global Tech"]
["Microsoft"]
Pritam Bordoloi
2024-03-21T16:01:45
2024
241
["Go", "OpenAI", "AI", "Azure", "innovation", "digital transformation", "Git", "ViT", "GAN", "R", "Microsoft"]
["AI", "OpenAI", "Azure", "R", "Go", "Git", "GAN", "ViT", "digital transformation", "innovation"]
https://analyticsindiamag.com/global-tech/after-india-microsoft-takes-ai-to-the-schools-of-sri-lanka/
2
10
0
false
false
false