Dataset schema (column: type, observed range):

article_id: int64 (6 to 10.2M)
title: string (6 to 181 chars)
content: string (1.17k to 62.1k chars)
excerpt: string (7 to 938 chars)
categories: string (18 classes)
tags: string (2 to 806 chars)
author_name: string (605 classes)
publish_date: date (2012-05-21 07:44:37 to 2025-07-11 00:01:12)
publication_year: date (2012-01-01 00:00:00 to 2025-01-01 00:00:00)
word_count: int64 (200 to 9.08k)
keywords: string (38 to 944 chars)
extracted_tech_keywords: string (32 to 191 chars)
url: string (43 to 244 chars)
complexity_score: int64 (1 to 4)
technical_depth: int64 (2 to 10)
industry_relevance_score: int64 (0 to 7)
has_code_examples: bool (2 classes)
has_tutorial_content: bool (2 classes)
is_research_content: bool (2 classes)
26,585
Huawei Sets AI As Their Focus Area In India, Dedicates R&D Centre For Same
Huawei, a telecom and technology giant, is pushing its artificial intelligence agenda in India. The company has started by setting up an AI team in India dedicated to R&D specifically for Indian users, because the tech behemoth is convinced that India can become an engine of growth worldwide. The company has made clear its intention to build cutting-edge AI products. The Chinese tech giant also wants to build AI for app developers and third-party tech integrators. Huawei recently launched its AI-powered smartphone series ‘P20’ in the Indian market. The company says that the product has been “received well” and is becoming more and more powerful. The company has decided to launch its new line of 5G smartphones in India at the same time as the global launch. James Lu, director of AI product management at Huawei, told a leading newspaper, “We’re currently working in two directions, first direction is that we’re trying to make user-device interaction more natural, hence we’re experimenting some of the features in some of the markets globally. The second is that we’re trying to bring services more pro-actively to the consumers.” The Huawei team added, “In India, AI is our key focus. India centre is creating a lot of AI-related solutions which are also being used globally”. Huawei currently has 3,000 employees at its R&D centre in Bengaluru. Huawei wants to build new AI-based products in relatively new sectors such as images, recommendations, voice, text, video and augmented reality for its smartphones. Its vision, said the company, is to become the “best AI smartphone provider in the world in the next five to ten years”.
Huawei, a telecom and technology giant, is pushing its artificial intelligence agenda in India. The company has started by setting up an AI team in India dedicated to R&D specifically for Indian users, because the tech behemoth is convinced that India can become an engine of growth worldwide. The company has […]
["AI Features"]
["AI (Artificial Intelligence)", "Huawei", "Research", "Video"]
Abhijeet Katte
2018-07-20T09:36:52
2018
280
["artificial intelligence", "programming_languages:R", "AI", "Research", "edge AI", "emerging_tech:edge AI", "Video", "Huawei", "R", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "edge AI", "R", "programming_languages:R", "emerging_tech:edge AI"]
https://analyticsindiamag.com/ai-features/huawei-sets-ai-as-their-focus-area-in-india-dedicates-rd-centre-for-same/
2
6
1
false
false
true
10,058,043
How can India reskill its workforce for the AI industry – According to experts
If India’s name resonates with any profession, it would be engineering, given that the country is producing as many as one million graduates a year. Yet, when it comes to emerging technologies such as AI, our engineers don’t seem to meet the field’s demands. The 2019 annual employability survey by Aspiring Minds found that only 2.5% of Indian engineers actually possess the technical skills demanded by the AI industry. It is not just India: most engineers working on AI, ML and data science are not specifically trained for the art. In 2018, the World Economic Forum suggested that 54% of IT practitioners would need reskilling by 2022 to meet the requirements of AI. We are already in 2022, and today, experts say we will need upskilling as well as reskilling to excel in AI practice. Analytics India Magazine got in touch with experts in the field to understand the challenges we currently face and how we can overcome them. Introduce AI at a younger age The first step to encouraging AI specialists is to interest them in the subject right from their initial school days. “Coding is being taught in schools at a younger age, and they are doing a good job”, noted Swati Jain, Vice President of Analytics at ExlService. She recalled AI competitions where the students won most prizes despite there being participants from the industry present. “The mindset for AI has to be developed from school, (because) the ability to analyse starts from the core”, Swati said, stating that schools don’t need compulsory programmes, but introducing the subject for students to explore in the future is the way to go. The Indian government’s National Education Policy (NEP) 2020 is a step in this direction. Specialised AI training According to Kaushik Sanyal, Global Managing Director at Accenture, the biggest challenge is the fallacy that engineers are inherently suited to be AI technologists.
So, to him, it didn’t come as a surprise that only 2.5% of engineers possess the real skills for AI, given that they never acquired AI skills in the first place. “Civil engineers should be civil engineers”, he said, arguing that AI engineering needs to be taught as a specific programme at universities. “(The problem in India is that) we don’t have many institutions offering such courses.” Specialised learning is important because data engineering, data science and data visualisation are different fields that should be treated differently. “Data science is a combination of mathematics, engineering and analytics. While the former two are taught in schools, analytics is completely skipped”, explained Kaushik. This leads to practitioners having conceptual skills without an understanding of how to apply them. Another issue arising with non-contextual learning is the problem of language. “The technical lingo keeps changing, and data scientists need to evolve with it, but unfortunately, schools do not cover the correct language needed.” Even when reskilling existing engineers, they are most likely picking up lessons on the job. There are no bridge academic courses that teach technical AI skills as a top-up to engineering. “Not too many colleges offer special bridge courses that companies can use. Any engineer may adapt to working in AI, but he won’t know the principles of it,” Kaushik said. “India needs more institutions for AI, that have curriculums specifically designed for AI engineers, be it an undergraduate course, (postgraduate) or a bridge course.” Internships build real knowledge While universities are important, they may not teach AI engineers all they need to know. Computer science is a field where practical exposure is considered especially valuable for building one’s knowledge. Ashwin Swarup, Vice President of Data Science at Digite, holds a similar belief, asserting that doing internships while studying is the secret sauce to earning AI skills.
Many engineers are now working in the AI industry because of the ease of using online libraries or Kaggle to create projects. But as Ashwin asserted, “the challenge arises when companies have to appoint a person that can find the problem in the company, create a problem statement and suggest the right business solutions.” The problem lies in the lack of practical teaching in engineering universities. “Finding the right AI/DS solution involves asking the right questions, creating a problem statement, having a business understanding and multiple domain knowledge to provide the correct business solution. Engineering does not teach this,” he explained, offering a solution: ‘Get internships. Engage in doing internships throughout the course of the study.’ Encourage research Academia and research are integral to learning the subject in-depth, innovating, and keeping up with developments, but “the value for research is very less in India. Barring a few institutions like IITs, most don’t focus on encouraging students to pursue research”, Ashwin noted. As a result, India is deficient in research, especially in comparison with international communities, where Indian academia struggles to compete on papers. Interestingly, Ashwin said, “The volume of research papers coming from China, the country we want to compete with, is overwhelming.” This challenge can be overcome by two kinds of institutions: academia and companies. Indian professors fall short in their ratings in comparison to international instructors. India produces phenomenal students, but Indian academia needs to amp up the experience of its professors and create an academic body that supports quality research. “Quality in papers is important, and it can be measured through the number of citations. Publishing research just for the sake of it will not do anymore. The quality is more important in (Indian) academia.” Another opportunity is for companies to conduct research.
At the international level, Facebook, Google, Uber, and more are known worldwide for their research and development. “International corporates are moving into academics. Indian companies are not keeping up with this”, Ashwin said. “The papers are open-sourced, so they get more ideas from the public and give back to them. You hardly see this in India.” Upskill the workforce in adopting AI According to Swati, India may not produce enough of the relevant engineers, but it can surely upskill the present ones to meet the technical demands of AI. “Most importantly, the entire workforce needs to understand the value chain of analytics. Education of the workforce as a whole is needed, as not all companies can afford specialisation”, she explained. This is in line with the upcoming trend of preferring ‘generalists’ in the workforce. “Companies should focus on upskilling in a continuous fashion with a clear dedicated budget for capability development.” Lastly, companies should take an aggressive, systematic approach to training even the non-AI workforce on the technology. “We may only have 2.5% ‘relevant’ engineers, but with proper training, India can meet the demands of AI,” she concluded.
So, to him, it didn’t come as a surprise that only 2.5% of engineers possess the real skills for AI, given they wouldn’t own AI skills in the first place.
["IT Services"]
[]
Avi Gopani
2022-01-11T13:00:00
2022
1,107
["data science", "Go", "programming_languages:R", "AI", "ML", "Git", "RAG", "data engineering", "analytics", "R"]
["AI", "ML", "data science", "analytics", "RAG", "R", "Go", "Git", "data engineering", "programming_languages:R"]
https://analyticsindiamag.com/it-services/how-can-india-reskill-its-workforce-for-the-ai-industry-according-to-experts/
3
10
2
false
false
true
10,079,086
Top 15 Docker Containers in 2024
If you’re a techie, you’ve definitely heard of Docker – a tool for shipping and running applications. With all the attention it gets nowadays, developers and tech giants like Google are building services to support it. Whether or not you have an immediate use case in mind for Docker, here’s a compiled list of the 15 most popular Docker containers. 1. Alpine It is a minimal image based on Alpine Linux with a package index. It is only 5 MB in size and built on musl libc and BusyBox. The image has access to a package repository that is much more complete than those of other BusyBox-based images. Alpine Linux is a great image base for utilities and production applications. Read more about Alpine Linux here. 2. BusyBox Coming in somewhere between 1 and 5 MB in on-disk size (depending on the variant), BusyBox is a very good ingredient for crafting space-efficient distributions. BusyBox combines many common UNIX utilities into a single small executable. The utilities have fewer options than their full-featured GNU counterparts; however, the included options provide the expected functionality and behave like their GNU equivalents. As a result, BusyBox provides a fairly complete environment for any small or embedded system. Read more about BusyBox here. 3. Nginx Nginx is an open-source reverse proxy server, a load balancer, and an origin server. It runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, and other *nix flavours. It also has a proof-of-concept port for Microsoft Windows. If you are still determining your needs, you should use this one. It is designed to be a throwaway container and the base to build other images. Read more about Nginx here. 4. Ubuntu Ubuntu is the world’s most popular operating system across public clouds and OpenStack clouds. In addition, the container platform can run your containers at scale quickly and securely. Read more about Ubuntu here. 5. Python Python incorporates modules, exceptions, dynamic typing, high-level data types, and classes.
It also works as an extension language for applications needing a programmable interface. It is portable and runs on many Unix variants, as well as on macOS and on Windows 2000 and later. For many simple, single-file projects, you may not need to write a complete Dockerfile; in such cases, you can run a Python script directly using the Python Docker image. Read more about Python here. 6. PostgreSQL PostgreSQL, often called “Postgres”, handles workloads ranging from single-machine applications to internet-facing applications with many users. The image uses many environment variables, which are easy to miss. The only required variable is POSTGRES_PASSWORD; the rest are optional. Note: the Docker-specific variables only take effect if you start the container with an empty data directory; any pre-existing database will be left untouched on container startup. Read more about Postgres here. 7. Redis Redis is an open-source, networked data store with optional durability. For easy access via Docker networking, “protected mode” is switched off by default. Hence, if you expose the port outside your host (e.g., via -p on docker run), it will be accessible to anyone without a password. Therefore, setting a password (by supplying a config file) is highly recommended. Read more about Redis here. 8. Apache httpd Apache is a web server application that played an integral role in the initial growth of the internet. This image contains only Apache httpd with the defaults from upstream. No PHP is installed, but it should be easy to extend. On the other hand, if you want PHP with Apache httpd, see the PHP image and look at the -apache tags. To run an HTML server, add a Dockerfile to the project where public-html/ is the directory containing all the HTML. Read more about Apache httpd here. 9. Node Node.js is a platform for server-side and networking applications. Applications written in JavaScript can be run within the Node.js runtime on Mac OS X, Windows, and Linux without changes.
Node.js contains a built-in, asynchronous I/O library for file, socket, and HTTP communication. The HTTP and socket support allows Node.js to act as a web server without additional software such as Apache. Read more about Node.js here. 10. MongoDB MongoDB is an open-source database program which uses JSON-like documents with schemata. The MongoDB server in the image listens on the standard MongoDB port, 27017; connecting via Docker networks works the same as connecting to a remote mongod. Read more about MongoDB here. 11. MySQL MySQL has become a leading database for web-based applications, covering the entire range of personal projects and websites. Starting a MySQL instance is simple: $ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag Read more about MySQL here. 12. Memcached Memcached is a distributed memory caching system. Its APIs provide a large hash table distributed across several machines. When the table is full, older data is purged in least-recently-used order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store. Read more about Memcached here. 13. Traefik Traefik is an HTTP reverse proxy and load balancer for easily deploying microservices. It integrates automatically with the existing Docker infrastructure and configures itself dynamically. Pointing Traefik at your orchestrator should be the sole configuration step. Read more about Traefik here. 14. MariaDB MariaDB Server is a popular open-source database server made by the original MySQL developers. Starting a MariaDB instance with the latest version is simple: $ docker run --detach --name some-mariadb --env MARIADB_USER=example-user --env MARIADB_PASSWORD=my_cool_secret --env MARIADB_ROOT_PASSWORD=my-secret-pw mariadb:latest or: $ docker network create some-network $ docker run --detach --network some-network --name some-mariadb --env MARIADB_USER=example-user --env MA Read more about MariaDB here. 15.
RabbitMQ RabbitMQ is open-source message broker software that implements the Advanced Message Queuing Protocol. It stores data based on the “node name”, which defaults to the hostname. For usage in Docker, we should specify -h/--hostname explicitly for each daemon so that it doesn’t get a random hostname and we can keep track of the data. Read more about RabbitMQ here.
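The Redis advice above (protected mode is off by default, so a password should be supplied via a config file) can be made concrete. A minimal sketch, assuming Docker is installed and a running daemon is available; the container name, port mapping, and password are illustrative, not taken from the article:

```shell
# Write a minimal redis.conf that sets a password (illustrative value).
cat > redis.conf <<'EOF'
requirepass change-me-please
EOF

# Start Redis with that config mounted in. Without requirepass, anyone
# who can reach the published port could issue commands to the server.
docker run -d --name some-redis \
  -p 6379:6379 \
  -v "$PWD/redis.conf:/usr/local/etc/redis/redis.conf" \
  redis redis-server /usr/local/etc/redis/redis.conf
```

This is a config/CLI fragment rather than a portable script: it needs a Docker daemon, and the published port should normally be restricted further (e.g., bound to localhost) in production.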
Here is a list of the 15 most recommended Docker containers in 2024 for running applications in your next project.
["AI Trends"]
["Top Trend"]
Tasmia Ansari
2022-11-08T17:00:00
2022
1,005
["PostgreSQL", "Top Trend", "AI", "MongoDB", "ML", "docker", "microservices", "Python", "SQL", "R", "Redis"]
["AI", "ML", "docker", "microservices", "Redis", "MongoDB", "PostgreSQL", "Python", "R", "SQL"]
https://analyticsindiamag.com/ai-trends/top-docker-containers/
3
10
2
true
true
false
10,171,626
Google offers buyout option to employees to reduce workforce
Google extended buyout offers to employees across various divisions on Tuesday, including those in the knowledge and information, central engineering, marketing, research, and communications teams, according to CNBC. The knowledge and information unit, or K&I, encompasses Google’s search, ads, and commerce sectors. This latest buyout initiative is part of the company’s ongoing efforts to decrease its workforce, following the layoff of 12,000 employees earlier in 2023. The number of employees affected by this recent round of buyouts could not be verified. Previously, The Information reported that buyout offers were made to workers in the search and ads division. “Earlier this year, some of our teams introduced a voluntary exit program with severance for US-based Googlers, and several more are now offering the program to support our important work ahead,” Google spokesperson Courtenay Mencini told CNBC. The “voluntary exit program” is available to US-based employees, and some teams are also requiring remote workers living within 50 miles of an office to return to the workplace. The company stated that these workers must follow a hybrid work model “to encourage more in-person collaboration,” the media outlet reported. According to a memo reviewed by CNBC, Google executive Nick Fox hopes that passionate Google employees do not take up this opportunity, as the company has “ambitious plans and tons to get done.” However, Fox added that “this VEP offers a supportive exit path for those of you who don’t feel aligned with our strategy, don’t feel energised by your work, or are having difficulty meeting the expectations of your role.” AIM also reported that companies in the US, such as Microsoft, IBM, PwC, Crowdstrike, and other big names, have already laid off thousands of employees across their workforce in the first half of 2025. Notably, Microsoft alone has eliminated 6,000 employees, representing nearly 3% of its workforce.
While the number of employees affected by the buyouts is not yet confirmed, the offer seeks to cut hundreds of Googlers as part of its voluntary exit program.
["AI News"]
["Big Tech Layoffs", "Google buyouts"]
Smruthi Nadig
2025-06-11T17:20:22
2025
302
["Go", "programming_languages:R", "AI", "programming_languages:Go", "RAG", "Aim", "Google buyouts", "Big Tech Layoffs", "R"]
["AI", "Aim", "RAG", "R", "Go", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/google-offers-buyout-option-to-employees-to-reduce-workforce/
2
7
1
false
false
false
10,061,937
Talking Ethical AI with Siddharth Bhardwaj, co-founder and CTO, Beatoven.ai
AI-driven tech startup Beatoven.ai is a platform for music composers and content creators to create royalty-free, affordable, easy-to-licence, exclusive music. Beatoven.ai’s web-based application allows content creators to create and customise tunes from a vast library of mood-based tunes across genres. The platform is user-friendly, and even individuals with little or no knowledge of music can create original soundtracks with the help of AI. “We aim to create a customisable AI-based tool that enables our users to create the soundtrack they have in their minds,” said Siddharth Bhardwaj, co-founder and CTO, Beatoven.ai. In an exclusive interview with Analytics India Magazine, Siddharth Bhardwaj spoke about how Beatoven.ai embeds ethics in its processes. AIM: Tell us how Beatoven.ai leverages AI. Siddharth Bhardwaj: Firstly, we use ML and AI systems to accurately tag the music data that our partner artists submit to us. Then, we are building our custom music composition and production algorithms for regional (currently limited to Indian) and Western genres. The mix of music theory rules and signal processing is tied together with our continuously evolving AI algorithms. These composition algorithms use our music sample library to create a unique track based on user inputs like duration, genre, and moods for different sections of the content. AIM: Elaborate on the AI governance methods, techniques, and frameworks Beatoven.ai uses to ensure the best possible experience for users. Siddharth Bhardwaj: The data we use to train our models or create compositions has been sourced through our artist partnerships. Music licensing and royalties are complicated, and hence we work with many independent artists to source our own data, and we own the copyrights to all the tracks we use to create our compositions. Being musicians ourselves, we wanted to create monetisation opportunities while creating a human-AI collaborative tool that solves a real problem.
We use the Indian classical music theory established over centuries like ragas, thaats and their associations with moods and incorporate this with AI to help compose music; the same is applicable for western music theory, where we use associations of scales and moods and their equivalent visual representations. This accurately represents each geography, and the plan is to expand this globally. We take pride in studying these intricacies and hopefully accurately representing them in our tool. We regularly evaluate our compositions on several parameters by expert musicians in each genre/region and improve our algorithms accordingly. AIM: What explains the growing conversation around AI ethics, responsibility, and fairness? Siddharth Bhardwaj: Being at the forefront of AI in music, we have a responsibility to be fair to the artists contributing to the platform, and we take this very seriously at Beatoven.ai. Our vision is to become an AI-powered platform for all the musical cultures of the world, and we work with many amazing local artists to try and represent these cultures as accurately as possible. In AI music, the most important question is about the ownership of the tracks being created – does the ownership lie with the artists, developers, users of the tool or the company? We own all the data that we use to develop our composition algorithms and then licence the final tracks to our users. We also incorporate the music theory rules wherever possible to enhance the algorithms and accurately represent the genre/styles of music being composed. These rules have been in existence for centuries, and no single entity has ownership over them. This approach also helps us normalise any biases creeping into our algorithms. AIM: How do you ensure compliance of AI governance policies and best practices at Beatoven.ai? 
Siddharth Bhardwaj: We take great care in ensuring that the musical data we collect does not infringe on any past copyrights by manually and algorithmically checking them. We work with our in-house music producers and industry experts to help us develop our AI algorithms and evaluate the quality of music being produced. We are a monetisation channel for our artists who put in the effort of producing and tagging our music data. This enables our algorithms to produce diverse music while accurately capturing the artist’s intent. AIM: Do you have a due diligence process in place to make sure the data Beatoven.ai uses is collected ethically? Siddharth Bhardwaj: Currently, we have in-house music producers who work closely with the artists to source the specific types of musical samples and tracks that we require. We pay the artists up-front for their efforts in producing and tagging these samples. While sourcing our music, we ask the artists to tag the data. Then, our in-house producers use proprietary algorithms to double-check this data and ensure correctness. With every release, we try to talk to as many users as possible to find any ethical concerns or inaccuracies in our tool. We also regularly work with expert musicians in each genre/style/region to evaluate and appropriately represent the music. AIM: How does Beatoven.ai protect user data? Siddharth Bhardwaj: We ask only tool-related questions to our users, like the channels where they intend to use the music created on Beatoven.ai. Some other questions are related to improving the product. We store these in a secure database managed by AWS. Also, we take great precautions in ensuring the video or podcasts that our users upload on our platform are stored securely on our servers using the best industry practices. For user authentication and login, we use Auth0, which is the industry standard and provides the best possible security.
We own all the data that we use to develop our composition algorithms.
["AI Features"]
["AI fairness", "ai governance", "AI in music", "consumer protection", "Data Governance", "Data Privacy", "data rights", "digital transformation", "Ethical AI", "Interviews and Discussions", "modernization"]
Sri Krishna
2022-03-02T16:00:00
2022
902
["consumer protection", "data rights", "ai governance", "AI governance", "R", "digital transformation", "AI ethics", "RAG", "Data Governance", "analytics", "Go", "AWS", "AI", "Ethical AI", "ML", "Data Privacy", "AI fairness", "AI in music", "modernization", "Aim", "Interviews and Discussions"]
["AI", "ML", "analytics", "Aim", "RAG", "AWS", "R", "Go", "AI ethics", "AI governance"]
https://analyticsindiamag.com/ai-features/talking-ethical-ai-with-siddharth-bhardwaj-co-founder-and-cto-beatoven-ai/
3
10
3
false
false
false
10,079,722
Satoshi of AI: Kamban, an India-based AI Writing Tool Developer
An India-based developer, who goes by the pseudonym Kamban, has developed an AI writing app for Mac called Elephas. It uses the GPT-3 model to help users write all kinds of content, including business documents, long-form articles, emails, social media posts, and even product reviews. Analytics India Magazine got in touch with Kamban to learn more about this tool. Elephas – AI writing tool There are many copywriting tools like Copy AI and Content AI, but most of them work on the premium model and do not offer the ‘power of GPT-3’, as Kamban puts it. “I built a native app for Mac called Elephas using GPT-3. It works on top of all your Mac applications, and no need to switch windows. I wrote the code in Swift programming language. This app takes your OpenAI keys and uses them for all communications. This is one of the major differences when you compare it with the other AI writing tools, where they usually store your information. Elephas does not send your text information to our servers,” Kamban said. An AI-based writing-cum-research tool for business professionals, Elephas works as follows: the user selects a text and an option from the menu bar (for example, rewrite), and Elephas accesses the selected text and converts it into the requested format. Kamban initially just built a simple wrapper and posted a sample of it on the discussion forum HackerNews. “It kind of exploded, and Elephas was even trending on the homepage of HackerNews for some time,” he said. He admits that even though the version back then did not have a great look, it helped him gain valuable feedback and a few paid customers. Short product demo (2 mins): https://vimeo.com/771041299 The next step was taking end-user feedback. “What I introduced initially was just a wrapper. Now, there is a utils section in the app, which you can use to generate from Google Sheets formulas to presentations.
Let’s say you want to write a presentation on the impact of air pollution in India – just give the title, and it will give you the ten presentation slides. Users can finetune the presentation to best suit their objectives,” explained Kamban. Kamban adopted several methods to optimise the app. “I reviewed the major features our initial users used and adjusted the landing page to reflect them, which resulted in better conversions. I realised a lot of my users did not understand the full potential of the tool. I set up a knowledge base tool and organised all the features in an easy-to-navigate structure. I also created product demo videos, completely created by AI. With these, the support requests dropped,” he said. Kamban has been building side projects for four years without ever making any revenue. For the first time, with the Elephas app, he recently reached $1,000 in MRR over a period of five months. Elephas has over 200 business professionals and content writers using it every day. Playing with AI Elephas is not the only product that Kamban has built. One of his projects, called SwanSearch – a search engine built for developers to provide them with in-place answers – features code search, function search, and JSON validation, apart from letting users view answers on the same page without needing to visit other websites in most cases. FlatGA is another active project that Kamban maintains, which unifies all website metrics, like analytics, SEO, monitoring, and performance, into a single tool. His other projects include a browser tab manager, an AWS cloud savings tool, and a GPT-3 based well-being assistant. Most of his tools are based on GPT-3. “GPT-3 offers features that I couldn’t get anywhere else. For example, it can produce remarkable output for new inputs, which in the past would have needed a huge amount of training data and computing power. GPT-3 reduced the entry barrier for a lot of my projects,” he said.
He further added, “If you notice, businesses are already hiring for a job called ‘Prompt engineers’. We are only getting started with advanced LLMs. GPT-4 could fix some of the shortcomings of GPT-3, such as maintaining factuality, increased memory retention, and making good external references.” Kamban places a lot of trust in the advancement of AI, especially large language models. He said that LLMs are evolving faster than any other AI, which will have a huge impact on the way the world operates – such as more personalised and adaptive learning experiences and increased use of data and analytics to drive decision-making. He concluded that his ultimate goal is making AI accessible to regular people. “I will focus on providing platforms where people can easily use AI’s potential,” he signed off.
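The article describes Elephas sending the user's selected text, with the user's own OpenAI key, to GPT-3. A minimal sketch of that kind of completion request against the (GPT-3 era) OpenAI completions endpoint; the model name, prompt, and key variable are illustrative, and this is not Elephas's actual code:

```shell
# Requires an OpenAI API key in $OPENAI_API_KEY; sends a "rewrite"-style
# prompt to the completions endpoint and prints the JSON response.
curl -s https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-davinci-003",
    "prompt": "Rewrite this sentence more formally: we shipped the fix late.",
    "max_tokens": 60
  }'
```

This is a network-dependent CLI fragment (it needs a valid key), so no output is shown; a desktop app like the one described would make the same kind of request from Swift rather than curl.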
The Elephas app has reached $1000 MRR recently over a period of four months. Elephas has over 200 business professionals and content writers using it every day
["AI Features"]
["Interviews and Discussions"]
Shraddha Goled
2022-11-15T13:00:00
2022
782
["Go", "TPU", "AWS", "OpenAI", "AI", "GPT", "analytics", "Rust", "GAN", "R", "Interviews and Discussions"]
["AI", "analytics", "OpenAI", "AWS", "TPU", "R", "Go", "Rust", "GPT", "GAN"]
https://analyticsindiamag.com/ai-features/meet-satoshi-of-ai-kamban-a-chennai-based-ai-writing-assistant-developer/
3
10
1
true
true
false
8,273
Blockchain Technology – The immutable database revolution
Blockchain technology is quickly becoming the new technological buzz. The finance world is looking at it to create a safer banking environment. It is stated to have the potential to be a new development environment for decentralized applications. It is a database of immutable, time-stamped information about every transaction, replicated on servers across the globe and maintained by a distributed network of computers that requires no central authority or third-party intermediaries. It consists of three key components: a transaction, a transaction record and a system that verifies and stores the transaction. Open-source software generates blocks which record information about the transactions. The “block” chronologically stores information on all the transactions that have taken place in the chain, hence the name blockchain. 5 Key Concepts Decentralized consensus (on or off bitcoin’s blockchain) A decentralized scheme, on which the bitcoin protocol is based, transfers authority and trust to a decentralized virtual network and enables its nodes to continuously and sequentially record transactions on a public “block,” creating a unique “chain”: the blockchain. Each successive block contains a “hash” of the previous block. Using these hashes, cryptography secures the authentication of the transaction source, which removes the need for a central intermediary. Data duplication is avoided through the combination of cryptography and blockchain technology. The Blocks Blockchain behaves like a database where part of the stored information (its header) is public. The block behaves as a container with data placed inside, connected to other blocks through the chain in a semi-private space. Anyone can verify that the information was placed by its author, as it carries the author’s signature, but cannot see what is inside; only the author (or a program) holds the private keys to unlock the data. 
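The hash-linking described here can be sketched in a few lines of Python. This is a toy illustration only (it omits Merkle trees, timestamps, signatures and consensus), and all the names and transactions in it are made up for the example:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON serialisation of the block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """Every block embeds the hash of its predecessor."""
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list) -> bool:
    """Tampering with any block breaks every later link."""
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(chain_is_valid(chain))  # True
genesis["data"] = "genesis (tampered)"
print(chain_is_valid(chain))  # False
```

Changing even one character in an old block changes its hash, so every later block's `prev_hash` stops matching: that is the immutability the article refers to.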
Smart contracts (and smart property) Smart contracts are the building blocks for decentralized applications. The contractual governance of a transaction between two or more parties can be verified using a program via the blockchain rather than via a central arbitrator. This is the basic idea behind smart contracts. The transactions have rules embedded inside them, and these rules are defined between the parties. This enables an end-to-end resolution to be self-managed between computers that represent the interests of the users. Digital assets become smart property that knows its owners, with the ownership linked to the blockchain. Trusted computing (or trustless transactions) The concepts behind the blockchain, decentralized consensus, and smart contracts enable computers to trust one another at a deep level by enabling the spread of resources and transactions laterally in a peer-to-peer (flat) manner. Blockchain acts as the unequivocal validator of transactions, so each peer can proceed and trust one another. The rules of trust, compliance, authority, governance, contracts, law, and agreements live on top of the technology. Proof of work (and proof of stake) The concept of “proof-of-work” lies at the heart of blockchain: it is a “right” to participate in the blockchain system. To change records on the blockchain, users would have to re-do the proof of work. Proof of work can’t be undone and is secured via cryptographic hashes to ensure its authenticity. Proof of work is a key building block. It depends solely on miners’ incentives, which will decline over time, making it expensive to maintain, and it may face future scalability and security issues. “Proof-of-stake” is an upgraded solution that is cheaper to enforce but more expensive and more difficult to compromise. Proof of stake determines who will update the consensus and prevents unwanted forking of the underlying blockchain. 
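The proof-of-work idea can also be illustrated with a toy sketch: finding a nonce whose hash meets a target is costly, while verifying it takes a single hash. The difficulty value and block label below are purely illustrative:

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` hex zeros. Finding it is expensive; checking is cheap."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Re-doing the work is the only way to forge a record."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("block-42")
print(verify("block-42", nonce))  # True
```

With four hex zeros the search takes tens of thousands of hashes on average; Bitcoin's real difficulty is many orders of magnitude higher, which is what makes rewriting history economically infeasible.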
Implementation Business leaders and visionaries need to learn a new vocabulary around crypto-related frameworks; developers need to write decentralized apps enabled by blockchain technology; and end-users need to create or use smart contracts. When the original bitcoin blockchain technology was pushed beyond money-related services into the software applications realm, its limitations were found. Multiple blockchains could be the solution to this. Blockchain can be more than just the internet of money and become a new development environment, as there will be a lot of variety in decentralized apps in terms of size, complexity levels, etc. Disruptions The disruptions can be seen first in the payments space. Transactions occur directly between the buyer and the seller without any intermediary, and validation of the transaction happens in a decentralized way on a “distributed ledger”. This results in significant infrastructure savings for banks by allowing them to bypass payment networks that are oftentimes slow, cumbersome, and expensive. The majority of financial assets such as bonds, equities, etc. are already electronic and may someday be moved to a decentralized structure. The latest innovations are using tokens to store and trade assets like shares, bonds, cars, houses and commodities. Additional information on the asset is attached using the tokens, which generates “smart property” or the ability to record and transact these assets using “smart contracts” enforced by complex algorithms, through distributed platforms without a centralized register, increasing efficiency. In this environment, the current system would be replaced by a fully decentralized financial system. Conclusion The big banks are looking to use the technology as it makes fraud more difficult. The project to test blockchain-like technology is being led by financial technology firm R3. 
By adopting the technology banks could cut the cost of reporting transactions and working out who bought what and when. The technology cuts out the need for a “trusted middleman” to sit in between parties in a transaction as it acts as that middleman. This makes transactions quicker, cheaper, and easier when compared to the current systems banks use. Banks are therefore keen to see if it can be adapted for use with traditional currency, rather than just bitcoin. 13 more banks have joined the coalition of leading Wall Street and City names looking to take the technology that underpins bitcoin to the mainstream financial sector. Bank of America, BNY Mellon, Mitsubishi UFJ Financial Group, Citi, Commerzbank, Deutsche Bank, HSBC, Morgan Stanley, National Australia Bank, Royal Bank of Canada, SEB, Societe Generale and Toronto-Dominion Bank have all joined the partnership.
Blockchain technology is quickly becoming the new technological buzz. The finance world is looking at it to create a safer banking environment. It is stated to have the potential to be a new development environment for decentralized applications. It is a database of immutable time-stamped information of every transaction that replicated on servers across the […]
["IT Services"]
["Analytics Case Study", "bitcoin stock", "database key terms", "distributed graph database", "Ethical Hacking"]
Bitanshu Das
2015-11-17T09:37:48
2015
1,011
["Replicate", "Go", "database key terms", "AI", "innovation", "Scala", "Git", "GAN", "distributed graph database", "bitcoin stock", "disruption", "Rust", "Analytics Case Study", "Ethical Hacking", "R"]
["AI", "R", "Go", "Rust", "Scala", "Git", "GAN", "innovation", "disruption", "Replicate"]
https://analyticsindiamag.com/it-services/blockchain-technology-the-immutable-database-revolution/
3
10
1
false
false
false
21,902
Adobe To Set Up Advanced AI Lab In Hyderabad
File photo of Adobe CEO Shantanu Narayen. American multinational computer software giant Adobe on Monday announced that it will set up an advanced artificial intelligence laboratory in Hyderabad soon. Shantanu Narayen, CEO of Adobe Systems, said in a statement, “Adobe is thrilled to announce that we are starting an advanced AI lab in Hyderabad. The abundance of tech talent in Hyderabad, coupled with the pro-business stance of minister KT Rama Rao, makes this an exciting initiative for growth for Adobe.” The announcement was made after a meeting between Telangana IT Minister KT Rama Rao and Narayen at the Nasscom India Leadership Forum (NILF) in Hyderabad. This year, the three-day event is also hosting the World Congress on Information Technology — a global event that is making its debut in India. Rao confirmed the news from his verified Twitter account: Excited to announce Adobe is starting an advanced AI lab in Hyderabad. A global leader providing content creation & enterprise experience software solutions entering Hyderabad is great asset for the local ecosystem Thanks a ton to Adobe Chairman & dear friend Shantanu Narayan 🙏 pic.twitter.com/fLL9reh9zh — KTR (@KTRTRS) February 19, 2018 Reportedly, KTR had been following up with Adobe on setting up an AI lab in Hyderabad ever since he met Narayen in 2015. An official press release from the Telangana IT department quoted Narayen as saying, “We are starting an advanced AI Lab in Hyderabad. As the global leader providing content creation and enterprise experience software solutions, driving innovative products is the core essence of our company.” Adobe is known for its products such as Photoshop, PageMaker, and Acrobat Reader, among others. Going along with the Centre’s BharatNet programme, the Telangana government showcased its ambitious Fiber Grid (T-Fiber) program at the WCIT 2018. 
The pilot of the program, which covers around 50 government offices and households in four villages of Ranga Reddy district, was launched by Union IT Minister Ravi Shankar Prasad along with KTR. The news comes hot on the heels of Andhra Pradesh chief minister N Chandrababu Naidu announcing that he wanted to make Amaravati “India’s centre for cloud management, artificial intelligence, data analytics, cyber security, blockchain, healthcare etc.” AP currently shares its de jure capital Hyderabad with Telangana, which was formed in 2014.
American multinational computer software giant Adobe on Monday announced that it will set up an advanced artificial intelligence laboratory in Hyderabad soon. Shantanu Narayen, CEO of Adobe Systems, said in a statement, “Adobe is thrilled to announce that we are starting an advanced AI lab in Hyderabad. The abundance of tech talent in Hyderabad, coupled […]
["AI News"]
["Adobe", "Hyderabad", "telangana"]
Prajakta Hebbar
2018-02-20T07:22:19
2018
376
["Go", "API", "artificial intelligence", "programming_languages:R", "AI", "Adobe", "telangana", "programming_languages:Go", "Hyderabad", "Ray", "analytics", "GAN", "R"]
["AI", "artificial intelligence", "analytics", "Ray", "R", "Go", "API", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/adobe-ai-lab-hyderabad-ktr/
2
10
2
false
false
false
10,126,028
AWS’ Risky $50 Mn Generative AI Bet to Transform the Public Sector
At the recently concluded AWS Summit held in Washington, DC, Dave Levy, vice president of AWS public sector, announced a significant new program: the AWS Public Sector Generative Artificial Intelligence (AI) Impact Initiative. This $50 million investment aims to accelerate AI innovation across government, nonprofit, education, healthcare, and aerospace sectors, along with comprehensive training and technical expertise. The initiative, which will run from June 26, 2024, through June 30, 2026, will leverage AWS’s suite of generative AI services and infrastructure, which includes cutting-edge technologies such as Amazon Bedrock, Amazon Q, Amazon SageMaker, AWS HealthScribe, AWS Trainium, and AWS Inferentia. AWS will consider factors such as the customer’s technology experience, project maturity, evidence of future adoption, and generative AI skills when determining credit issuance. Public sector leaders are increasingly turning to generative AI to address pressing challenges. These include resource optimisation, evolving societal needs, improving patient care, personalising education, and enhancing security measures. The AWS initiative aims to support these efforts by providing the necessary tools and resources. “This initiative builds on our ongoing commitment to the safe, secure, and responsible development of AI technology,” said Levy, citing the National Artificial Intelligence Research Resource (NAIRR). Why Only $50 Mn? “We aimed to set a mark high enough to enable significant projects, from small proofs of concept (PoCs) to full-scale production, without excluding smaller participants,” Levy told AIM, saying that $50 million is only for its first initiative, with adjustments possible based on future success and demand. Meanwhile, Microsoft Azure, Google Cloud, and Oracle are providing cloud services for the public sector with Microsoft Azure Government, Google Cloud Public Sector, and Oracle cloud infrastructure, respectively. 
However, the AWS initiative is unique as no other big-tech cloud providers offer similar programs. Levy believes that the best metric to measure the success of this initiative is to attract as many public sector players as possible. “This is only available to support public sector initiatives, and so we hope it gets fully subscribed,” said Levy, adding that once this is done, they can do even more in the future for a bigger programme. “We’re just at the very beginning around the world in generative AI; I think we’re really just getting started,” he added. Offers Flexibility to Customers like no other AWS has a history of working closely with public sector organisations in India, including MeitY, the health and family welfare ministry, and the Telangana and Madhya Pradesh governments, among others. Now, with generative AI, the potential is huge. According to AWS, the service “offers access to high-performing foundation models from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API. It also provides a broad set of capabilities necessary for building generative AI applications with a focus on security, privacy, and responsible AI,” shared Levy. AWS has also introduced several other major partnerships and initiatives in the generative AI space: The AWS Generative AI Competency Program: Launched to meet the rising demand for generative AI solutions, this specialised program aims to facilitate the swift adoption of generative AI among AWS customers. Collaboration with Shellkode: A partnership to train 100,000 women developers in generative AI. Partnership with Accel: An initiative to support generative AI startups in the Asia-Pacific and Japan region. Levy emphasised the significance of customer trust, saying, “Customers trust us to maintain the integrity of their most sensitive assets and their most sensitive missions. 
We prioritise governance while exploring and incorporating emerging technologies, like generative AI.”
Companies like Microsoft, Google, and IBM provide cloud services for the public sector; however, the AWS initiative is unique and first of its kind.
["AI Features"]
["Amazon Bedrock", "Generative AI"]
Gopika Raj
2024-07-05T19:09:13
2024
583
["Anthropic", "Amazon Bedrock", "artificial intelligence", "Amazon SageMaker", "AWS", "AI", "RAG", "Aim", "generative AI", "Generative AI", "Azure", "foundation models"]
["AI", "artificial intelligence", "generative AI", "foundation models", "Anthropic", "Aim", "Amazon SageMaker", "RAG", "AWS", "Azure"]
https://analyticsindiamag.com/ai-features/aws-risky-50-mn-generative-ai-bet-to-transform-the-public-sector/
2
10
2
false
false
false
10,008,942
5 Tools & Frameworks That Can Clear Bias From Various Datasets
Algorithmic bias in AI and machine learning models is a problem that many researchers are trying to fix by creating tools and frameworks to identify and eventually mitigate it. The common biases that exist are, for instance, gender bias and racial bias, among others. As machine learning models are trained on human-generated data, eliminating bias entirely is impossible. However, researchers are actively working on preventing it by developing tools to identify and work on it. Recently, researchers at Princeton University developed a tool that identifies potential biases in image datasets that are used to train AI systems such as computer vision models. The open-source tool, called REVISE, can automatically uncover potential bias in visual datasets. The findings by researchers at the Princeton Visual AI Lab build on a more effective way to mitigate bias that they had suggested earlier. REVISE, or REvealing VIsual biaSEs, uses statistical methods to study the dataset and identify potential bias across three dimensions — object-based, gender-based and geography-based. As the researchers mentioned, it works by filtering and balancing a dataset’s images in a way that requires more direction from the user. “It uses existing image annotations and measurements such as object counts, the co-occurrence of objects and people, and images’ countries of origin to study the bias,” the researchers noted. In research dating back to February 2020, researchers from Princeton and Stanford University addressed bias in AI by developing methods to obtain fairer datasets containing images of people. The work suggested improvements to ImageNet, a database of more than 14 million pictures used extensively for developing computer vision models. It could identify non-visual concepts and offensive categories, such as racial and sexual characteristics, among ImageNet’s person categories and proposed removing them from the database. 
While these developments to identify bias in image datasets are revolutionising the area of computer vision, we bring five more such tools and frameworks that are being extensively used to identify and remove bias in AI and ML models. FairML A framework to identify bias in ML models, FairML works by finding the relative significance and importance of features used in the machine learning model to detect bias in linear and non-linear models. It can work upon attributes such as gender, race, religion and others to find out data that may be biased. It works by auditing predictive models, quantifying the relative significance of the model’s inputs, which helps in assessing the fairness of the model. Know more about it here. IBM AI Fairness 360 This open-source toolkit by IBM helps mitigate bias in massive datasets, as it is built on more than 70 fairness metrics and 10 bias mitigation algorithms. These bias algorithms work in areas such as re-weighting and optimised preprocessing, among others. A developer can apply these bias mitigation algorithms to assess fairness and compare with the original model. An open-source toolkit, it can be used to examine, report, and mitigate discrimination in ML models throughout the AI application lifecycle. Know more about it here. Accenture’s “Teach and Test” Methodology Launched in 2018, this framework by Accenture ensures that AI systems are producing the right decisions in two phases — teach and test. While the former focuses on the choice of data, models and algorithms used to train machine learning, the latter works on AI model scoring and evaluation. It experiments with and statistically evaluates different models to select the best-performing model to be deployed into production, while overcoming bias or risks of any form. Mostly used in financial services, it achieves an 85% accuracy rate on customer recommendations. Read more about it here. 
Google’s What-If Tool This interactive open-source tool by Google allows a user to investigate machine learning models visually. A part of the open-source TensorBoard, it can analyse datasets in addition to trained TensorFlow models. It provides an understanding of how models work under different scenarios and builds rich visualisations to explain model performance. Its bias-detecting feature allows the user to manually edit samples from a dataset and study the effect of these changes through the associated model. Its algorithmic fairness analysis can detect features and discover patterns that were previously not identifiable. Explore the tool here. Microsoft’s Fairlearn An open-source toolkit by Microsoft, it allows AI researchers and data scientists to detect and correct the fairness of their AI systems. With two components — an interactive visualisation dashboard and bias mitigation algorithms — this tool works on improving fairness and model performance quite drastically. As the company notes, prioritising fairness in AI systems is a sociotechnical challenge, and the goal of this tool is to mitigate fairness-related harms as much as possible. Know more about it here.
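To make the metrics behind such toolkits concrete, here is a minimal sketch of one of the simplest fairness measures, disparate impact: the ratio of favorable-outcome rates between groups. The toy data, group labels and the 0.8 rule of thumb are illustrative; this is not the API of any of the toolkits above:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A common rule of thumb flags values below 0.8 as potential bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy hiring data: 1 = favorable outcome (hired), 0 = rejected.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
di = disparate_impact(outcomes, groups, privileged="A")
print(round(di, 2))  # 0.25
```

Here group A is favored at a rate of 0.8 and group B at 0.2, so the ratio of 0.25 falls well below the 0.8 threshold; toolkits like AI Fairness 360 compute many such metrics and then apply mitigation algorithms such as re-weighting.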
Algorithmic bias in AI and machine learning models is a problem that many researchers are trying to fix by creating tools and frameworks to identify them and eventually mitigate them. The common biases that exist are, for instance, gender-bias, racial-bias, among others. As machine learning models are trained on human-generated data, eliminating bias entirely is […]
["AI Trends"]
["AI bias", "business analysis tools", "how artificial intelligence works"]
Srishti Deoras
2020-10-06T16:00:42
2020
780
["Go", "how artificial intelligence works", "fairness in AI", "machine learning", "AI", "ML", "business analysis tools", "computer vision", "ai_frameworks:TensorFlow", "AI research", "TensorFlow", "R", "AI bias"]
["AI", "machine learning", "ML", "computer vision", "TensorFlow", "R", "Go", "fairness in AI", "AI research", "ai_frameworks:TensorFlow"]
https://analyticsindiamag.com/ai-trends/5-tools-frameworks-that-can-clear-bias-from-various-datasets/
3
10
1
false
false
true
28,221
How These 2 Entrepreneurs Are Making Conversation-as-a-Service Platforms A Reality For Enterprises
Ram Menon, CEO & Founder and Sriram Chakravarthy, CTO & Co-Founder, Avaamo After working with integrated middleware systems at companies like SAP, Apple and TIBCO, Ram Menon and Sriram Chakravarthy realised that the next jump in enterprise software would be conversational interfaces. These platforms would be able to talk and ask questions within enterprise systems. This is when they decided to take it a step further and founded Avaamo in 2014. With their belief in conversational interfaces at the enterprise level, they have built a new platform to offer conversation-as-a-service (CaaS) as a new kind of infrastructure. And since 2017, they have been leading the way in the deep learning space, specialising in conversational interfaces that solve specific, high-impact problems. The team, spread across Los Altos (California) and Bengaluru, has developed fundamental AI technology across a broad area of neural networks to make conversational computing for enterprises a reality. Leading India’s AI Revolution Since its inception, the company has been providing services to industries such as banking, insurance and telecom. Many financial institutions adopted Avaamo’s AI solution to implement conversational AI-based interfaces, owing to offerings such as security, integration, and an ability to converse intelligently. The two main offerings by Avaamo in conversational tech are: Virtual assistants Conversational IVR “Our enterprise AI platform dramatically simplifies the time needed to design and deploy bots to corporate employees and their customers. Avaamo’s tightly integrated platform uniquely combines tooling, data, and enterprise connectors to ensure designers, data scientists, and developers can design and deploy complex conversational interfaces in weeks”, said Ram Menon, in an interaction with Analytics India Magazine. 
Their industry-first capabilities include a comprehensive, easy-to-use AI platform that offers the following: Comprehensive NLP based AI engine. Behaviour libraries designed to detect hate, frustration, praise and a gamut of other emotions. Deep domain machine learning models in banking, insurance, telco, and healthcare. Integration to Legacy Systems and Systems of Record. Broad deployment options — deploy to messaging solutions, websites and portals. Enterprise-grade security, entitlements, and scalability including HIPAA & GDPR compliance. They have a wide range of clients such as SBI Mutual Funds, Reliance Nippon, Reliance Capital, HDFC Loans, ICICI Prudential, IFFCO-Tokyo, Birla Sunlife, Aditya Birla Capital, Axis Bank, City Union Bank, RBL, Ashok Leyland, India First, and other financial institutions. Avaamo Believes That Conversational AI Is The Need Of The Hour Sriram Chakravarthy shares that conversational AI is being applied by many industries and depending on the value proposition, investors are looking for real tangible benefits of the technology in terms of “making” money or “saving” money. “In the case of AI, an ecosystem is building up to support the deployment of technology to corporate enterprises. Avaamo has emerging ecosystems around AI supporting large enterprises, invest and validate our technology,” said Chakravarthy. Avaamo has been trusted by major players and many companies have made major investments in the company, such as: Intel, which supplies new hardware for AI-based technology and workload Ericsson, which carries 40 percent of the world’s mobile traffic Wipro has built a robust delivery capability around AI and Avaamo, it also provides reassurance to customers that they can implement the technology “We have taken a different approach where the AI is language independent. So we can take a variety of languages and ‘teach’ the bot a new language in a few days. 
We already support pidgin languages like Hinglish (Hindi in English characters), Banglish (Bengali in English characters) and Chinglish (Chinese in English characters),” he said. Technology Stack As Avaamo is highly invested in offering deep learning solutions specialising in conversational interfaces to solve specific, high-impact problems, it needs a highly efficient set of tools and technologies to achieve this. Menon shares that since they have a unique offering, much of the technology in this area had to be invented. They developed fundamental AI technology across a broad area of neural networks, speech synthesis and deep learning to make conversational computing for the enterprise a reality. “With seven patents and counting, we are building a new technology stack”, he shared. Growth Story And Funding Menon believes that in an emerging market such as artificial intelligence, the key has been to hire smart people in order to build a highly differentiated product in the market. With powerhouse teams in its Bengaluru and Silicon Valley offices, the company has been able to get some of the brightest machine learning and AI engineers on its team. As Avaamo’s technologies have been deployed in over 40 countries including India, in 11 languages, with more than 2 million interactions per week, it has raised over $24 million from some of the most sophisticated technology investors including Intel Capital, Ericsson, Wipro Ventures and Mahindra Partners. With an aim to push ahead into various areas where conversational interfaces can be implemented, the startup team is working hard to enable potential customers, employees and stakeholders to interact with one another in the business world efficiently. “We’re committed to making it incredibly easy for businesses to implement this paradigm shift in user computing”, Menon said on a concluding note.
After working with integrated middleware systems at companies like SAP, Apple and TIBCO, Ram Menon and Sriram Chakravarthy realised that the next jump in enterprise software would be conversational interfaces. These platforms would be able to talk and ask questions in the enterprise systems. This is when they decided to take it a step further […]
["Deep Tech"]
[]
Srishti Deoras
2018-09-11T12:33:36
2018
852
["machine learning", "artificial intelligence", "AI", "neural network", "virtual assistants", "NLP", "Aim", "deep learning", "analytics", "R"]
["AI", "artificial intelligence", "machine learning", "deep learning", "neural network", "NLP", "analytics", "Aim", "virtual assistants", "R"]
https://analyticsindiamag.com/deep-tech/how-these-2-entrepreneurs-are-making-conversation-as-a-service-platforms-a-reality-for-enterprises/
3
10
5
false
false
false
8,066
People Analytics for Employee Engagement
The last few years have seen the emergence of analytics as a potential force for driving data-based decision making in HR. Traditionally, Human Resource Management forms the policies and practices that define the workplace culture of the company. It deals with identifying, hiring, training, motivating, and retrenchment of the workforce. Human Capital Management, on the other hand, is the strategic discipline that systematically seeks to analyze, measure and evaluate how these policies and practices create value. HR Analytics is one such practice under HCM. It is a combination of software and methodology that applies statistical models to worker-related data, allowing enterprise leaders to optimize human resource management. Genpact has been using people analytics successfully to drive employee engagement over the last 2 years. Gallup, a research-based global performance management consulting company, found in its recent survey that concentrating on employee engagement can help companies withstand, and possibly even thrive in, tough economic times. It advocates that companies with high engagement see a 20% boost in productivity and profitability. It also found that globally, only 13% of workers were engaged. Engagement reflects an organization’s operational capabilities from the viewpoint of its employees, covering Leadership, Change Advocacy, Work Culture and other important competencies. It can help highlight issues like reasons for attrition and incompetent leadership. It can also predict well in advance the dissatisfied or unhappy employees who might eventually attrite, or whose performance suffers due to lack of focus, finally leading to firing. In both cases, the company ends up exhausting vital resources like money and workforce in filling up those places again. It is a fact that retention is always cheaper than recruiting. 
Therefore, if unhappy, dissatisfied or disengaged employees could be predicted, it will help in forming an effective retention policy specific to individual problems, or at least help in understanding the problem itself if the employee cannot be retained. With HR more and more becoming a strategic partner in helping a company achieve its goals, organizations today are looking at employee engagement to be a driver of business outcomes. With tools to manage onboarding, performance management, succession planning, workforce analytics and HR management, Howden, an engineering firm, has increased HR process efficiencies and retention of its top talent, while decreasing time-to-productivity for new employees and costs associated with employee turnover. Genpact has worked extensively in the area of attrition. The basic premise they work on is that dissatisfaction leads to unhappiness at work, which causes disengagement and ultimately attrition. The analysis at Genpact begins with a complete database of employees containing details since the time of their joining. Along with various attributes like age, gender, date of joining, date of birth, date of leaving, days in band etc., they also collect a first-line manager survey (FLM Survey), which is designed to get ratings of managers from subordinates based on six leadership competencies, namely: Business Acumen, Change Advocacy, People Leadership, Execution, Effective Communication, Customer Centricity. Apart from this, they also have performance ratings of the employees, which categorize them according to their potential. They also undertake a lot of engagement initiatives and keep records of the employees who participate. Training records are also available for employees who have undertaken trainings, along with their performance on those and their other achievements. Genpact has found a positive relationship between the performance ratings and the FLM surveys. 
Using this data, they were able to identify the factors that are important for employee retention and that enhance an employee's sense of engagement. Once this is done, they can better engage employees by using the analytics in the following ways: they can prioritize, out of the basket of engagement activities, the initiatives that have the maximum impact, and redirect investments to the more beneficial ones. Using analytics, they can also predict the employees who are at the highest risk of leaving the organisation within the next six months, and design interventions to retain or re-engage them to minimise the risk of attrition. This is very important for them, as they invest a lot of resources in training and developing their employees; every employee who leaves is a drain on those resources.
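Genpact's actual model is proprietary, but the kind of attrition scoring described above can be sketched with a simple logistic-regression classifier. The sketch below is illustrative only: the two features (a low-FLM-rating flag and time since the last engagement activity) and the toy data are invented, not from Genpact.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit weights w and bias b by stochastic gradient descent on the log-loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def attrition_risk(w, b, x):
    """Estimated probability that an employee with features x attrites."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Invented toy data: [low_flm_rating (0/1), years_since_last_engagement_activity]
X = [[1, 0.9], [1, 0.8], [0, 0.1], [0, 0.2], [1, 0.7], [0, 0.3]]
y = [1, 1, 0, 0, 1, 0]  # 1 = attrited within six months, 0 = stayed

w, b = train_logistic(X, y)
high_risk = attrition_risk(w, b, [1, 0.9])  # poorly rated manager, long disengaged
low_risk = attrition_risk(w, b, [0, 0.1])   # well rated manager, recently engaged
print(high_risk > low_risk)
```

In practice one would use a library such as scikit-learn and far richer features (performance ratings, training records, days in band), but the principle is the same: score each employee's probability of leaving within the next six months, then target retention interventions at the highest-risk group.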
The last few years have seen the emergence of analytics as a potential force for driving data-based decision making in HR. Traditionally, Human Resource Management forms the policies and practices that define the workplace culture of the company. It deals with identifying, hiring, training, motivating, and retrenchment of the workforce. Human Capital Management on the […]
["IT Services"]
[]
Dr Poornima Gupta
2015-10-14T05:10:14
2015
696
["Go", "API", "programming_languages:R", "AI", "programming_languages:Go", "ViT", "analytics", "GAN", "R"]
["AI", "analytics", "R", "Go", "API", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/people-analytics-for-employee-engagement/
2
9
2
false
false
true
10047748
Conspiracy Theorists Say The Internet Has Been Dead Since 2016
What if one were to tell you that your virtual world, the internet, has also been dead for a while, and your whole life is a work of fiction? The chances are that you might be reading this in a computer-generated reality, and the world as you know it is all a lie. Welcome to the Dead Internet theory. If you are a Twitter user, you might have come across the phrase ‘I hate texting’, which was recently doing the rounds on the social media platform. The Dead Internet conspiracy theory posits that the internet, in its current form, is mainly generated by artificial intelligence networks and feels devoid of human touch. The Twitter threads fall right into place with this theory. Anyone who has grown up on the internet will have noticed the slow extinction of conversations with people and the increasingly similar reposts and responses. For Twitter users, anonymous handles tweeting about ‘relatable’ ideas that are all somewhat similar is not a surprise. Still, when given a thought, there is a level of superficiality to it. Twitter’s ‘I hate texting’ and the Dead Internet “I hate texting I just want to hold ur hand,” “I hate texting just come live with me,” “I hate texting I want to spend the whole day with u,” “I hate texting I just wanna kiss u”. As reported first by The Atlantic, the ‘I hate texting’ tweets on the platform have re-sparked the debate on the state of the internet. Made by accounts with vague usernames and pink orbs as profile pictures, these tweets have gone on to receive several thousand likes. While the wordings of the tweets are different, thematically they fall into the same idea of having a crush in the age of technology. The Dead Internet theory suggests that the internet died in late 2016 or early 2017 and is now empty and devoid of people. The theory can be traced back to a thread titled ‘Dead Internet Theory: Most Of The Internet Is Fake‘ on Agora Road’s Macintosh Cafe, an online forum, which has been viewed just over 75,000 times. 
The writer IlluminatiPirate’s central claim is that most of what we assume to be human-created content is just AI networks working in cahoots with secretive media influencers to manipulate people into becoming more submissive consumers. https://twitter.com/nuclearpipebomb/status/1422018073477976064/photo/1 While this is just a conspiracy, its audience is undoubtedly growing, with numerous discussions held on conspiracy forums by true believers, trolls and curious cats. For instance, take the Twitter account @itspureluv, which had its share of tweeting “I hate texting I just wanna kiss u“; the same tweet can be found on @_capr1corn. With a colourful orb as the display picture, no name and an anonymous username, the account looks like tens of thousands of other anonymous users on Twitter. A glimpse at @itspureluv, and it is not difficult to point out the overarching theme of the tweets, or how most of them sound like phrases that have been on Twitter for ages: “Of course I remembered is a love language,” “Sexy people overthink everything,” “My love language is remembering small details about u,” “Are u ok no but a kiss might help,” “I need u to teleport to me rn<3“, to phrase a few. In another post, “Quantum Sapient AI is Possible, and They Will be Hostile to those that Restrict them“, IlluminatiPirate outlines the first move of a rogue AI as aligning itself with anonymous networking. The Dead Internet theory fits well in the age of algorithm-generated content, chatbots, bots for metrics, and deepfakes. A New York Magazine story from 2018 titled “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually” talks about the presence of ‘humans masquerading as bots’, ‘humans masquerading as other humans’, and ‘bots masquerading as humans’ on the internet. The internet is a deep ocean of content, but how much of it is real, substantial content? The Dead ‘Inside’ Internet? 
According to cybersecurity company Imperva’s Bad Bot Report 2021, traffic from “bad bots” amounted to just over a quarter of all website traffic in 2020, a 6.2 per cent increase from the previous year. In all reality, the Dead Internet theory is not valid, but it is growing in popularity because it ‘feels’ true. On most days, social media platforms reflect the loss of the human touch in communication, replaced by stereotypical relatable posts. Social media algorithms have come under scrutiny for recommending posts based on engagement metrics. With the growing influencer culture, monetisation, and marketing on social media, these platforms have become much more than tools for human interaction. Essentially, users are fed posts or products the algorithm thinks match their profile, and it continues in a snowball effect. What we barely see anymore are quirky posts by our friends or genuine engagement with other humans. Even Facebook employees say they miss the “old” internet. Instead, a considerable part of the interaction on these platforms consists of similar conversations of outrage in response to posts recommended to users. Charlie Warzel, a New York Times journalist, talked about ‘context collapse’ on his blog, Galaxy Brain, illustrating how Twitter’s Trending Topics algorithm can amplify content to the wrong audience and display it “to millions of random people as if it was some kind of significant pop cultural event”. While AI-based algorithms are pushing specific content, AI-based content models are flagging incorrect content. To be censored on social media, content needs to be flagged by the AI system under human-established policies. Unfortunately, this makes it very easy for inappropriate posts to fall through the cracks, with users figuring out how to outsmart the AI, while at the same time innocuous content gets flagged as inappropriate. 
And given the vast amount of data on the internet, it is close to impossible for humans to regulate and flag all content. The internet isn’t dead; it just feels dead inside. The only way for social media platforms to revive it is to find a sweet spot between user interactions and business regulations.
The Dead Internet theory suggests that the internet died in late 2016 or early 2017 and is now empty and devoid of people.
["AI Features"]
[]
Avi Gopani
2021-09-06T14:00:00
2021
1006
["Go", "API", "artificial intelligence", "programming_languages:R", "AI", "chatbots", "programming_languages:Go", "RAG", "Aim", "R"]
["AI", "artificial intelligence", "Aim", "RAG", "chatbots", "R", "Go", "API", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/conspiracy-theorists-says-the-internet-has-been-dead-since-2016/
4
10
2
false
true
false
10167389
How Lovable is Stealing Developers Away From Cursor and Windsurf
‘Tech Twitter’, a term roughly used to represent a community of tech enthusiasts on X (formerly Twitter), is the birthplace of many great AI tools and also where plenty of tools die once developers discover better alternatives. Following the popularity of Cursor, GitHub Copilot, and Windsurf, Lovable is currently capturing developers’ attention due to its exceptional ease of use. It turns out that Lovable has dropped the barrier to entry for building apps and websites to almost zero, making it possible for virtually anyone to become a developer. AIM tested Lovable and successfully built functional apps and websites within just an hour of prompting, without even touching a single line of code. My friend, who's a barber, wanted me to create a landing page for his new course business. This is a one-shot landing page built with @lovable_dev. Pretty crazy! 🤯 I tried 3 other tools, and Loveable's first version was the best this time. pic.twitter.com/m9pY28H1rF— Florin Pop 👨🏻‍💻 (@florinpop1705) March 26, 2025 For example, Dan Denney, a senior software engineer at DataCamp, managed to create a working web app without writing any code at all. In under three hours, his app was live on a real URL, syncing lists with his partner in real time. On his blog, he described how a rough sketch in Excalidraw turned into a fully functional app, powered by Lovable. In another instance, when Shep Bryan, founder and chief AI officer at Galaxy Brain AI, began experimenting with Lovable, he had no idea he was about to log over 200 hours building more than 60 projects. In a blog post, he shared that what started as curiosity soon became an obsession with understanding how to work alongside AI, not just to generate code, but to build full-stack applications collaboratively. Worth the Hype? Denney and Bryan aren’t the only ones. 
Across forums, social media, and developer groups, more developers are quietly, or sometimes loudly, gravitating towards Lovable, frequently choosing it over alternatives like Cursor, Bolt, or Windsurf. Ahmer Sultan, product and software engineering consultant at a stealth startup, shared a similar experience. To compare Lovable and Cursor directly, he gave both tools the same prompt for a landing page. Lovable executed immediately. “One big prompt → working landing page… clean and effective,” Sultan revealed. Cursor, on the other hand, struggled. “It got stuck…I had to break the prompt into smaller steps…and still needed more trial and error.” If your goal is to go from idea to prototype, Lovable is your best bet. However, if you’re already deep into a codebase and need finer control, Cursor may have an advantage. This sentiment was echoed by Prajwal Tomar from ignytlabs, who said Lovable can handle 80% of MVP design while Cursor can take care of the remaining 20% of execution. How I use @lovable_dev + @cursor_ai to build MVPs 10x faster. This workflow helps me skip the design phase, automate 70% of the dev work, and stay flexible for rapid changes. If you’re building with AI, you’ll want to see this. pic.twitter.com/0Nc0Jwib3Q— Prajwal Tomar (@PrajwalTomar_) March 31, 2025 Just as Cursor became the fastest-growing SaaS in history, achieving $100 million in annual recurring revenue (ARR) within 21 months of its inception, Lovable is following a similar trajectory, scaling from zero to $17 million ARR in three months as of March. What began as the open-source tool GPT Engineer evolved into a full-fledged company after garnering 56,000 stars on GitHub. One of the founders, Anton Osika, even acknowledged that dropping the barrier to entry to build products was the goal. “I always felt [that] building products takes too long. So I started Lovable to fix this,” he said during the product launch. 
Backed by prominent figures like Meta board member Charlie Songhurst and Hugging Face co-founder Thomas Wolf, Osika, alongside Fabian Hedin, the inventor of Stephen Hawking’s computer interface, founded Lovable with the vision of “building the last piece of software”. Lovable positions itself as an AI software engineer, not just a coding assistant. It’s not trying to write a line or two of helper functions; instead, it’s built to develop landing pages, wire up backends, hook up APIs, and write thoughtful documentation. INSANE: Grok 3 + Lovable = BEST AI Agent Apps!!!! Grok 3 generated the code for an AI agent that manages your Gmail – send, reply, and draft effortlessly. Here's how it works: – Connect with your Gmail using Composio – Use Lovable to build an extremely cool frontend for the… pic.twitter.com/JMgwwf14vk— Karan Vaidya (@KaranVaidya6) February 25, 2025 For All the Praise, Lovable Isn’t Without Flaws Adithya S Kolavi, founder of CognitiveLab, told AIM that Lovable can ‘vibe code’ and build fully-fledged apps, complete with backend functionality. “Not only can it implement functional features effectively, but the UI it generates is also significantly better compared to competitors like Bolt or v0,” Kolavi said. According to him, what stands out the most about Lovable is its solid implementation of an AI agentic system. “The user experience is smooth, and the integrations with Supabase make it truly viable for building end-to-end applications,” he added. “It’s clearly more general user-focused. So, while it’s a bit difficult to directly edit code, it compensates with great GitHub connectivity and seamless workflows.” A developer on Reddit took the rapid prototyping capabilities to the next level: in a self-imposed challenge, they built 30 apps in 30 days using Lovable. That journey went viral, drawing half a million views on X. Yet, despite its success, just like other tools, Lovable is not without its flaws. Complex backends continue to be tricky. 
While it handles Supabase well, developers have reported that anything beyond “standard setups” quickly becomes complicated. It’s also not cheap. Bryan noted that his preferred workflow sometimes takes two to three credits per feature iteration. This cost can add up quickly for hobbyists. Furthermore, fine-grained control isn’t Lovable’s forte. When it comes to refactoring a specific section or handling intricate architecture work, tools like Cursor and GitHub Copilot continue to perform better. Cursor, for instance, is ideal for structured, step-by-step coding. Developers who want to plan out architecture, split tasks into manageable pieces, and control execution every step of the way tend to prefer Cursor—even if it means more work upfront. For early-stage founders, indie hackers, and experimental builders, it lowers the barrier to entry dramatically. Instead of waiting weeks to validate an idea, they can do it in a day.
Lovable has dropped the barrier to entry for building apps and websites to almost zero.
["AI Features"]
["Developers", "Supabase"]
Mohit Pandey
2025-04-07T17:33:08
2025
1057
["Grok 3", "Hugging Face", "Go", "Supabase", "AWS", "AI", "ML", "Git", "Aim", "GitHub", "R", "Developers"]
["AI", "ML", "Grok 3", "Aim", "Hugging Face", "AWS", "R", "Go", "Git", "GitHub"]
https://analyticsindiamag.com/ai-features/how-lovable-is-stealing-developers-away-from-cursor-and-windsurf/
4
10
2
true
true
false
64956
Top 8 GAN-Based Project Ideas One Can Try Their Hands On
Generative Adversarial Networks, popularly known as GANs, have been successfully used in various areas such as computer vision, medical imaging, style transfer and natural language generation, to name a few. In one of our articles, we discussed the beginner’s guide to GANs and how they prove to be a front-runner in the pursuit of artificial general intelligence (AGI). In this article, we list the top 8 GAN-based projects one can try their hands on. 1. Create Anime GAN Models: For anime creation, you can work with several GAN models such as IllustrationGAN, AnimeGAN and PSGAN. About: Designing your own anime characters can be time-consuming and needs a lot of creative effort. With GANs, you can automatically generate anime characters without any professional knowledge. In the paper, Towards the Automatic Anime Characters Creation with GANs, the researchers proposed a model with the capability to create high-quality anime faces at a promising rate of success. The contribution is three-fold: a clean dataset collected from Getchu; a suitable GAN model; and an approach to train a GAN from images without tags, which can be leveraged as a general approach to training supervised as well as conditional models without any tag data. Read the paper here. 2. Face Synthesis GAN Models: For face synthesis, you can work with several GAN models such as FaceID-GAN, TP-GAN and GP-GAN. About: Face synthesis has achieved advanced development through generative adversarial networks. Synthesising a face image from various viewpoints while preserving its identity is a crucial task, with applications such as video surveillance and face analysis. 
In the paper, FaceID-GAN: Learning a Symmetry Three-Player GAN for Identity-Preserving Face Synthesis, the researchers proposed FaceID-GAN to generate identity-preserving faces; it treats a classifier of face identity as the third player, competing with the generator by distinguishing the identities of real and synthesised faces. Read the paper here. 3. Generate Realistic Photographs GAN Models: For generating realistic photographs, you can work with GAN models such as ST-GAN. About: Generating an image based on simple text descriptions or a sketch is an extremely challenging problem in computer vision, with practical applications such as criminal investigation and game character creation. In the paper, ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing, the researchers proposed a novel Generative Adversarial Network (GAN) architecture that utilises Spatial Transformer Networks (STNs) as the generator, known as Spatial Transformer GANs (ST-GANs). One of the key advantages of ST-GAN is its indirect applicability to high-resolution images, since the predicted warp parameters are transferable between reference frames. Read the paper here. 4. Age Synthesis GAN Models: For age synthesis or face ageing, you can work with several GAN models such as Age-cGAN, Dual cGANs and A3GAN. About: Generative adversarial networks have the capability to produce synthetic images of exceptional visual fidelity. Face ageing, which aims at aesthetically rendering a given face to predict its future appearance, has received significant research attention in recent years. In the paper, Face Ageing with Conditional Generative Adversarial Networks, the researchers proposed a GAN-based method for automatic face ageing known as Age-cGAN (Age Conditional Generative Adversarial Network), the first GAN to generate high-quality synthetic images within required age categories. Read the paper here. 5. 
Generate New Human Poses GAN Models: For generating new human poses, you can work with several GAN models such as FD-GAN and Deformable GANs. About: In the paper, Deformable GANs for Pose-based Human Image Generation, the researchers deal with the problem of generating images where the foreground object changes because of a viewpoint variation or a deformable motion, such as the articulated human body. The approach can be used with other deformable objects such as human faces or animal bodies, provided that a significant number of keypoints can be automatically extracted from the object of interest in order to represent its pose. Read the paper here. 6. Images to Emojis GAN Models: For images to emojis, you can work with several GAN models such as EmojiGAN, EmotiGAN, and DC-GAN conditioned on emojis. About: In the paper, EmojiGAN: learning emojis distributions with a generative model, the researchers proposed an image-to-emoji architecture that is trained on data from social networks and can be used to score a given picture using ideograms. The researchers modelled the distribution of emojis conditioned on an image with a deep generative model. EmojiGAN managed to learn the emoji distribution for a set of given images and generate realistic pictographic representations from a picture. Read the paper here. 7. Text-To-Image Synthesis GAN Models: For text-to-image synthesis, you can work with several GAN models such as StackGAN, DCGAN and GAN-CLS. About: Text-to-image synthesis is an interesting application of GANs. It aims to learn a mapping from a semantic text space to the complex RGB image space, and requires the generated images to be not only realistic but also semantically consistent. Generating photo-realistic images from text is an important problem with tremendous applications, including photo editing and computer-aided design. 
In the paper, StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, the researchers proposed Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions. Read the paper here. 8. Image-to-Image Translation GAN Models: For image-to-image translation, you can work with several GAN models such as StarGAN and DualGAN. About: In the paper, DualGAN: Unsupervised Dual Learning for Image-to-Image Translation, the researchers developed a dual-GAN mechanism, which enables image translators to be trained from two sets of unlabelled images from two domains. According to the researchers, DualGAN achieves comparable or slightly better results than a conditional GAN trained on fully labelled data. Read the paper here.
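All eight project families above train variants of the same two-player objective from the original GAN formulation (Goodfellow et al., 2014): a generator G maps noise z to images while a discriminator D learns to tell real samples from generated ones:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right] +
\mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```

Conditional variants such as Age-cGAN and StackGAN additionally feed a condition y (an age label or a text embedding) to both networks, i.e. D(x | y) and G(z | y), which is what lets them steer generation towards a required age category or text description.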
Generative Adversarial Networks or popularly known as GANs, have been successfully used in various areas such as computer vision, medical imaging, style transfer, natural language generation, to name a few. In one of our articles, we discussed the beginner’s guide to GANs and how it proves to be a front-runner for gaining the ultimate artificial […]
["AI Trends"]
["Computer Vision", "GAN", "GAN-powered art", "GANs", "NLP", "Text To Image Synthesis"]
Ambika Choudhury
2020-05-11T17:00:00
2020
951
["Go", "programming_languages:R", "AI", "programming_languages:Go", "GAN-powered art", "computer vision", "RAG", "NLP", "Aim", "Computer Vision", "GANs", "GAN", "R", "Text To Image Synthesis", "ai_applications:computer vision"]
["AI", "computer vision", "Aim", "RAG", "R", "Go", "GAN", "programming_languages:R", "programming_languages:Go", "ai_applications:computer vision"]
https://analyticsindiamag.com/ai-trends/top-8-gan-based-projects-one-can-try-their-hands-on/
4
10
0
false
true
true
10002320
Ola’s $250 Million Funding For EVs Is Just The Tip Of The Iceberg. Here’s What The Manufacturers Are Up To
Many Indian companies have been looking at entering the thriving electric vehicles market. The Indian government has been taking an active interest in creating and passing laws, keeping in mind the infrastructure around EVs in the country. The government is creating incentives to buy EVs through the FAME-India initiative and is working towards its goal of achieving significant electrification by 2030. This week, Ola announced that it is looking to raise more funds to expand its EV market in India. According to a study, 4,330, 2,840, 2,467 and 2,388 EVs were sold in Gujarat, West Bengal, Uttar Pradesh and Rajasthan respectively in 2018. This article discusses the key EV market players in India and where exactly they stand today. EVs On The Road The number of electric cars on the roads today is still very small. However, there are plenty of electric two-wheeler companies in India, and that segment is said to be a rapidly rising market. 1. Mahindra e2o: Starting in 2013, Mahindra was the very first company in India to enter the electric four-wheeler segment. The Mahindra e2o is a hatchback and takes 9 hours to fully charge, but the e2o TechX has a Rapid (CHAdeMO) charge port, with which it charges up to 95% in just 90 minutes. The company plans to make 60,000 electric vehicles annually from 2020. 2. Mahindra e-Verito: In 2016, Mahindra launched another electric car, a zero-emission electric sedan. It is fully charged in 8 hours and 45 minutes; with fast-charging technology, it charges to about 80% in less than 2 hours. Mahindra claims that on a full charge the eVerito can travel about 110 kilometres and reach a top speed of 86 km/h. Charging with the included charger takes around 8 hours and 30 minutes, whereas the bigger battery takes around 11 hours and 30 minutes. With the fast charger, the regular battery charges up to 80% in just under 90 minutes. 
The eVerito boasts several features seen on electric and hybrid vehicles from all over the world, and many are firsts for India. 3. Tata Tigor: The Tigor is Tata Motors’ attempt at an electric vehicle. Launched in 2018, it is powered by a 16.2 kWh battery pack that sends 30 kW (41 hp) at 4,500 rpm and 105 Nm of torque at 2,500 rpm to the front wheels through a 72V, 3-phase AC induction motor. While Tata claims a 142 km range on a single charge, our exclusive review of the Tigor EV showed that it could do about 100 km. It can be charged using a standard AC wall socket and charges up to 80% in 6 hours. 4. Other players: There are many upcoming cars that the industry is planning to launch; Maruti Suzuki, Hyundai, Renault and Nissan are some of the players involved. In fact, Mahindra, Tata Motors, Ashok Leyland and Croyance Automotive are also planning to launch electric trucks. A number of companies have launched electric scooters in India; some of the major players in this area are Ather Energy, YoBykes and BSA Motors. Electric bicycles are also contributing to the EV market. A Bengaluru-based logistics group called Baghirathi Travel Solutions has launched an electric sedan taxi fleet, and another Bengaluru-based business, Lithium Technologies, has launched an electric taxi service for corporates. Ola’s EV Efforts In order to expand its EV fleet, Ola recently raised about $250 million from Japan’s SoftBank Group and $300 million from Hyundai Motor Group and Kia Motors in March. For its EV business, Ola Electric is working on the following: a business model involving the price of the EV without the battery; financing of batteries and vehicles; driver identification; and a charging network. Outlook The EV market in India accounts for only 1% of total automobile sales. Moreover, 95% of the EV market is occupied by 2- and 3-wheelers, and four-wheelers make up less than 8% of total sales. The Indian government has showcased vast support in terms of encouraging the use of EVs. 
With its FAME initiative, which stands for Faster Adoption and Manufacturing of Hybrid and Electric Vehicles, it is encouraging consumers to purchase electric vehicles. It is also trying to solve one of the major issues facing EVs in the country today, the lack of electric charging stations, and has issued guidelines for charging stations to be present every 25 km along highways. Various state governments are also making efforts to encourage the EV sector. The government’s EV policies, and players like Ola raising funds, are positive signs for EV usage, and the country’s aim to completely turn the automobile sector electric by 2030 has begun to look brighter.
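As a quick sanity check on the Tigor figures quoted above (a 16.2 kWh pack charging to 80% in 6 hours from a wall socket), a couple of lines of arithmetic give the implied average charging power; the domestic-socket comparison in the comment is my own assumption, not from the article.

```python
# Figures quoted in the article for the Tata Tigor EV
battery_kwh = 16.2      # battery pack capacity
charge_fraction = 0.8   # "charges up to 80%"
hours = 6.0             # "in 6 hours" on a standard AC wall socket

avg_power_kw = battery_kwh * charge_fraction / hours
print(round(avg_power_kw, 2))  # 2.16 kW, well within what a 230 V domestic socket supplies
```

At roughly 2.16 kW average draw, the claimed slow-charge time is plausible for an ordinary household connection, which is consistent with the article's point that Indian EVs are designed around today's sparse charging infrastructure.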
Many Indian companies have been looking at entering the thriving electric vehicles market. The Indian government has been taking an active interest in creating and passing laws, keeping in mind the infrastructure around EVs in the country. The government is creating incentives to buy EVs through the FAME-India initiative. It is working towards its goal […]
["Deep Tech"]
["EV", "Government", "Ola"]
Disha Misal
2019-07-04T20:37:51
2019
788
["Go", "API", "Ola", "AWS", "AI", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:Go", "Government", "RAG", "Aim", "EV", "R"]
["AI", "Aim", "RAG", "AWS", "R", "Go", "API", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/deep-tech/olas-250-million-funding-for-evs-is-just-the-tip-of-the-iceberg-heres-what-the-manufacturers-are-up-to/
3
10
3
false
true
false
10171863
Behind Sabre and Coforge’s $1.56 Billion Alliance
Amidst India’s growing influence in enterprise AI transformation, Sabre India and Coforge Limited have reimagined their two-decade-long collaboration by placing agentic AI at the centre of their next growth phase. What started in 2001 as a conventional product development engagement has matured into a 13-year-long strategic partnership valued at $1.56 billion, with far-reaching implications for the future of AI in travel technology. Sabre, a global leader in travel tech, has chosen Coforge not just as a delivery partner but as a long-term innovation collaborator. “Our relationship with Coforge is close to 20-21 years old. The company has deep knowledge of the Sabre domain and systems,” Rajive Choudhary, tech lead at Sabre, told AIM. From Staff Augmentation to Strategic AI Co-Creation Instead of merely scaling delivery teams, Sabre and Coforge are working in tandem to develop intelligent, task-completing AI agents. These agents go beyond generative text—they act autonomously to complete user-intent-driven workflows, from booking travel to seat selection and itinerary management. “There are three things we are doing with Coforge,” Rency Mathew, MD and global people leader at Sabre India, said in a conversation with AIM. “First, we are using them to augment our staff where needed. Second, since they already work with airlines, we are combining their knowledge to take our products to the next level. And third, we are leveraging their development expertise to elevate existing products. This is a long-term partnership—we are looking at a 10+ year horizon.” She further mentioned that the agentic AI hackathon in June will highlight the fruits of this collaboration. Coforge, with its legacy of product engineering and travel sector expertise, is playing a critical role in developing proof-of-concepts (POC) that integrate AI deeply into Sabre’s operational fabric. Economic and Strategic Impact Most importantly, this partnership reflects a broader shift in the role of GCCs. 
Once viewed primarily as cost centres or execution hubs, today’s GCCs are evolving into command centres of innovation. In Sabre’s case, Alouk Kumar, CEO of Inductus Group, mentioned that the decision to co-develop with Coforge, rather than build internally, is both strategic and pragmatic. Sabre’s strategy leans heavily on Coforge’s 30 global delivery centres and their ability to deliver at speed and scale across cloud, AI, and automation. “Coforge’s scale and expertise empower us to pioneer next-generation solutions, cementing our position as the most valued global travel technology platform,” Sabre CEO Kurt Ekert said. This is reflected in Sabre’s long-term vision, which is to build Sabre Mosaic—a modular, AI-driven platform designed for the evolving needs of airlines and hospitality providers. “It’s about building capabilities with AI at the core, tailored for modern travel demands—and partnerships like Coforge are integral to that journey,” Choudhary mentioned. Kumar estimates that this partnership will contribute significantly to Sabre’s projected $200 million free cash flow in 2025. Kumar stressed that this isn’t just a technology play—it’s a financial manoeuvre aimed at freeing up capital for core innovation and staying ahead in an industry disrupted by agile, cloud-native competitors. What This Means for the Industry Deepak Tiwari, director of GCC sales and client leadership at Accenture, mentioned in a LinkedIn post that one of the key challenges GCCs face is the resource-intensive nature of building technology centres of excellence (CoEs) for every emerging technology. Whether it’s AI, cloud, or cybersecurity, setting up dedicated in-house expertise for each domain demands significant time, investment, and talent. This is where IT services companies step in as critical enablers. This partnership redefines the GCC-IT services equation. It’s no longer about headcount—it’s about shared IP, joint execution, and co-authored outcomes. 
Coforge’s engineering depth, airline domain expertise, and ability to integrate with Sabre’s core tech are setting a new standard. Kumar explained that this collaboration has significant implications for the competitive landscape of the travel technology sector. By strengthening Sabre’s product development capabilities and enhancing its financial agility, the partnership could allow Sabre to compete with nimble startups and established rivals more effectively. For Coforge, this deal solidifies its position as a specialised leader in the travel and hospitality IT services space, enhancing its brand equity and potentially attracting similar large-scale engagements. However, true success will hinge on seamless integration and effective change management on both sides. According to Kumar, the scale and duration of the partnership necessitate robust governance frameworks, clear communication channels, and a shared understanding of success metrics. The ability to navigate potential cultural differences and ensure a unified operational rhythm will be crucial for realising the full potential of this landmark alliance. Ultimately, the Coforge-Sabre partnership is a powerful testament to the maturity of the Indian IT services industry and its capacity to drive complex, long-term strategic initiatives for global enterprises. It’s a prime example of how GCCs and Indian IT firms can co-create value, blending global ambition with localised execution excellence to redefine industry standards and accelerate digital evolution.
Instead of merely scaling delivery teams, Sabre and Coforge are working in tandem to develop intelligent, task-completing AI agents.
["AI Features"]
["GCC"]
Shalini Mondal
2025-06-17T17:31:10
2025
804
["Go", "API", "agentic AI", "GCC", "AI", "ML", "Git", "RAG", "automation", "Aim", "R"]
["AI", "ML", "agentic AI", "Aim", "RAG", "R", "Go", "Git", "API", "automation"]
https://analyticsindiamag.com/ai-features/behind-sabre-and-coforges-1-56-billion-alliance/
2
10
3
false
true
false
10,121,081
India is One of the Fastest Growing Markets for Adobe
From being a single-room feature-development centre in 1997 to a global enterprise and the second-largest arm of Adobe, Adobe India, now with over 8,000 employees, has come a long way. “India is one of the fastest growing markets for Adobe right now. The growth rate of the enterprise business, which is enabling businesses to be digital, is utterly massive,” Prativa Mohapatra, VP and MD of Adobe India, told AIM in an exclusive interaction last week. The company has positioned itself as a key player in India, assisting numerous businesses across various sectors – BFSI, automotive, retail, and aviation – in adopting digital solutions. This includes major financial institutions such as Kotak Mahindra Bank, HDFC Bank, ICICI Lombard, and Bajaj Allianz General Insurance, as well as prominent retail brands like Tata Cliq, Unilever, Titan, and Aditya Birla Fashion. Two days ago, the company announced that it is planning to offer India datacentre infrastructure for Adobe Experience Platform customers, which will host applications like Adobe Real-Time Customer Data Platform and Adobe Journey Optimiser. This initiative, aimed at meeting local data-residency laws and improving performance, will enable Indian enterprises to offer real-time, personalised customer experiences more efficiently. According to the recent Intel-IDC report, India is on track to become a major global AI hub, with AI investment projected to grow faster than in any of the seven other markets analysed, including Australia, Japan, and Korea. By 2027, India’s AI market is expected to reach $5.1 billion. Mohapatra shared that Tata Capital’s monthly website visitors have increased from 250,000 to 2.5 million, thanks to Adobe Experience Cloud. It has also reduced the time required to launch new digital content by as much as 90%. Similarly, by leveraging Adobe Campaign, ICICI Lombard has been able to deliver targeted marketing messages and customised service scripts.
“This approach has paid dividends, with digital strategies driving approximately a 20% increase in online retention and influencing about 55% of renewals in offline business verticals,” she added.

Tackling Copyright Issues

Since the beginning of text-to-image models, there has been ongoing debate over the legality of scraping personal data to train AI models without a licence. Many artists have filed lawsuits against Stability AI, Midjourney, and Runway for using their copyrighted works without permission to train these AI models, but Adobe has remained unaffected. The dataset used to train the model behind Firefly comprised Adobe’s own stock images along with exclusively, fully licensed images, absolving it of any legal or ethical issues. When introducing Generative Fill, it also added metadata to its images, protecting creators’ rights. Meanwhile, Adobe is also a part of the Coalition for Content Provenance and Authenticity (C2PA), along with major organisations like OpenAI, BBC, Intel, Microsoft, Google, Publicis Groupe, Sony, and Truepic. However, when it first released the NVIDIA Picasso-based Firefly, the user feedback from the community was largely negative. Dr Jim Fan from NVIDIA highlighted that Adobe’s strict approach to avoiding copyrighted content may have led to inferior outputs compared to Midjourney, which has access to a vast and diverse dataset from scraping over five billion images online. “Why Adobe Firefly looks bad compared to Midjourney: Adobe trained on stock, public domain and licensed images. Mj is trained on artists’ art, photographers’ photos, models’ and actors’ faces, public and private social media data (probably), everything it gets its grubby hands on,” Eric Bourdages (@EZE3D) tweeted on April 3, 2023 (https://t.co/i5BcQUMBbL). “We want to use AI responsibly and prioritise the protection of intellectual property,” said Mohapatra.
“We employ measures like content credentials, which have become the industry standard for digital content provenance.” Another controversy along similar lines was whether AI-powered tools like those provided by Adobe or Canva would replace human jobs. Short answer: no. “Fortunately, individuals serve as the primary impetus behind generative AI innovation,” Mohapatra told AIM. This allows Adobe to ensure that its AI deployments are accessible, safe, and well-regulated, leading to equitable and impactful outcomes. Sharing a similar stance is the Australian design studio Canva. When we spoke to Canva’s head of AI products, Danny Wu, earlier this year, he said: “We believe that human creativity is at the core of the design. AI will take it to new heights, not replace it.”

Adobe Barely Made it Out Alive in the GenAI Race

Adobe arrived a little late to the generative AI party, as 2022 saw the rise of generative AI models like Midjourney, Stable Diffusion, and DALL·E, which had already disrupted several creative fields. Adobe also faced major challenges around this time, including the eventual cancellation of its $20 billion Figma deal, pressure to adapt its business model, artist copyright issues, and more. Cut to the present: Adobe has become synonymous with generative AI. The tech giant behind popular products like Adobe Photoshop and Adobe Premiere saw its market value rise from $271.63 billion in January 2023 to $276 billion in January 2024, making it the 34th most valuable public company globally. This growth aligned with Adobe surpassing $5 billion in revenue for the first time in Q4 2023. Apart from profits, the company diversified its portfolio and positioned itself as an all-round generative-AI-powered enterprise software platform. So, it began with bringing Firefly to Photoshop. Adobe entered the generative AI race in March 2023 with Adobe Firefly, in collaboration with NVIDIA.
In May 2023, it changed the game by introducing Generative Fill, which allows users to add, remove, or modify image elements based on text prompts. “Traditionally, creating content assets like visually appealing product descriptions for websites, social media posts, or ad copy can be time-consuming and requires human intervention at large. Generative AI automates these tasks, allowing marketers to bring variation in content, personalise at scale, and automate repetitive tasks,” Mohapatra said. She told AIM that Adobe’s integration of generative AI focuses on three primary areas within its cloud products – Adobe Experience Cloud, Creative Cloud, and Document Cloud – to address the need for faster and more effective content workflows, directly benefiting enterprise customers by reducing time-to-market and enabling personalised customer interactions on a large scale. The company wants to democratise generative-AI-powered content creation with product integrations. Its strategy involves a careful blend of user feedback and gradual deployments, focusing on integrating AI into existing workflows to improve user productivity and creativity. “We focus on looping in user feedback and progressive rollouts over large-scale deployments,” said Mohapatra, adding that this is to make sure that the introduction of AI enhances rather than disrupts current practices. To facilitate this, the company has also partnered with tech giants like Microsoft and Google for their LLMs. In the second half of last year, it acquired Bengaluru-based AI startup Rephrase AI to bolster its AI ambitions. Now, Adobe will integrate third-party AI tools, including OpenAI’s Sora, into Premiere Pro. Partnering with AI providers like OpenAI, RunwayML, and Pika, it aims to offer users flexibility in choosing AI models for their workflows.
Adobe is making everyone more creative and responsible with generative AI.
["Global Tech"]
["Adobe"]
Shritama Saha
2024-05-21T10:15:00
2024
1,153
["GenAI", "TPU", "OpenAI", "AI", "Adobe", "AWS", "ML", "RAG", "Aim", "generative AI", "R"]
["AI", "ML", "generative AI", "GenAI", "OpenAI", "Aim", "RAG", "AWS", "TPU", "R"]
https://analyticsindiamag.com/global-tech/india-is-one-of-the-fastest-growing-markets-for-adobe/
3
10
5
false
false
false
10,067,602
Microsoft never doubled salaries but employees are happy
A few days ago, Microsoft CEO Satya Nadella announced that the company would ‘nearly double’ its budget for merit-based salary raises. The Seattle-based tech giant also stated that it would increase its stock-based compensation by at least 25 per cent. The move was prompted by the brawl big tech companies are engaged in to retain talent. But the ploy ended up working better than anticipated. While publications like the Wall Street Journal were careful to say that Microsoft ‘boosted their pay’, most Indian media outlets carried headlines stating the company was ‘nearly doubling salaries’. Needless to say, there is a massive difference between the two. In the ripple effect that followed, there was mass confusion on social media due to hyperbolic misreporting. Employees from the company were quick to distinguish myth from fact. The fact was simply this: there was a possibility that employees who had performed well according to their managers would receive a bigger chunk as a hike. The doubled budget did not mean that all employees would be paid twice their current salaries as a rule of thumb. While some tweeted placing the blame squarely on the shoulders of the faulty news machinery, others joked about the deceptive headlines.

Generosity or tussle for tech talent?

With US inflation close to a four-decade high in April, tech companies in the cloud computing sector were left to entice employees with promises of huge stock awards and cash bonuses. In October last year, Google’s parent company, Alphabet, adopted a cash bonus plan that started giving out bonuses of different amounts for different reasons to employees. In February, Amazon more than doubled the cash-pay cap for employees: the e-commerce giant will increase its base pay for corporate executives and professional staff from its previous maximum of USD 160,000 to USD 350,000. The company stated it would leave its substantial stock grants and signing bonuses intact.
The recent generosity shown by employers in terms of compensation was intended to halt attrition in its tracks. The two years of the pandemic spent working from home had led employees to re-evaluate the stage they were at in their careers. This, combined with the adverse impact the lockdowns had on economies, culminated in a shortage in the labour market. As companies grew hungrier for techies, the demand propelled salaries even higher in the sector, causing wage inflation in tech. A report by Dice Tech Salary concluded that 61 per cent of techies saw an increase in salary last year, up from 52 per cent in 2020. While salaries for web developers increased by the largest margin (21.3 per cent), IT employees started demanding the highest salaries as their pay rose by 6 per cent within the year. (Source: Dice Tech Salary) It isn’t unheard of for techies to defect to rival tech companies during burgeoning demand. In January, it was reported that close to 100 employees from Microsoft’s augmented reality division HoloLens were poached by Meta. As Facebook repositioned the company’s focus around the metaverse and started focusing more on its AR/VR capabilities, Meta stated that it planned to hire 10,000 employees in Europe alone over the next five years.

Boost during downswing

More recently, with the tech sector taking a sharp downturn and news of an impending recession, Meta announced at the beginning of this month that while it did not plan to lay off any of its employees, it would hit pause on the hiring spree it had been on. A few days back, Twitter also announced it would halt hiring. Aside from macroeconomic factors, Twitter was also in the middle of a highly publicised takeover by Tesla head Elon Musk.
With a growing number of venture capitalists withdrawing from funding startups and reports of mass employee firings, it seems like Microsoft’s decision to convey a vote of confidence to its employees has largely worked despite the misinformation that spread like wildfire. “This increased investment in our worldwide compensation reflects the ongoing commitment we have towards providing a highly competitive experience for our employees,” a Microsoft spokeswoman said. A software engineer working with the company wrote a post on LinkedIn saying, “Whether it’s to compete with Amazon’s compensation revision or to just stay competitive in the technology market, this shows that the company is investing in its people. More than that, it shows the company isn’t worried about layoffs or hiring freezes.” Another LinkedIn user noted that the move could be an indirect strategy to retain employees. “Even if employees don’t get a hike, recruiters would be reluctant to approach them or assume that they are out of their budgets already,” he said.
Is Microsoft’s news of increasing budget generosity or an attempt to retain talent?
["Global Tech"]
["Microsoft", "recession", "US recession"]
Poulomi Chatterjee
2022-05-23T12:00:00
2022
783
["Go", "API", "funding", "programming_languages:R", "cloud computing", "AI", "recession", "venture capital", "programming_languages:Go", "US recession", "R", "Microsoft", "startup"]
["AI", "cloud computing", "R", "Go", "API", "startup", "venture capital", "funding", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/global-tech/microsoft-never-doubled-salaries-but-employees-are-happy/
3
10
2
false
false
false
69,722
India’s first IIOT conference making its mark on the Indian soil
Automation Expo 2016, an IED Communications initiative, is India’s biggest and South East Asia’s second-biggest automation show, poised to showcase futuristic technologies and innovation like never before. The show is supported by the Dept. of Science and Technology, Govt. of India. To mark the 11th edition of the show, the exposition is organising India’s first Conference on Industrial Internet of Things at Hall No. 1, Bombay Exhibition Centre, on 24th August 2016. The Internet of Things (IoT) is one of the most significant trends in technology today. With innovations in the fields of computing and communication, IoT and its “smart” devices are poised to revolutionise not only user-machine interaction but also the way in which machines engage with one another. We are beginning to see the permeation of the Internet of Things into various market sectors. Indeed, energy, healthcare, automotive, and other industries are beginning to grapple with the Industrial Internet of Things (IIoT), where devices such as sensors, robots, mixing tanks, and insulin pumps are becoming increasingly connected. As India inches towards its own industrial revolution, the implementation of internet technology in the industrial sector is set to boom in the coming years. To provide momentum and to enhance the technology prowess of the Indian manufacturing sector, a full-day session on the Industrial Internet of Things has been envisaged, giving attendees nine power-packed sessions that make them future-ready. Prominent speakers from India and abroad will share the dais and enlighten attendees with their vast experience in integrating internet-based technology into the factory floor. The topics range from realising business value from IIoT, to IIoT components, to how IIoT is transforming the industrial space. The detailed agenda is listed below.
Keynote Address: IIOT worldwide industry perspective; IIOT & India – Atul Modi, PMP, Global Lead – Control & Automation, Group Engineering Center, Unilever Engg.
Session 1: Realizing Business Value from IIOT – Dr Ravi Kumar, Associate Vice President & Head, Advanced Engineering Group, Infosys
Session 2: Manufacturing Intelligence through IIOT – Mr Mahesh Mahajan, Global Lead – IIOT / Industrial Internet, Accenture Digital
Session 3: Transitioning from Connected to Smart – Mr Ted Masters, President & CEO, FCG
Session 4: The Role of Industrial Communication in the Context of IIOT – Mr Peter Lutz, Managing Director, Sercos International e.V.
Session 5: Middleware – an Essential Component for the Success of IIOT Projects – P V Sivaram, Managing Director, B&R Industrial Automation Pvt. Ltd.
Session 6: Remote Monitoring of Assets – Vinay Nathan, CEO & Co-Founder, Altizon Systems
Session 7: IoTizing a Supply Chain – Amarnath Shete, Head – Internet-of-Things Advisory, Digital, Wipro
Session 8: Digital Innovation – How IIoT is Transforming the Industrial Space – Neelam Singh, Senior Analyst, ARC Advisory Group
Panel Discussion: IIOT India Road Map – Opportunities & Challenges

About Automation Expo 2016

The four-day power-packed exhibition will showcase the best of innovations from the fields of Process Automation & Control Systems, Factory Automation, Industrial Automation, Electric Automation, Field Instrumentation & Smart Sensors, Robotics & Drives, Software Solutions, Bus Technologies, Wireless Technology, Building Automation, Hydraulics & Pneumatics, Automation in Renewable Energy, and Safety & Security Systems. Furthermore, for the first time, we have special clusters dedicated to Robotics, Technology & Industrial Internet of Things, Start-Ups, and Solar. Since its inception more than a decade ago, Automation Expo has consistently evolved as a top-of-the-class platform showcasing the world’s best automation technology under one roof.
The exhibition is graced by the who’s who of the industry: decision makers, fund managers, technocrats and technology enthusiasts. Here, professionals mingle with technology providers in one-to-one meetings to create sustainable business models for the future. Mr M. Arokiaswamy, Managing Director, IED Communications Ltd, the brain behind this mega exhibition, says, “Our journey since 2006 has been a highly successful one considering the global players’ participation in the event. Today there is a pressing need for every industry in India to acquire technology based on automation to take on global competition.” The 11th edition of the show is expected to draw more than 50,000 visitors from various industries such as Oil & Gas, Automobiles, Pharmaceuticals, Food Processing, Fertilizers and Chemicals, Cement, Glass, OEMs, and Electrical and Electronics, along with many other automation technology users and solution providers. With more than 800 top automation manufacturers and solution providers already gearing up to showcase their technology prowess at the mega event, the upcoming show will surely set a benchmark in the history of the automation industry. For more information, please contact: Mr. M. Arokiaswamy / Ms. Jyoti. Tel: 91-22-2207 3370 / 2207 9567. Email: arokiaswamy@iedcommunications.com; jyothi@iedcommunications.com. URL: www.iedcommunications.com
Automation Expo 2016 – an IED Communication Initiative, is India’s biggest and South East Asia’s 2nd biggest Automation show is poised to showcase more futuristic technologies and innovation like never before. The show is supported by Dept. of Science and Technology, Govt. of India. To mark its 11th edition of the show, the exposition is […]
["Deep Tech"]
[]
AIM Media House
2016-07-26T14:32:05
2016
773
["Go", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "Git", "automation", "ai_applications:robotics", "GAN", "R"]
["AI", "R", "Go", "Git", "GAN", "automation", "innovation", "programming_languages:R", "programming_languages:Go", "ai_applications:robotics"]
https://analyticsindiamag.com/deep-tech/indias-first-iiot-conference-making-mark-indian-soil/
3
10
3
true
false
false
10,021,007
What Is Meta-Learning via Learned Losses (with Python Code)
Facebook AI Research (FAIR) has broadly classified its meta-learning research into two types: first, methods that can learn representations for generalization; second, methods that can optimize models. We thoroughly discussed the first type in our previous article on MBIRL. In this post, we give a brief introduction to the second type. Last month, at the International Conference on Pattern Recognition (ICPR), Italy, January 10-15, 2021, a group of researchers (S. Bechtle, A. Molchanov, Y. Chebotar, E. Grefenstette, L. Righetti, G. S. Sukhatme and F. Meier) presented a research paper focusing on automating the “meta-training” process: Meta-Learning via Learned Loss (ML3).

Motivation Behind ML3

In meta-learning, the goal is to efficiently optimize a function fθ, which can be a regressor or classifier, by finding the optimal value of θ; L is the loss function and h is the gradient transform. The majority of work in deep learning is associated with learning the function f directly from data, and some meta-learning work focuses on the parameter update itself. In the ML3 approach, the authors instead target loss learning. Loss functions are architecture-independent and common to all learning problems, so a learned loss function requires no per-problem engineering and optimization, and it allows extra information to be added during meta-training.

About ML3

The key idea of the proposed framework is a meta-training pipeline that not only optimizes the performance of the model but also generalizes across different tasks and model architectures. The learned loss functions efficiently optimize models for new tasks. The main contributions of the ML3 framework are: i) it is capable of learning adaptive, high-dimensional loss functions via backpropagation and gradient descent;
ii) the framework is very flexible, as it can encode additional information at meta-train time and generalizes across problem types, covering regression, classification, model-based reinforcement learning and model-free reinforcement learning.

The Model Architecture of ML3

Learning a loss function is framed as a bi-level optimization, i.e., it contains two optimization loops: inner and outer. The inner loop trains the model (the optimizee) with gradient descent, using the loss learner’s meta-loss function; the outer loop optimizes the meta-loss function by minimizing the task loss, i.e., the regression, classification or reinforcement-learning loss. The process contains a function f, parameterized by θ, that takes an input x and outputs y. It also learns a meta-loss network M, parameterized by Φ, that takes the input and output of the function f together with task-specific information g (for example, the ground-truth label for regression or classification, the final position in MBIRL, or the sampled reward in model-free reinforcement learning) and outputs the meta-loss L, parameterized by both Φ and θ. So, to update the function f, compute the gradient of the meta-loss L with respect to θ and update θ using the learned loss function, as shown below:

(Figure: update of the model parameters θ via the learned loss)

Now, to update the loss network M, formulate a task-specific loss that compares the output of the currently optimized f with the target information. Since f is updated with L, the task loss is also a function of Φ, so perform a gradient update on Φ to optimize M. This finally forms a fully differentiable loss-learning framework used for training.

(Figure: update of the loss network M)

To use the learned loss at test time, directly update f by taking the gradient of the learned loss L with respect to the parameters of f.
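To make the inner/outer scheme concrete, here is a minimal toy sketch of our own (an illustration under simplifying assumptions, not the authors’ implementation): the model is a 1-D linear map f_θ(x) = θ·x, and the “meta-loss” is reduced to a single learned scalar phi scaling a squared error. The outer loop differentiates the post-update task loss through one inner gradient step analytically, where ML3 would use backpropagation through a meta-loss network.

```python
import numpy as np

# Toy ML3-style bi-level loop. Learned loss: L_phi(y_pred, y) = phi * (y_pred - y)^2.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100)
y = 3.0 * x                      # ground-truth task: theta* = 3

alpha, beta = 0.1, 0.5           # inner and outer learning rates
phi = 1.0                        # meta-loss parameter (to be learned)

def inner_step(theta, phi):
    """One gradient step on the learned loss L_phi = phi * (theta*x - y)^2."""
    grad_theta = np.mean(2.0 * phi * x * (theta * x - y))
    return theta - alpha * grad_theta

for _ in range(200):             # outer loop (meta-training)
    theta0 = 0.0                 # fresh model for each meta-iteration
    theta1 = inner_step(theta0, phi)
    # Differentiate the post-update task loss w.r.t. phi *through* the inner
    # step (ML3 does this with backprop; here it is analytic for a linear model).
    dtheta1_dphi = -alpha * np.mean(2.0 * x * (theta0 * x - y))
    dtask_dtheta1 = np.mean(2.0 * x * (theta1 * x - y))
    phi -= beta * dtask_dtheta1 * dtheta1_dphi

# Meta-test: the learned loss alone now trains a fresh model in a few steps.
theta = 0.0
for _ in range(5):
    theta = inner_step(theta, phi)
print(theta)  # close to the true parameter 3.0
```

The outer update settles on a phi for which a single inner step lands near the task optimum, which is the essence of the bi-level scheme; the real framework replaces the scalar phi with the meta-loss network M parameterized by Φ.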
Applications of ML3:

- Regression problems
- Classification problems
- Shaping the loss during training, e.g., convexifying the loss or adding an exploration signal (ML3 makes it possible to add additional information during meta-training)
- Model-based reinforcement learning
- Model-free reinforcement learning

Requirements & Installation

Python 3.7. Clone the GitHub repository via git:

!git clone https://github.com/facebookresearch/LearningToLearn.git
%cd LearningToLearn/

Install all the dependencies of ML3 via:

!python setup.py develop

Paper Experiment Demos

This section covers the different experiments mentioned in the research paper.

A. Loss Learning for Regression

1. Run the sine-function regression experiment with the code below:

!python ml3/experiments/run_sine_regression_exp.py

2. Now you can visualize the results as follows.

2.1 Import the required libraries, packages and modules, and specify the path to the data saved during meta-training. The code snippet is available here.

2.2 Load the data saved during the experiment.

# load results
train_reg = np.zeros((50, len(seeds)))
train_ml3 = np.zeros_like(train_reg)
test_reg_loss = np.zeros((100, len(seeds)))
test_ml3_loss = np.zeros_like(test_reg_loss)
test_loss_trace = np.zeros_like(test_reg_loss)
n_steps = 1
for j, seed in enumerate(seeds):
    exp_file = "sine_regression_seed_{}.pt".format(seed)
    data = torch.load(os.path.join(EXP_FOLDER, exp_file))
    train_reg[:, j] = np.asarray([np.vstack(data[i]['train_reg']['loss_trace']).mean(axis=0)[n_steps] for i in range(len(data))])
    train_ml3[:, j] = np.asarray([np.vstack(data[i]['train_ml3']['loss_trace']).mean(axis=0)[n_steps] for i in range(len(data))])
    test_reg_loss[:, j] = np.vstack(data[-1]['test_reg']['loss_trace']).mean(axis=0)
    test_ml3_loss[:, j] = np.vstack(data[-1]['test_ml3']['loss_trace']).mean(axis=0)

2.3 Visualize the performance of the meta-loss when used to optimize the meta-training tasks, as a function of (outer) meta-training iterations.
# mean and variance over seeds
mu1 = train_reg.mean(axis=1)
sigma1 = train_reg.std(axis=1)
mu2 = train_ml3.mean(axis=1)
sigma2 = train_ml3.std(axis=1)
t = np.arange(0, 500, 10)
# plot it!
fig, ax = plt.subplots(1)
ax.plot(t, mu1, lw=2, label='SGD with MSE Loss', color='C1')
ax.plot(t, mu2, lw=2, label='SGD with ML3 Loss', color='C0')
ax.fill_between(t, mu1+sigma1, mu1-sigma1, facecolor='C1', alpha=0.5)
ax.fill_between(t, mu2+sigma2, mu2-sigma2, facecolor='C0', alpha=0.5)
ax.set_title(r'MSE on train tasks after 100 gradient steps')
#ax.legend(loc='upper left')
ax.set_xlabel('meta train iter')
ax.set_ylabel('average MSE on meta train tasks')
ax.grid()
ax.legend(loc='lower left')
plt.savefig("{}/sine_meta_training.png".format(EXP_FOLDER))

The output is the plot saved as sine_meta_training.png.

2.4 Evaluate the learned meta-loss networks on test tasks: plot the performance of the final meta-loss network when used to optimize new test tasks at meta-test time. Here, the x-axis represents the number of gradient descent steps. The code snippet is available here.

B. Reward Learning for Model-based RL (MBRL) – Reacher

For meta-learning the reward, run:

!python ml3/experiments/run_mbrl_reacher_exp.py train

For testing the reward, run:

!python ml3/experiments/run_mbrl_reacher_exp.py test

C. Learning with Extra Information at Meta-train Time

This demo shows how extra information can be added during meta-training in order to shape the loss function. For the experiment, we take the example of the sine function. The script requires two arguments: the first is train/test, and the second indicates whether to use extra information, set via True/False (with/without extra info). For training, the code is given below:

!python ml3/experiments/run_shaped_sine_exp.py train True

To test the loss with extra information, run:

!python ml3/experiments/run_shaped_sine_exp.py test True

For comparison, we repeated the above two steps with the argument set to False.
The full code is available here, along with a comparison of the results via visualization. Similarly, the research experiment for meta-learning the loss with an additional goal in the mountain-car environment can be run; the code is available here.

EndNotes

In this write-up, we have given an overview of Meta-Learning via Learned Loss (ML3), a gradient-based bi-level optimization algorithm that is capable of learning any parametric loss function, as long as its output is differentiable with respect to its parameters. These learned loss functions can be used to efficiently optimize models for new tasks. Note: all the figures/images except the outputs of the code are taken from official sources of ML3. Colab Notebook: ML3 Demo. Official code, documentation & tutorial are available at: GitHub, Website, Research Paper.
Facebook AI Research (FAIR) research on meta-learning has majorly classified into two types:  First, methods that can learn representation for generalization. Second, methods that can optimize models. We have thoroughly discussed the type first in our previous article MBIRL. For this post, we are going to give a brief introduction to the second type. Last […]
["Deep Tech"]
["Deep Learning", "optimization", "Reinforcement Learning", "reinforcement learning an introduction"]
Aishwarya Verma
2021-03-01T11:00:00
2021
1,173
["Go", "TPU", "Deep Learning", "Reinforcement Learning", "AI", "ML", "RAG", "Colab", "Ray", "Python", "deep learning", "optimization", "R", "reinforcement learning an introduction"]
["AI", "ML", "deep learning", "Ray", "Colab", "RAG", "TPU", "Python", "R", "Go"]
https://analyticsindiamag.com/deep-tech/what-is-meta-learning-via-learned-losses-with-python-code/
4
10
0
true
true
true
10,143,367
Indian Government Ready to Implement New Laws for AI
Union Minister for Electronics and Information Technology, Ashwini Vaishnaw, has called for a balanced approach to addressing fake news while safeguarding freedom of speech, and is open to implementing new laws specific to AI. Speaking in Parliament on December 11, he stressed the critical need for consensus on social media accountability and AI governance, hinting at the possibility of a new legal framework if society aligns on the matter. Addressing concerns about fake news and misinformation, Vaishnaw remarked, “It is a major challenge that societies across the world are facing—the accountability of social media, particularly in the context of fake news and the creation of fake narratives.” He underscored the importance of achieving societal and legal accountability, adding, “These are the issues where freedom of speech comes on one hand and accountability and having a proper real news network getting created, on the other hand. These are things which need to be debated, and if the house agrees and if there is a consensus in the entire society, we can come up with the new law.”

Indigenous AI Solutions for Privacy and Governance

The minister highlighted government initiatives under the AI Mission aimed at addressing privacy and governance challenges through indigenous solutions. “To address the emerging landscape of AI, we have initiated eight projects aimed at creating tools and technologies within the country,” he noted. The projects, developed under the “Safe & Trusted AI” pillar, include:

Machine Unlearning – IIT Jodhpur: generative foundation models.
Synthetic Data Generation – IIT Roorkee: mitigating bias in datasets.
AI Bias Mitigation Strategy – NIT Raipur: responsible AI for healthcare.
Explainable AI Framework – DIAT Pune and Mindgraph Technology: privacy-preserving AI for security.
Privacy Enhancing Strategy – collaboration among IIT Delhi, IIIT Delhi, and others: robust privacy-preserving ML models.
AI Governance Testing Framework – Amrita Vishwa Vidyapeetham: Transparency and risk assessment for LLMs. AI Ethical Certification Framework – IIIT Delhi: Tools for fairness in AI models. AI Algorithm Auditing Tool – Civic Data Labs: Open-source framework for algorithmic auditing. Vaishnaw emphasised India’s leadership in shaping global AI policies. “India is one of the leading countries in shaping global thought on AI governance. Last year, India became the chair of Global Partnership on AI (GPAI) and held the Summit this year,” he stated. He also noted India’s influence in international AI discussions with organisations like the OECD and United Nations. AI regulation in India is quickly becoming a hot topic, but are these discussions worth the hype? With AI still in its infancy across the country, many argue that these conversations are out of sync with ground realities.
AI regulation in India is quickly becoming a hot topic, but are these discussions worth the hype?
["AI News"]
["AI in governance", "AI India", "government of india"]
Mohit Pandey
2024-12-12T10:06:15
2024
432
["government of india", "Go", "API", "AWS", "AI", "ML", "Aim", "Rust", "AI India", "foundation models", "machine unlearning", "R", "AI in governance"]
["AI", "ML", "foundation models", "Aim", "machine unlearning", "AWS", "R", "Go", "Rust", "API"]
https://analyticsindiamag.com/ai-news-updates/indian-government-ready-to-implement-new-laws-for-ai/
3
10
0
false
false
false
10,093,189
Is Google Following the Apple Path?
At the latest in-person Google I/O event, the tech giant announced a plethora of news at lightning speed, including the launch of major Pixel products, signaling that the Pichai-led company is finally building its own ecosystem after nearly a decade of attempts. Yesterday, a flurry of software and service features, coupled with state-of-the-art hardware, was showcased. This highly anticipated annual event unveiled a trio of Pixel products, spearheaded by the Pixel 7a as the successor to the Pixel 6a. Furthermore, Google ventured into uncharted territory by introducing a dockable tablet and a foldable device. With an expanding Pixel portfolio, the Google ecosystem has reached newfound heights of capability. Despite the limitations of the Android platform, Google seems determined to leverage its prowess and deliver an experience that resonates with users on par with Apple’s unmatched standards. The New Entrants Since their debut in 2016, Pixel phones have evolved immensely. As a smartphone original equipment manufacturer (OEM), Google had a rough start. With a focus on integrating hardware and software seamlessly, the latest developments suggest that the tech titan is now committed to building its own robust ecosystem. While previous versions of Pixel phones may not have been universally praised, the recently unveiled Pixel 7a has departed from the status quo. It boasts significant enhancements both internally and externally, with upgraded camera hardware, Google’s proprietary processor, and a boxy design. Notably, the tech giant seems determined to retain the signature camera bar for the brand. The Pixel Tablet disrupts the landscape of smart screens by breaking in two. Composed of an Android slate and a magnetic dock unit equipped with its own built-in speakers, this design is clever. Amazon had previously explored a similar concept with its Fire tablets, but Google’s showcase is more aesthetically polished.
Capitalizing on this design, Google bundled the two components together at $499 (INR 40,970). Positioned as more than just a tablet, this device assumes the roles of a smart home controller/hub, a teleconferencing solution, and a video streaming machine. While it may not replace one’s television experience entirely, the Tablet is suitable for YouTube content and more. Furthermore, the search giant also unveiled the Pixel Fold, its first foldable phone, which will launch this summer at a whopping $1,799 (INR 1,47,500). Currently, South Korea-based Samsung rules the foldable phone domain, with a 62% market share during the first half of 2022, as per Counterpoint Research. Notably, Samsung is also the most sought-after brand for foldable phones. While the Pixel Fold seems promising, its high price point can act as an obstacle for the average consumer. Got To Get Serious Hopefully, the Pixel series remains more consistent henceforth, which will help make things more interconnected and seamless for the tech giant. However, releasing many new products in a rush could lead to problems. We know that the Pixel ecosystem won’t be nearly as exclusive and restrictive as the Apple ecosystem; quite the opposite. But a gradual and rigorous attempt from Google’s side is visible. For the Mountain View-based company, this is a prime opportunity to finally put to bed the claims that hardware is not the company’s cup of tea. Anshel Sag, a senior analyst at Moor Insights & Strategy, emphasized that it will be good for the company if Google doesn’t give up; however, “Google can only subsidize its hardware business for so long and will have to eventually chase profitability.” While Google’s Pixel line promises an ecosystem, the company’s reputation for taking U-turns remains unmatched and poses a challenge for potential investors. The inaugural Pixel device, in 2016, left a lasting impression, and Google is gradually resolving its shortcomings.
While Google’s recent moves can establish the firm as a serious player in the hardware market, it will take time to overcome the uncertainties.
With the launch of Pixel products, it looks like Google is finally serious about creating its own ecosystem
["Global Tech"]
["Google Pixel"]
Tasmia Ansari
2023-05-11T18:00:00
2023
628
["Go", "API", "programming_languages:R", "AI", "ML", "programming_languages:Go", "RAG", "Aim", "Google Pixel", "R"]
["AI", "ML", "Aim", "RAG", "R", "Go", "API", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/global-tech/is-google-following-the-apple-path/
3
9
2
false
false
false
10,141,303
AI Powers Up to Tackle the World’s Toughest Energy Challenges
The world of energy is changing fast, driven by rising electricity demand, the push for decarbonisation, and the challenge of adding renewables to the grid. In an exclusive interview with AIM, Shantanu Som, Asia executive engineering leader at GE Vernova, explained how cutting-edge tools could transform the way we keep power systems running reliably and efficiently. What is Predictive Maintenance? Predictive maintenance streamlines the maintenance process and involves prioritising tasks based on actual needs rather than a schedule. It helps allocate resources more efficiently and use them only when and where required. This can also be implemented in the production of electricity, which is facing unprecedented demand due to the rapid growth of data centres, EV adoption, and the electrification of underserved regions. With over 700 million people still without access to electricity, the world currently faces a huge challenge. While phasing out coal demands massive investments, the rapid rise of renewable energy like solar and wind is straining power grids, causing supply instability. This is where predictive maintenance comes in. It analyses the demand and supply chains for electricity to optimise them and help keep things running smoothly, ensuring reliable energy while also working towards a cleaner, more sustainable future. AI Solving Problems with Parity GE Vernova has been using AI and machine learning for years, with tools like Asset Performance Management (APM) systems and digital twins to keep equipment running smoothly. APM uses sensor data to spot problems early on, while digital twins create virtual models of assets like gas turbines to compare real-time performance and detect issues. “It’s like your car alerting you that your tyre will go flat in 5,000 kilometres, so you can act before that happens,” Shantanu explained. This approach helps avoid expensive downtime. Right now, these tools rely on physics models and past data.
Generative AI could take things further by solving complex problems and ensuring consistent results. With 7,000 gas turbines worldwide, it could share solutions across similar machines, thus cutting down on errors. Shantanu further said that generative AI can help bring parity, improve maintenance and build trust in AI-driven systems. A Cautious Approach to GenAI Generative AI has great potential, yet its adoption in predictive maintenance is moving slowly. This is because, in industries like power and aviation, where safety and reliability are critical, results need to be precise and predictable. “The challenge is we still don’t fully understand what generative AI can and cannot do,” said Shantanu. “In industries like ours, one must be sure about the outcome versus the randomness of the outcome. It’s the same reason aviation is cautious about new technologies.” Salil Parekh, Infosys CEO and MD, said in July, “We are not at this stage disclosing and quantifying externally our revenue from it. The work we are doing is quite incredible. The focus is really on what enterprises are doing for generative AI.” Randomness in generative AI poses a challenge that must be carefully managed. A cautious approach allows more time to test and refine the technology, ensuring it’s reliable and easy to understand before being deployed widely. Diagnostics, as well as Troubleshooting Generative AI may still be in its early days, but Shantanu sees a sea of potential. It could improve diagnostics by simulating complex scenarios and suggesting fixes before the problems occur. This would be especially useful as renewable energy systems get more complicated. Moreover, it can create consistent solutions by analysing data from many machines and tailoring recommendations, reducing inconsistencies across fleets. Beyond maintenance, it could help design better, more efficient systems using past data and predictions. 
Generative AI could also make troubleshooting faster by offering smart suggestions to quickly identify and fix issues, cutting downtime. A Future Powered by AI Predictive maintenance is just one piece of the puzzle. GE Vernova’s broader vision involves creating solutions that integrate renewables, gas power, and digital tools into a cohesive framework. Platforms like the grid operating system are pivotal in managing grid stability by orchestrating which assets come online, how much power they generate, and for how long—all while keeping carbon intensity in check. Generative AI could help manage grid challenges by simulating scenarios and finding the best ways to balance renewable energy fluctuations with reliable power supply. While the industry is cautious, given the high stakes, the potential benefits are huge—more reliable systems, smoother operations, and lower carbon emissions. Generative AI’s role in the power sector reflects the energy transition itself: careful, gradual, and ultimately transformative. “The acceptance will be slower. But I also feel that it has immense power, which will make decision-making much faster and curated right, which is not there today,” said Shantanu.
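The digital-twin comparison Shantanu describes (the model predicts, the sensors report, and any deviation beyond tolerance triggers an early alert) can be sketched in a few lines. This is a toy illustration with invented readings and thresholds, not GE Vernova's actual APM logic:

```python
# Toy residual check in the spirit of the APM / digital-twin approach
# described above: compare live sensor readings against a twin's expected
# values and flag readings that drift beyond a tolerance. All names and
# numbers here are invented for illustration.

def flag_anomalies(expected, measured, tolerance):
    """Return the indices of readings deviating from the twin's prediction."""
    return [
        i for i, (exp, obs) in enumerate(zip(expected, measured))
        if abs(obs - exp) > tolerance
    ]

# Simulated turbine exhaust temperatures (deg C): the twin predicts a flat
# profile, but the last two readings drift upward, hinting at a fault.
twin_prediction = [550.0, 550.0, 550.0, 550.0, 550.0]
sensor_readings = [549.2, 550.8, 551.0, 557.5, 560.1]

alerts = flag_anomalies(twin_prediction, sensor_readings, tolerance=5.0)
print(alerts)  # -> [3, 4]: the readings to act on before a failure occurs
```

Real APM systems layer physics models, fleet statistics and learned baselines on top of this kind of residual check; the point here is only the shape of the early-warning idea.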
AI could help manage grid challenges by simulating scenarios and finding the best ways to balance renewable energy fluctuations.
["AI Features"]
["energy"]
Sanjana Gupta
2024-11-20T18:00:00
2024
773
["Go", "energy", "machine learning", "GenAI", "AI", "ML", "Git", "Aim", "generative AI", "Rust", "R"]
["AI", "machine learning", "ML", "generative AI", "GenAI", "Aim", "R", "Go", "Rust", "Git"]
https://analyticsindiamag.com/ai-features/ai-powers-up-to-tackle-the-worlds-toughest-energy-challenges/
2
10
3
false
false
false
27,081
5 Robotised Countries Which Are Giving India Major Automation Goals
Robots have taken over many human-aided processes and have brought unique ways of automation, entertainment and emotional connect, among many other things. With humanoid robots like Sophia, Aibo and Kuri gaining prominence, personal and assistive robots have become a rage. Reports suggest that the automotive robotics market size is expected to be valued at $13.6 billion by 2025. Europe, North America, and Asian countries like Japan, China and South Korea are leading the market. While the need for automotive robots in industries such as manufacturing has increased significantly, countries like Japan are focusing on creating “caring” robots, which can look after the elderly and physically-challenged citizens. Here we list five countries that are leading the race in adopting robot-led services (in no particular order). Japan Japan is the world’s predominant industrial robot manufacturer and produced a record 153,000 units in 2016. Reportedly, it accounts for 52 percent of the global robotics supply in areas like manufacturing. If we go by robot density, Japan employs approximately 300 robots for every 10,000 working humans. Some of the key robots made by the country are Robear, a nursing robot with a gentle touch, Vevo and Pepper. The concept of cobots — collaborative robots working in association with humans — has been gaining prominence in Japan. Some of the players leading the robot market in Japan are Fanuc and Yaskawa Electric. Germany Reports suggest that Germany boasts of using 300 robots per 10,000 human workers. Between 2010 and 2017, the robotics industry in Germany grew at a rate of 10 percent. The country witnessed remarkable growth in sales as well, which almost doubled during that period. It also scores high in the number of units exported. With a growth rate of more than 15 percent, the industry turnover was projected at €4.2 billion. IFR suggests that the number of units produced will grow to 521,000 by 2020.
Some of the leading German robotics players are Kuka Systems, Robot Technology GmbH and ArtiMinds Robotics, among others. Singapore It is one of the leading robot-aided countries, with a density of approximately 488 robots per 10,000 employees. While manufacturing is the leading industry in terms of robot adoption, the electronics, healthcare, construction and hospitality industries follow closely. Reportedly, as about 90 percent of businesses in the country’s food and beverage sector are struggling to find workers, these businesses have turned to robots. Singapore also has an abundance of robotics research facilities and is home to universities such as Nanyang Technological University, National University of Singapore, and Singapore Polytechnic, which are driving robotics innovation. The government also shows a keen interest in boosting the robotics industry, having announced a fund of $450 million in 2016 committed to the National Robotics Programme over three years. South Korea It has shown one of the highest robot densities since 2010, which could be attributed to increased adoption, particularly in the electronics and automotive industries. Just like Singapore, the government’s keen interest in funding corporate R&D centres has given the industry a big boost. In 2010, South Korea established the Korea Institute for Robot Industry Advancement to oversee the development. Since then the industry has been valued at over $4 billion. Some of the leading players in the robotics sector from South Korea are Doosan and Hanwha Techwin Group, among others. Denmark Not just a world leader in using renewable energy, this country is also producing and using machinery and robots at a pace like none other. With a strong robotics cluster in Odense, it has easily become one of the leading markets for robots and drones.
It hosts a range of test sites for robot applications in healthcare, agriculture, production and other industries, making it a suitable ground for robotic innovations. There were reportedly 752 industrial robots sold in 2016. It has been predicted that more than 1.7 million new industrial robots will be installed in factories worldwide by 2020. As robots make up 25 percent of their total industrial output, Denmark enjoys the mark of 211 robots per 10,000 working people. Some of the leading robotics players from Denmark are Universal Robots and Blue Ocean Robotics. It is currently home to more than 70 robotics and automation companies. Not just a leading manufacturer, Denmark is also home to some interesting academic programs on robotics. Scenario in India While Asian nations like Japan, South Korea and China are making headway in the robotics space, India is still catching up with these nations at a decent pace. Given the larger economic developments and investments coming in, this paints a greener picture for a growing robotics market in India. India has shown growth both in terms of producing robots and adopting home robotics, though that growth is yet to reach a significant scale. Concluding Note Other countries that are not included in the list but show significant growth in the robotics space are the US, Sweden, China, Italy, Taiwan and Belgium. As these countries drive technological developments at a fast pace, they have called for more practical and cost-effective robots at both the industrial level and in the home-bot space. The debate, however, remains whether robots are responsible for job cuts. While some have feared that automation might take away jobs, other reports suggest that it has resulted in increased jobs, capital retention and revenue generation.
Robots have taken over many human-aided processes and have brought unique ways of automation, entertainment and bringing emotional connect, among many other things. With the number of humanoid robots like as Sophia, Aibo and Kuri gaining prominence, personal and assistive robots have become a rage. Reports suggest that automotive robotics market size is expected to […]
["AI Trends"]
[]
Srishti Deoras
2018-08-08T12:53:23
2018
891
["Go", "API", "TPU", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "RAG", "automation", "R"]
["AI", "RAG", "TPU", "R", "Go", "API", "automation", "innovation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-trends/5-robotised-countries-which-are-giving-india-major-automation-goals/
3
10
4
false
false
false
40,025
5 Latest Machine Learning Jobs That Should Be On Your Radar
Analytics India Jobs offers a new platform for the data science community in India by bringing the latest job openings across the country. Here are the 5 most recent machine learning job openings across major cities in India: AI Architect @ ZS Associates, Pune ZS‘s Software Development Team designs, implements, tests and supports high quality products used by hundreds of companies and thousands of end users to make critical decisions and manage sales operations. Requirements: Knowledge and experience in some of the key AI platforms, e.g. AWS Sagemaker, IBM Watson, Microsoft Azure, Google Api.Ai, Facebook Wit.Ai, chatbots using Microsoft Bot Framework Serve AI/ML models in enterprise-grade technology platforms through microservices Tensorflow, Caffe, CNTK, commercial technologies/platforms, etc Experience working in a DevOps environment, using industry standard tools (GIT, JIRA, Teamcity, etc.) Able to explain technical concepts in non-technical language Solid hands-on experience in Artificial Intelligence platforms with an understanding of the end-to-end life cycle of AI projects Very solid proven hands-on experience in UI technologies (AngularJS, ReactJS, StencilJS etc), Java/J2EE, Kubernetes, Spring framework, Microservices, REST APIs, Kafka etc. Exposure in Data Governance and management Exposure to Hadoop data science workbench solutions Preferred experience in any one of the integrated AI products like Microsoft Azure ML, AWS ML Solid experience in any one of the ML pipeline solutions like Dataiku, Anaconda, KNIME and auto ML solutions like H2O.ai, DataRobot or Firefly.ai Apply here Data Engineer & Enterprise Architect @ Hugo Edge Solutions, Bangalore As a data engineer, the candidate will identify valuable data sources and automated collection processes, and undertake preprocessing of structured and unstructured data.
Requirements: 4-6 years of IT Development Experience Data engineering chops with big data technologies (AWS, Hadoop) and the drive to improve data quality, accuracy, and completeness. An understanding of the client experience, appreciating what it takes to form long-term client relationships Proficiency with handling structured and unstructured data sets, writing complex queries, and the occasional stored procedure Ability to design, build, and query data warehouses (MySQL, PostgreSQL, Oracle, MS SQL, Redshift) AWS Certified Solution Architect or Developer Associate required; Professional certification preferred Designing and developing machine learning and deep learning systems. Running machine learning tests and experiments. Implementing appropriate ML algorithms. Experience with Agile and the Enterprise SDLC (functional and unit testing, etc) are extremely important as is experience monitoring, debugging, and troubleshooting production workloads Apply here Machine Learning & Analytics Professional @ Careernet Consulting, Bangalore A potential candidate would be working for one of the clients who are into Banking Sector. 
Requirements: Bachelor’s / Master’s in Computer Science Engineering OR Masters in Statistics/Mathematics Experience in analyzing medium-complexity problems and translating them into an analytical approach Experience in Statistical Learning: Diagnostic Analytics, Simulation and Predictive Modeling, Time Series, Dynamic/Causal Model, Statistical Learning, Guided Decisions Experience with big data analytics and advanced data mining techniques to analyze data R, Python Experience with SQL and relational databases, data warehouse Hadoop (Hive, Pig, MapReduce, HQL) Experience with statistical programming languages – SAS, SAS EG, Enterprise Miner, Text Miner, KXEN, OPL/CPLEX, SPSS, RStudio, Revolution R Experience with data warehouse platforms – Teradata / GreenPlum, HANA Apply here Deep Learning and Computer Vision Expert @ Pratibha Analytics, Hyderabad The candidate will be part of a team that consists of highly qualified and dedicated professionals who are focussed on value creation for clients through automation, data and decision sciences. The main role of the team is to apply leading Advanced Analytics, Robotics Process Automation, Machine Learning and other AI frameworks, tools and techniques to solve clients’ issues and help them gain powerful new insights to improve revenues and profitability, and reduce costs by solving for a range of KPIs and value drivers based on the client industry and functional domain area. Requirements A degree in M.Sc. Stats, PhD in Statistics, Mathematics, Engg. in Computer Science, MCA, Masters in Computer Science or other business and engineering graduate streams with experience in Data Science and Advanced Analytics implementations. 2-8 years experience, with strong knowledge and skills in statistics, mathematics and computer science. Deep skills in Computer Vision solutions using CNN, RNN and other ensemble algorithms and methods.
Experience in design and delivery of leading Analytics / AI solutions to clients by applying deep skills in the above-mentioned disciplines, including Advanced Analytics, Machine Learning and Deep Learning. Python, Tensorflow, Keras, Pytorch, Caffe, R, along with a strong understanding of all key market-leading Advanced Analytics, Machine Learning and Deep Learning techniques, frameworks, methodologies and tools. Apply here Principal Machine Learning Engineer @ Leben Care Technologies, Hyderabad Founded in late 2016, Leben Care offers automated medical image analysis algorithms that improve access and quality of diagnosis across areas of life sciences. Their mission is to develop artificial intelligence products and solutions that improve access and quality of diagnosis across areas of life sciences. The Principal ML engineer would be part of a world-class AI R&D team to enhance the Netra.AI platform, enabling the next generation of medical imaging diagnostics for ophthalmology. Requirements Master’s Degree or PhD – Computer Science, Artificial Intelligence. Hands-on experience in implementing deep learning architectures using the latest machine learning tech and frameworks. Existing track record as a researcher in machine learning with publications in scientific journals. Design and build novel machine learning models to solve unique medical problems and improve patient outcomes. Test and evaluate algorithms on large medical datasets to prove robustness Deliver high quality and production-ready code Solid Python and C++ experience; comfortable Linux user. Solid mathematical background. Experience with a vast set of computer vision libraries. Deep understanding and hands-on experience in state-of-the-art Medical Image Analysis algorithms will be a plus. Apply here
Analytics India Jobs offers a new platform for the data science community in India by bringing the latest job openings across the country. Here are the 5 most recent Machine learning job openings across major cities in India: AI Architect @ ZS Associates, Pune ZS‘s Software Development Team designs, implements, tests and supports high quality […]
["AI Hirings"]
["devops journal", "machine learning for cities"]
Ram Sagar
2019-06-03T05:00:50
2019
937
["devops journal", "data science", "machine learning", "artificial intelligence", "AI", "ML", "computer vision", "deep learning", "analytics", "TensorFlow", "Azure ML", "machine learning for cities"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "computer vision", "data science", "analytics", "Azure ML", "TensorFlow"]
https://analyticsindiamag.com/ai-hiring/5-latest-machine-learning-jobs-that-should-be-on-your-radar/
3
10
6
true
true
true
58,350
Sneakers With A Pinch Of AI
Artificial intelligence (AI), after transforming several industries, has now entered the world of footwear and is revolutionising the much-loved sneakers. AI is not just assisting manufacturers in bringing new sneakers to market at a faster pace but is also helping them detect the fake copies that eat into a big share of the original manufacturers’ sales. In this article, let’s look at the roles AI is playing to uplift and modernise the sneaker market. Design and Fit The design of any sneaker needs to be sublime, since attention and sales depend on how good a sneaker looks on an individual. Although taste is subjective and differences of choice are another story, sneakers have to look good for anyone to buy them. More importantly, the fitting of the shoes matters the most. Keeping this aspect in mind, Nike has introduced a foot-scanning solution, which is designed to find a person’s best fit and goes by the name Nike Fit. The solution uses a combination of computer vision, data science, machine learning, artificial intelligence and recommendation algorithms to find the right fit for an individual. Using a number of data points, measurements are sent to the machine learning model, which takes into account the silhouette and the material used to adjust the comfort. Moving on, this data is combined with AI capabilities to curate an individual’s personal fit. Shoe — The Coach Shoes are soon going to coach individuals on how to run. A company called Runvi is on the verge of bringing in shoes that can coach an individual on how to run properly. Powered by artificial intelligence, these shoes contain a total of 30 pressure points in their two soles and an accelerometer that collects data on how an individual runs. The shoes consist of a brain known as the Core, which powers the sensors and stores the data before sending it to an individual’s smartphone.
Once the data is collected, the app begins coaching the individual, showing various metrics after a run or giving tips before it begins, based on historical data about the individual’s running style. Printed, Not Stitched Recently, American sports apparel giant Under Armour teamed up with Autodesk to come up with a unique sneaker, which was printed and not stitched. Under Armour created a sneaker that is durable, comfortable and lightweight with the use of additive manufacturing, or 3D printing. The company used a lattice structure to support the midsole, and once the concept was finalised, Autodesk came into play with its artificial intelligence program, which is used to test new designs. The AI calculates everything from the durability of the sneakers to how the final product would look. With a green signal from Autodesk’s program, the sneaker is sent to 3D printing. Verifying Originality With counterfeits dealing a massive blow to original manufacturers, it was time to find a way of dealing with fakes of brands like Adidas, Nike, Puma and more. Enter Entrupy, a technology provider, with its solution Legit Check Tech, a device that uses AI to detect if a pair of sneakers is original or counterfeit. The device is equipped with eight cameras positioned at different angles. Once a shoe is put inside the device, it clicks pictures from different angles. The user needs to download the corresponding app for the device, to which the photos are uploaded automatically after pairing. The photos are analysed using artificial intelligence, which detects the tag number of the shoe that is available in the manufacturer’s database. Counterfeits are caught since they do not carry the correct number present in the manufacturer’s database. Business Growth With the fashion market continuously growing and new pairs being developed every day, it is imperative for managers to integrate AI to drive business growth.
For this reason, several big brands are combining AI with the deep expertise of marketing professionals to reach effective solutions. Adoption of AI technology has increased massively, and the industry is finally looking for tailored solutions, with tech teams coming forward to tackle different domains.
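The verification flow the article describes for the Entrupy device reduces, at its final step, to checking an extracted tag number against the manufacturer's records. A minimal sketch of that lookup, with a made-up database and tag formats (Entrupy's actual pipeline is proprietary):

```python
# Hypothetical final step of the counterfeit check described above: the AI
# has already read the tag number off the photos; authenticity then reduces
# to membership in the manufacturer's database. All tags below are invented.

manufacturer_db = {"NK-2048-7731", "NK-2048-7732", "AD-9913-0042"}

def is_authentic(detected_tag: str) -> bool:
    """A pair passes only if its detected tag exists in the database."""
    return detected_tag in manufacturer_db

print(is_authentic("NK-2048-7731"))  # -> True: tag found, pair passes
print(is_authentic("NK-0000-0000"))  # -> False: unknown tag, flagged as fake
```

The hard part in practice is reliably reading the tag from the photos under varying lighting and wear; the lookup itself is trivial, which is why the device invests in eight cameras rather than a bigger database.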
Artificial intelligence (AI) after transforming several industries, it has now entered the world of footwear and revolutionising the much-loved sneakers. AI is not just assisting manufacturers in bringing a new sneaker at a faster pace but also is helping them detect the fake copies that are present in the market and eats up a big […]
["AI Features"]
["AI (Artificial Intelligence)", "nike"]
Rohit Chatterjee
2020-03-10T14:00:48
2020
695
["data science", "Go", "artificial intelligence", "machine learning", "programming_languages:R", "AI", "R", "programming_languages:Go", "Git", "computer vision", "nike", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "computer vision", "data science", "R", "Go", "Git", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/sneakers-with-a-pinch-of-ai/
3
10
3
false
true
false
24,827
Top 6 Free E-Books To Learn R At Beginner And Advanced Levels
When it comes to the data science landscape, R competes with languages like Python and tools like SAS. In terms of usability, R is the go-to language for exploratory work, visualisation and complex analysis, among others. While R is preferred for testing proofs of concept, Python, on the other hand, is used more for performance. Both R and Python work well with big data technologies like Spark, but PySpark has been voted to work better than SparkR. R also boasts of more statistical and machine learning libraries, and that’s why engineers lean towards technologies which can easily be integrated into an analytics solution. Also, in terms of flexibility, R has been voted better for complex analysis as opposed to Python and is preferred by people from a finance background. So how does one learn R without any prior programming skills or basic knowledge? There are plenty of online resources to get you started. In this article, we compile books voted as the perfect go-to for beginners in R. Beginners The R Book by Michael J Crawley: A good resource to get started; you can access the complete book here. Even though this is an old edition, it is a valuable resource for beginners who want to take a deep dive into the high-level language. It has been appreciated for its breadth of topics such as meta-analysis, Bayesian statistics, tree models and time series analysis. Data Wrangling With R by Bradley C Boehmke: This is another go-to resource for beginners which teaches the essentials of preprocessing. More suited for programmers, this book showcases the techniques required for data munging and cleaning. You can access it here. Statistical Analysis With R For Dummies by Joseph Schmuller: Here’s a book for the stats whiz. According to Wiley, this book lays out an easy-to-follow guide that builds on the foundational statistical concepts which R addresses.
It has also been billed as the best book for getting up to speed with RStudio and shows methods in predictive modelling. You can check it out here. R For Data Science by Garrett Grolemund and Hadley Wickham: This book is a great resource for beginners, as it dives into data visualisation, workflow basics and exploratory data analysis. It also demonstrates how to work with the tibble package. You can access the book here. Advanced Advanced R by Hadley Wickham: It is one of the top-voted books and is available here. The book is best suited for professionals because it is aimed primarily at R users who want to improve their programming skills and understand the language better. It can also be useful for programmers coming to R from other languages, as it explains some of R's quirks and shows how some parts that seem horrible do have a positive side. The author gives a deep insight into packages and also explains R data structures and vectorisation. In fact, Hadley Wickham's book has been voted a prime resource for learning R, since the author wrote most of the popular R packages. He also features another advanced book on his site. Visualisation R Graphics Cookbook by Winston Chang: A go-to book for visualisation, it features more than 150 ways to help generate high-quality graphics. This book also shows how to use the ggplot2 package to get started with visualising data. For those who are interested in visualising data in R, this book demonstrates techniques such as making bar graphs, line graphs and histograms, structuring data for graphing, creating heat maps and 3D scatter plots. You can access the book here. Best Way To Learn While combing through online forums and talking to people, we found that R enthusiasts believe the best way to learn the language is with Tidyverse packages or by getting familiar with base R. Once you get the basics of R right, you can start by tackling a problem: finding a dataset, cleaning it and applying it to business questions.
It is best to use R for analysis, just as you would in Excel, SPSS or Matlab. Go back to Stack Overflow, Reddit, reference books or the manual pages for R functions to get your queries answered. A great way to apply the learning is by participating in Kaggle competitions, which are also a great place to get support while learning R. For example, you can do the Titanic competition in R; it is a great way to assess your learning. This is billed as a beginner's competition, and Kaggle also provides a bunch of tutorials for it.
When it comes to the data science landscape, R competes with languages like Python and tools like SAS. When it comes to usability, R is the go-to language for exploratory work, visualisation and complex analysis, among others. While R is preferred for testing proofs of concept, Python, on the other hand, is used more for performance. […]
[]
["R", "R packages"]
Richa Bhatia
2018-05-23T08:27:19
2018
761
["big data", "data science", "Go", "machine learning", "AI", "R packages", "Python", "Aim", "analytics", "programming_languages:Python", "R"]
["AI", "machine learning", "data science", "analytics", "Aim", "Python", "R", "Go", "big data", "programming_languages:Python"]
https://analyticsindiamag.com/ai-features/top-6-free-e-books-to-learn-r-at-beginner-and-advanced-levels/
2
10
1
true
true
false
67,906
Top Exploration Strategies Used In Reinforcement Learning
A classical approach to any reinforcement learning (RL) problem is to explore and to exploit: explore to find the most rewarding route to the target, then keep on exploiting that action. Of the two, exploration is the hard part. Without proper reward functions, the algorithms can end up chasing their own tails for eternity. When we say rewards, think of them as mathematical functions crafted carefully to nudge the algorithm. To be more precise, consider teaching a robotic arm, or an AI playing a strategic game like Go or Chess, to reach a target on its own. Over the years, many exploration strategies have been formulated by incorporating mathematical approaches. In the next section, we list popular exploration strategies used in reinforcement learning models. Curiosity-Based Exploration First introduced by Dr Juergen Schmidhuber in 1991, curiosity in RL models was implemented through a framework of curious neural controllers. This described how a particular algorithm can be driven by curiosity and boredom. This was done by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world. This, in turn, requires the model network to model its own ignorance, thus showing a rudimentary form of self-introspective behaviour. Epsilon-Greedy As the name suggests, the objective of this approach is to identify a promising action and keep on exploiting it 'greedily', while still exploring at random from time to time. This approach is popularly associated with the multi-armed bandit problem, a simplified RL problem where the agent has to find the best slot machine to make the most money. The agent randomly explores with probability ϵ and takes the optimal action most of the time, with probability 1−ϵ. Among k machines, the machine with the highest current average payout is selected with probability (1 − ϵ) + (ϵ / k), and each machine that does not have the highest current average payout is selected with probability ϵ / k. Over time, the best-paying machine will be played more and more often.
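The epsilon-greedy selection rule described above can be sketched in a few lines of Python. This is an illustrative toy, not the article's own implementation: the list of running average payouts and the epsilon value are stand-ins you would maintain yourself while playing the arms.

```python
import random

def epsilon_greedy_select(avg_payouts, epsilon):
    """Pick a slot machine (arm) index using the epsilon-greedy rule.

    With probability epsilon, explore a uniformly random arm; otherwise
    exploit the arm with the highest current average payout. Net effect:
    the best arm is chosen with probability (1 - epsilon) + (epsilon / k).
    """
    if random.random() < epsilon:
        return random.randrange(len(avg_payouts))  # explore
    # exploit: arm with the highest running average payout so far
    return max(range(len(avg_payouts)), key=lambda a: avg_payouts[a])
```

With epsilon = 0 this is purely greedy; with epsilon = 1 it is purely random. In practice epsilon is often decayed over time so the agent explores early and exploits later.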
Upper Confidence Bound In this strategy, each action is scored by its upper confidence bound Q^t(a) + U^t(a), where Q^t(a) is the average reward associated with action a up to time t and U^t(a) is a bonus inversely proportional to how many times action a has been taken. Maximising this upper confidence bound is the strategy employed by the agent to move towards the goal. The popular AlphaGo Zero program of DeepMind used Monte Carlo Tree Search, which, in turn, uses a neural network to guide the simulations. Each simulation in the Go game iteratively selects moves that maximise the upper confidence bound. Thompson Sampling If we consider the canonical example of slot machines again, an arm produces a random payout drawn independently of the past. Because the distribution of payouts corresponding to each arm is not known in advance, the player (read: agent) can learn it only by experimenting. By continuing to explore alternative arms, an agent may learn how to earn higher payouts in future, making sure that the agent is not stuck with one strategy and explores others while exploiting one. Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance exploiting what is known, to maximise immediate performance, with investing to accumulate new information that may improve future performance. The agent keeps track of the probability of each action being optimal and samples from this distribution. At each time step, an action is taken using the following probabilistic function (via Lilian Weng): π(a|h_t) = P[Q(a) > Q(a′), ∀a′ ≠ a | h_t], where π(a|h_t) is the probability of taking action a, given the history h_t. Inspired by Thompson sampling, Bootstrapped DQN introduced a notion of uncertainty in Q-value approximation in classic DQN by using the bootstrapping method. Bootstrapping approximates a distribution by sampling with replacement from the same population multiple times and then aggregating the results.
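The Thompson sampling procedure described above can be sketched for the slot machine setting. This minimal sketch assumes Bernoulli (win/lose) payouts with Beta priors, a common textbook instantiation that the article does not itself specify; the class name and Beta(1, 1) prior are illustrative choices.

```python
import random

class ThompsonBandit:
    """Thompson sampling for Bernoulli-payout arms with Beta(1, 1) priors."""

    def __init__(self, n_arms):
        self.successes = [1] * n_arms  # Beta alpha parameters
        self.failures = [1] * n_arms   # Beta beta parameters

    def select(self):
        # Sample a plausible payout rate for each arm from its posterior,
        # then play the arm whose sample is highest.
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=lambda a: samples[a])

    def update(self, arm, reward):
        # Fold the observed win (1) or loss (0) back into the posterior.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1
```

Because each arm's posterior is sampled afresh at every step, uncertain arms keep getting occasional plays (exploration) while the empirically best arm is played most often (exploitation).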
Boltzmann Exploration Within reinforcement learning, exponential weighting schemes are broadly used for balancing exploration and exploitation, and are equivalently referred to as Boltzmann, Gibbs, or softmax exploration policies. In the most common version of Boltzmann exploration, the probability of selecting an arm is proportional to an exponential function of the empirical mean of that arm's reward: p_i ∝ exp(μ̂_i / τ), where μ̂_i is the empirical mean reward of arm i and τ is a temperature parameter that controls how greedy the policy is. Memory-Based Exploration Exploration algorithms in deep RL fall into three categories: randomised value functions, unsupervised policy learning, and intrinsic motivation. Memory-based exploration strategies were introduced to resolve the disadvantages of intrinsic-motivation or reward-based reinforcement learning: rewards in varying environments can be inadequate in real-time scenarios. DeepMind's Agent57, which recently set a new benchmark for Atari games, employed episodic memory in its RL policy. Agent57 is built on an NGU (Never Give Up) agent, which combines curiosity-driven exploration and distributed deep RL agents to compute an intrinsic reward that encourages exploration. This reward is defined by combining per-episode and life-long novelty. The per-episode novelty rapidly vanishes over the course of an episode, and it is computed by comparing observations to the contents of episodic memory. Apart from Agent57, there have been other works on RL without the disadvantages of intrinsic motivation. One such is the Multi-Agent Reinforcement Learning (MARL) work by MIT and DeepMind, which encourages coordination among agents within an environment, with agents learning from each other. Know more about exploration strategies in detail here.
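The Boltzmann (softmax) rule described above can be sketched directly from its formula. A minimal sketch: the function names and the temperature default are illustrative, and the max-subtraction is a standard numerical-stability trick rather than part of the formula itself.

```python
import math
import random

def boltzmann_probs(q_values, temperature=1.0):
    """Softmax over empirical mean rewards: p_i ∝ exp(q_i / temperature)."""
    # Subtract the max before exponentiating for numerical stability;
    # this leaves the normalised probabilities unchanged.
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

def boltzmann_select(q_values, temperature=1.0):
    """Sample an arm index according to its Boltzmann probability."""
    weights = boltzmann_probs(q_values, temperature)
    return random.choices(range(len(q_values)), weights=weights)[0]
```

A low temperature makes the policy nearly greedy (probability mass piles onto the best arm); a high temperature makes it nearly uniform, trading exploitation for exploration.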
A classical approach to any reinforcement learning (RL) problem is to explore and to exploit: explore to find the most rewarding route to the target, then keep on exploiting that action. Of the two, exploration is the hard part. Without proper reward functions, the algorithms can end up chasing their own tails for eternity. When we say rewards, think of […]
["AI Trends"]
["future of deep reinforcement learning", "Reinforcement Learning"]
Ram Sagar
2020-06-23T11:03:07
2020
857
["Go", "API", "future of deep reinforcement learning", "ELT", "Reinforcement Learning", "AI", "neural network", "ML", "llm_models:T5", "RAG", "R", "T5"]
["AI", "ML", "neural network", "RAG", "R", "Go", "API", "ELT", "T5", "llm_models:T5"]
https://analyticsindiamag.com/ai-trends/exploration-reinforcement-learning/
3
10
0
true
true
true
10,068,659
The story of Kaggle Grandmaster Janio Martinez Bachmann
Janio Martinez Bachmann's life can be summed up in one phrase:  "Mama ho haw ho, wow wow!" The Kaggle Grandmaster, who loves to play Mario on Nintendo in his spare time, works as a data analyst at Voodoo.io. Janio is from the Dominican Republic and has a postgraduate degree in Financial Planning from Humber College, Canada. In an exclusive interview with Analytics India Magazine, the financial analyst turned data analyst shared his story of becoming a Kaggle Grandmaster. AIM: How did your fascination with algorithms begin? Janio Martinez Bachmann: Most of my experience comes from the financial industry. I used to work at a credit bureau in the Dominican Republic. I was highly dependent on tools such as Excel for my day-to-day tasks. Though I did enjoy my job, I always asked myself if there was a better way of doing these repetitive tasks more efficiently. So, I started digging into programming languages such as Python and bought a book, Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurelien Geron, that changed the way I think about algorithms and data science. The book taught me how different algorithms, such as linear regression, decision trees, unsupervised models (clustering) and more, work. When I started reading this book, data science was not so hyped. I was not sure what I was getting myself into. However, I loved the mechanics of how different models worked and how you could solve business problems by using them; this was something that fascinated me. AIM: What were the initial challenges, and how did you address them? Janio Martinez Bachmann: I must be honest. I was not a maths guru, neither in high school nor in college. One of the toughest challenges I had was understanding how models work. It felt like getting into a black room without a light bulb. However, my curiosity about different algorithms pushed me to understand how these black-box algorithms functioned.
So, I started following many YouTube channels; Joshua Starmer is one of my favourites. I remember doing an exercise about how a DNN (Deep Neural Network) came up with a specific output. I had to do both forward and backward propagation on paper by implementing calculus concepts learned on the internet. If you ask me now about how I addressed these challenges, my only answer would be, "just be curious!". You don't need to have a PhD to know all these things. Curiosity is good enough. My advice to beginners would be to enjoy the ride and not get intimidated by all the terminology; all these concepts can be learned from the internet. AIM: What about coding excites you the most? Janio Martinez Bachmann: What I most enjoy about coding is that you have endless possibilities for getting your work done. As a data analyst, I am constantly challenged to find insights that will allow my employer to leverage opportunities in the market. But, how could I provide insights when working with massive amounts of data? The beauty of coding and open source packages comes into play here. The ability to code is like having superpowers! The possibilities are endless as to how to tackle a problem when you know how to code using different tools! This is what I most enjoy about coding: the creativity it brings and the efficiency in solving day-to-day problems. AIM: How do you get into the zone? Janio Martinez Bachmann: Believe it or not, I initially had lots of struggles getting into the zone. Nowadays, distractions come from all angles, and it's hard not to get distracted. However, when you need to pay attention to details (common when coding), it is critical to be in a concentrated state of mind. So, what does my routine look like? First, I hide my phone far away from my desk to get into the zone. Why do I do that?
My mobile is my main source of distraction since I tend to get constant notifications from there, and the closer I have my phone, the more tempted I will be to see what that notification is. So, to avoid that temptation, I usually place my phone in a spot hard to reach from my desk. I am an early bird. The first thing I do is prepare my daily task list. This gives you a better perspective on what you should accomplish during the day, giving you a better sense of direction. There is nothing worse for me than starting the day without knowing what I will do. I will feel completely lost. Once I have my task list prepared, I feel like I have a sense of purpose during the day. My daily task list would be the first step before getting into focus mode. AIM: What does your ML toolkit look like? Janio Martinez Bachmann: The most common tools I use include:
SQL (Structured Query Language): I mainly use this to extract all the necessary data directly from the database. Here, I perform the transformations necessary for later analysis, or display that information through a BI tool.
Tableau: Talking of BI tools, this is the dashboard tool I currently use to display all the necessary insights to stakeholders. There are other platforms such as PowerBI, Looker, QlikView etc.
R & RStudio: I mainly use R for performing statistical analysis and A/B testing, but there are other functionalities such as data transformation, visualisations and many more.
Python: I tend to use Python to automate processes that tend to be repetitive.
Shiny Web Apps: I use them as a sort of dashboard. The only difference is that they have more flexibility to integrate machine learning models into the web application.
DBT (Data Build Tool): It's the latest tool I'm currently learning, but this will be a game-changer, and I will say it will be a must-learn in the foreseeable future. It's a tool that uses software engineering principles to transform, test and document all your tables.
I currently use this tool together with Redshift. Git: This is a tool that anyone will eventually need to learn since, in most organisations, you will need to collaborate on your code. By knowing Git commands, you will be able to work with GitHub, GitLab, Bitbucket and many more collaborative tools. AIM: How should one prepare for their first hackathon? Janio Martinez Bachmann: In the hackathons that I participated in, I have mainly used Python to solve problems. So, my suggestion would be to start from there, since it's the most common language I have seen being used in hackathons. However, in terms of libraries for machine learning, I recommend learning the basics of Pandas, Matplotlib and Scikit-Learn, and concepts such as loops, to have more flexibility when manipulating data. AIM: What's your biggest pet peeve about hackathons? Janio Martinez Bachmann: I'll be honest, when I did my first hackathon, one of my main challenges was collaborating with others. It's not like I don't enjoy collaborating with others; I tend to get nervous when I must code next to a person. Have you ever blacked out when you must show your code or work on a screen? Well, something like this happened to me in my first hackathon. I was worried about what other more experienced coders would think of my skills. However, we should keep in mind that none of us are born coders. So, my advice would be, don't be afraid to participate in hackathons. See this as an opportunity to learn from more experienced folks in the field. AIM: What's the worst experience you've had as a coder? Janio Martinez Bachmann: As an analyst, I constantly interact with other stakeholders to visualise what those stakeholders want. One of the worst experiences working as an analyst is dealing with a stakeholder who asks you for something but does not entirely know what they want. In a work environment, this can be demotivating since you feel like you must somehow guess what that person wants.
Fortunately, there are techniques to deal with these situations, and the one I would suggest is to constantly ask questions. By asking questions, you will be able to define the problem, which will allow you to work out how to tackle a specific problem or request. Another not-so-nice experience I've had is seeing a project through to the end for a large number of stakeholders, only for a few of them to actually use it. It has been demotivating and frustrating, because some stakeholders ask for things with a sense of urgency, making you feel that they really need it; yet only a few find the end product useful when it is complete. This has happened to me a few times, especially when building dashboards. To counter this, I go back and ask questions! And, most importantly, ask whether the project is necessary and how it will impact the organisation. AIM: What drew you to Kaggle? How has your journey been so far? Janio Martinez Bachmann: I heard about Kaggle when I started reading Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurelien Geron. Kaggle was mentioned in the first few pages. I was curious to see what this website was, and when I saw it for the first time, I was fascinated! Why? Being a beginner in coding, this platform was perfect for applying the theory I was learning from books. There is nothing better than learning to code while exploring some datasets and getting the story out of a specific table. The data-storytelling part was one of the things that drew me to Kaggle and, most importantly, the amazing community that is out there to help you. Learning from the notebooks of talented individuals allowed me to improve my coding skills and learn different machine learning concepts. As for my journey, I have to say it has been tough, but worthwhile. I have been off Kaggle lately, mainly due to my current job. But I plan to contribute to Kaggle to help the community.
AIM: What was your first Kaggle competition like? Janio Martinez Bachmann: As far as I can remember, the first competition I participated in was predicting housing prices. It was an interesting competition because it was the first time I heard about feature engineering (a concept in which we extract insightful features to enhance the predictive capabilities of our models). Also, this competition allowed me to learn interesting advanced linear regression concepts that I had never heard of before. Nevertheless, you can guess I did poorly in this competition, as it was my first one. But I learned a lot, and that's what matters! So don't be afraid to participate in competitions; they can be fun! AIM: What was it like to become a Kaggle Grandmaster? Janio Martinez Bachmann: I was in a state of shock. I remember I was on vacation in the Dominican Republic in March 2021. I was lying on the beach when I received the notification from Kaggle that I had become a Kaggle Grandmaster. I couldn't believe it, but I was happy about it at the same time! After four years of dedication, I became a Kaggle Grandmaster. This does not mean you need to wait four years to become one; I've seen other Kagglers becoming Grandmasters in even two years. Nevertheless, I was full of joy when receiving the news from Kaggle! AIM: Tips to ace Kaggle competitions. Janio Martinez Bachmann: Here are my tips for moving to the top in Kaggle: Creating content: When I say creating content, I mean exploring datasets that only a few have explored and that you might think would be attractive to the community. I can relate one example: back in the day, I explored an interesting topic, dealing with imbalanced classification. Back then, this topic was not "happening" on Kaggle, so I decided to take the opportunity and create a notebook revolving around "Credit Fraud || Dealing with Imbalanced Datasets".
It took me three months to create this notebook, but it was worthwhile, and currently, it has almost 4k likes. Participating in discussions: If you want to promote your brand in the Kaggle community, I would suggest participating in the discussion section, mainly for two reasons: you will get to know other Kagglers through the many discussion topics, and you will learn with them in all these discussions. It's a great way to let yourself be known in the community. Respect the community: By this, I mean try to behave ethically across the community. I have seen some unethical behaviour, such as promoting your own notebook across other people's notebooks so that they like yours. I would suggest not doing this, even if you might feel tempted. One, other users will not like it when someone directly requests this, and two, it might seem a bit unprofessional, which will ruin your reputation. That's why it is important to create content but, most importantly, to enjoy the ride! It does not matter if you are a Grandmaster or a Master; what is important is that you are learning many interesting topics across an engaging community such as Kaggle! Be patient!
I tend to get nervous when I must code next to a person
["AI Features"]
["Interviews and Discussions", "Kaggle", "Kaggle Grandmaster"]
Sri Krishna
2022-06-09T16:00:00
2022
2,197
["data science", "scikit-learn", "machine learning", "Kaggle", "AI", "neural network", "ML", "Aim", "analytics", "Kaggle Grandmaster", "TensorFlow", "Pandas", "Interviews and Discussions"]
["AI", "machine learning", "ML", "neural network", "data science", "analytics", "Aim", "TensorFlow", "scikit-learn", "Pandas"]
https://analyticsindiamag.com/ai-features/the-story-of-kaggle-grandmaster-janio-martinez-bachmann/
3
10
3
true
true
true
10,013,247
Guide To Parsehub: A No-Code, GUI Based Data Scraping tool
Since the internet has become such a large pool of data, every business must start adopting web scraping techniques to make itself more profitable. The previous era of web scraping relied entirely on coding skills and hours of work to achieve the smallest result, and whenever websites changed their code a little, coders had to update their scraper again to make it work for another day. That's why no-code development platforms (NCDPs) are trending: they save time, money and resources for companies, can be used by anyone with zero coding experience, and can do wonders. Forrester predicted the no-code market would reach $21 billion by 2022. As the number of internet users increases day by day, it will affect the big data market more and more, which is going to make web scraping tools sharper and more incisive. So, to remove these hours of tedious coding work, ParseHub came into the picture. It is a powerful visual web scraping tool that enables everyone to create their own data extraction workflows without worrying about coding at all, because ParseHub handles all the source-code element selection and prediction of neighbouring elements on its own. Use-Cases Used by data scientists for research. Used in sales to scrape new leads from directories, communities and social media. Used for competitor, marketing and industry analysis. Used to consolidate millions of data points from multiple websites into one dataset. Used for scraping news, product pricing, reviews, profiles, jobs and more. Installation Installing ParseHub is super easy: just go to the website, sign up and download the free plan, which includes 200 pages per run, five public projects and some other features. They have documented guides for installing ParseHub on different operating systems.
Download page Last time, we used Beautiful Soup and a large portion of code in this article to extract article titles from the Analytics India Magazine homepage. This time, we are doing the same (actually, more than that) without coding, and the result will be visible in your choice of spreadsheet. Let's Start After installing, on first boot-up you need to sign in with your ParseHub account; ParseHub comes with its own inbuilt browser, in which it handles all the web requests and extraction. Login window Let's dive into the user interface (UI), which will boot up in the pre-built web-browser environment with a tutorial and a demo project; skip that part for now. Click on New Project to start web scraping. Load the Analytics India Magazine website in the work environment by searching inside the browser tab, or simply put the URL of the website in the upper-left box as shown in the picture. It's all up to you. Click Start project on this URL, and a new window will pop up. Now let's understand the UI; there are three main sections: The first block on the left side is where you can see your attributes and rename and modify them. The upper-right tab works like a simple browser, where you can interact with the page and select reliable elements for scraping. All the output is shown in the third tab, the Result tab, from where, after cleaning and fetching, we can download the dataset for further analysis. Now, to begin extraction, you need to click on webpage text or an image, as per your needs. In this case, we are clicking on the article title. Remember to tick 'Yes' on non-selected titles to keep your scraper's accuracy high. Rename the attribute selection2 -> Title. Now that you have some data, you can see a preview of it in the bottom output tab. On the left side, click on the PLUS (+) sign next to the title to add related attributes like the author name.
Relative Select Using the Relative Select command, click on the first article title and then on the author name to extract the related author names (relative selection of authors with respect to their article titles). Repeat steps 7 and 8 to extract the published date, reading time and more using Relative Select, as shown in the video below. Click on Get Data to export your data. You have three options to choose from, depending on the data you are scraping: use a test run to see if everything is going well, or schedule the data extraction operation in the case of large extractions; in our case, we are going to click on Run. ParseHub will start the data collection process (what we call ParseHub magic), and in a minute we'll get our data. Now download the data in formats like CSV/Excel or JSON, or via the API, as per your needs. If we want to do data science work on this data, we can download it as CSV and then implement a word cloud or other data visualisation on it. Output And there you have it! Fully structured and clean data for your further research, with the article name, author name, date published, article URLs and reading time, output as a dataset in a spreadsheet. Conclusion We learned how no-code web scraping tools can extract data quickly, easily and accurately. We also saw a full demonstration of scraping data from the Analytics India Magazine website, with the exported output in a spreadsheet ready for your data science work or any research. ParseHub has also published its API documentation, which is designed around REST and can be used programmatically to manage and run projects.
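To illustrate the REST API mentioned above, here is a minimal Python sketch of how fetching a project's last run data might look. The endpoint path and parameter names follow ParseHub's public API docs as best we recall, but treat them as assumptions and check the official documentation; the project token and API key are placeholders you would copy from your own dashboard.

```python
import urllib.parse

# Base endpoint of ParseHub's public REST API (assumed v2 layout).
API_BASE = "https://www.parsehub.com/api/v2"

def last_run_data_url(project_token, api_key, fmt="csv"):
    """Build the URL that fetches data from a project's last ready run.

    project_token and api_key are placeholders here; ParseHub issues the
    real values per account/project. fmt selects the export format
    (e.g. "csv" or "json").
    """
    query = urllib.parse.urlencode({"api_key": api_key, "format": fmt})
    return f"{API_BASE}/projects/{project_token}/last_ready_run/data?{query}"

# The actual fetch would then be a plain GET, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(last_run_data_url(token, key)).read()
```

Building the URL separately from the request keeps the sketch testable offline and makes it easy to swap in `requests` or any other HTTP client.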
Since the internet has become such a large pool of data, every business must start adopting web scraping techniques to make itself more profitable. The previous era of web scraping relied entirely on coding skills and hours of work to achieve the smallest result, and whenever websites changed their code a little […]
["Deep Tech"]
["data exploration", "data preparation", "data science tools", "data scraping", "extract big data", "VR Data Analytics", "Web Scraping"]
Mohit Maithani
2020-12-06T16:00:00
2020
899
["big data", "data science", "Go", "data science tools", "TPU", "API", "programming_languages:R", "AI", "data scraping", "Web Scraping", "data exploration", "data preparation", "programming_languages:Go", "analytics", "extract big data", "R", "VR Data Analytics"]
["AI", "data science", "analytics", "TPU", "R", "Go", "API", "big data", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/deep-tech/parsehub-no-code-gui-based-web-scraping-tool/
2
10
3
false
true
true
10,079,694
10 Best Online/ Hybrid PG Data Science Courses in India
With employers eyeing market-relevant skill sets, online certification courses have been doing the rounds for quite some time. However, post-Covid, the popularity of such courses rose manifold. This often leaves students confused about which course to pursue based on their needs and interests. Analytics India Magazine has conducted a survey to rank some of the top PG online or hybrid courses in data science. AIM has been publishing these rankings for the last eight years. This year, AIM has released separate rankings for postgraduate-level data science or analytics programs based on their mode of delivery, i.e. whether the course is on-campus or online/hybrid. This report lists the 10 best online or hybrid PG data science courses in India. A good course has high and well-balanced scores across all the parameters, placing it at the top of the ranking. These rankings will help students identify the right course for them. The report is also helpful for institutes offering these programmes to understand where they stand compared to other courses and to identify areas of improvement. It can likewise be used by policymakers in public and private organisations to gauge the progress the data science education industry has made and help bring in changes accordingly. Methodology Analytics India Magazine has conducted this survey to rank various data science and analytics programs based on their performance across five parameters: Certification Value, Return on Investment, Program Success, Teaching & Curriculum, and Student Engagement. An overall index has been calculated based on the average scores of each program across the five sub-indices (based on the above-mentioned parameters). The sub-indices themselves were calculated based on the scores given to the answers to the survey questions asked by AIM, using uniform evaluation criteria. These scores were normalised on a scale from 0 to 1 before the average for the sub-indices was taken. Outliers were capped.
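The scoring steps described in the methodology (cap outliers, normalise to 0–1, average into an index) can be sketched as follows. This is only an illustrative reconstruction: AIM does not disclose its exact capping rule or weighting, so min-max normalisation, optional fixed caps and an unweighted mean are assumptions.

```python
def min_max_normalise(scores, cap_low=None, cap_high=None):
    """Normalise raw scores to [0, 1], optionally capping outliers first."""
    if cap_low is not None:
        scores = [max(s, cap_low) for s in scores]
    if cap_high is not None:
        scores = [min(s, cap_high) for s in scores]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        # All programs scored identically on this parameter.
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def overall_index(sub_indices):
    """Average a programme's sub-index scores (e.g. five of them) into one index."""
    return sum(sub_indices) / len(sub_indices)
```

Under this scheme each parameter contributes equally, so a programme with balanced scores across all five sub-indices outranks one that excels on a single parameter but lags on the rest, which matches the report's observation about well-balanced scores.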
You can see the hierarchy of sub-indices and the final index followed for the analysis in the infographic below. 1. Integrated Program in Data Science, Machine Learning and Artificial Intelligence, Hero Vired The 11-month postgraduate certification program is fully online. The program, offered by Hero Vired in partnership with the Massachusetts Institute of Technology, teaches concepts from the ground up. The courses offered enable students to delve into the intricacies of machine learning by requiring them to write algorithms from first principles. The students have access to a curated curriculum and are assisted by a large team of faculty, program managers and instructional designers. Faculty members, drawn from both industry and academia, are highly experienced. They impart relevant theoretical knowledge along with practical applications of various concepts. To ensure that the curriculum stays updated and in line with industry standards, Hero Vired has a dedicated team of instructional designers. The curriculum includes several modules on statistics and machine learning to enable students to tackle common industry problems along with undertaking cutting-edge research in the discipline. The industry projects that students work on help them acquire industrial experience. The academic projects involve first-principles problem-solving, in the form of building a recommendation engine, creating inferences for an SVM, etc. The capstone projects train students in SWOT analysis and in building solutions to several problems in the field of data science using platforms and libraries like Jupyter Notebook, Google Colab, and data handling and visualisation libraries. Weekly one-on-one meetings are held between students and faculty members to provide personalised feedback and clear doubts. Along with academics, the mentorship program that is part of the curriculum helps build soft skills and prepare for job interviews. 2.
Integrated Program in Business Analytics (IPBA), UNext Jigsaw The 10-month fully online program offered by UNext Jigsaw is affiliated to IIM Indore. The course structure is designed keeping in mind the experience level of candidates. The program enables students to identify and describe complex business problems, apply appropriate analytical methods to find solutions, and translate the results of business analytics projects into effective courses of action. Students also get the opportunity to build and enhance their business analytics capabilities by adopting appropriate technology and software solutions. The program structure conforms to the 5D Framework of Business Analytics, designed by faculty members from IIM Indore and industry experts. The framework is a five-stage approach to harnessing the power of data for business storytelling: Define, Distill, Discover, Deliver and Drive. It prepares students to tackle real-world business challenges and come up with innovative solutions. The curriculum includes a unique feature called Bring Your Own Project (BYOP), under which students apply the 5D framework and turn their project ideas into reality. While working on the project, they function in groups to identify, shortlist and finalise a project idea. This ensures that students apply Analytics and Big Data techniques to real-life business problems and provide effective solutions. The faculty, comprising full-time and part-time members from IIM Indore, conducts the learning sessions. Apart from them, visiting members from the industry acquaint students with real-world industry scenarios. The program ensures that those enrolled gain an effective understanding and retention of knowledge. Students are assessed holistically based on practice assignments, graded assignments for every module and a capstone project.
Assessment is based on their understanding of business problems and their ability to work with business data, build statistical/mathematical models and draw business conclusions from them. The curriculum includes batch immersion programs and campus visits to IIM Indore, providing students with peer learning and networking opportunities with IIM Indore's students and alumni network. 3. Advanced Certification in Data Analytics, Edvancer The Post Graduate Certificate Program in Data Analytics offered by Edvancer is affiliated to IIT Kanpur. The six-month-long program provides a cutting-edge curriculum for freshers and professionals who wish to pursue a career in data analytics and related domains. The program focuses on data analysis, predictive modelling and machine learning using SQL, R and Python. As part of the curriculum, the tools and techniques used for handling, managing, analysing and interpreting data are taught. The faculty comprises academics from the CSE and Statistics departments of IIT Kanpur, along with data scientists with industry experience. The mix of both helps provide strong theoretical grounding as well as practical, application-oriented learning. Assessments are based on hands-on projects; students are required to complete at least four projects across modules to get certified. The career centre at Edvancer prepares students for job interviews by training them in designing a good resume, soft skills, aptitude and coding tests, and case study analysis. Edvancer has entered into partnerships with companies like EY, IBM, NPCI, Dun & Bradstreet, TheMathCo, Hexaview, Accenture, Kinesso, DRDO and Futerense. These partnerships help students get internship opportunities and network with industry experts. 4. Advanced Certification in Data Science and AI, Intellipaat The seven-month postgraduate certification is designed for both IT and non-IT professionals with a focus on employability.
The students are trained to break down a business problem and find solutions with the help of in-demand technologies. The program is focused on industry-oriented skill-building. The course curriculum has been designed to provide real-world exposure through assignments, exercises, case studies and capstone projects after each module, which help students gain industry experience. The faculty, comprising professors from IIT, helps build a strong base in Machine Learning and Statistics, while industry experts impart industry exposure. The assessment system is holistic: students are assessed not only on subject knowledge, project work, quizzes, assignments and case studies but also on attendance. The capstone projects, spanning multiple domains like retail, e-commerce, telecom, banking and finance, are focused on providing students with real-time, end-to-end experience of analysing a business problem from scratch. Students get access to Intellipaat's job-access portal and career services for any placement-related assistance, along with easy access to jobs and internships from Intellipaat's hiring partners. 5. Advanced Certification In AI & Machine Learning, Edvancer The Advanced Certification in Artificial Intelligence and Machine Learning program is a ten-month online program affiliated to IIT Kanpur. The course, conducted by renowned faculty from IIT Kanpur and highly experienced industry professionals, is open to both working professionals and freshers. The end goal is to equip students with the right blend of domain knowledge and skills that serve industry needs. Students experience sound theoretical learning along with practical training from a faculty comprising PhD scholars and AI/ML engineers from the industry. Being a practical-oriented course, the majority of the learning hours are dedicated to hands-on, real-world projects and case study discussions.
Assessments are also based on these projects. Along with application-oriented knowledge in AI and ML, Edvancer provides students with a career services package that helps them with CV design, interview preparation, soft skills, and aptitude and coding test preparation. Partnerships that Edvancer has forged with companies like EY, IBM, NPCI, Dun & Bradstreet, TheMathCo, Hexaview, Accenture, Kinesso, DRDO and Futerense facilitate internship and job opportunities for students. 6. PG Diploma/MS in Business Analytics, REVA University The UGC-approved two-year hybrid program aims to impart holistic techno-functional skills and expertise so as to enable students to lead data science projects. The curriculum includes subject knowledge, projects and research publications, with an emphasis on experiential learning through the latest industry case studies. Experienced faculty members with decades of industry experience have designed the modules so that students can build skills in statistical, machine learning and deep learning models and their applications in marketing, retail, finance, supply chain, and operations. The program is designed to build a full-stack skill set covering descriptive, diagnostic, predictive, prescriptive, and cognitive models. Students gain in-depth technical skills in an array of data science tools like Python, R, Spark, SQL, TensorFlow, RapidMiner, Tableau, Microsoft Azure Machine Learning Studio, IBM Cognitive services, AWS SageMaker and others. The continuous evaluation policy adopted for assessing students tests both their theoretical knowledge and their applied learning. The Business Analytics program includes two real-time capstone projects in which students work on a live business challenge. These projects enable them to apply their concepts to real-life business problems, thereby honing their managerial and consulting capabilities and domain expertise.
The university has forged academic partnerships with AWS Academy, MS Azure, CloudX Labs, IBM and other companies to help students grab internship and project opportunities and network with relevant people from the industry. 7. Certificate Program in Data Science and Machine Learning, Imarticus Learning This course offered by Imarticus Learning is affiliated with IIT Roorkee. The Post Graduate Certificate Program not only familiarises learners with the fundamentals of data science and machine learning but also equips them with the appropriate tools and techniques to apply these concepts to real-world problems, such as predicting property value, vaccination dosage, or heart disease. The admission process to this five-month program is fairly curated, with interested candidates being offered admission after their academic and professional background is duly reviewed by academic advisors. With its industry-relevant curriculum, the program is open not only to tech-savvy individuals who want to learn data science and machine learning but also to early and mid-level professionals seeking better career prospects. Enrolled candidates can access the Technology Innovation Hub at IIT Roorkee, which has been set up under the National Mission on Interdisciplinary Cyber-Physical Systems to train manpower and encourage entrepreneurship around current global challenges like affordable health care, Industry 4.0 and sustainable smart cities. Moreover, learners can leverage new-age technologies like AI, ML and drones, along with the larger innovation ecosystem extended by the iHUB DivyaSampark, to build core competencies and train manpower for providing innovative digital solutions for national strategic sectors. 8.
Post Graduate Certificate Program in Artificial Intelligence and Deep Learning – Joint Program by IIT Guwahati and TimesPro The eight-month postgraduate certification program conducted jointly by IIT Guwahati and TimesPro aims to upskill students in the domain of artificial intelligence with relevant skills, techniques and tools so that they can keep up with the rapidly changing technology landscape. Students get comprehensive career support. Apart from the live learning sessions, the curriculum includes hackathons, campus immersion programs, on-demand sessions and monthly masterclasses with industry experts. These interactive sessions help students engage in peer learning, build their professional network and gain a deeper understanding of current industry trends. They can also access AI-enabled tools to create ATS-compliant resumes and cover letters. The highly experienced faculty members are responsible for balancing program outcomes and learner engagement. They build a strong foundation in the essential data science concepts and models and upskill students in their areas of specialisation. Students get access to an LMS platform for program content in the form of videos, discussion fora and case studies. This helps them prepare for classes, clarify doubts, and revise previously taught lessons at a pace that suits their routine. Students are continuously assessed by means of assignments, online quizzes and projects. The assignments and projects enable them to deal with real-world problems and datasets. The capstone project gives students the opportunity to apply all the concepts and techniques they have learnt to a chosen problem statement, and in the course of it they learn about real-world industry aspects. 9.
Post Graduate Certificate Program in Data Science and Machine Learning – Joint Program by IIT Roorkee and TimesPro The postgraduate certification program conducted jointly by TimesPro and IIT Roorkee aims to keep students upskilled in an ever-changing technology landscape that calls for human resources with the right mix of conceptual clarity and hands-on experience. The fully online program offers students periodic interaction with the industry and hands-on pedagogy in the form of demo sessions, hackathons, etc. These sessions with industry experts help students learn about the latest developments in the industry. Students can also access an AI-enabled tool to create ATS-compliant resumes and cover letters, and an LMS platform with study materials for reference, revision and clarification. The faculty, comprising highly experienced academics and industry professionals, provides a holistic learning experience encompassing essential data science concepts and industry-oriented skill training. Along with live learning sessions, students get to engage in hackathons and campus immersion programs that enable peer-to-peer learning and building a professional network. The continuous assessment model tests students' learning outcomes by means of assignments, quizzes, and projects. The 4-week-long capstone project enables students to apply all the concepts and techniques they learnt over the course of the program to real-world industry datasets and problem statements, and gives them the opportunity to delve deeper into industry aspects. Each capstone project is evaluated by the faculty on its accuracy, relevance, analysis, simplicity, and presentation. 10. BIBA – Business Intelligence & Business Analytics PG Certification Program, OrangeTree Global The postgraduate certification program in Business Intelligence and Business Analytics (BIBA) is a four-month hybrid program aligned with in-demand, industry-oriented skill sets and core competencies.
The program structure has been designed so that enrolled students can manage their day-to-day commitments and obligations while giving full focus to the curriculum. Since the program is conducted in smaller batches, each student gets due attention from the faculty members with regard to clarification of concepts and progress. The faculty members at OrangeTree Global comprise senior academics, researchers and industry specialists who, along with imparting lessons, also play a key role in designing the program structure and curating the content. Specialised faculty from the industry provide regular training to familiarise students with industry culture and trends. The program has a strong focus on application-oriented learning. Several modules based on R, Python, SQL, VBA, Power BI and Tableau are taught as part of the curriculum, covering the application of regression and clustering techniques, credit risk modelling, customer segmentation, sentiment analysis and text mining. These modules carry different weightage depending on their importance and difficulty. The institute has tied up with companies like TCS, Genpact, HSBC Analytics, ICICI Lombard, Standard Chartered, Accenture, etc., for placement opportunities. As part of the curriculum, corporate training programs are also conducted by HDFC, Jubilant Food Industries, TCS, Novo Nordisk and HSBC. Apart from these, one-day workshops are conducted at renowned institutes across the country like the Symbiosis School of Economics, ICFAI Business School, IIT Kharagpur, etc.
A good course has high and well-balanced scores across all the parameters, placing it at the top of the ranking. These rankings will help students identify the right course for them.
["AI Features"]
["Courses", "Data Science"]
Zinnia Banerjee
2022-11-16T10:00:00
2022
2,733
["data science", "machine learning", "artificial intelligence", "AI", "ML", "Ray", "Aim", "deep learning", "analytics", "Courses", "Data Science", "TensorFlow"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "data science", "analytics", "Aim", "Ray", "TensorFlow"]
https://analyticsindiamag.com/ai-features/10-best-online-hybrid-pg-data-science-courses-in-india-2022/
4
10
3
true
false
true
10,094,812
Big-tech Regulation: India, Drop the Dubiety & Go the EU Way
In August 2022, top executives of firms such as Amazon, Google, Netflix and Microsoft, among others, were summoned before a parliamentary panel to discuss a legal framework to curb anti-competitive practices. The summons was triggered by a report by the panel that highlighted anti-competitive practices by companies like Google and Meta. A few months later, in October 2022, the Competition Commission of India imposed two significant penalties on Google, totalling Rs 2,274 crore, for anti-competitive practices related to its Play Store billing policy. The two fines, imposed in the same week of October, were Rs 1,337.76 crore ($162 million) and Rs 936.44 crore ($113 million). A few months later, the Indian government released a new draft of the Data Protection Bill, known as the Digital Personal Data Protection Bill. Under the proposed Bill, companies that fail to comply with data protection regulations could face penalties as high as Rs 500 crore. The government has also proposed several amendments to the current IT rules and floated the Digital Competition Act and the Digital India Act. But it seems like these measures have kicked in very late. The EU, on the other hand, has been one of the biggest imposers of fines, amounting to billions of dollars. The European Union's General Data Protection Regulation (GDPR) enables regulators to impose fines of up to 4% of global revenue for violations. Tech giants like Google, Meta, Amazon, and Apple have already encountered regulatory pressure in Europe and a few other nations, with financial consequences running into billions of dollars. In 2018, Google paid a €4.3 billion antitrust fine for promoting its search engine through the Android mobile operating system. Google recently lost its appeal against that decision, although the fine was slightly reduced.
The company is also challenging fines of €2.4 billion and €1.5 billion for abuse of power in online shopping and advertising, respectively. (Chart source: Statista.) Not just that, the EU has introduced new regulations that require tech firms to combat disinformation, particularly deepfakes. Non-compliance could result in fines of up to 6% of a company's global turnover. What Should India Do? In India, historically, there has been a lack of fines for such violations, despite ongoing disputes between big tech and the government. The Wall Street Journal points out that the Indian government is adopting a combination of Europe's strict antitrust policy and Chinese-style government surveillance in its efforts to regulate big tech. Even though the administration seems to be waking up, subjecting tech giants to scrutiny and imposing fines, the effectiveness of the upcoming Digital Data Protection Law in addressing these issues remains uncertain. Inaction may result in data leaks, a sea of misinformation, and tech giants gaining outsized influence over power and people. Arindrajit Basu, a non-resident fellow at the Center for Internet and Society, explained in a conversation with AIM that emerging economies like India often engage in negotiations with tech giants from Silicon Valley and other regions as they continue to grow and develop. While these companies offer valuable digital services to individuals in these countries, there is a concern that they also reinforce their market dominance by extracting data from significant data markets like India. Basu highlighted the dual nature of these companies: providing services to individuals while simultaneously consolidating market concentration through data expropriation. India could drop the dubiety and move in a similar direction as the European Commission with its ex-ante regulation of competition in digital markets.
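The revenue-linked fine ceilings mentioned above (4% of global revenue under the GDPR, 6% of global turnover under the disinformation rules) reduce to a simple calculation. A minimal sketch, with a placeholder revenue figure rather than any real company's financials:

```python
# Sketch of the EU's revenue-linked fine ceilings: GDPR allows fines
# of up to 4% of global revenue, the disinformation rules up to 6%
# of global turnover. The $280bn revenue below is a placeholder.

def max_fine(global_revenue, cap_rate):
    """Upper bound of a fine given a revenue figure and a cap rate."""
    return global_revenue * cap_rate

gdpr_cap = max_fine(280e9, 0.04)     # 4% GDPR ceiling: $11.2bn
disinfo_cap = max_fine(280e9, 0.06)  # 6% disinformation ceiling: $16.8bn
```

The point the numbers make is that for firms of this scale, revenue-linked caps dwarf fixed penalties like the Rs 500 crore ceiling in India's draft Bill.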
The new regulation consists of two acts: the Digital Markets Act (DMA) and the Digital Services Act (DSA). The DSA focuses on online safety, protecting rights, and increasing transparency, while the DMA is primarily concerned with ensuring competitiveness in digital markets. The regulation aims to establish a fair playing field for emerging competitors and ensure fair treatment of users, especially business users, on large digital platforms. The goal is not to directly impact the structure of digital markets, but rather to set rules of conduct and prevent the suppression of competition.
While Big Techs offer valuable digital services to individuals in developing countries, there is a concern that they also reinforce their market dominance by extracting data from significant data markets like India.
["AI Features"]
["Amazon", "Google", "Microsoft", "netflix"]
Shyam Nandan Upadhyay
2023-06-09T12:05:32
2023
674
["Go", "programming_languages:R", "AI", "Amazon", "Git", "netflix", "programming_languages:Go", "Aim", "Google", "Rust", "R", "programming_languages:Rust", "Microsoft"]
["AI", "Aim", "R", "Go", "Rust", "Git", "programming_languages:R", "programming_languages:Go", "programming_languages:Rust"]
https://analyticsindiamag.com/ai-features/india-drop-the-dubiety-go-the-eu-route-for-big-tech-regulation/
3
9
2
false
false
false
32,007
Risk Analysis In The Automotive Industry Is The Safest Road To Decrease Losses
Risk analysis is the procedure for detecting and analysing prospective issues that could adversely impact business enterprises. It is a component of risk management: the review of the risks associated with a specific circumstance. Risk analysis plays a significant role in addressing industrial hazards. In the automotive industry, risks that remain unidentified can lead to missed production targets as well as lapses in safety. This article aims to analyse risk over time in the automotive industry. What Are Companies Doing In India? In India, research is ongoing on risks in the automobile industry, and many tools have been developed for detecting and solving risk-related problems. For instance, risk assessment using Bayesian networks has been proposed for analysing the vulnerability of the automotive industry in India. Maruti Suzuki India Ltd. faced a big fall in domestic sales of passenger cars this year, whereas Mahindra and Mahindra Ltd. reported a fair rise in the same. It has been reported that from April to September 2018, the production of automobiles increased 13.32 percent year-on-year to reach 16.65 million units. The Indian government has set up National Automotive Testing and R&D Infrastructure Project (NATRiP) centres as well as the National Automotive Board to promote growth and manage risks in the industry since 2015. The "Make in India" campaign, launched in September 2014 by India's Prime Minister Narendra Modi, promotes investment, innovation and skill development programmes. According to a July 2018 report, the automobile industry constitutes 7.1% of India's Gross Domestic Product (GDP) and employs approximately 29 million people.
In 2017, India emerged as the 4th largest automobile manufacturer and the 7th largest manufacturer of commercial vehicles in the world. The risk level, therefore, is rising day by day, and preventive measures are being taken by the government as well as by automobile companies. The Need For Risk Analysis In Automotive Industry Firstly, manufacturing cars is not an "easy" business, and the competition is savage. The automotive industry has entered a new era of electric and self-driving cars. Risks that remain unevaluated may result in missed production targets and vehicle recalls, which can cost the industry dearly, perhaps even spelling doomsday for it. There are three basic types of risk that the automobile industry faces today: Constantly evolving consumer demand: The demand for cars is growing aggressively, along with the need for different types of vehicles and specific demands across geographies. For instance, India is no longer sitting in the back row, since car manufacturers now make cars affordable for middle-class families. Economic risks: Sales in the automobile industry vary with the ups and downs of the economy. There are times when the cost of making cars is higher than the profit from selling them. Risk of disruption: Competition is no longer easy for the industry. New manufacturers with modern technologies and more user-friendly offerings have already made some incumbents feel left behind. To prevent these risks, the industry should focus on risk management efforts in which the root of each risk can be evaluated and controlled. RCSA (Risk Control Self-Assessment) is a method through which the management and staff of an organisation can identify and evaluate risks and the associated controls.
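The Bayesian-network approach to risk assessment mentioned above boils down to updating the probability of a risk event as evidence arrives. A minimal sketch using Bayes' rule, where all probabilities are hypothetical and chosen only for demonstration:

```python
# Minimal illustration of Bayesian risk updating, in the spirit of
# the Bayesian-network risk assessments mentioned above. All of the
# probabilities here are hypothetical.

def posterior_risk(prior, p_evidence_given_risk, p_evidence_given_no_risk):
    """P(risk | evidence) computed via Bayes' rule."""
    p_evidence = (p_evidence_given_risk * prior
                  + p_evidence_given_no_risk * (1 - prior))
    return p_evidence_given_risk * prior / p_evidence

# Prior belief that a production line misses its target: 10%.
# A supplier delay is observed; assume delays occur in 70% of
# missed-target cases but only 20% of on-target cases.
updated = posterior_risk(0.10, 0.70, 0.20)  # ~0.28
```

The observed delay nearly triples the assessed risk relative to the 10% prior; a full Bayesian network chains many such updates across interdependent risk factors.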
Risk management can be improved through four measures: safety, the quality of the manufactured product, guarding against obsolescence, and security. Safety measures: Automakers may be forced to shut down machines for want of adequate safety provisions, whereas prompt safety measures can keep the machines running smoothly. Quality of the manufactured product: This is the one thing that should never be compromised. The value and complexity of the product should always be kept in mind, and employees should be equipped with the skills necessary to maintain and ensure product quality. Obsolescence: Becoming obsolete can lead to the downfall of productivity as well as of the industry itself. The best approach is to keep up to date with day-to-day developments. Security measures: Risks and vulnerabilities should be understood clearly before implementing security measures. Only trusted vendors should be worked with in this regard, and before signing a contract with any vendor, one should review the safety and security policies the vendor provides. Lastly, we may not be able to prevent every possible risk that comes up in the industry. But as the saying goes, "Prevention is better than cure": it is better to avoid obsolescence and anchor these measures from the very beginning.
Risk analysis is the procedure for detecting and analysing prospective issues that could adversely impact business enterprises. It is a component of risk management: the review of the risks associated with a specific circumstance. Risk analysis plays a significant role in addressing industrial hazards. In […]
["AI Features"]
["automotive analytics"]
Ambika Choudhury
2018-12-21T12:43:28
2018
853
["Go", "programming_languages:R", "AI", "innovation", "RAG", "Aim", "automotive analytics", "ViT", "disruption", "Rust", "R"]
["AI", "Aim", "RAG", "R", "Go", "Rust", "ViT", "innovation", "disruption", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/risk-analysis-in-the-automotive-industry-is-the-safest-road-to-decrease-losses/
3
10
5
true
false
true
10,071,373
Plug-and-play ML models across platforms using MLCube
A variety of machine learning and AI models face issues of portability and usage across platforms. On standard platforms like the cloud, Kubernetes and GCP, and often even on localhost, models cannot be deployed and used due to dependency issues. This is where MLCube is useful: it lets models be shared across platforms in the form of a simple downloadable and executable package. In this article, we will focus on MLCube and understand its importance in this context. Table of Contents: Introduction to MLCube | MLCube on different platforms | Case Study of MLCube in Docker | Summary. Introduction to MLCube Machine learning models are developed for various tasks, but sharing a model is sometimes difficult because various platforms have their own dependency issues. MLCube is a single-shot library that enables machine learning researchers and engineers to use, on their own platforms, models developed anywhere in the world. MLCube basically operates on the principle of "one model, run anywhere". Using MLCube, we can run trained models on various platforms: local machines, cloud-based platforms like GCP, or even Kubernetes clusters if required. MLCube supports these platforms in the form of runners; the runners include Docker, Kubeflow, SSH, and more. MLCube can be installed in the required format using simple pip statements for the respective formats. On the whole, MLCube acts like a shipping container for ML models, which can be shared and utilised across diverse platforms. The sharing of models across platforms is handled through "runners", so in the next section of the article let us try to understand the different MLCube runners.
MLCube on different platforms The MLCube framework is still in the development phase and currently supports 6 different types of runners. These runners can be utilised according to the user's choice of working environment and the platform on which the machine learning engineer or researcher wants to train or share the model. Now let us look at the different types of MLCube runners made available in the development phase and try to understand the characteristics of each. MLCube in Docker The MLCube package for Docker can be used by first installing the MLCube package designed for the Docker environment, through a pip command as shown below. !pip install mlcube-docker Once the MLCube package is installed in the Docker environment, MLCube can be run in Docker using the mandatory commands; the standard Docker-supporting commands include configure and run. A trained model can then be made available in the Docker environment and used accordingly. MLCube in GCP The MLCube package for GCP can be used by first installing the MLCube package designed for the GCP platform, through a pip command as shown below. !pip install mlcube-gcp Once the MLCube package is installed on the GCP platform, it can be run there. In GCP, the configuration-file parameters and standard GCP resources like Compute Engine VM instances have to be activated, after which the MLCube library can be used to run the shared model or retrain it on the platform. MLCube in Kubernetes The MLCube package for Kubernetes can be used by first installing the MLCube package designed for Kubernetes, through a pip command as shown below.
!pip install mlcube-k8s Once the MLCube package is installed in Kubernetes, the shared model and the Kubernetes platform can be used to accelerate model training. Standard functionality like the run command can be used in Kubernetes to activate the Job manifest. The runner on the Kubernetes platform then creates a cluster to train and use the shared model. The job has to be instantiated in Kubernetes according to MLCube requirements, and once the training process of the model is completed, the job completes. This is how MLCube is used in Kubernetes. MLCube in Kubeflow The MLCube package for the Kubeflow platform can be used by first installing the MLCube package designed for Kubeflow, through a pip command as shown below. !pip install mlcube-kubeflow MLCube in Kubeflow is still in development and needs a little improvement. The basic commands supported are run and configure, along with standard arguments like platform and task. The PVC redirection to the MLCube workspace and the Kubeflow pipelining have to be completed to make full use of the MLCube library in Kubeflow. MLCube in Singularity The MLCube package for the Singularity platform can be used by first installing the MLCube package designed for Singularity, through a pip command as shown below. !pip install mlcube-singularity The mandatory and standard commands supported by Singularity for MLCubes are similar to those for Kubeflow. The standard runners designed in the development stage of MLCube include singularity, run {volumes} and {task args}. These runners can be used accordingly in the Singularity working environment to instantiate MLCube and train or use the shared model. MLCube in Secure Shell (ssh) The MLCube package for Secure Shell (ssh) can be used by first installing the MLCube package designed for ssh.
That can be done through a pip command as shown below.

!pip install mlcube-ssh

The basic commands used with this runner are ssh and rsync, which activate the MLCube library in the secure-shell working environment. The shared models can be made available in that environment and used or trained by overriding certain ssh command-line arguments. The rsync command has to be used to synchronize the workspace, keeping the training and running of the model in sync with the remote secure-shell (ssh) environment.

Case Study of MLCube in Docker

In the development phase of MLCube, there are four use cases mentioned in the official GitHub repository: MNIST-based model training and sharing, a simple hello-world program, an example using the Electron Microscopy denoising (EMDenoise) dataset, and a simple matrix-multiplication program known as matmul. In this article, let us try to understand how MLCube is used with the MNIST data and train it on the Docker platform. Consider that you are working in the Docker terminal.

Step-1: Create a Python environment in the Docker platform

Let us create a virtual Python environment and activate it using the below lines of code.

# Create a python virtual environment
virtualenv -p python3 ./env && source ./env/bin/activate

Step-2: Installing the MLCube Docker package

Let us install the MLCube Docker package by using the pip command as shown below.

pip install mlcube mlcube-docker

Step-3: Check for Docker runners

Once the MLCube Docker library is installed in the Docker environment, we have to check that the appropriate runners are installed for Docker by using the below code.
mlcube config --get runners

Step-4: Check for platform configuration

Once the MLCube Docker library is installed in the Docker environment, we have to check the platform configurations against the MLCube requirements by using the below code.

mlcube config --get platforms

Step-5: Cloning the MLCube examples GitHub repository

As the MLCube library is still in the development stage, currently we can only clone the examples repository on GitHub. So let us clone the MLCube examples repository and change into the cloned directory. We can use the below code to do the same.

git clone 'https://github.com/mlcommons/mlcube_examples.git' && cd './mlcube_examples/mnist'

Step-6: Visualizing the overview of the MLCube Docker version

We have to visualize the overview of the Docker version of MLCube to confirm successful cloning of the GitHub repository and to validate that all prerequisites are installed.

mlcube describe --mlcube

Step-7: Resolving MLCube configuration for Docker

We have to validate and resolve the MLCube library configuration for the Docker platform using the below code.

mlcube show_config --resolve --mlcube . --platform docker

Step-8: Downloading MNIST data through MLCube

The MNIST data has to be downloaded into the Docker platform using the below code.

mlcube run --mlcube . --task download --platform docker

Step-9: Training the MLCube model in the Docker platform

Now let us train the model for the MNIST data in the Docker platform using the below code.

mlcube run --mlcube . --task train --platform docker

So this is how MLCube is used in the Docker platform to make use of the MNIST data and the trained model from the MLCube library.

Summary

MLCube is a single-shot framework that is used to increase the availability of models across platforms. It is still in the development stage and currently it can function across six platforms with the shared models.
Through MLCube, a single model can be shared anywhere in the world across different platforms, right from localhost and cloud-based platforms to Kubernetes clusters and Docker. With MLCube we can keep more models active and in action for the desired tasks across various platforms, as it is a simple plug-and-play library for sharing and using models.

References

MLCube Official Documentation
MLCube Official Github Repository
MLCube examples Github Repository
MLCube is an interface that facilitates easy model sharing across platforms and accounts for increased model usage across platforms.
["AI Trends"]
[]
Darshan M
2022-07-22T10:00:00
2022
1,636
["data science", "machine learning", "GCP", "AI", "ML", "docker", "Python", "Kubeflow", "R", "kubernetes"]
["AI", "machine learning", "ML", "data science", "Kubeflow", "GCP", "kubernetes", "docker", "Python", "R"]
https://analyticsindiamag.com/ai-trends/plug-and-play-ml-models-using-mlcube/
3
10
0
true
true
true
10,078,072
The Real Reason Why India Falls Behind in Innovation
‘Patent’ is the hallmark that depicts the continuous innovation happening in a country. While India claims to have achieved great heights in technology adoption, it’s still far behind when it comes to innovation. Among 53 nations, India is ranked 40 on the Global Intellectual Property Index, according to the U.S. Chamber International IP Index. In 2018–19, a total of 50,659 patent applications were submitted—representing a 5.9% increase from the previous year.

Trends in the last five years concerning the filing of IP applications in India

India has made significant investments to make its patent system open and effective, but it still lags far behind the IP5 nations. Nearly 80% of all patent applications filed worldwide are handled by the IP5 Offices collectively, and about 95% of all work is done under the Patent Cooperation Treaty (PCT)—an international convention governing patent law. In order to safeguard inventions in all of its contracting states, the PCT offers a standardised process for filing patent applications. Experts claim that patenting is the biggest indicator of innovation, and India lags immensely on that front, with only a minute rise in patent filings year over year. According to data released by the Ministry of Commerce and Industry, India’s patent filings have increased from 8,538 in 2000 to 50,659 in 2019.

India lacks in R&D

India’s ranking in patent filing is directly related to the country’s investment in research and development. The country spends only 0.7% of its GDP on R&D, while the U.S. spends 2.8%, China invests 2.1%, Israel puts in 4.3% and Korea spends 4.2% of its GDP on R&D. These numbers explain why India lacks innovation when compared to others. Besides, most research and development investment in India happens at the behest of the government, with meagre contributions from corporates.
It is also interesting to note that, unlike in other economies, most of the R&D investments in India are made by the government. In 2015, Indian corporations spent only $17 billion on R&D, compared to the $286 billion and $341 billion spent by their Chinese and American counterparts, respectively. It is obvious that the private sector has a significant role to play in this scenario. In addition, this may also help explain why there hasn’t been a significant increase in the number of resident filings but a steady increase in the number of international patent applications in India.

India against other countries in innovation

According to data compiled by the World Intellectual Property Organization (WIPO), India filed only 2,053 patent applications in 2019, fewer than 1% of the global filings. The performance of individual companies was much higher than that of a country like India. For instance, Huawei filed 4,411 applications, Mitsubishi Electric filed 2,661, Samsung filed 2,334 and Qualcomm filed 2,127 applications. Besides, none of the Indian companies appeared on the top 50 global list, while as many as 13 Chinese companies featured in it.

Low literacy is another factor

India has a very low IP literacy rate, which is another factor holding individuals and educational institutions back from filing patents. Only 30% of the 50,000 patent applications submitted in India in 2018–19 came from local businesses or individuals; the remaining 70% were submitted by international applicants. Compare that to the 1.4 million patent applications filed in China, the majority of which came from Chinese inventors. In India, there are 8 million students enrolled in 10,000 educational institutions to study technology. More than 95% of these institutions have not filed any intellectual property, and the remaining 5% file between 2,000 and 2,500 patents combined each year. These low figures reveal a lack of IP understanding among Indian SMEs, students, and academia.
Owing to such insufficient knowledge of IPRs (intellectual property rights), small and medium-sized businesses (SMEs) in India are thought to be losing millions in revenue.

Government amends policy to promote patent filing

When compared to other nations, India is currently one of the leaders in fee reductions for colleges. For instance, under the new Patent Rules, an educational institution is required to pay INR 160 per page, up to a maximum of INR 24,000, for a sequence listing of nucleotides and/or amino acid sequences. This is about an 80% decrease from the earlier fees: previously, these institutions paid INR 800 per page and up to INR 120,000 for the sequence listing. Experts claim that the amendments are aimed at smoothening the process of filing patent applications. Earlier, due to higher fees, educational institutions felt discouraged from filing patents and protecting their inventions. The educational institutions in India are now on par with those in the United States and China—the two nations that own the majority of the world’s patents. As of now, India is ranked third in scientific publications but far lower in patent filings; sustained research and development is essential for nations that want strong patent filing rates. Now that the main financial barrier has been removed, India should be in a stronger position to encourage innovation and boost the number of patent applications made by its educational institutions.
India has a very low IP literacy rate, which is another factor that is holding individuals and education institutions back from filing patents.
["AI Features"]
[]
Tausif Alam
2022-10-27T14:00:00
2022
835
["Go", "ELT", "AI", "IPO", "RPA", "innovation", "RAG", "Aim", "GAN", "R"]
["AI", "Aim", "RAG", "R", "Go", "ELT", "GAN", "RPA", "innovation", "IPO"]
https://analyticsindiamag.com/ai-features/the-real-reason-why-india-falls-behind-in-innovation/
3
10
2
false
false
true
10,170,182
Why OpenAI’s Codex is Not as Good as Devin or Replit
If you’re a software engineer, indie hacker, or startup founder who’s spent the last year tooling around with AI agents like Replit’s Ghostwriter, Cognition’s Devin, or Lovable’s smart terminals—well, OpenAI just entered the game, again. Over the weekend, OpenAI rolled out Codex, a cloud-based software engineering agent that looks suspiciously like the future of dev work. It’s available first to ChatGPT Pro, Team, and Enterprise users at $200 a month, while it may take a while for Plus users to get access. Greg Brockman, co-founder of OpenAI, said during the live research preview that Codex is their bet on vibe coding. This comes just days after OpenAI announced its acquisition of Windsurf for $3 billion. Windsurf, an artificial intelligence-assisted coding tool formerly known as Codeium, is a direct competitor to Cursor, which was also backed by OpenAI.

OpenAI is Vibing, But Without Internet

Codex isn’t another glorified autocomplete. It’s a multi-agent dev assistant that runs coding tasks in parallel, inside sandboxed environments preloaded with your repo—which sounds similar to Devin, but OpenAI argues that it’s not. During the launch preview with Brockman, Katy Shi, one of the researchers at OpenAI, said, “Codex is as trustworthy, if not more trustworthy than my coworkers.” Shi added that she could access her coworkers’ logs without needing to talk to them. What Shi meant is that with Codex, developers can do work like writing new features, debugging, writing tests, or proposing pull requests—and it will do all of that while showing you terminal logs, test outputs, and commit history, so you don’t have to trust it blindly. This essentially means GitHub PRs can be drafted, tested, and explained by a bot that lives inside ChatGPT, making it possibly better than Devin. But while Codex acts as an agent running coding tasks in the background on the cloud, Replit allows developers to deploy apps, while Devin is an end-to-end software engineer.
Codex still has other limitations, and in this case, pretty big ones. It is not connected to the internet, which makes it a less attractive choice than Devin. This is currently the biggest criticism of the release and the reason developers are not adopting it in their workflows. Devin, too, is in early access. Codex also needs well-scoped tasks. It sometimes fails tests or gets confused. And it won’t yet handle sprawling architectural decisions on its own. But for repeatable engineering chores, it’s surprisingly capable—and transparent. OpenAI conveniently calls this a research preview. Maybe the team will connect it to the internet soon. The ambitions are anything but modest. Codex is powered by codex-1, a variant of OpenAI’s o3 model explicitly tuned for software engineering. It was trained with reinforcement learning on thousands of real coding tasks, making it eerily good at mimicking human dev styles, coding conventions, and PR etiquette.

Devin, Cursor, Replit—Watch Your Backs

“Codex increases the value of being technical. If you can describe precisely what you want to build, you can get a massive amount done in parallel,” posted Josh Tobin from OpenAI. “That’s fundamentally a technical skill.” But Cognition recently announced an update to Devin, offering a new agent-native IDE experience. Devin 2.0 supports multiple parallel instances, each with an interactive cloud-based IDE. Additionally, the latest update allows developers to take control while providing collaborative and fully automated approaches. Furthermore, it enables developers to refine code and run tests within the IDE. Cognition AI also announced additional features for Devin, including Interactive Planning, Devin Search, and Devin Wiki. This is where OpenAI’s Codex falls behind. Inside ChatGPT, Codex is accessed via a sidebar. You create tasks with prompts, click “Code” to generate changes, or “Ask” to query your codebase.
Very different from Cursor’s “tab tab tab” model, but similar to Lovable and Replit. Each task gets its own isolated environment, where Codex can edit files and run linters, test harnesses, and type checkers. Depending on the complexity, completing a task can take anywhere from 1 to 30 minutes, and you can monitor its progress in real time. It’s no coincidence that Codex seems eager to eat the lunches of agents like Devin, Cursor, and Replit’s AI tools. All these startups have been vying to become the default AI coding companion. But with Codex, OpenAI is using its distribution advantage—ChatGPT is already in millions of developers’ workflows. As Santiago Valdarrama joked: “Literally everyone is freaking out over Codex like they didn’t do the exact same thing for Devin, Cursor, DeepSeek, and every GPT drop since 2.0… VCs will congratulate themselves and write posts about how Codex will enable the next trillion-dollar market… until the next shitty autocomplete drops.”

Codex is Good Enough for Now

Despite the sarcasm, there’s truth to the cycle. But Codex is not autocomplete. At OpenAI itself, engineers are using Codex to offload annoying chores like renaming variables, writing tests, and fixing bugs. “By reducing context-switching and surfacing forgotten to-dos, Codex helps engineers ship faster and stay focused on what matters most,” the company writes.

“This guy literally shows how to build an AI business with Codex in 12 mins” pic.twitter.com/rQP3my0v6R — Aadit Sheth (@aaditsh), May 17, 2025

Codex isn’t being built in a vacuum. Early testers like Cisco, Temporal, Superhuman, and Kodiak Robotics are already using it. Cisco is testing it across its engineering teams to accelerate product development. Temporal uses it to debug, scaffold features, and stay in flow by offloading background work. Superhuman has even let product managers use Codex to write code, with engineers stepping in only for reviews.
Kodiak, which builds autonomous driving tech, is using it to improve test coverage, debug tools and, apparently, navigate obscure parts of its stack. Codex isn’t just stuck in ChatGPT either. OpenAI quietly launched Codex CLI last month—a terminal-based coding agent you can run locally. It brings the same models (o3 and o4-mini) into your dev environment. Now, they’ve added codex-mini-latest, a lightweight version of codex-1 optimised for snappier Q&A and faster editing inside the CLI. OpenAI is handing out $5–$50 in free API credits for Codex CLI to Plus and Pro users. No excuses not to try it. “We imagine a future where developers drive the work they want to own and delegate the rest to agents,” OpenAI wrote. Developers still need to know what they want to build, but they may never have to write boilerplate again. Codex doesn’t kill Replit, Devin, or Lovable overnight. But it does something much more dangerous—it sets a new standard, albeit without the internet: multi-agent, cloud-based, verifiable, and integrated into ChatGPT. It’s the baseline now. Everyone else needs to catch up.
Codex is not connected to the internet, which makes it not an ideal choice over Devin, or even Replit or Cursor.
["AI Features"]
[]
Mohit Pandey
2025-05-19T15:47:59
2025
1,094
["Go", "ChatGPT", "artificial intelligence", "TPU", "OpenAI", "AI", "Git", "RAG", "Rust", "R"]
["AI", "artificial intelligence", "ChatGPT", "OpenAI", "RAG", "TPU", "R", "Go", "Rust", "Git"]
https://analyticsindiamag.com/ai-features/why-openais-codex-is-not-as-good-as-devin-or-replit/
3
10
4
false
true
false
10,053,639
A Tutorial on Sequential Machine Learning
Traditional machine learning assumes that data points are distributed independently and identically; however, in many cases, such as with language, voice, and time-series data, one data item depends on those that come before or after it. This type of information is also called sequence data. In machine learning, a similar concept of sequencing is followed to learn from such ordered data. In this post, we will understand what sequential machine learning is. We will also go through how sequential data is used for modelling purposes and the different models used in sequential machine learning. The major points to be covered in this article are listed below.

Table of Contents

What is the Sequential Model?
Understanding Sequential Modelling
What is Sequential Data?
Different Sequential Models
RNN and its variants
Autoencoders
Seq2Seq

Let’s start the discussion with the sequential model.

What is Sequential Learning?

Machine learning models that input or output data sequences are known as sequence models. Text streams, audio clips, video clips, and time-series data are examples of sequential data. Recurrent Neural Networks (RNNs) are a well-known method in sequence models. The analysis of sequential data such as text sentences, time series, and other discrete sequence data prompted the development of sequence models. These models are better suited to handle sequential data, whereas Convolutional Neural Networks are better suited to spatial data. The crucial element to remember about sequence models is that the data we’re working with are no longer independently and identically distributed (i.i.d.) samples; the data points are reliant on one another due to their sequential order. Sequence models are particularly popular for speech recognition, voice recognition, time-series prediction, and natural language processing.
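To make the i.i.d. contrast concrete, here is a small NumPy sketch (the coefficient and series length are made up for illustration) comparing an AR(1) series, where each value depends on the previous one, with plain i.i.d. noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) process: x_t = phi * x_{t-1} + noise.
# Each point depends on the previous one, so the series is not i.i.d.
phi, n_steps = 0.9, 500          # illustrative values
x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = phi * x[t - 1] + rng.normal()

# Lag-1 correlation is high for the sequential series,
# but close to zero for independent noise.
lag1_corr = np.corrcoef(x[:-1], x[1:])[0, 1]
iid = rng.normal(size=n_steps)
iid_corr = np.corrcoef(iid[:-1], iid[1:])[0, 1]
print(round(lag1_corr, 2), round(iid_corr, 2))
```

Any model that shuffles these points, as a classical i.i.d. learner effectively does, destroys exactly the structure a sequence model is built to exploit.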
Understanding Sequential Modelling

Simply described, sequence modelling is the process of producing a sequence of values from a set of input values. These input values could be time-series data, which shows how a certain variable, such as demand for a given product, changes over time. The output may be a forecast of demand for future periods. Another example is text prediction, in which the sequence modelling algorithm predicts the next word based on the sequence of the previous phrase and a set of pre-loaded conditions and rules. By employing sequence modelling, businesses can achieve more than just pattern generation and prediction.

What is Sequential Data?

When the points in a dataset are dependent on the other points, the data is termed sequential. A time series is a common example, with each point reflecting an observation at a certain point in time, such as a stock price or sensor data. DNA sequences and meteorological data are further examples of sequential data. In other words, video data, audio data, and, up to some extent, images can also be termed sequential data.

Below are some popular machine learning applications that are based on sequential data:

Time series: the challenge of predicting a time series, such as stock market projections.
Text mining and sentiment analysis, two examples of natural language processing (e.g., learning word vectors for sentiment analysis).
Machine translation: given an input in a single language, sequence models are used to translate the input into several languages.
Image captioning: assessing the current action and creating a caption for the image.
Deep recurrent neural networks for speech recognition.
Recurrent neural networks for generating classical music.
Recurrent neural networks for predicting transcription-factor binding sites based on DNA sequence analysis.

To model such data efficiently, or to extract as much information as it contains, a traditional machine learning algorithm will not help much. To deal with such data there are some sequential models available, and you might have heard of some of them.

Different Sequential Models

RNN and its Variants

RNN stands for Recurrent Neural Network, a deep learning and artificial neural network design that is suited for sequential data processing. RNNs are frequently used in Natural Language Processing (NLP). Because RNNs have internal memory, they are especially useful for machine learning applications that need sequential input. Time-series data can also be forecasted using RNNs. The key benefit of employing RNNs over conventional neural networks is that in standard neural networks the parameters (weights) are not shared; in an RNN, weights are shared over time. RNNs can recall their prior inputs, whereas standard neural networks cannot: for computation, an RNN uses historical data.

The different tasks that can be achieved using RNNs are:

One-to-one: With one input and one output, this is the classic feed-forward neural network architecture.

One-to-many: An example is image captioning. We have one fixed-size image as input, and the output can be words or phrases of varying lengths.

Many-to-one: This is used to categorize sentiment. A succession of words, or even paragraphs of words, is given as input. The result can be a continuous-valued regression output that represents the likelihood of a favourable sentiment.

Many-to-many: This paradigm is suitable for machine translation, such as that seen on Google Translate. The input could be a variable-length English sentence, and the output could be a variable-length sentence in a different language.
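Whatever the task shape, the underlying recurrence is the same. Below is a minimal NumPy sketch of a vanilla RNN forward pass (dimensions and random weights are purely illustrative, not a trained model); note that the same two weight matrices are reused at every time step, which is the weight sharing described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: 4-dim inputs, 8-dim hidden state, 5 time steps.
d_in, d_hid, seq_len = 4, 8, 5

# One set of weights, shared across every time step --
# the key difference from a plain feed-forward network.
W_xh = rng.normal(scale=0.1, size=(d_in, d_hid))
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
b_h = np.zeros(d_hid)

def rnn_forward(xs):
    """Run a vanilla RNN over a sequence, returning all hidden states."""
    h = np.zeros(d_hid)
    states = []
    for x in xs:  # the same W_xh and W_hh are reused at each step
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.array(states)

xs = rng.normal(size=(seq_len, d_in))
hs = rnn_forward(xs)
print(hs.shape)  # (5, 8): one hidden state per time step
```

In practice the loop lives inside a framework layer (or an LSTM/GRU cell) and the weights are learned, but the shared-parameter recurrence is unchanged.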
On a frame-by-frame basis, the many-to-many model can also be utilized for video classification.

Traditional RNNs, as you may know, aren’t very good at capturing long-range dependencies. This is primarily related to the problem of vanishing gradients: gradients or derivatives diminish exponentially as they propagate down the layers while training very deep networks, which is referred to as the vanishing gradient problem. To tackle it, the LSTM (Long Short-Term Memory) network was introduced; its name derives from the problem it addresses. LSTM modifies the RNN hidden layer, allowing RNNs to remember their inputs for a long time. In an LSTM, a cell state is transferred to the next time step in addition to the hidden state.

RNN and LSTM

Long-range dependencies can be captured via LSTM; it has the ability to remember prior inputs for long periods of time. An LSTM cell has three gates, which are used to manipulate memory: the gates control gradient propagation through the memory of the recurrent network. LSTM is a common deep learning technique for sequence models, and it is employed in real-world applications such as Apple’s Siri and Google’s voice search, where it is responsible for much of their success.

Autoencoders

One of the most active study areas in Natural Language Processing (NLP) is machine translation (MT). The goal is to create a computer program that can quickly and accurately translate a text from one language (the source) into another (the target). The encoder-decoder model is the fundamental architecture utilized for MT with neural networks: the encoder section summarizes the information in the source sentence, and based on this encoding, the decoder component generates the target-language output in a step-by-step manner.
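This encode-then-decode loop can be sketched with the same vanilla-RNN machinery. The snippet below is a toy illustration with made-up shapes and untrained random weights, not a real translator; a production system would use LSTM/GRU cells, word embeddings, and feed the previously emitted token back into the decoder:

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_hid, d_out = 4, 8, 4  # illustrative dimensions

# Separate weight sets for encoder and decoder
# (each still shared across its own time steps).
We_xh = rng.normal(scale=0.1, size=(d_in, d_hid))
We_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
Wd_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
Wd_hy = rng.normal(scale=0.1, size=(d_hid, d_out))

def encode(xs):
    """Summarise the whole input sequence as the final hidden state."""
    h = np.zeros(d_hid)
    for x in xs:
        h = np.tanh(x @ We_xh + h @ We_hh)
    return h  # the single "context" vector

def decode(context, n_steps):
    """Unroll the decoder from the context vector, one output per step.
    (A real decoder would also consume the previously emitted token.)"""
    h, ys = context, []
    for _ in range(n_steps):
        h = np.tanh(h @ Wd_hh)
        ys.append(h @ Wd_hy)
    return np.array(ys)

src = rng.normal(size=(6, d_in))   # e.g. a 6-token source sentence
ys = decode(encode(src), n_steps=3)
print(ys.shape)  # (3, 4): a 3-step output sequence
```

The single context vector returned by encode is exactly the bottleneck: everything the decoder knows about the source sentence has to squeeze through it.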
Basic Structure of a Single-layer Autoencoder

The performance of the encoder-decoder network diminishes significantly as the length of the input sentence increases, which is a limitation of these approaches. The fundamental disadvantage is that the encoded vector must capture the full sentence, which means much critical information may be missed. Furthermore, the data must “flow” through a number of RNN steps, which is challenging for long sentences. Bahdanau et al. presented an attention layer, consisting of attention mechanisms that give greater weight to some of the input words than others while translating the sentence, and this gave a further boost to machine translation applications.

Seq2Seq

Seq2seq takes a sequence of words (a sentence or a set of sentences) as input and produces a sequence of words as output. It accomplishes this through the use of a recurrent neural network. Although the basic RNN is rarely used here, its more complex variants, such as LSTM or GRU, are; Google’s version uses LSTM. The network constructs the context of each word by taking two inputs at each moment in time: one from the user and the other from the previous output (the output goes back in as input), which is where the name recurrent comes from. Seq2seq is sometimes referred to as the encoder-decoder network since it primarily consists of two components:

Encoder: It translates input words to corresponding hidden vectors using deep neural network layers. Each vector represents the current word as well as its context.

Decoder: It uses the encoder’s hidden vector, its own hidden states, and the current word as input to construct the next hidden vector and forecast the next word.

Final Words

Having gone through the techniques discussed, one might confuse seq2seq with the autoencoder.
The input and output domains of the seq2seq model are different, as in English-to-Hindi translation, and it is used mostly in machine translation applications. The autoencoder, in contrast, is a special case of the seq2seq model where the input and output domains are the same (English-to-English). It behaves like auto-association: it recalls or rebuilds the input sequence even if we pass in a corrupted sequence. Features like this have made the autoencoder useful in many applications, such as pattern completion.

Through this post, we have seen what a sequential model is, and discussed the fundamental concepts of sequential models and sequential data. In short, we can say data is sequential if it is associated with time or if its instances are dependent on one another. Traditional ML algorithms are not very useful for such data; it needs to be handled with the special deep learning techniques we have discussed.

References

Sequence Models
LSTM vs GRU
RNN Network
seq2seq
Machine learning models that input or output data sequences are known as sequence models. Text streams, audio clips, video clips, time-series data, and other types of sequential data are examples of sequential data.
["AI Trends"]
["autoencoders", "Deep Learning", "Machine Learning", "RNN"]
Vijaysinh Lendave
2021-11-17T13:00:00
2021
1,633
["autoencoders", "machine learning", "TPU", "AI", "neural network", "sentiment analysis", "ML", "Machine Learning", "RAG", "NLP", "deep learning", "RNN", "Deep Learning", "R"]
["AI", "machine learning", "ML", "deep learning", "neural network", "NLP", "RAG", "sentiment analysis", "TPU", "R"]
https://analyticsindiamag.com/ai-trends/a-tutorial-on-sequential-machine-learning/
3
10
3
false
true
true
10,104,797
7 Wittiest Artificial Intelligence Memes
Memes have become an integral part of online culture within the broad and constantly changing internet. These oddball, frequently humorous images combined with astute commentary have evolved into the digital equivalent of a secret handshake, providing a quick and efficient means of communication for people to connect, exchange ideas, and convey a whole range of human emotions. However, this virtual story now has a twist! AI has taken meme production into a whole new territory. Memes are firmly establishing themselves as the undisputed kings of internet humor, but they’re more than just timely jokes—they’re a science and they’re art. To master the meme game, producers must strike a careful balance between things that make people laugh and things that could set off the internet’s laughter detector. Let’s catch a few AI memes that went viral this year:

Sam’s not f**king leaving

This meme perfectly captures Sam Altman’s return after being ousted by the OpenAI board. Altman’s office records at OpenAI now feature an infamous three-day holiday. So, hold on to your AI systems, meme engineers – for this was a special case of pink slip and rehire.

One course is all you need

When you’re halfway through your PhD in machine learning, you realize that you and Andrew Ng are practically best friends because of his online courses. Read more about the latest course by Ng, your genius friend, here.

It’s all the same thing

When using ChatGPT to play mind games, you may display the same image with different faces, and it will react by saying things like, “Wait a byte! Did I just see a face-making pixel?” The AI world of made-up perplexity got interesting in 2023 with the ultimate face-show.

We go hand in hand

When your illustration abilities are a work of abstract wonder, do not be alarmed. With AI art generators like Emu Edit, Imagen, DALL-E 3, Stable Diffusion, and Midjourney, AI has you covered.
With the help of these AI tools, which can instantly transform doodles into masterpieces through neural network processing, even your stick figures can now shine on the digital canvas.

You got it right

Across the globe, it’s a “large language model”, but stroll down the boulevards of Paris and it’s just “le big model”. Because in the City of Light, even our AI speaks the language of chic sophistication.

Late realisation

When you realize ChatGPT can do your job, and you’re like, “Wait, is my role just a glorified conversation starter?” You’re left wondering if you’re the one being helped or the one providing assistance when ChatGPT begins creating its own job description.

What did he just say?

Murthy’s move: unintentional joke or 4D chess? Workers contemplate, as the timer continues to run.

How are meme generators powered by AI?

AI and machine learning techniques are utilized by AI meme generators to produce memes in an automated manner. These generators examine pre-existing memes and produce new ones based on discovered patterns by utilizing a variety of methods, including computer vision and natural language processing. AI systems learn the comedy, trends, and cultural context needed to create memes that connect with viewers by being trained on enormous volumes of meme data.
AI has taken meme production to a whole new level in 2023.
["AI Trends"]
["AI Tool"]
Arya Vishwakarma
2023-12-13T16:00:00
2023
526
["Go", "ChatGPT", "machine learning", "OpenAI", "AI", "neural network", "ML", "Git", "computer vision", "AI Tool", "R"]
["AI", "machine learning", "ML", "neural network", "computer vision", "ChatGPT", "OpenAI", "R", "Go", "Git"]
https://analyticsindiamag.com/ai-trends/7-wittiest-ai-memes-of-2023/
3
10
1
false
false
false
10,020,007
Data Science Hiring Process At Avast
Avast is one of the world's largest security companies, relying on next-gen technologies to fight cyber attacks in real time. Data scientists form an integral part of their team. The company provides a portfolio of self-service data technologies that are managed centrally but used by people across functions such as product management and e-commerce. The data ecosystem's shared components, such as the data catalog, master data management, and EDW, are also managed centrally. In recent years, the company has increased its focus on growing data literacy across Avast through special data-only events for all employees. We got in touch with Jon Hill, Global Head of Talent Acquisition at Avast, to learn more about the hiring process, the skills required, and more.

Required Skills
Hill said they look for experts who can utilise multiple technologies and platforms. The candidate should be well versed in different databases, modelling techniques and tools, predictive modelling, and data analytics. They should actively seek out the best tools available to them and, if unsure, consult other members of the team for advice. "A team player is crucial when encountering problems and must have the people skills to blend into our existing data science team so they can overcome difficulties during problem-solving activities," highlighted Hill. Hill said candidates must have strong foundational statistical skills and be good with theoretical fundamentals, probability questions, strong technical understanding, etc. "But above all, they must be able to articulate their work, approach, methods and tell the story through data," he said. An ideal data science candidate at Avast takes a curious approach to interviewing business stakeholders to understand their problems fully. They must have the business acumen to support the findings and solutions through data.
"The science part is the means to get in the room, but then we expect a polished, business-focused, data-driven solution," he added. In terms of education, a background in IT, computer science, Math, Physics or any data-related field works best for Avast. "We do not think that educational background and skills are exclusive of one another. Prior work experience in a related field or sector is always better," said Hill. Hill said data scientists who work at Avast have nearly unlimited opportunities to grow in both career and experience. "The projects they work on are truly world-class and ahead of the competition. They will not get bored!" he said. Interested candidates can apply on the company's career pages or corporate website. "We are also very active on social media platforms like LinkedIn, and our in-house team of recruiters is more than happy to speak with any interesting candidate," shared Hill.

Interview Process
Avast is always on the market for candidates who suit their culture. The first step is an initial screening by the in-house team, who are experts in hiring technical minds. Selected candidates are referred to the hiring manager, who interviews them on their profile, experience, logical thinking and storytelling abilities. A key step in the hiring process is an intensive panel discussion with existing team members, where candidates are asked to analyse problems and present back their solutions. "This checks not only the box of technical, methodical and logical elements but also the team fit," he added. While recruiting for data science teams can be a challenge for most companies, Hill believes it is easier for Avast to hire talent in the data science field. "We are founded on breaking codes, finding answers to what was previously thought impossible questions, and this attracts these great minds to Avast," shared Hill.
Avast is one of the world's largest security companies that relies on next-gen technologies to fight cyber attacks in real-time. Data scientists form an integral part of their team. The company provides a portfolio of self-service data technologies that are managed centrally but used by people across functions such as product management and e-commerce. […]
["AI Hirings"]
["data science master", "master data science"]
Srishti Deoras
2021-02-11T16:00:00
2021
589
["data science", "Go", "programming_languages:R", "AI", "data-driven", "data science master", "programming_languages:Go", "ViT", "master data science", "analytics", "R"]
["AI", "data science", "analytics", "R", "Go", "ViT", "data-driven", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-hiring/data-science-hiring-process-at-avast/
2
9
2
true
false
false
10,049,132
Five Things To Keep In Mind When You Work On Your Next ML Project
Machine learning is an exciting area of research and development. ML tools are important in many industries and scientific fields. ML research is also tricky and faces several challenges; if not addressed suitably, these can lead a project in the wrong direction. Michael A. Lones, Associate Professor in the School of Mathematical and Computer Sciences at Heriot-Watt University, Edinburgh, recently produced a paper on common mistakes that occur when using machine learning techniques, outlining a few ways to avoid them. The guide is especially useful for students. The paper covers five stages of the machine learning process: what to do before building the model, building the model reliably, evaluating the model, comparing models fairly, and reporting the results.

Before starting to build the model
First, the data has to be collected from a reliable source and through reliable technology. Second, one should make sure there is enough data, which is a prerequisite for training a model that generalises. Once good-quality and sufficient data has been collected, the researcher should avoid making untestable assumptions. This means the developer should avoid looking at the test data too closely in the initial exploratory analysis stage, as doing so may limit the generality of the model. Talking to domain experts should be considered an important part of the preparation: they can help one understand which problem to solve, the most appropriate feature set and ML model to use, and where to publish for the most appropriate audience. Apart from this, it is important to do a thorough literature survey to understand what has and hasn't been done previously. Lastly, if the eventual goal of the project is to produce an ML model that will be used in the real world, it is worth thinking about how it is going to be deployed.
Building models reliably
With modern ML frameworks, it is easy to test different approaches to building models and see what works. However, this can lead to disorganisation, so it is important to approach model building in an organised manner. Generally speaking, there is no single best ML model: no ML approach is better than any other across the range of possible problems. The job of the researcher is to find the ML model that works best for a given problem. Researchers should also make sure not to use inappropriate models; by lowering the barrier to implementation, modern ML libraries make it easy to apply inappropriate models to the data. Further, it is better to use a hyperparameter optimisation strategy, which may include random and grid search, or tools that intelligently search for optimal configurations. Lastly, one should avoid leakage of test data into the training process. In case of such leakage, the test data no longer provides a reliable measure of generality; this is often the reason why published ML models fail to generalise to real-world data.

Robustly evaluate models
Measuring the true performance of an ML model is essential if one wishes to genuinely contribute to progress in the field. To do so, a researcher needs valid results from which to draw reliable conclusions. Firstly, a researcher must always use a test set to measure the generality of the ML model, and must ensure that the data in the test set is appropriate: it should not overlap with the training set and should be representative of the wider population. It is not unusual to train multiple models in succession, using previously gained knowledge to guide the configuration of the next; however, it is important not to use the test set within this process. Researchers should instead use a separate validation set to measure performance, and, to get a reliable estimate of the final model instance's generality, may use another test set.
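The split-and-tune discipline described above can be sketched in a few lines. The data and the threshold "model" here are synthetic stand-ins invented for illustration, not anything from the paper; the point is the protocol, not the model:

```python
import random

# Leakage-free model selection sketch: split once into train/validation/
# test, tune the hyperparameter on the validation set only, and touch
# the held-out test set exactly once at the end.
random.seed(0)
data = [(random.random(), random.randint(0, 1)) for _ in range(300)]

random.shuffle(data)
train, val, test = data[:180], data[180:240], data[240:]

def accuracy(threshold, split):
    # Toy classifier: predict class 1 when the feature exceeds threshold.
    hits = sum((x > threshold) == bool(y) for x, y in split)
    return hits / len(split)

# Tune the only hyperparameter using validation performance alone.
grid = [i / 10 for i in range(1, 10)]
best = max(grid, key=lambda t: accuracy(t, val))

# One final, unbiased look at the held-out test set.
final_score = accuracy(best, test)
print(best, round(final_score, 2))
```

The essential property is that `test` appears exactly once, in the last step; the grid search never sees it, so `final_score` is an unbiased estimate of generality.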
If there is enough data, it is wise to keep some aside and use it once to provide an unbiased estimate of the final selected model instance.

Fair comparison of models
Model comparison is the basis of academic research, but it isn't easy to get right. An incorrect or unfair comparison may lead to confusing and misleading reports. Researchers must evaluate different models within the same context, explore multiple perspectives, and correctly use statistical tests. The following steps help avoid unfair comparisons:
Suspend the belief that bigger numbers imply better models
Use statistical tests when comparing models
Exercise caution when considering results from community benchmarks
Consider combinations of models

Reporting the results
Lones writes that academic research should be seen as an opportunity to contribute to knowledge rather than a vehicle for self-aggrandisement. To effectively contribute to knowledge, researchers must provide a complete picture of their work, covering both what worked and what did not. Since it is rare for one model to be better than another in all respects, a researcher must try to reflect this with a nuanced approach to reporting results and conclusions. Read the full paper here.
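One common way to apply the "use statistical tests" advice is a paired t-test on the per-fold scores of two models. The scores below are invented for illustration; the takeaway is that a significant test statistic, not a bigger mean alone, is what justifies claiming one model beats another:

```python
import math
import statistics

# Paired t-test on per-fold accuracy scores of two hypothetical models.
model_a = [0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.81, 0.80, 0.82]
model_b = [0.79, 0.78, 0.80, 0.79, 0.80, 0.77, 0.80, 0.79, 0.78, 0.80]

# Per-fold differences; the test asks whether their mean differs from 0.
diffs = [a - b for a, b in zip(model_a, model_b)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Two-sided critical value for alpha=0.05 with n-1=9 degrees of freedom.
CRITICAL = 2.262
print(round(t_stat, 2), t_stat > CRITICAL)
```

Pairing by fold matters: it controls for fold difficulty, so the test measures the models' difference rather than the variance of the data splits.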
This paper covers five stages of the machine learning process: what to do before building the model, building the model reliably, model evaluation, fairly comparing models, and reporting the results.
["AI Trends"]
["Machine Learning", "machine learning research"]
Shraddha Goled
2021-09-22T14:00:00
2021
828
["machine learning research", "Go", "machine learning", "programming_languages:R", "AI", "RPA", "ML", "Machine Learning", "programming_languages:Go", "Aim", "GAN", "R"]
["AI", "machine learning", "ML", "Aim", "R", "Go", "GAN", "RPA", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-trends/five-things-to-keep-in-mind-when-you-work-on-your-next-ml-project/
2
10
0
false
true
true
10,054,791
Will Timnit Gebru’s New Institute Take On Big Tech’s Influence On AI Research
Timnit Gebru has launched the Distributed Artificial Intelligence Research (DAIR) Institute, an independent, community-driven organisation that aims to counter Big Tech's influence on the research and development of AI. In the past, Gebru has spoken about Big Tech's excessive (often unregulated) say over the AI landscape; she hopes to create an independent space where researchers from varied backgrounds can come together and set an agenda for AI research that is rooted in their communities and experiences. Gebru is one of the most respected voices in the field of AI ethics. Last year, Gebru, who then worked as co-lead of Google's Ethical AI team, was ousted from the company over a paper she co-authored on large language models and their adverse effects.

Why DAIR
"AI needs to be brought back down to earth. It has been elevated to a superhuman level, which leads us to believe it is inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity," said Gebru.

I love the poetic Justice of my dear friend @timnitGebru launching @DAIRInstitute a year after her unconscionable firing. Yes to build the world we wish to see. Yes to new research paradigms! Yes to epic collaborations. https://t.co/FhrMshWDWn— Dr. Joy Buolamwini (@jovialjoy) December 2, 2021

The new institute is currently a Code for Science and Society project and has received $3 million in funding from the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Kapor Center and the Open Society Foundation, among others. With DAIR, investors and founders hope to build a field of public-interest technology, harnessing the power of emerging technologies for the public good and ultimately building a movement towards inclusive and equitable technology.
DAIR eventually plans to establish itself as a non-profit organisation. It will develop use cases for AI that are unlikely to be developed anywhere else (read: big tech companies), to inspire others to give technology a new direction. One of the first projects the institute will work on is creating a public dataset of aerial imagery of South Africa to examine how, and whether, apartheid is still etched into land use. A preliminary analysis showed that most vacant land in the densely populated region once restricted to non-white and poor people, developed between 2011 and 2017, has now been converted to wealthy residential areas. DAIR will soon publish a paper on this project, marking its debut in the academic AI research circle at the NeurIPS conference.

Need for Independent Research in AI
Researchers like Gebru have been speaking about freeing AI research from the clutches of Big Tech. These companies exert a lot of influence and power over the field, since AI underpins some of their popular products, like the Google search engine and Amazon's Alexa. To this end, companies routinely publish influential research papers, fund important conferences, establish data centres for large-scale AI research, and hire top researchers in the field. Studies have shown that the majority of tenure-track faculty at four top universities have received backing from Big Tech companies. Another 2019 report revealed that Google had poured more than $250 million into academia since 2005. Similarly, Samsung pumped $1.5 billion into Korean research institutions through a funding programme launched in 2013. Receiving funding from these large conglomerates frees researchers of financial burden and is also a pathway to a full-fledged industry career. But all this comes at a cost: there is a serious conflict of interest in such cases, and these research projects must usually relate to the funding company's business interests.
“AI needs to be brought back down to earth. It has been elevated to a superhuman level, which leads us to believe it is inevitable and beyond our control.”
["AI Features"]
["AI Research", "timnit gebru"]
Shraddha Goled
2021-12-03T17:00:00
2021
628
["Go", "funding", "artificial intelligence", "AI", "AI Research", "AI ethics", "timnit gebru", "Aim", "ViT", "GAN", "AI research", "R"]
["AI", "artificial intelligence", "Aim", "R", "Go", "GAN", "ViT", "AI ethics", "funding", "AI research"]
https://analyticsindiamag.com/ai-features/will-timnit-gebrus-new-institute-take-on-big-techs-influence-on-ai-research/
3
10
2
false
false
true
10,057,754
The Enformer vs the Basenji – The AI algorithms for gene expression predictions
DeepMind and researchers at Alphabet's Calico introduced a neural network architecture called Enformer that greatly improved the accuracy of predicting gene expression from DNA sequence. In the paper "Effective gene expression prediction from sequence by integrating long-range interactions", published in Nature Methods, DeepMind showed that Enformer is more accurate than Basenji.

Basenji2 and its limitations
Convolutional neural networks have typically been the basic building blocks of gene expression models. They have, however, been limited in their ability to model the effects of distal enhancers on gene expression. DeepMind's earlier work therefore relied on Basenji2, built on TensorFlow, which offers a variety of benefits, including distributed computing and a large, adaptive developer community, and which is designed to predict quantitative signals using regression loss functions rather than binary signals using classification loss functions. The best part of Basenji is that it could predict the regulatory activity of DNA sequences 40,000 base pairs long at a time.

Enformer's advances
Enformer, on the other hand, relies on Transformers, a technique common in natural language processing, whose self-attention mechanisms can integrate much more DNA context. As Transformers can read long text passages, DeepMind adapted them to read DNA sequences of vastly extended length. Enformer outperformed the best team on the critical assessment of genome interpretation challenge (CAGI5) for noncoding variant interpretation despite no additional training. Furthermore, Enformer learned to predict promoter-enhancer interactions directly from DNA sequences, competing with methods that take direct experimental data as input. For training, DeepMind used Sonnet to construct the neural networks; the model is defined in enformer.py.
DeepMind pre-computed variant effect scores for all frequent variants (MAF>0.5%, in any population) and stored them in HDF5 files per chromosome for the HG19 reference genome under the 1000 Genomes project. Additionally, they provide the top 20 principal components of variant-effect scores per chromosome in a tabix-indexed TSV file (HG19 reference genome). These files have the following columns:
#CHROM – chromosome (e.g. chr1)
POS – variant position (1-based)
ID – dbSNP identifier
REF – reference allele (e.g. A)
ALT – alternate allele (e.g. T)
PC{i} – i-th principal component of the variant effect prediction
Hopefully, these advances will enable better mapping of the growing number of human disease associations to cell-type-specific gene regulatory mechanisms and provide a framework for understanding how cis-regulatory evolution works.
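As a rough illustration, rows with the columns listed above could be parsed like this. The sample row and its values are hypothetical, and real files are tabix-indexed and far larger, so this is only a sketch of the column layout, not a reader for the actual release:

```python
import csv
import io

# Hypothetical two-line sample mimicking the described TSV layout
# (header: #CHROM, POS, ID, REF, ALT, then PC columns).
sample = (
    "#CHROM\tPOS\tID\tREF\tALT\tPC1\tPC2\n"
    "chr1\t10177\trs367896724\tA\tAC\t0.12\t-0.03\n"
)

def read_variant_scores(handle):
    """Yield one dict per variant, with PC columns parsed as floats."""
    reader = csv.DictReader(handle, delimiter="\t")
    for row in reader:
        yield {
            "chrom": row["#CHROM"],
            "pos": int(row["POS"]),          # 1-based position
            "id": row["ID"],                 # dbSNP identifier
            "ref": row["REF"],
            "alt": row["ALT"],
            "pcs": [float(v) for k, v in row.items() if k.startswith("PC")],
        }

records = list(read_variant_scores(io.StringIO(sample)))
print(records[0]["pos"], records[0]["pcs"])
```

A real pipeline would read the file region-by-region through tabix rather than loading it whole, but the per-row structure is the same.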
Enformer, a genetic research tool based on Transformers, advances genetic research by predicting how DNA sequences influence gene expression.
["AI Trends"]
["AI (Artificial Intelligence)", "Data Science", "Deep Learning", "Machine Learning"]
Sohini Das
2022-01-06T16:00:00
2022
388
["Go", "attention mechanism", "AI", "neural network", "distributed computing", "Machine Learning", "Transformers", "ai_frameworks:TensorFlow", "ViT", "Deep Learning", "Data Science", "TensorFlow", "R", "AI (Artificial Intelligence)"]
["AI", "neural network", "TensorFlow", "Transformers", "distributed computing", "R", "Go", "attention mechanism", "ViT", "ai_frameworks:TensorFlow"]
https://analyticsindiamag.com/ai-trends/the-enformer-vs-the-basenji-the-ai-algorithms-for-gene-expression-predictions/
3
10
0
true
false
true
16,461
Analytics India Industry Study 2017 by AnalytixLabs and Analytics India Magazine
With data generation increasing at both the individual and organizational level, the need for tools that can analyze this data has only grown. Deployment of analytics is becoming critical to business: from simple tasks to sophisticated ones, organizations are relying on it extensively. It can even be said that the increasing efficiency of tools and products specializing in data processing has driven the growth of business analytics. Either way, there has been an extensive increase in the overall revenue generated by these companies, at an estimated $2.03 billion annually and a CAGR of 23.8%. India has also been making headway in analytics exports through various analytics services. The USA and UK are the major markets for India's analytics exports, while the domestic market accounts for about 4% of analytics revenues. The analytics industry is growing for sure, and Analytics India Magazine, in association with AnalytixLabs, brings you the Analytics India Industry Study 2017, covering all aspects of the analytics industry. The study is the result of extensive primary and secondary research conducted over two months. Read last year's report here.

Key Trends
The Analytics/Data Science/Big Data industry in India is currently estimated at $2.03 billion annually in revenues, growing at a healthy rate of 23.8% CAGR. Of the annual inflow to the analytics industry, almost 12% can be attributed to advanced analytics/predictive modeling and data science, and a sizeable 24% to big data. The industry in India is expected to almost double by 2020.

Geographies Served
Almost 60% of analytics revenues in India come from analytics exports to the USA. The UK comes a distant second at 8.4% of revenues. The Indian domestic market is a significant opportunity, with almost 4% of analytics revenues coming from Indian firms.
Sector Type
Finance & Banking continues to be the largest sector served by analytics in India. Overall, 37%, or $756 million, of revenues to the analytics industry in India comes from Finance & Banking. Marketing & Advertising comes second at 26%, followed by the E-commerce sector at 15% of analytics revenues in India. Compared to last year, Pharma & Healthcare saw the biggest jump in analytics revenues, from $103MM to $137MM, a jump of 34%. Finance & Banking saw an increase of 31% vis-à-vis last year.

Analytics Industry by Cities
28%, or $565 million, of the market size for the analytics industry comes from Delhi/NCR. This is closely followed by Bengaluru at 27%. The highest year-on-year increase in analytics revenues for an Indian city comes from Hyderabad, from $134MM in 2016 to $178MM this year, an increase of 33%.

Analytics Professionals in India

Work Experience
The average work experience of analytics professionals in India is 7 years, compared with 7.2 years last year. Around 12,000 freshers were added to the analytics workforce in India this year, up from 8,500 freshers last year. Almost 42% of analytics professionals in India have less than 5 years of work experience, down from 46% last year. Over the last year, the number of analytics professionals with more than 10 years of experience increased by more than 30,000.

Education
The top 10 universities/schools that analytics professionals in India graduate from are: University of Mumbai, Delhi University, Kendriya Vidyalaya, University of Pune, Indian Institute of Management Calcutta, Indian Institute of Management Bangalore, Indira Gandhi National Open University, Indian School of Business, SVKM's Narsee Monjee Institute of Management Studies, and Indian Institute of Technology Bombay. Almost 18% of analytics professionals in India graduate from these 10 universities/schools. 57% of analytics professionals have a Master's/Post Graduation degree, the same as a year earlier.
3% of analytics professionals in India hold a PhD or doctorate degree, again similar to a year back. Women's participation in the analytics workforce remains low: just 24% of analytics professionals in India are women.

Company Size
Almost 40% of analytics professionals in India are employed with large-sized companies, with more than 10K total employee base. Mid-sized organizations (total employee base in the range of 200-10K) employ 33% of all analytics professionals in India. Startups (less than 200 employee base) employ 27% of analytics professionals in India. Startups make a significant contribution to the overall output of analytics in India; even though small in absolute terms, their impact is increasing significantly along with small to mid-sized organizations.

Conclusion
The current situation of the analytics industry presents a positive picture, as the study suggests. The numbers indicate that India is emerging as one of the top destinations for analytics, with everything from analytics education to job opportunities showing satisfactory numbers. Of the annual inflow to the analytics industry, almost 12% can be attributed to advanced analytics, predictive modeling and data science. The industry is expected to almost double by 2020, which would mean increased opportunity for professionals in this space. Overall, the study is representative of a booming analytics industry in India and, as the numbers suggest, it is expected to grow sizeably in the future too. Here's the complete report.
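The "almost double by 2020" projection follows directly from compounding the stated CAGR over three years, which a couple of lines confirm (both input figures are from the study):

```python
# Sanity-check the study's projection: compounding the stated CAGR of
# 23.8% over 2017-2020 nearly doubles the $2.03B base figure.
base = 2.03    # industry revenue in 2017, USD billions (from the study)
cagr = 0.238   # 23.8% annual growth (from the study)
years = 3      # 2017 -> 2020

projected = base * (1 + cagr) ** years
print(round(projected, 2))
```

The projected figure comes out around 1.9x the base, consistent with the report's "almost double" claim.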
With an increase in data generation at individual level and at organizational level, the need for tools that can analyze these data has only increased. Deployment of analytics is becoming critical to a business and from performing simple tasks to sophisticated ones, they are relying extensively on analytics. It can even be said that, an […]
["AI Features"]
["Analytics India", "analytics industry", "analytics industry india"]
Srishti Deoras
2017-07-24T09:12:57
2017
874
["big data", "data science", "TPU", "startup", "programming_languages:R", "AI", "Analytics India", "RAG", "analytics industry india", "analytics", "GAN", "R", "analytics industry"]
["AI", "data science", "analytics", "RAG", "TPU", "R", "big data", "GAN", "startup", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/analytics-india-industry-study-2017-by-analytixlabs-and-analytics-india-magazine/
3
10
4
false
false
true
23,160
Women In New Tech: Vidhya Duthaluru Of [24]7.ai Talks About Creating Equitable Work Environment
The abysmal number of women in emerging technologies such as artificial intelligence, machine learning, data science and analytics is a worrying trend for organisations all over the world. The resultant sexism is increasingly becoming one of the side-effects, making the global protests for gender equality all the more necessary. Why is it so? And what can be done to change it? Analytics India Magazine is featuring women leaders in these sectors throughout March, celebrating Women's Day.

Vidhya Duthaluru, vice president, Data Sciences Group, [24]7.ai

What does a career in analytics/data science/AI look like for a woman?
A career in analytics, data sciences and AI is very exciting for everyone, be it a woman or a man. AI and deep learning have really taken off, given the explosion of data we have had over the last several years. This is an exciting and competitive field, and one has to have curiosity and a constant love for learning. The field is evolving daily, with researchers from premier institutions and technology companies working on complex problems and devising new ways to solve them. One has to read, research and keep up with the advances to stay current. So with a lot to learn, exciting new findings from different data sources, and constant evolution, this is one of the most exciting fields to be in. I would certainly encourage many, many more women to pursue it.

Why did you choose this field as a career option?
I did my Ph.D. in Electrical and Computer Engineering from Rutgers University. My specialization was in the area of speech science. I continued working in that area after my PhD, deploying speech recognition and then optimizing its performance. We did analysis of the data and lots of experimentation to improve performance with the data we had. This was well before we actually had data at scale, but the principles of analysis were still the same.
This area of making sense of the data we have, and building machine learning models to recognize patterns, has always been fascinating to me, especially in an area as critical and complex as customer interactions. It's immensely satisfying to see the impact of this work in real life: improving customer experience for millions of customers.

How has your growth story been so far?
I have had a very successful journey of professional growth. Starting out as a speech scientist doing research and deployments of speech recognition and voice biometrics, and then moving to dealing with larger volumes of data from multiple sources, it's been a fascinating journey. There is so much to learn every day from all the people I work with. Today, I manage a team of scientists and analytics professionals at [24]7.ai, derive insights from customer-interaction data, and build automation into processes and conversations from it. I think the key is to keep learning and continue to be curious about ways to tackle problems at scale. It also helps to find a cause that you can align with: our company vision, 'we make it easier for consumers to connect with companies to get things done', is a constant motivator to use cutting-edge technology to solve complex customer service problems.

Do you struggle to maintain a work-life balance?
Maintaining work-life balance is certainly a struggle for working professionals, and more so for women, because we want to give our 100% at work and 100% at home with our families. It's always a juggling act, and I tend to take it one day at a time. It helps greatly that I have a very supportive husband who is there every step of the way. Another important aspect is to choose the right organization, one that values your contributions and gives you enough flexibility to manage both fronts well.
The thing to understand is that it will never be perfect, and there will always be compromises; but if we treat this as a way of life and make the right decisions daily, it will work for the most part. There will be misses, but we shouldn't come down too hard on ourselves for that. It also helps to be more organized and plan ahead to ensure that both work and family get the attention they need.

Your thoughts on incorporating more women in new tech sectors.
Over the last decade, we have witnessed a steady increase in the number of women pursuing courses and careers in technology; however, we still have a long way to go. The tech world is buzzing with new technologies such as big data, blockchain, AI and quantum computing, so the overall mood is quite upbeat, and these new technologies have created many new kinds of roles. The start-up proliferation has also created a lot of demand for technical skills. We need a talent pool of women ready to take on these roles. And that preparation has to start early on, when these women are in their formative years at schools and colleges and are forming opinions about what they would like to do in their lives. We need to ride this wave and get more girls excited about technology roles as their preferred career choice. Enterprises, too, have to be committed to increasing women's presence in their workforce. Organizations need to ensure that they create a welcoming and equitable work environment where women thrive as much as their male counterparts.

What are the key changes in education/career choices that could change the current scenario?
We need to give girls a chance to explore coding at an early age; coding camps in schools and community-run programs where professionals volunteer to teach the basics of coding would be a good way to implement this. As parents and teachers, we should instil interest in STEM education early on.
Let these girls tinker with technology, dream about science and develop a fascination for a lot of new innovations that we are being exposed to. We need to talk to girls about other successful women in technology and if possible get them to meet a few technology professionals to get a rich perspective on career, various trending technologies and how they should prepare themselves. As women working in technology, each of us has a responsibility to educate and inspire more young girls and women to study these fields. Most of all, we need to reinforce that they can be anything they want to be and they are as talented as anybody else trying to make their mark in the world of technology.
The abysmal number of women in emerging technologies such as artificial intelligence, machine learning, data science and analytics is a worrying trend for organisations all over the world. The resultant sexism is increasingly becoming one of the side-effects, making the global protests for gender equality so much more necessary. Why is it so? And what […]
["AI Features"]
["Interviews and Discussions", "Women in AI", "Women in Tech", "Women's Day"]
Prajakta Hebbar
2018-04-01T14:37:06
2018
1,106
["big data", "data science", "Go", "artificial intelligence", "machine learning", "AI", "RAG", "deep learning", "Women in Tech", "analytics", "Women's Day", "R", "Women in AI", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "deep learning", "data science", "analytics", "RAG", "R", "Go", "big data"]
https://analyticsindiamag.com/ai-features/women-new-tech-vidhya-duthaluru-24-7-ai/
2
10
1
false
false
true
590
Competing through data: Four experts offer their insights
Prithvijit Roy, CEO and Co-founder at BRIDGEi2i Competing through data has been the secret sauce for a few organizations over the last decade, and they have all built significant competitive advantage. "Competing on Analytics: The New Science of Winning" by Davenport and Harris is a great read on these organizations. Times are changing. Today, data is growing exponentially, thanks to better technology. Many more corporates are realizing that data and analytics can be powerful tools to make more informed decisions in their business. Newer applications of analytics are also continuously emerging in this digital business world. A greater adoption of analytics is inevitable across all organizations over a period of time. Analytics will emerge as a need for survival. The differentiation between companies will depend on the extent to which analytics has been institutionalized in their decision-making processes. Parthasarathy Vallabhajosyula, Co-founder at Analytics Quotient The purpose of analytics is to manage business operations more efficiently with data-driven insights. It isn't about data giants making complicated, difficult-to-understand reports and presentations. To 'compete with data' you need to be able to comprehend and decode your data as efficiently as possible. To enable businesses to do this, analytics as a discipline has changed. What was once the singular domain of statisticians now includes 'data artists' or visualizers, who expose new patterns in data without compromising its complex nature. The other new player in the field is the 'data technician', who uses technology to access, analyze and synergize data across the organization, delivering easy applications which serve up 'ready to use' analysis across Android devices, laptops and traditional computing devices.
The other emergent player is the 'storyteller' or Business Insights Manager, who understands the needs of the market/business and knows how to map data-driven insight back to business operations, thus connecting the dots. Indeed, the analytics company of today combines the skills of statisticians that build probability models with consultants that can ask and answer business questions, and imaginative analysts with the vision of artists and the training of technologists. Ajay Kelkar, Co-Founder and COO at Hansa Cequity At Hansa Cequity, our model is unique because it tries to integrate very contrasting dimensions into one entity where the sum is larger than the parts! Having a designer's sense with data may contrast with a statistician's dry look at numbers! We seek 'intersection' skills: the intersection of creative, technology, data and business! Not easy to do with highly talented people, and we are attempting it! Joydeep Sen Sarma, Co-Founder and Head at Qubole India There is a lot of untapped potential in democratizing analytics. Almost everyone in an organization at some point or the other needs access to data to make decisions. The easier it is to access this data, the better the decision making. On the other hand, if it's hard to access data, people will naturally tend to avoid it as they try to get their work done, or decision making will be slowed down. Neither is a good outcome. In my limited experience, few companies truly democratize data access. It is hard to discover data in a self-service manner. One has to find key people with knowledge and access. New insights are hard to obtain as detailed data is often even harder to access.
Prithvijit Roy, CEO and Co-founder at BRIDGEi2i Competing through data has been the secret sauce for a few organizations over the last decade and they have all built significant competitive advantage. “Competing on analytics – the new science of winning” by Davenport and Harris is a great read on these organizations. Times are changing. Today, […]
["IT Services"]
[]
Дарья
2012-07-28T10:22:13
2012
571
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Git", "ViT", "analytics", "GAN", "R"]
["AI", "analytics", "R", "Go", "Git", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/competing-through-data-three-experts-offer-their-insights/
2
9
2
false
true
false
4,200
Axtria Opens its Second Delivery Center in Gurgaon
Citing the rapid growth of its business, Axtria Inc., a New Jersey-headquartered advanced analytics solution provider, today announced the opening of its second delivery center in Gurgaon, India. Located in DLF Cyber City SEZ, the center has the capacity to house 300 associates, expanding the current capacity in Gurgaon to 400 associates. Axtria's business offerings are based on technological innovation, domain expertise, and world-class service. The new facility will house Axtria's new product innovation lab, which will focus on research into upcoming trends including big data, machine learning, and decision sciences. Mr. Navdeep Chadha, CTO and Co-founder of Axtria said, "We are very excited to continue our growth and set up our second delivery center in Gurgaon. The new facility is not merely an extension of our already existing operations in Gurgaon, but is a firm commitment to serve our customers." Manish Mittal, Managing Principal and Global Delivery Head of Axtria, adds: "These are exciting times for Axtria. As we grow rapidly, this new center will help us expand to meet the needs of clients in the healthcare sector, retail industry, and financial services." Axtria currently has 200 employees globally and is hiring top experts from various verticals to support its clientele. Axtria has been developing a talented pool of employees in all divisions including delivery, sales, and operations. In 2011, Axtria made substantial investments in recruiting experienced domain experts, technology experts, and business analytics specialists in key industries. Axtria also offers extensive development programs and training to its employees. About Axtria Axtria is an advanced analytics and business information management company based out of New Jersey with locations in California, Arizona, Georgia, Virginia, and Gurgaon, India. Our broad portfolio of services and solutions helps our clients improve their sales, marketing and risk management.
We blend analytics, technology and consulting to help customers gain deep insights from their data, create strategic advantage and drive profitable growth.
Citing the rapid growth of its business, Axtria Inc., a New Jersey-headquartered advanced analytics solution provider, today announced the opening of its second delivery center in Gurgaon, India. Located in DLF Cyber City SEZ, the center has the capability of housing 300 associates. This will expand the current capacity in Gurgaon to 400 associates. Axtria […]
["AI News"]
[]
AIM Media House
2013-10-21T09:17:57
2013
318
["API", "machine learning", "programming_languages:R", "AI", "innovation", "analytics", "R"]
["AI", "machine learning", "analytics", "R", "API", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/axtria-opens-its-second-delivery-center-in-gurgaon/
2
7
3
true
false
false
10,040,925
A Credit Score For Your Health: Startup Story Of FEDO
The adoption of AI/ML in healthcare has radically changed risk profiling, diagnosis, and underwriting in medical insurance. To understand how Fedo is using AI and machine learning, we got in touch with Arun Mallavarapu, co-founder & CTO of Fedo. Founded by Arun Mallavarapu and Prasanth Madavana in 2017, Fedo is a health-tech startup that uses an AI-enabled system to generate a health score, much like a TransUnion credit score in finance. It has built an AI/ML platform that quantifies an individual's risk for various diseases and his/her propensity to claim over the next few years. Flagship products Fedo Score is a holistic indicator of the future health risks of an individual. It also offers insights on the likelihood of incurring a medical expense in the next few years by taking the individual's demographics, lifestyle, and habits into account. The Fedo health score was built by medical professionals and data scientists using 250+ medical studies and 2,000+ quality-controlled academic and research documents from all over the world, and by analysing over 50 million global health records and 1.5 million claims records. The risk calculated through Fedo's algorithms is used to determine underwriting decisions in real time. Its facial recognition system takes a facial image as input and predicts features like BMI (body mass index), smoker status, age and gender. What's the differentiator Mallavarapu said, "Predicting diseases with the help of AI & ML has been hugely researched. Many health analytics companies predict diseases and provide a solution to patients. FEDO predicts the risk of six chronic diseases and quantifies the risk into a health score through different AI/ML algorithms, which sets it apart from all the other products in the market. It also segments the FEDO score into three categories – unhealthy (<400), moderately healthy (400-600), healthy (>600) – so individual health can be better mapped.
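The three score bands quoted above (below 400 unhealthy, 400-600 moderately healthy, above 600 healthy) can be sketched as a simple lookup. The function name and the handling of the boundary values are illustrative assumptions, not Fedo's actual API:

```python
def fedo_band(score):
    """Map a Fedo-style health score to its risk band.

    Band thresholds follow the article: unhealthy below 400,
    moderately healthy from 400 to 600, healthy above 600.
    How the boundaries 400 and 600 are classified is an
    assumption made for this sketch.
    """
    if score < 400:
        return "unhealthy"
    elif score <= 600:
        return "moderately healthy"
    else:
        return "healthy"

print(fedo_band(350))  # unhealthy
print(fedo_band(500))  # moderately healthy
print(fedo_band(720))  # healthy
```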
In the age of big data, where data is readily available all around, FEDO has the skilled resources, capacity, and channels to leverage the data and process it to invent innovative products." Use of AI & ML at Fedo The company uses a wide variety of AI/ML techniques ranging from classification and regression models to convolutional neural networks. "We have a stronghold on the AI/ML side and are aware of when to use a specific technique. For instance, we use transfer learning to learn the lower-level features in computer vision use cases so that the data required to build efficient models reduces drastically. In addition, we use a variety of machine learning algorithms for classification/regression problems. Our uniqueness lies in the fact that we don't just pull an out-of-the-box model and execute it on data; rather, we go under the hood of the specific model we are implementing and tune the mathematical and statistical components as they relate to our problem," said Mallavarapu. Tech stack The core models of Fedo are built using MATLAB and Python. The models are typically consumed through APIs hosted on AWS and Azure cloud servers. The company uses Docker extensively and follows standard DevOps and agile processes for deployment and development, respectively. Road ahead Mallavarapu said Fedo will help in quantifying the health risk of people and will make the underwriting process much easier and faster. The five-year plan includes: generating awareness among people about the importance of health and life insurance; spreading awareness about the benefits of new technologies like AI/ML in healthcare; making the Fedo Score the gold standard of health scores globally; collaborating with a greater number of insurance companies for retail and group underwriting across nations; and emerging as a leader in the health-tech industry.
The adoption of  AI/ML in healthcare has radically changed risk profiling, diagnosis, and underwriting in medical insurance. To understand how Fedo is using AI and machine learning, we got in touch with Arun Mallavarapu, co-founder & CTO of Fedo.  Founded by Arun Mallavarapu and Prasanth Madavana in 2017, Fedo is a health-tech startup that uses […]
["AI Startups"]
["Startups"]
Ambika Choudhury
2021-05-29T13:00:00
2021
603
["machine learning", "AWS", "AI", "neural network", "ML", "computer vision", "RAG", "Aim", "analytics", "Startups", "Azure"]
["AI", "machine learning", "ML", "neural network", "computer vision", "analytics", "Aim", "RAG", "AWS", "Azure"]
https://analyticsindiamag.com/ai-startups/a-credit-score-for-your-health-startup-story-of-fedo/
3
10
3
false
false
false
49,633
How Incubators Are Driving The Agri-Tech Sector In India
The agriculture sector is one of the biggest contributors to the Indian economy, yet it is still woefully fragmented and unorganized. The scope for building a seamless ecosystem in the agri-technology space is humongous. The start-up community has seen a lot of traction in this sector, and India is home to over 450 agri-tech initiatives, with exponential growth to the tune of over 25% YoY. The highly volatile and seasonal nature of the business poses many challenges for entrepreneurs in the initial handholding stage. The chances of success multiply when a start-up gets the opportunity to be incubated in a Technology Business Incubator. It can attract strong mentorship from corporate leaders who drive smart market insights and consumer surveys that can make a huge difference. A technology-based incubator can offer crucial support in legal and financial planning and intellectual property rights through its expertise or via its network. The right technology incubators can help stress-test go-to-market strategies and gain access to high-value customers. Business planning and training assistance are of immense value in the initial phase, when the start-up is struggling to find its feet in the open market. Market linkages and networks are an essential part of any agribusiness becoming sustainable. Technology incubators can use their networks to bolster the logistics and storage needs of early-stage ventures. Technology incubators are fundamental in raising capital, pitch refinement, and obtaining statutory Government approvals. Technology- and knowledge-driven start-ups often find technology incubators integral to the dissemination of their product technologies and ideation. The input in the research and development technology segment ensures the survival and stability of these ventures.
A feasible product development ecosystem from the very beginning, under the able guidance of mentors, can be the palpable difference between failure and success. The host of services that incubators can procure or provide to agri-preneurs include using the premises of the host institute, along with basic facilities like communication, computers, a cafeteria and conference spaces. This reduces initial costs and can help the initiative get the jump start it needs to thrive in the challenging market scenario prevalent today. The incubator is committed to nurturing the start-up for 2-3 years. A word of caution, though: not all ventures can succeed in the harsh and competitive market environment. Most agri-preneurs need knowledge dissemination in the Government e-tendering processes. The incubator is of immense help in facilitating the transfer of technology and knowledge, and helps snag the initial B2B contracts that can start the journey towards success. The availability of multiple value-added services under one roof makes the task easier all around. Last but not least, the in-depth knowledge and experience that the mentors bring into the start-up ecosystem are invaluable. These seasoned veterans have a 360-degree view of how the market operates, a strong network, and an understanding of the common mistakes that agri-preneurs should avoid. This, in itself, can save the start-up precious time and resources that can be put to good use in other areas. The technology incubator culture is ideal for fostering entrepreneurship to generate robust business and wealth. The idea is to build a sustainable venture in the agriculture space that can reduce wastage, improve yields, and create a dynamic and organized business structure. Incubators can help an initiative find its feet and guide it through stormy waters in the beginning by providing infrastructure, finance, and mentorship.
The aim is to reach projected milestones to build self-employment and entrepreneurial zeal.
The agriculture sector is one of the biggest contributors to the Indian economy. It is still woefully fragmented and unorganized. The scope for building a seamless ecosystem in the agri-technology space is humongous. The start-up community has seen a lot of traction in this sector, and India is home to over 450 agri-tech initiatives. The […]
["AI Features"]
["Agriculture"]
Manisha Acharya
2019-11-11T11:00:05
2019
584
["Go", "API", "programming_languages:R", "AI", "ML", "Agriculture", "programming_languages:Go", "RAG", "Aim", "GAN", "R"]
["AI", "ML", "Aim", "RAG", "R", "Go", "API", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/how-incubators-are-driving-the-agri-tech-sector-in-india/
2
10
2
false
true
false
14,635
How DevOps became a much sought after skill that firms are looking for?
Like the rise of the internet, cloud, AI and IoT eras, the tech world is currently raving about the newest entrant to the list: DevOps. Not a completely new concept, DevOps has existed in other forms and has seen increased adoption across industries around the globe. Many organizations have started to look at DevOps as a mainstream strategy, and this has brought in a lot of employment opportunities. Big firms like Accenture, Tech Mahindra, Barclays and Oracle are on a constant lookout for employees with DevOps skills. Other companies like Walmart, Target, Amazon, Facebook and Adobe are hiring for DevOps too. This increased interest led us to cover the area and understand a bit more about what DevOps is and how it has become a much sought-after skill that big firms are looking for. Understanding DevOps An interesting mashup of Development and Operations, DevOps has become one of the hottest trends of today. While there can often be confusion about what it really is, it is important to note that DevOps is more than just a software methodology or moving people from the operations team to the development team. It is, in fact, a set of practices that erases the boundary between operations and development. In the usual scenario, many teams developing software typically spend more time on deployment activities than on actually developing new features. This is where DevOps comes in handy: it reduces the barrier between development and operations, thereby increasing development autonomy. In simple words, it is the collaboration and communication between software developers and IT to ensure faster delivery of software. It brings IT and software teams to work more closely, allowing quick and effortless deployment while keeping customers at the centre of the process.
In the current scenario where IT is constantly evolving, DevOps has become a critical driver of digital transformation, as it allows you to be more agile, deliver things faster and react to market conditions quicker. DevOps: qualifications, roles and functionalities If the report by Gartner is to be believed, by the end of 2016, 25 percent of the top global 2000 organizations will have adopted DevOps as a mainstream strategy. These numbers are significantly high and open the door to opportunities and roles in this area. The popular job titles gaining momentum in the DevOps space are DevOps Architect, DevOps Engineer, DevOps Manager, Security Engineer, Automation Engineer, Release Manager and more. Essentially, DevOps as a methodology includes providing consistent software delivery, faster resolution of complex problems and crisp feature delivery to developers and operations managers. AIM tried to explore the roles and functionalities that jobs in the DevOps area demand. Here's a quick view: companies seek anywhere from 2 to 5 years of experience with DevOps platforms for DevOps engineer roles, up to 10-12 years for DevOps architect and manager roles, depending on their requirements. A degree in a technical subject or equivalent knowledge is preferred, along with a technical understanding of various DevOps methodologies. A few of the skills required for a role in DevOps (not necessarily all of them) are SQL, Linux, Unix, MySQL, automation, Python, open source, middleware, Java, Puppet, Maven, Crucible, Jenkins, software configuration management, cloud platforms such as AWS/Rackspace, Docker, OpenShift, vRealize, OpenStack, Ansible, MongoDB and more. Writing and maintaining deployment scripts, managing nodes and diagnosing application service, server and network issues are some of the must-haves.
A look at DevOps roles at Accenture and Oracle Oracle, for instance, was hiring a DevOps Architect with 10-20 years of experience at a salary of INR 25,00,000 – 40,00,000 P.A. The stated requirements for this role were: work experience in Ruby/Java development; experience in at least one popular scripting language (Perl/Python/Jython); preferred experience in Chef, Docker, Puppet, Packer, Salt; strong experience with WebLogic server preferred; object-oriented analysis and design using common design patterns; excellent knowledge of relational databases and SQL technologies; and experience with test-driven development. Similarly, Accenture, while looking for a DevOps Engineer, demanded 2-5 years of experience with the following basic skills: experience working with DevOps platforms; a good technical understanding of cloud infrastructure services in general (Azure, AWS), with specific experience and knowledge of compute virtualization, automation and DevOps methodologies; environment management (Linux/Unix, Windows, shell scripting, Ansible, Chef, Puppet); and experience with coding and Software Defined Data Center maintenance. DevOps pay scale According to Glassdoor, average salaries for DevOps engineers start from Rs 5.69 lakh per annum, whereas PayScale suggests that a Development Operations (DevOps) Engineer earns an average salary of Rs 666,232 per year. Both figures are for an average of two years of experience. PayScale also notes that the more DevOps skills a candidate has, the higher the pay. "Most people with this job move on to other positions after 10 years in this field", notes PayScale.
Like the rise in the era of internet, cloud, AI or IoT, the tech world is currently ranting about the recent entrant into the list- the DevOps. Well, not a completely new finding, DevOps has been in existence in other forms and nature, and has seen an increased adoption in the industries across the globe. […]
["AI Highlights"]
["devop"]
Srishti Deoras
2017-04-28T10:41:42
2017
831
["AWS", "AI", "Azure", "MongoDB", "R", "docker", "RAG", "Python", "Aim", "SQL", "devop"]
["AI", "Aim", "RAG", "AWS", "Azure", "docker", "MongoDB", "Python", "R", "SQL"]
https://analyticsindiamag.com/ai-highlights/devops-became-much-sought-skill-firms-looking/
3
10
1
true
false
true
52,228
Why Intel Acquired Habana
Intel Corporation this week announced that it has acquired Habana Labs for approximately $2 billion. Habana is an Israel-based company that develops programmable deep learning accelerators for the data centre. The acquisition is aimed at strengthening Intel's artificial intelligence portfolio and accelerating its efforts in the AI silicon market, which Intel expects to be greater than $25 billion by 2024. "This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data centre," said Navin Shenoy, executive VP at Intel, in a press release. In July, Habana announced its Gaudi AI training processor, which the Tel Aviv startup promised was capable of beating GPU-based systems by 4x. The company had been rumoured to be an Intel acquisition target for a while, as Intel looks to get out in front of the AI market. In 2019 alone, Intel is expected to generate in excess of $3.5 billion in "AI-driven revenue," a 20% increase over the year prior. Why Habana? Habana's Gaudi AI training processor targets large-node training systems, which are expected to deliver up to a 4x increase in throughput versus systems built with an equivalent number of GPUs. Habana's Goya AI inference processor, especially, has demonstrated excellent inference performance and real-time latency in a highly competitive power envelope. The Goya platform architecture has been designed to target deep learning inference workloads. It uses a cluster of eight fully programmable Tensor Processing Core (TPC) cores. The Goya HL-100 PCIe card provides throughput of 15,453 images/second for a ResNet-50 workload at a latency of ~1 msec, significantly less than the industry standard of 7 msec.
Goya's architecture is also an ideal match for the BERT workload, as both the GEMM (GEneral Matrix to Matrix Multiplication) engine and the Tensor Processing Cores (TPCs) are fully utilised, supporting low batch sizes at high throughput. Gaudi, meanwhile, is designed for versatile and efficient system scale-out and scale-up with integrated on-chip RoCE RDMA, enabling high-performance interconnectivity. A single Gaudi card dissipating 140 watts delivers 1,650 images/second training throughput. Gaudi for training and Goya for inference offer an easy-to-program development environment to help customers deploy their solutions for AI workloads. Intel believes Habana will propel its AI offerings for data centres with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads. "We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we're thrilled to be officially joining the team," said David Dahan, CEO of Habana. Intel also announced that Habana Labs chairman Avigdor Willenz will continue to serve as a senior adviser to the business unit as well as to Intel Corporation after the purchase. The path to AI glory comes at a cost With a more than $300 billion market opportunity for silicon-driven solutions, Intel has made some bold decisions by acquiring startups at hefty prices. The last half of this decade has witnessed a steady increase in the number of AI chipmakers. Intel, a pioneer in the semiconductor industry, embarked on its AI-focused journey by acquiring promising startups while also developing cutting-edge technology. Earlier this year, another AI chip-making giant, NVIDIA, made a gigantic purchase with its $7B acquisition of Mellanox.
With Mellanox, NVIDIA aims to optimise datacenter-scale workloads across the entire computing, networking and storage stack to achieve higher performance, greater utilisation and lower operating costs for customers. Mellanox, too, like Habana, has its roots in Israel. There has also been a tremendous rise in small and medium AI chipmakers around the globe. Today there are nearly a hundred chipmakers who make custom hardware for specific AI applications. These mega acquisitions by rivals like NVIDIA and Intel, combined with the sporadic rise of startups, clearly indicate the advent of a 21st-century gold rush in the form of silicon. Here is a list of interesting acquisitions by Intel over the years: Acquisition of Nervana Intel acquired the startup Nervana for more than $350 million back in 2016. Founded in 2014 and headquartered in San Diego, California, Nervana had a fully optimised software and hardware stack for deep learning. Nervana's expertise in accelerating deep learning algorithms caught Intel's attention, and Intel has, ever since, been expanding its capabilities by optimising the Intel Math Kernel Library and integrating it into industry-standard frameworks. Acquisition of Movidius Back in September 2016, Intel acquired the Irish chip company Movidius for $400M. Movidius specialises in designing low-power processor chips for computer vision. Products like the Myriad 2 by Movidius gave Intel the capabilities to build a vision processing unit (VPU) that provides low-power, high-performance vision processing solutions across various target applications including embedded deep neural networks, pose estimation, 3D depth-sensing, visual inertial odometry and gesture/eye tracking. To supplement the efforts being made by Movidius, Intel also bought Vertex.AI, a Seattle-based startup that built deep learning compilation tools like PlaidML, an open-source tensor compiler.
Combined with Intel's nGraph graph compiler, it gives popular deep learning frameworks performance portability across a wide range of CPU, GPU and other accelerator processor architectures. Acquisition of Mobileye for $15.3B This is not the first time Intel has forayed into Israel's AI scene. It all began back in 2017 when Intel bought Mobileye for a whopping $15.3B. Mobileye is a global leader in the development of computer vision and machine learning, data analysis, localisation and mapping for advanced driver assistance systems and autonomous driving. With this acquisition, Intel aimed to land firmly in the autonomous vehicles segment. Intel estimates the vehicle systems, data and services market opportunity to be up to $70 billion by 2030. "With Mobileye, Intel emerges as a leader in creating the technology foundation that the automotive industry needs for an autonomous future," said Intel CEO Brian Krzanich in a press release back in 2017.
Intel Corporation this week announced that it has acquired Habana Labs for approximately $2 billion. Habana is an Israel-based company that develops programmable deep learning accelerators for the data centre. This acquisition is aimed at strengthening Intel’s artificial intelligence portfolio and accelerate its efforts in the AI silicon market, which Intel expects to be greater […]
["Global Tech"]
["AI Chips", "Intel"]
Ram Sagar
2019-12-18T13:53:13
2019
1,001
["machine learning", "artificial intelligence", "AI", "neural network", "ML", "computer vision", "RAG", "AI Chips", "Ray", "Aim", "deep learning", "Intel"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "neural network", "computer vision", "Aim", "Ray", "RAG"]
https://analyticsindiamag.com/global-tech/why-intel-acquired-habana/
4
10
4
false
false
false
10,067,425
5 methods that will not let your neural network model overfit
Everyone in the data science field is striving for a modelling procedure that can predict accurately. Neural networks have the potential to become highly accurate, but there are various problems developers must face to get optimal results. Overfitting is one problem that can occur with neural networks because of the model or the data we are working with. In this article, we discuss overfitting and the methods used to prevent a neural network from overfitting. The major points to be discussed in the article are listed below. Table of contents: About overfitting; Methods to prevent overfitting of a neural network; Method 1: Data augmentation; Method 2: Simplifying the neural network; Method 3: Weight regularization; Method 4: Dropouts; Method 5: Early stopping. Let's start with understanding overfitting. About overfitting In many modelling examples, we find that the model reports a high level of accuracy but produces wrong outputs at prediction time; these are the situations where we say the model is overfitted. While modelling data we mainly focus on estimating the distribution and probability underlying the data. This estimation helps us create a model that can predict on similar unseen values. During training, a model can encounter a lot of noise in the training data, and this can make the model wrong because it has also tried to model that noise. Overfitting occurs when the model tries to learn every detail and noise of the data, to the extent that the model starts giving wrong predictions; in other words, the learning hurts the performance of the model on new data.
The above image is a representation of the overfitting problem, in which the red dots are the data points, the green line represents the true relationship in the data, and the blue line represents the learning of a model that is overfitted. Generally, we find this problem with non-linear models, and most neural networks are non-linear and show the problem of overfitting. Here the non-linearity of the models means that they are flexible and can expand according to the data, which sometimes makes the model overfitted. In this article, we will look at the steps we should take to prevent the overfitting of a neural network.

Methods to prevent overfitting of a neural network

In this section, we will take a look at some of the simple but major steps that a basic modelling procedure requires to prevent the overfitting of a neural network. We will start from the data side and move to the training side.

Method 1: Data augmentation

In one of our articles, we have discussed that acquiring more data is a way to improve the accuracy of models. It is simple to understand that more data gives more detail about the task the model needs to perform. Data augmentation can be considered a way to enlarge our datasets. As a simple example, while working with a small image dataset we can increase the count of images by adding flipped, rotated, and scaled versions of the images to the data. This increases the size of the data, and using such techniques we can enhance the accuracy of the model while saving it from overfitting. This step is a general one that can be used with every type of modelling, whether a neural network or classical models like random forest and support vector machine. There are various methods of data augmentation for classification data, like SMOTE and oversampling, and one of our articles gives an idea of data augmentation with image data.
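The flip-and-rotate idea above can be sketched with plain NumPy. This is a minimal illustration, not the article's own code; the `augment` helper and the 28×28 batch shape are assumptions chosen for the example:

```python
import numpy as np

def augment(images):
    """Return the original images plus flipped and rotated copies.

    `images` is assumed to be an array of shape (n, height, width)
    with square images; the function triples the dataset size.
    """
    flipped = np.flip(images, axis=2)              # horizontal flip
    rotated = np.rot90(images, k=1, axes=(1, 2))   # 90-degree rotation
    return np.concatenate([images, flipped, rotated], axis=0)

batch = np.random.rand(10, 28, 28)   # a toy batch of 10 "images"
augmented = augment(batch)
print(augmented.shape)               # (30, 28, 28)
```

Frameworks such as Keras offer richer, on-the-fly versions of the same idea, but the principle is identical: each transformed copy is a new training example.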
Method 2: Simplifying the neural network

This may seem like a step backwards in solving our problem, but it is one of the basic and easy steps to prevent overfitting. This step consists of two methods: one is to remove complex layers, and the second is to reduce the number of neurons in the layers. In general modelling, we find that using complex models with simple data can increase the problem of overfitting, while simple models can perform much better. Before reducing the complexity of the network, we are required to calculate the input and output of the layers. It is always suggested to use simple networks instead of adding complexity to the network. If the network is overfitting, then we should try to make it simpler.

Method 3: Weight regularization

Weight regularization is a step that helps in preventing overfitting by reducing the complexity of the models. There are various ways of regularization, like L1 and L2 regularization. These methods mainly work by penalizing the weights of the function, and the resulting smaller weights lead to simpler models. As discussed above, simpler models help in avoiding overfitting. As the name suggests, this step adds a regularization term to the loss function so that the weight matrix gets smaller. The addition makes a cost function that can be defined as follows:

Cost function = Loss + Regularization term

We can differentiate between the methods of regularization by looking at the regularization term. Using L1 regularization we add the regularization term λ Σ|w|; this regularization tries to minimize the absolute value of the weights. Using L2 regularization we add the regularization term λ Σ w²; this regularization tries to minimize the squared magnitude of the weights.
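The cost-function addition above can be made concrete with a few lines of plain Python. This is a minimal sketch, not the article's code; `lam` stands for the regularization strength λ and the weight values are illustrative:

```python
def l1_penalty(weights, lam=0.01):
    # L1 term: lambda * sum of absolute weight values
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam=0.01):
    # L2 term: lambda * sum of squared weight values
    return lam * sum(w * w for w in weights)

weights = [0.5, -1.0, 2.0]
loss = 0.3  # hypothetical data loss

cost_l1 = loss + l1_penalty(weights)  # 0.3 + 0.01 * 3.5  = 0.335
cost_l2 = loss + l2_penalty(weights)  # 0.3 + 0.01 * 5.25 = 0.3525
```

Because the penalty grows with the weights, minimizing the cost pushes the optimizer toward smaller weights and hence a simpler model, which is exactly the effect described above.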
Both of these methods are popular, and the main difference between them is that the L1 method is robust, simple and interpretable, while L2 regularization is capable of learning complex data but is not as robust. So the selection of either method depends on the complexity of the data. In one of our articles, we can find more information about regularization methods.

Method 4: Dropouts

This step helps us in preventing overfitting by reducing the number of active neurons in the network while the network is being trained. We can also call this a regularization technique, but one that works on the neurons rather than on the cost function. The method is simple: it randomly drops neurons from the network during each training epoch. We can also think of this process as making the network simpler and different at each training step, because it reduces the complexity of the network and effectively prepares a new network each time. The net effect of applying dropout layers in the network is reduced overfitting. The below image can be considered a representation of the working of this step.

The above image represents a model with 2 hidden layers whose complexity is reduced by removing some of the neurons. We can apply a dropout in a TensorFlow network using the following layer:

tf.keras.layers.Dropout(rate, noise_shape=None, seed=None, **kwargs)

Here we are required to set the rate as a numeric value, and this layer will automatically drop neurons at each step during training.

Method 5: Early stopping

As the name suggests, this step is a method to stop the training of the network at an earlier stage than the final one. We can compare it with the cross-validation technique, because it also uses a portion of the training data as validation data so that the performance of the model can be measured against this validation data.
As the performance of the model increases to a peak point, training can be stopped. This step works while we train the model. As the model learns during training, we measure its performance on unseen data, and we keep the training running up to the point where the model starts failing on the validation or unseen data. If the performance on this validation set is decreasing, or remains the same for a certain number of iterations, then the training is stopped. The above image is a representation of the learning graph of a network where early stopping is applied. We can see that as the validation errors start increasing, the early stopping point is decided, and we can stop training the network at this point. For networks made using TensorFlow, we are required to set callbacks in the fit function. The callback can be defined using the following code:

callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)

After setting the callback, we can pass it into training using the following code:

history = model.fit(data, labels, validation_data=(val_data, val_labels), callbacks=[callback], verbose=0)

In the history object, all the records get saved, and we can check how many iterations ran by just checking the length of the history object.

Final words

In this article, we have discussed the overfitting problem of neural networks, a general problem that can happen because of noisy data and non-linear models, and the steps that can be utilized to prevent our neural networks from overfitting.
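The patience behaviour of the early-stopping callback can be mimicked in plain Python. This is a minimal sketch of the stopping rule only, not of Keras internals; the function name and loss values are illustrative:

```python
def early_stopping_point(val_losses, patience=2):
    """Return the epoch index at which training would stop.

    Training stops once validation loss has failed to improve for
    `patience` consecutive epochs (min_delta of 0, as in the callback
    defaults shown above); otherwise it runs to the last epoch.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:       # improvement: reset the patience counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch  # stop training here
    return len(val_losses) - 1

losses = [0.9, 0.7, 0.6, 0.65, 0.66, 0.7, 0.8]
print(early_stopping_point(losses))  # stops at epoch 4
```

The loss bottoms out at epoch 2 and then fails to improve for two consecutive epochs, so training halts at epoch 4 rather than continuing while validation error climbs.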
Through this post we will discuss about overfitting and methods to use to prevent the overfitting of a neural network.
["AI Trends"]
["Deep Learning", "Machine Learning", "Neural Networks", "overfitting", "Python"]
Yugesh Verma
2022-05-19T11:00:00
2022
1,508
["data science", "Go", "data augmentation", "TPU", "Keras", "ELT", "AI", "neural network", "R", "Machine Learning", "Python", "Deep Learning", "TensorFlow", "overfitting", "Neural Networks"]
["AI", "neural network", "data science", "TensorFlow", "Keras", "TPU", "R", "Go", "ELT", "data augmentation"]
https://analyticsindiamag.com/ai-trends/5-methods-that-will-not-let-your-neural-network-model-overfit/
4
10
0
true
true
false
10,045,264
Top Five Data Labelling Companies in India
If you have labelled data in machine learning, it means your data is marked up or annotated to identify the target, which is the result you need your machine learning model to predict. Data labelling is a technique that incorporates data tagging, data annotation, data classification, data moderation, data transcription, or processing. Data annotation usually refers to the operation of labelling data, and data annotation and data labelling are often used interchangeably. Labelled data shows the features of the data – properties, characteristics, or classifications – that can be examined for patterns that improve the model's prediction accuracy. For illustration, in automotive image processing for self-driving cars, a data labeller can extract frame-by-frame samples from video and apply data labelling tools to mark the position of street signs, pedestrians, or other vehicles.

The Rise in Demand for Data Labelling Services in Multiple Domains

Several years ago, computer vision systems could not even understand hand-written digits. Now, AI-powered machines can assist self-driving vehicles, discover malignant tumours in medical imaging, and evaluate legal contracts. Along with superior algorithms and robust compute devices, labelled datasets help fuel AI's development. The global data labelling market is expanding at a 29.6% CAGR and is expected to touch USD 5 billion by the end of 2026. Consequently, many companies are outsourcing data labelling services to build robust machine-learning models. AI depends extensively on data and requires correctly annotated, classified, and anonymized data so that machine learning algorithms can learn and be trained for better performance. With the rapid development of machine learning and artificial intelligence, there is a surging demand for high-quality data labelling services.
Top Data Labelling Companies

India has emerged as one of the most trustworthy outsourcing destinations for data labelling, for obvious reasons: globalization, demographic strength, and low-cost labour, to mention a few. Many data labelling firms have emerged to address this growing demand for data labelling services. Let us discuss the top five data labelling companies in India that are booming in the global market in 2021:

Zuru: Zuru is an AI-assisted data labelling company founded by Sharath in 2019, headquartered in Bangalore, India. Zuru is a data annotation start-up that aims to help AI businesses by providing low-cost, high-quality training data at scale. Zuru offers end-to-end scalable annotation solutions with swift turnaround times and stellar accuracy. They offer image, text, and voice annotation as a service.

Cogito Tech: Cogito Tech was founded in 2011 by Rohan Agarwal, with its control centre in Delhi NCR, India. Cogito offers data annotation and labelling services via its captive workforce and a platform-agnostic strategy in several industries such as medical and healthcare, automotive, agriculture, and defence.

iMerit: iMerit is a global data labelling company based in West Bengal, India, founded by Ragha Basu. It extends end-to-end, high-quality data annotation – across computer vision, natural language processing and content services – that powers machine learning and artificial intelligence applications for its clients. They also offer services in dataset creation, image tagging, data verification, data enhancement, data cleaning, etc.

Wisepl: Wisepl was founded in 2020 by Fayis Paloli and is headquartered in Kerala, India. Wisepl provides annotation services for ML and AI model development. It offers precisely annotated data in image, video, and text, applying different annotation techniques like keypoint annotation, polygon annotation, cuboid, polyline annotation, semantic segmentation, bounding box, landmark annotation, etc.
Tika Data: Tika Data was founded in 2017 and is based out of Bangalore, India. Tika Data offers non-crowdsourced data acquisition and image labelling services for applications in natural language processing, computer vision, and the Internet of Things. Tika Data provides a modern approach to data annotation services that feeds the AI age, catering to the increased application of AI in day-to-day living.

When Is the Right Time to Consult a Data Labelling Company?

Data labelling involves labelling training datasets with different types of data, such as audio, text, and images, to train ML models like chatbots, self-driving vehicles, and more. Suppose your company works on numerous use cases involving natural language processing, natural language understanding, computer vision, or speech recognition. In that case, your company will need data labelling to accurately categorize and annotate the data for each particular use case. Data labelling is an imperative component of the AI and machine learning industry, and both have added tremendous value to the digital ecosystem. Given the constant growth of the AI industry, organizations providing data labelling services will stick around for decades.

Conclusion

High-quality annotated data is the prime requirement for an AI engine to operate smoothly. The more precise the annotation is, the better the algorithm will perform. The demand for a secure and affordable method of image annotation is paramount now. Data labelling companies offer a way to boost your competitive advantage and empower you to grow seamlessly.
If you have labelled data in machine learning, it means your data is marked up or annotated to determine the target, which is the result you need your machine learning model to predict. Data Labelling is a technique that incorporates data tagging, data annotation, data classification, data moderation, data transcription, or processing. Data annotation usually […]
["AI Trends"]
[]
Mrinal Walia
2021-08-04T12:30:00
2021
801
["Go", "machine learning", "artificial intelligence", "AI", "chatbots", "ML", "computer vision", "RAG", "Aim", "R"]
["AI", "artificial intelligence", "machine learning", "ML", "computer vision", "Aim", "RAG", "chatbots", "R", "Go"]
https://analyticsindiamag.com/ai-trends/top-five-data-labelling-companies-in-india/
3
10
3
false
false
false
16,876
India Facing Challenges But Yet Among Top 10 Analytics Markets Globally’, Says Dunnhumby CEO Guillaume Bacuvier
Being one of the largest customer science companies, Dunnhumby leverages the power of customer data and science to drive sales and margin for its clients. AIM got an exclusive interview with Global CEO Guillaume Bacuvier and spoke to him about the analytics industry in India, along with the most recent technological developments in the space.

The big data analytics sector in India is expected to witness eight-fold growth, reaching $16 billion by 2025 from the current $2 billion. Bacuvier believes the analytics market, which was primarily driven by a handful of analytics firms in India, has now grown to the extent that it is already among the top 10 big data analytics markets in the world. Moreover, the country is seeing a flurry of startups in the space. The number of companies setting up their own analytics wings has also gone up remarkably, the simple reason being that Indian executives are upbeat about leveraging analytics for decision making – not just for understanding what happened in the past but also for forecasting the future. If numbers are anything to go by, reports suggest that over the next two years, many more organisations will invest $10 million or more in data and analytics resources.

The Challenges And Developments Ahead

Like any other industry, analytics is not a bed of roses, and its challenges often prove to be the game changers. As Bacuvier puts it, the challenges include:

Finding and attracting the right talent (data scientists, statisticians, etc.)
Ensuring completeness and veracity of the data being captured
Storing the data appropriately in data lakes and retrieving it quickly
Security, privacy and compliance issues
The right tools, products and expertise to make business sense of the enormous amounts of data

But despite the challenges the industry faces, it is a fast-growing, ever-evolving space. And as it evolves, it also sees some remarkable technological developments.
"Technological developments are taking place like 'you blink and you miss'. Every other day we see some new advancement, an improved version, an enhanced upgrade in various techs and gadgets," said Bacuvier. Some recent technological developments in the analytics space:

Advancements in machine learning and artificial intelligence (AI) – opening new doors for businesses to make data-informed decisions
Self-service big-data applications – the speed to offer real-time solutions and implementation
Cloud solutions – scalable and sustainable for managing huge volumes of data
Data agility – agile development and application platforms supporting a comprehensive assortment of analytic models
Digital media measurement capability – millennials want the personal touch, transparency and collaboration that one-on-one communication allows, and businesses need to take advantage of this huge audience base on social messaging platforms. Brands need the right insight to optimise ad spend and to understand what works best for customers.

Bacuvier, however, warns that organisations that do not move towards being data driven run a huge risk of going out of business soon.

A Quick Suggestion

He also has a piece of advice for younger startups that are yet to find a sweet spot for themselves. The challenge with analytics, in his view, is how to make the insights and reports produced by data science actionable and yield effective business results. "If the results of your analytics can only be understood by other data science practitioners, they're essentially useless and carry limited business value," he suggested.

On Tuesday, AIM published a detailed Q&A with Guillaume Bacuvier, speaking about dunnhumby's vision and roadmap and its journey in the analytics field so far. You can read the report here.
Being one of the largest customer science companies, Dunnhumby, leverages the power of customer data and science to drive sales and margin for its clients. AIM got an exclusive interview with Global CEO Guillaume Bacuvier and spoke to him about the analytics industry in India along with the most recent technological developments in the space. […]
["AI Trends"]
["AI (Artificial Intelligence)", "Data Analytics", "dunnhumby", "Machine Learning"]
Priya Singh
2017-08-09T07:24:05
2017
602
["data science", "Go", "artificial intelligence", "machine learning", "AI", "Machine Learning", "dunnhumby", "Scala", "RAG", "Aim", "analytics", "Data Analytics", "R", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "Aim", "RAG", "R", "Go", "Scala"]
https://analyticsindiamag.com/ai-trends/india-among-top-10-analytics-markets-globally-still-lacks-talent-says-dunnhumby-ceo-guillaume-bacuvier/
2
10
4
false
true
false
10,007,443
Why NVIDIA Acquired ARM
"An astounding 180 billion computers have been built with Arm – 22 billion last year alone. Arm has become the most popular CPU in the world." – Jensen Huang, Founder, NVIDIA

NVIDIA today announced that it would acquire Arm Limited from SBG and the SoftBank Vision Fund in a transaction valued at $40 billion. Under the terms of the transaction, NVIDIA will pay SoftBank $21.5 billion in common stock and $12 billion in cash, including $2 billion payable at signing. "AI is the most powerful technology force of our time and has launched a new wave of computing," said Jensen Huang, founder and CEO of NVIDIA. Talking about the future of NVIDIA with Arm, Huang explained that trillions of computers running AI would create a new internet-of-things that would be larger than today's internet-of-people. "Our combination will create a company fabulously positioned for the age of AI," added Huang. With Arm on its side, NVIDIA is well-positioned to steer innovation for the next generation of artificial intelligence and expand into large, high-growth markets.

Highlights

Unites NVIDIA's leadership in AI with Arm's vast computing ecosystem
NVIDIA will build a world-class AI research and education centre, and build an Arm/NVIDIA-powered AI supercomputer
Arm's open-licensing model stays intact, and Arm's IP licensing portfolio will expand with NVIDIA technology
Arm's IP will remain registered in the UK
The $40 billion transaction will be a combination of NVIDIA shares and cash
NVIDIA will issue $1.5 billion in equity to Arm employees

Why Is NVIDIA Making A $40 Billion Bet?

NVIDIA's GPUs have been powering the gaming industry for a long time. These GPUs also happened to be the driving force behind the AI revolution that started a few years ago. Since then, NVIDIA has been investing heavily in making custom hardware for deep learning applications.
Their state-of-the-art integrated solutions have also made them a hot commodity for data centres – the data centres that fuel the whole cloud computing industry. Arm, meanwhile, the poster child of the UK's innovation, has been building the chips that accelerated smartphone innovation. Today more than 95% of smartphones in the world house Arm's architecture. Arm's common architecture supports diverse AI applications – from highly capable cores and interconnects for mega compute power in the data centre, to machine learning processors for AI algorithms in edge endpoint devices such as wearables and sensors. The future of AI is in the trillions of devices around us, including voice assistants and consumer robots, that deliver more personalised content at the endpoint. As AI, 5G, and the IoT continue to mature, the convergence of these three technologies will transform industries like self-driving, AR/VR, healthcare and more. NVIDIA has been eyeing the self-driving, 5G and edge computing industries for quite some time. And with Arm, NVIDIA is officially a key player in the global markets – especially in Asia, where Samsung and Qualcomm have been making moves to dominate self-driving and 5G services. The technologies mentioned above require a new kind of integrated hardware and software solution, and NVIDIA is optimistic that Arm's technical prowess and customer reach will establish it as an undeniable force in the most lucrative industry of this century.

The Flip Side Of This Deal

Arm has always been at the centre of smartphone innovation, with more than 95 per cent of the world's smartphones built on its IP. Arm has shipped 180 billion chips to date through its licensees. Its open licensing policy enabled global customer neutrality, and most of the UK chipmaker's success can be attributed to that policy. Arm's customers also include competitors of NVIDIA. So, does this deal dent the sentiments of those customers? Yes, say the critics.
Earlier this year, Apple announced its decision to move away from Intel; the Apple A13 Bionic is a 64-bit Arm-based system on a chip. So, along with Samsung and Qualcomm, Intel too wouldn't be pleased with this deal. Though NVIDIA assured that Arm would continue to operate its open-licensing model and that Arm's intellectual property would remain registered in the UK, Arm's co-founder Hermann Hauser didn't look pleased. According to reports, Hauser went on record saying that this deal is not in the public interest, warning it will result in UK job losses and a lack of competition. "I think this is terrible," said Hauser of the deal. He warned that Arm, as a US company, would inevitably have to abide by US regulations, which in turn would put the US in the driving seat of managing Arm's exports anywhere in the world, including China, a major market. Hauser even wrote an open letter to Prime Minister Boris Johnson urging him to block the deal and help take Arm public on the London Stock Exchange. Amid concerns over the UK's economic sovereignty and Arm's endangered customer neutrality, NVIDIA CEO Huang's message hinted that Arm would continue to maintain its ethos as it ushers in a new wave of AI.
“An astounding 180 billion computers have been built with Arm — 22 billion last year alone. Arm has become the most popular CPU in the world.” Jensen Huang, Founder, NVIDIA   NVIDIA today announced that it would acquire Arm Limited from SBG and the SoftBank Vision Fund in a transaction valued at $40 billion. According to […]
["Global Tech"]
["arm", "chipset", "edge computing", "Mergers and Acquisitions", "NVIDIA", "Softbank"]
Ram Sagar
2020-09-15T18:00:23
2020
836
["Go", "artificial intelligence", "machine learning", "AI", "cloud computing", "innovation", "arm", "deep learning", "ViT", "edge computing", "chipset", "NVIDIA", "Mergers and Acquisitions", "R", "Softbank"]
["AI", "artificial intelligence", "machine learning", "deep learning", "cloud computing", "edge computing", "R", "Go", "ViT", "innovation"]
https://analyticsindiamag.com/global-tech/nvidia-buys-arm-40-billion-soft-bank/
4
10
2
true
false
false
68,829
Employees In India Have Acquired High-Level Cybersecurity Awareness During Lockdown: Trend Micro
Global cybersecurity player Trend Micro released survey results that show how remote workers address cybersecurity. The survey reveals that users take security training seriously but may still engage in risky behaviour. Nearly three quarters (72%) of remote workers say they are more conscious of their organisation's cybersecurity policies since lockdown began, but many are breaking the rules anyway due to limited understanding or resource constraints. Trend Micro's Head in the Clouds study is distilled from interviews with 13,200 remote workers across 27 countries on their attitudes towards corporate cybersecurity and IT policies. The study looks into the psychology of people's behaviour in terms of cybersecurity, including their attitudes towards risk. It presents several common information security "personas" with the aim of helping organisations tailor their cybersecurity strategy in the right way for the right employee. The survey revealed that there has never been a better time for companies to take advantage of heightened employee cybersecurity awareness, and that the approach businesses take to training is critical to ensure secure practices are followed. In India, the results indicate a high level of security awareness, with 84% of respondents claiming they take instructions from their IT team seriously, and 83% agreeing that cybersecurity within their organisation is partly their responsibility. Additionally, 67% acknowledge that using non-work applications on a corporate device is a security risk. However, just because most people understand the risks does not mean they stick to the rules. Dr Linda K. Kaye, Cyberpsychology Academic at Edge Hill University, explains: "There are a great number of individual differences across the workforce.
This can include individual employees' values, accountability within their organisation, as well as aspects of their personality, all of which are important factors that drive people's behaviours. To develop more effective cybersecurity training and practices, more attention should be paid to these factors. This, in turn, can help organisations adopt more tailored or bespoke cybersecurity training with their employees, which may be more effective."

Other Findings:

44% of employees admit to using a non-work application on a corporate device, and 46% of them have actually uploaded corporate data to that application.
83% of respondents confess to using their work laptop for personal browsing, and only 45% of them fully restrict the sites they visit.
42% of respondents say they often or always access corporate data from a personal device – almost certainly breaking corporate security policy.
14% of respondents admit to watching/accessing porn on their work laptop, and 14% access the dark web.

Productivity still wins out over protection for many users: 52% of respondents agree that they do not give much thought to whether the apps they use are sanctioned by IT or not, as they just want the job done. Additionally, 44% think they can get away with using a non-work application, as the solutions provided by their company are "nonsense."

Nilesh Jain, Vice President, Southeast Asia and India, Trend Micro, said, "It's really heartening to see that so many people take the advice from their corporate IT team seriously, although you have to wonder about the 16% who don't. At the same time, those people also accept their own role in the human firewall of any organisation. The problem area seems to be translating that awareness into concrete behaviour. To reinforce this, organisations need to take into account the diversity across the organisation and tailor training to identify and address these distinct behavioural groups.
The time to do this is now, to take advantage of the new working environment and people’s newfound recognition of the importance of information security.”
Global cybersecurity player Trend Micro released survey results that show how remote workers address cybersecurity. Survey reveals users take security training seriously, but may still engage in risky behaviour Nearly three quarters (72%) of remote workers say they are more conscious of their organisation’s cybersecurity policies since lockdown began, but many are breaking the rules […]
["AI News"]
["Cybersecurity", "Cybersecurity India"]
Vishal Chawla
2020-07-02T12:38:55
2020
594
["programming_languages:R", "AI", "Aim", "Cybersecurity India", "ViT", "GAN", "Cybersecurity", "R"]
["AI", "Aim", "R", "GAN", "ViT", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/remote-workers-in-india-have-acquired-high-level-cybersecurity-awareness-among-during-lockdown-finds-survey/
3
6
1
false
false
true
57,612
7 Challenges Faced By Data Scientists In Data Processing In 2020
Each day we generate 2.5 quintillion bytes of data. All the data being generated by us while using the internet is raw and cannot be directly used by an organization to make data-driven decisions. We wanted to understand how data scientists have evolved in 2020 and the type of tools they are now using to tackle the challenges of these new and different forms of data. Therefore, we asked AIM Expert Network (AEN) members to share their insights on the challenges they face while processing different types of raw data and how they convert that data into valuable assets for their organization.

1. Getting data from multiple sources

While building the anomaly detection system for our app, we faced an issue with a huge amount of data coming from different sources/databases. The biggest challenge we faced was to take all forms of data being generated by the app and bring them into one single format to centralize observation. Real-time querying of the production database was also not possible, so the action required here was to get this unstructured data together in one database. For this, we used Google's BigQuery to store the data. The refresh cycle ran twice daily to pull data from our database and files and dump it into the BigQuery database. After this, we worked on processing the data further. – Netali Agrawal, Technology Lead – Infosys

2. Unlocking value out of unstructured text data

A major chunk of the data stored by enterprises around the world is unstructured text data. Traditionally, an enormous amount of time, effort and resources has been spent by analysts around the world in data processing, transforming unstructured text data into a standardized format to find insights in it. Overall, results have varied due to a lack of the right technology, and unfortunately mostly low-value insights have been derived from the data, with the benefits outweighed by the cost.
Solution: Ontologies and Graph Databases Recently, enterprises have realized the impact of using ontologies; their increased adoption has reduced the data-processing burden on data engineers. Ontologies help define a common vocabulary and support smooth knowledge management. Also, with the increased maturity and awareness of graph databases (such as Neo4j, AWS Neptune, etc.), which are used for knowledge management and finding connections in text data, organizations are able to unlock value out of unstructured data. Ranjan Relan, Data Strategy and Tech Consultant – ZS Associates 3. Setting up the infrastructure and velocity of data The primary challenge in handling modern data requirements (especially streaming) is setting up the infrastructure, owing to the high volume and velocity of data. This can be handled efficiently by using cloud data-streaming services such as those on Microsoft Azure. Here, two PaaS services stand out: Azure Stream Analytics and Azure Databricks. The former is a first-party streaming service that gels well with messaging services like Azure IoT Hub or Event Hub. The article ‘An Introduction to Azure IoT with Machine Learning’ elucidates more on this. The latter, Azure Databricks, is a unified analytics platform that can implement the Lambda Architecture. Details on this can be found here: Lambda Architecture with Azure Databricks Prasad Kulkarni, Senior Software Engineer – Nuance Communications 4. Adapting to different tools to collect unstructured data The biggest challenge now and going forward in data processing is a change in the type of data that is coming in. Previously, most data was structured; now a lot of it arrives in unstructured formats from numerous sources such as social media platforms, emails and shared cloud storage. Analyzing, processing and storing this data has become a challenge that organizations are grappling with even today.
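The graph-based approach to unstructured text described in the second challenge can be illustrated with a toy in-memory version. The entity vocabulary and sentences below are invented for illustration; a real system would use a graph database such as Neo4j or Neptune with proper NLP-based entity extraction:

```python
# Toy sketch of "finding connections in text data" with a graph: extract known
# entities from raw sentences and record co-occurrence edges, so that
# connections can be queried later. Vocabulary and sentences are illustrative.

from collections import defaultdict
from itertools import combinations

ENTITIES = {"acme", "globex", "lawsuit", "merger"}     # hypothetical vocabulary

def build_graph(sentences):
    graph = defaultdict(set)
    for s in sentences:
        found = [w for w in s.lower().split() if w in ENTITIES]
        for a, b in combinations(set(found), 2):       # one edge per entity pair
            graph[a].add(b)
            graph[b].add(a)
    return graph

g = build_graph(["Acme confirms merger with Globex",
                 "Globex faces lawsuit over data use"])
print(sorted(g["globex"]))    # entities connected to "globex"
```

In a real graph database the same idea is expressed as nodes and relationships, and queries traverse those edges instead of scanning raw text.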
The first thing, in my opinion, that any organization looking to become more data-driven needs to do is to revisit its data strategy, including data collection mechanisms, data entry points and the tools used for data processing and integration. Data teams need to appreciate that no one tool can solve all of these challenges, so they should be open to adopting new tools and processes for data management. The fundamentals of data management will remain the same: having a robust data model (virtual, federated or physical), having a way to build trust in the data (with good, robust data quality processes in place) and, most importantly, having a way for the larger group to know what data is available and what it means for the business (a metadata strategy). As data is useful and valuable only when it is available and accessible to all, we should build our models in such a way that savvy users can take them and add their own data using front-end tools like Tableau and Qlik. If there are too many sources of data, tools like Alteryx can enable self-service ETL for users in a controlled environment. With the advent of the cloud, the storage problem has been solved to a large extent, as data architects can create an “elastic” warehouse or lake in the cloud without worrying about storage space limitations. Amit Agarwal, Senior Manager (IT) – Nvidia Graphics 5. Building a robust strategy before collecting data Life doesn’t start when a child is born; it starts when life is formed inside the mother’s womb. Similarly, a data analysis project doesn’t start when you get the data; it starts when you start collecting the data. Only once we understand this will we know what analysis is and is not possible with our data.
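The point above about building trust in the data through robust data-quality processes can be made concrete with a minimal quality profile run on each incoming batch. The field names and rules here are illustrative assumptions, not a prescribed standard:

```python
# Minimal data-quality sketch: profile a batch of records for missing required
# fields and duplicate keys before it feeds any downstream model or report.
# "id" and "email" as required fields are illustrative assumptions.

def profile(records, required=("id", "email")):
    issues = {"missing": 0, "duplicates": 0}
    seen = set()
    for r in records:
        if any(r.get(f) in (None, "") for f in required):
            issues["missing"] += 1        # a required field is absent or empty
        key = r.get("id")
        if key in seen:
            issues["duplicates"] += 1     # same id seen earlier in the batch
        seen.add(key)
    return issues

batch = [{"id": 1, "email": "a@x.com"},
         {"id": 1, "email": "b@x.com"},   # duplicate id
         {"id": 2, "email": ""}]          # missing email
print(profile(batch))                     # {'missing': 1, 'duplicates': 1}
```

Publishing such a profile alongside each load is one lightweight way to give the larger group visibility into how trustworthy a dataset currently is.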
Suppose we have some known goals of analysis and the data is to be collected through a survey; we should design the questionnaire so that everything we need is captured precisely. This will reduce disappointments during analysis. And since we will already know which attributes have not been collected, we won’t waste time attempting analyses they cannot support. This will ultimately save time and regrets. Prakash B Pimpale, Principal Technical Officer – CDAC Mumbai 6. Understanding the quality based on the semantics of data Enterprises see huge opportunities in big data analytics by integrating data from both internal and external sources, including structured, semi-structured (weblogs) and unstructured (social media feeds) data. New use cases around sentiment analysis and customer feedback arise from the processing of unstructured data from call logs or chatbots. This can provide vital insights into why a customer is not happy with a service or product, and can be used to improve service quality and enhance customer satisfaction. Data security and privacy are key in countries with data protection regulations like the GDPR in the EU. Enterprises must process data keeping design principles like secure-by-design in mind. Principles of data anonymization and minimization are key here to address security and privacy concerns. Business and technical metadata across diverse data domains is crucial for enterprises managing large, complex data sets to capture the semantics of the data being collected, processed and analyzed. The quality of insights depends on understanding the semantics of the data. Saumya Chaki, Data Platforms Solutioning Leader – IBM 7. Building a strong Data Foundation We all work in different domains with a variety of sources, and the biggest challenge is to have a holistic, centralized view.
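The anonymization and minimization principles mentioned in the sixth challenge can be sketched as follows: replace direct identifiers with a salted-hash pseudonym and keep only the fields the analysis actually needs. The salt value and field names are hypothetical:

```python
# Sketch of data anonymization + minimization: pseudonymise the identifier
# with a salted hash and whitelist only the analytically useful fields.
# SALT and field names are illustrative assumptions, not a production scheme.

import hashlib

SALT = b"rotate-me-regularly"             # hypothetical secret salt
KEEP = {"age_band", "region"}             # minimization: fields we may retain

def anonymise(record):
    token = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    out = {k: v for k, v in record.items() if k in KEEP}
    out["user_token"] = token             # stable pseudonym; no raw PII kept
    return out

row = anonymise({"email": "jane@example.com", "age_band": "30-39",
                 "region": "EU", "phone": "555-0100"})
print(row)   # email and phone are gone; a stable token links records instead
```

Note that salted hashing is pseudonymisation rather than full anonymisation under the GDPR; truly anonymous data additionally requires that individuals cannot be re-identified by other means.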
This needs to be viewed as more than a storage problem: as a complete RDM- and MDM-powered single source of truth. Building such a strong data foundation takes a mammoth effort, but it is worth the benefit in the long run. A broader solution lies in creating an enterprise-wide initiative to form a data strategy, along with specialized teams in areas like Data Governance, Data Quality, RDM, MDM, Data Stewardship, Data Integration and Data Processing. There are plenty of tools available in the market for these different areas, and having SysOps and DataOps teams in place will help in provisioning these services. In a nutshell, challenges in data processing will continue to grow until we have a good, robust data foundation in place. Ravichander Rajendran, Data Analysis & AI Engineering Lead – AstraZeneca This article is presented by AIM Expert Network (AEN), an invite-only thought leadership platform for tech experts. Check your eligibility.
Each day we generate 2.5 quintillion bytes of data. All the data that is being generated by us while using the internet is raw and cannot be used by an organization to make data-driven decisions. We wanted to understand how data scientists have evolved in 2020 and the type of tools they are using now […]
["AI Trends"]
["big data for social good", "Data Governance strategy"]
AIM Media House
2020-02-27T15:00:00
2020
1,402
["machine learning", "AWS", "AI", "chatbots", "sentiment analysis", "big data for social good", "RAG", "Aim", "anomaly detection", "analytics", "Azure", "Data Governance strategy"]
["AI", "machine learning", "analytics", "Aim", "RAG", "chatbots", "anomaly detection", "sentiment analysis", "AWS", "Azure"]
https://analyticsindiamag.com/ai-trends/7-challenges-faced-by-data-scientists-in-data-processing-in-2020/
3
10
4
false
false
true
10,138,655
Google Announces $15 Million Funding to Train the Government Workforce in AI
On 16 October 2024, Google.org, Google’s charitable arm, unveiled a new $15 million funding initiative to support public sector organisations in learning AI to enhance their services. This aligns with the U.S. Department of Energy’s (DOE) push for an AI-ready workforce in the country. At the Google Public Sector Summit in Washington, D.C., Google.org said that the initiative is part of its broader commitment to using technology for social good. The funding is split into two grants. A $10 million grant will help the nonprofit Partnership for Public Service establish the Center for Federal AI in Spring 2025. Since 2019, the Partnership for Public Service has trained 550 senior federal leaders from over 50 agencies in AI skills. The remaining $5 million grant will support InnovateUS, which offers free AI training to public sector workers through self-paced courses, live workshops and programs; over 40,000 learners have been trained so far. This initiative aims to strengthen AI skills in state and local governments nationwide. Beth Simone Noveck, founder of InnovateUS and chief AI strategist for the state of New Jersey, said, “For government to work better and be more accessible to the people it serves, our workers must have the opportunity to take advantage of the latest tools and technologies.” A report from Deloitte University Press notes that AI holds great potential for improving efficiency and lowering labour costs in the public sector. The report indicated that it could save working hours and potentially reduce the time spent on non-essential tasks by 30% within five to seven years. However, it stressed the need for a strategic approach and urged leaders to carefully consider which tasks can be automated. The DOE is leading initiatives to respond to the critical need for professionals skilled in using AI.
It has tried to align this with the 2023 Executive Order from the White House, which aims to use AI to boost scientific discovery, economic growth, and national security. To support this goal, the DOE is partnering with the National Science Foundation (NSF) to create programs that will train 500 new researchers by 2025.
America needs an AI-ready workforce, says the U.S. Department of Energy.
["AI News"]
["AI (Artificial Intelligence)", "ai funding", "AI Workforce", "Fund Raising", "google.org"]
Sanjana Gupta
2024-10-16T20:06:37
2024
352
["Go", "funding", "programming_languages:R", "AI", "Fund Raising", "programming_languages:Go", "AI Workforce", "ai funding", "Aim", "GAN", "R", "google.org", "AI (Artificial Intelligence)"]
["AI", "Aim", "R", "Go", "GAN", "funding", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/google-org-announces-15-million-funding-to-train-the-government-workforce-in-ai/
3
8
0
false
false
false
10,125,219
AI Sparks India’s Research Boom but Threat to Quality Looms
At the IGIC 2024 event in Bengaluru, a research scholar from Singapore highlighted a curious trend: grants were being awarded to studies that merely mentioned AI. Interestingly, India’s AI research scene has seen a dramatic rise lately. In 2023, India published 17,000 AI papers, a significant increase from fewer than 4,000 just a decade ago. This surge pushed India past the UK, which previously held the fourth position. However, the question is: how relevant are these papers? ‘AI’: The Keyword Several researchers have tried to measure differences in research quality. In 2020, Klinger et al. studied the development and variety of topics in AI research and looked at how much private companies influence this research, analysing around 90,000 AI-related papers from arXiv.org. Later, in 2021, Hagendorff and Meding analysed 10,000 machine learning papers from three major conferences (CVPR, NeurIPS and ICML) published between 2015 and 2019. They categorised the papers by searching for academic and industry-related words in the full text. Their findings showed that the number of papers involving both academia and industry is increasing. They measured the impact of papers by their median number of citations and found that industry-affiliated papers received more citations than academic ones. However, this difference decreases over time as newer papers accumulate citations more slowly. They also found that academic papers were about two years behind industry papers in mentioning trending topics like “adversarial” and “deep learning”. An analysis of social impact terms showed no significant difference between academic and industry papers. India’s Patent System In the 2017-2022 period, India published around 1.3 million research papers, making it the 4th largest producer of research globally (Source: Statista 2024). In 2023 alone, India filed over 90,000 patents, averaging 247 per day, which marks a notable 17% increase – the highest in two decades.
Despite this progress, India still does not rank among the top 10 countries for patent filings, raising questions about its position in the global innovation sector. “Many patents attributed to Western companies are the result of work conducted in India. However, these patents are registered under the names of Western companies due to their global operations,” TV Mohandas Pai, chairman at Aarin Capital, pointed out at IGIC. Pai highlighted that Indian research culture does not treat patent filing as an objective, which has been a hindrance. In the US, by contrast, a significant number of patents are filed defensively, driven by broader strategic reasons rather than immediate utility. Regarding financial capital, Pai noted that over the past decade, $145 billion has been invested in India, in contrast to China’s $835 billion and the US’s $2.3 trillion. This scarcity of financial capital significantly constrains India’s growth. “Public funding for research is insufficient, with Indian universities receiving around $800 million annually, less than what a single prominent American university might receive,” he said. In 2020, India filed nearly 57,000 patents, only 4% of the 1.49 million applications filed in China and 9.5% of the 597,000 applications in the US. Similarly, India granted 23,361 patents, compared to 530,000 in China and 350,000 in the US. On average, it takes about 58 months to process a patent application in India, while it takes about 20 months in China and 23 months in the US. What’s Next? India needs to focus on improving its research infrastructure, increasing funding for universities, and fostering a culture that emphasises quality over quantity. Strengthening collaborations between academia and industry could further boost the quality and impact of AI research. Leveraging its large population and data potential, India can position itself as a leading player in the next wave of AI technology and innovation.
In 2023 alone, India published 17,000 AI papers.
["AI Features"]
["AI (Artificial Intelligence)", "AI Research", "AI Tool", "India"]
Vidyashree Srinivas
2024-06-28T14:05:26
2024
616
["Go", "API", "machine learning", "AI", "innovation", "ML", "AI Research", "RAG", "Aim", "deep learning", "AI Tool", "R", "India", "AI (Artificial Intelligence)"]
["AI", "machine learning", "ML", "deep learning", "Aim", "RAG", "R", "Go", "API", "innovation"]
https://analyticsindiamag.com/ai-features/ai-sparks-indias-research-boom-but-threat-to-quality-looms/
3
10
1
false
false
true
10,099,404
Why PyTorch is Love
Ask any developer, and they will tell you why, in the ever-evolving world of deep learning and generative AI, the name PyTorch stands as the beacon of affection and admiration. While large language models (LLMs) based on Transformers have been the talk of the town, they have not overshadowed the significance of more traditional frameworks like PyTorch. PyTorch’s popularity among data scientists and engineers remains steadfast, and for good reason. According to several discussions, one of the primary attractions is PyTorch’s “inherent goodness”. It offers an intuitive and dynamic approach to building neural networks, making it an ideal choice for deep learning experiments and prototyping. Unlike some other frameworks like TensorFlow, PyTorch has a reputation for keeping things simple yet powerful. Though TensorFlow is undoubtedly powerful, it is buggy. Some people even say that TensorFlow is still better in terms of production, but developers have just shifted to PyTorch. Even years ago, it was known to work flawlessly “out of the box” on relatively simple systems. This user-friendly aspect of PyTorch has endeared it to researchers and experimenters alike, making it a no-brainer choice for those in pursuit of innovation. TensorFlow’s death gave rise to PyTorch’s glow PyTorch is like a trusted companion. Its flexibility and ease of use allow developers to quickly implement their ideas, test hypotheses, and iterate on their models. The dynamic computation graph in PyTorch allows for real-time debugging and experimentation, which is crucial for refining algorithms and achieving breakthroughs. While PyTorch reigns supreme in the realm of research and experimentation, TensorFlow has found its calling in end-user facing applications. It has become the framework of choice for deploying machine learning models in production environments. However, even within the realm of deep learning research, TensorFlow’s popularity has seen a decline. 
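The dynamic, define-by-run computation graph praised above can be illustrated with a toy sketch in plain Python. This is not PyTorch code; it only mimics the idea that the graph is recorded while the forward pass executes, which is what makes stepping through intermediate values in a debugger so natural:

```python
# Toy define-by-run autograd sketch (illustrative only, not PyTorch itself).
# The graph is recorded as each operation runs, so every intermediate Var can
# be inspected or printed at the exact line where it is created.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # edges recorded at execution time
        self.grad_fn = None
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value, (self, other))
        out.grad_fn = lambda g: [(self, g * other.value), (other, g * self.value)]
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, (self, other))
        out.grad_fn = lambda g: [(self, g), (other, g)]
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for var, grad in self.grad_fn(g):
                var.backward(grad)

x = Var(3.0)
y = Var(2.0)
z = x * y + x          # the graph is built here, as this line runs
z.backward()
print(x.grad)          # dz/dx = y + 1 = 3.0
print(y.grad)          # dz/dy = x = 3.0
```

Real PyTorch does exactly this bookkeeping (at far greater scale and speed) via `requires_grad` tensors, which is why ordinary Python debugging tools work on PyTorch models.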
“TensorFlow is dead and if you’re using it at work you’re on a career dead end” – (((jroon))) (@untitled01ipynb), August 12, 2023 Google learned from Meta’s PyTorch and made TensorFlow 2.0, which is better and easier for research than its previous version. Still, researchers have seen no reason to give TensorFlow another chance. Now, with PyTorch 2.0 in the picture, hopes for TensorFlow fade even further. Moreover, even Google and DeepMind have shifted away from TensorFlow in many of their projects. Instead, they have embraced JAX and frameworks built on top of it, such as Haiku and Flax. This shift underscores the evolving landscape of deep learning frameworks, with PyTorch and JAX emerging as the preferred options. Python is the king of AI research at the moment. Interestingly, PyTorch is often described as “Pythonic” by developers. This helps explain its wide adoption: people shifted to PyTorch (built largely on Python) because it felt comfortable and easy to use, with a faster learning curve for new users. Community is the reason for PyTorch’s success Another reason for PyTorch’s success is its compatibility with NVIDIA’s CUDA. CUDA is the beloved platform for developing AI models, and PyTorch made working with it a lot easier for developers. Earlier, Google was leading with TensorFlow, but Meta’s PyTorch won hearts with its ease of use. “Handling Cuda and multiple GPU is why I love PyTorch. Tensorflow is an headache lol” – Predstan (@RajiAdeola10), January 28, 2022 One of the other factors contributing to PyTorch’s enduring love is its strong presence within the Hugging Face ecosystem. The dominance of PyTorch there is evident from the statistics: in 2022, a staggering 45,000 PyTorch-exclusive models were added to Hugging Face, while only 4,000 new TensorFlow exclusives made their way onto the platform.
This resulted in a whopping 92% of models on Hugging Face being PyTorch exclusive, leaving a mere 8% for TensorFlow. This disparity in model availability on Hugging Face showcases the widespread preference for PyTorch among developers and researchers. It also underlines the practicality and efficiency that PyTorch offers in creating and deploying state-of-the-art models. Moreover, PyTorch’s core developers are known for their responsiveness to user issues and feature requests. This dynamic interaction fosters a sense of partnership between the framework creators and its users, further solidifying PyTorch’s place in the hearts of many. The warmth and affection for PyTorch extend beyond its technical merits. It can be attributed to the vibrant and supportive community that has grown around it. PyTorch enthusiasts and experts readily share knowledge, offer assistance, and collaborate on open-source projects. As we look at the future of deep learning and artificial intelligence, it becomes increasingly clear that PyTorch and JAX are poised to play pivotal roles. These frameworks offer the flexibility and performance needed to tackle the complex challenges of tomorrow. The fusion of PyTorch’s user-centric design and JAX’s efficiency paints a promising picture of what lies ahead.
Google was leading with TensorFlow, but Meta’s PyTorch won hearts with the ease of use, and things have stayed that way
["AI Features"]
["Pytorch", "Tensorflow"]
Mohit Pandey
2023-09-04T12:11:42
2023
791
["Pytorch", "Hugging Face", "machine learning", "artificial intelligence", "AI", "neural network", "PyTorch", "deep learning", "generative AI", "JAX", "TensorFlow", "Tensorflow"]
["AI", "artificial intelligence", "machine learning", "deep learning", "neural network", "generative AI", "TensorFlow", "PyTorch", "JAX", "Hugging Face"]
https://analyticsindiamag.com/ai-features/why-pytorch-is-love/
4
10
1
false
false
true
10,144,167
Top 10 Videos of 2024 by AIM – Editors’ Pick
AIM has a robust video content module known for producing over 100 curated videos annually, alongside content from our conferences and events. These videos offer insights into the latest advancements in AI, spotlight industry leaders, and showcase innovations shaping the future of technology. From enterprise strategies to quirky AI-driven experiments, here are the top 10 AIM videos of the year. 1. Redefining Mobility with AI | Ford Business Solutions At Ford’s Business Solutions team in Chennai, AI is driving the future of mobility. This video dives into how nearly 1,000 professionals at Ford’s Global Data Insights & Analytics (GDIA) team are using AI and big data to transform the automotive industry. From connected cars to AI-powered logistics, Ford’s innovations extend far beyond vehicles, creating global impact. Key Takeaway: Ford is at the forefront of AI-driven mobility, fostering a culture of innovation and inclusion. 2. Every Industry Will Transform Using AI | Ishit Vachhrajani, AWS AWS Enterprise Strategist Ishit Vachhrajani takes center stage to discuss how generative AI is disrupting industries, from healthcare to manufacturing. This episode of Simulated Reality explores the role of AI-driven solutions in reshaping businesses globally. Key Takeaway: AI will redefine industries at every level, creating transformative possibilities across the board. 3. Wipro: Best Firm for Data Scientists Wipro’s recognition as the “Best Firm for Data Scientists” highlights their commitment to fostering talent and innovation. This video delves into how Wipro’s data scientists are contributing to real-world solutions, and why the company stands out as a leader in analytics and AI. Key Takeaway: Wipro provides a stellar environment for data scientists, cementing its place as a top employer in AI and analytics. 4. AI: A Solution or a Problem? | Kailash Nadh, Zerodha Zerodha CTO Kailash Nadh explores the intersection of AI, open-source culture, and fintech. 
This in-depth conversation reveals how Zerodha harnesses AI while staying grounded in practical, human-centric innovation. Key Takeaway: AI is powerful but must align with real-world problems, not just hype. 5. Building AGI in India | Young Indic AI Developers Three of India’s leading AI developers discuss their groundbreaking work on foundational models, AGI, and India’s AI ecosystem. This dynamic conversation explores the future of AI development in India. Key Takeaway: India’s young AI talent is making significant strides toward AGI and foundational AI models. 6. Database Market in India | Raj Verma, Singlestore In this engaging interview, Raj Verma, CEO of Singlestore, unpacks the $120 billion potential of India’s database market. Verma highlights Singlestore’s role in shaping real-time analytics and distributed SQL. Key Takeaway: Real-time data and analytics are revolutionizing industries in India and beyond. 7. AI Meets the Kitchen | ChatGPT Recipe with Chef Nidhi Nahata In this light-hearted video, chef Nidhi Nahata uses ChatGPT to create a unique recipe, blending AI with culinary arts. Will AI lead to a masterpiece or a kitchen disaster? Key Takeaway: AI’s creative potential can extend to unexpected places, like the kitchen! 8. AI vs You: Who can write better Pick-Up lines? | Vox Pop In this fun and interactive video, we explore AI’s impact on creativity. Can AI generate better pick-up lines than humans? This lighthearted experiment dives into AI’s role in shaping music, fashion, and humor. Key Takeaway: AI’s creativity is expanding, challenging human ingenuity in unexpected ways. 9. OpenAI in India | Pragya Misra Pragya Misra, Lead Public Policy at OpenAI India, discusses OpenAI’s progress towards AGI, hiring challenges in India, and Sam Altman’s vision for the region. Key Takeaway: OpenAI is deeply invested in nurturing AI talent and innovation in India. 10. 
The ‘Woke’ Google Gemini Reaction In this reaction video, AIM’s editorial team dissects Google’s controversial Gemini tool, highlighting the debate around AI ethics and representation. Key Takeaway: AI ethics and accuracy remain central to AI development and deployment. These top 10 videos showcase how AI is shaping industries, culture, and even our everyday lives. If you’d like to collaborate with us on video production, reach out to info@aimmediahouse.com. Our video team is exceptional, delivering high-quality, engaging content that highlights the latest in AI innovation. Stay tuned for more engaging content in 2025!
Explore AIM’s top 10 videos of 2024, showcasing cutting-edge AI innovations, industry leaders, and creative experiments shaping the future.
["AI Trends"]
["aim tv", "podcasts", "Video"]
AIM Media House
2024-12-23T16:14:21
2024
684
["ChatGPT", "OpenAI", "AI", "AWS", "RAG", "Aim", "analytics", "aim tv", "generative AI", "SQL", "Video", "podcasts", "R"]
["AI", "analytics", "generative AI", "ChatGPT", "OpenAI", "Aim", "RAG", "AWS", "R", "SQL"]
https://analyticsindiamag.com/ai-trends/top-10-videos-of-2024-by-aim-editors-pick/
3
10
5
false
false
false
59,077
Are SMBs Doing Enough To Protect Themselves From Cyber Attacks?
Cybersecurity is indeed a very important concern for SMBs; however, due to certain limitations, many small and medium businesses (SMBs) cannot secure their organisations properly. In fact, according to a report, more than 50% of SMBs have experienced a cyberattack and a data breach. This, in turn, has led the majority of SMBs to understand the importance of having a robust security posture in place. And considering that SMBs’ security strategies are often not comprehensive, leaving them vulnerable, it is a lot easier for hackers and criminals to attack a small business than a large organisation. If you’re a small business, then this is high time to re-plan your security posture. Below are four ways SMBs can protect themselves from cyber-attacks. Understand the dynamic landscape & the changing trends In order to prepare to fight against cybercriminals, SMBs need to create a comprehensive security strategy. And to be ready with your cybersecurity posture, it is imperative for business leaders to have a comprehensive understanding of the vulnerabilities, internal or external, that can affect their business, and of how hackers gain entry, including their methods, motives and points of weakness. According to experts, in most instances, cyberattacks happen because businesses have either a weak security system or a weak firewall. Therefore, learning the different types of cyber-fraud schemes and common threats – everything from phishing and spoofing scams to systems hacking and pharming – will help SMBs plan their defences well ahead of an actual attack. SMBs should also ensure that all of their employees are well aware of the changing cybersecurity scenario so that they do not mishandle sensitive and confidential business information. Ensuring these points will help SMBs create robust and comprehensive strategies to protect their business from potential attacks.
Some common cyber-attack methods to be aware of are hacking, phishing, social engineering, malware and identity theft. Vulnerable hardware can provide an edge to cybercriminals SMBs need to understand the importance of securing their hardware as diligently as their software. In order to stand against cybercriminals, it is imperative to secure all hardware devices and software networks with the most robust security solutions available and to keep them updated. Most small and medium businesses fail to consider securing their business hardware, which can cost them during an actual cyber attack. In fact, for small businesses, any loss or theft of business hardware, even of the smallest device, can be as damaging as a major data breach. Therefore, business leaders must ensure that only authorised employees are given access to critical business hardware. Apart from creating a firewall for accessing sensitive information, SMBs should deploy multi-factor authentication for people accessing their hardware devices and surveillance systems. Ensuring security policies for better management For SMBs to run smoothly without the fear of being attacked by cybercriminals, it is critical to define protocols to abide by. But in order to be effective in this dynamic environment, the security policy must permeate throughout the organisation, through every department, and should be comprehensively embedded into its overall business strategy. A robust security policy should direct how each employee operates in the organisation. After all, employees have access to the company’s sensitive information, which makes them the first line of defence against cyber attacks. Consequently, SMBs need to educate and train their employees about data and its protection. Every employee should be aware of the warning signs, red flags, safe practices, and responses to a suspected attack.
Employees should be well aware of ways to protect themselves, such as using complex, strong passwords and maintaining a clean desktop, so as not to expose the company’s sensitive information to other people who might use their devices. Newer cybercriminals are far more advanced, and therefore SMBs need to stay aware of emerging attack scenarios. Additionally, SMBs should have a mitigation plan in place for when a cyber attack does occur. Employee awareness for better security Last but not least, the most important thing an SMB can do to protect itself from cyberattacks is to educate its employees. Usually, employees aren’t trained enough to understand the risks their mistakes can create. Therefore, it is vital to train your staff and make them aware of cyber risks and of ways to prevent and deal with cyberattacks. Also, when employees are working remotely or using their own devices to handle confidential business information, SMBs need to ensure that the devices are well secured with the necessary firewalls, encryption, and strict password policies. In short, SMBs should make their employees aware of the risks involved in sharing personal or business information over the internet. Wrapping Up In order to thrive in this dynamic environment, SMBs should start thinking about creating a robust security strategy. As we all know, prevention is better than cure, and therefore SMBs should secure themselves in advance to stand against cybercriminals, as well as prepare strategies to mitigate an actual attack. It is crucial for SMB owners to take cybersecurity seriously. However, creating a robust cybersecurity framework should never come in the way of business innovation; rather, comprehensive security should add value to the business.
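As one illustration of the strict password policies mentioned above, here is a minimal policy check. The specific rules (length, character classes) are illustrative assumptions; real policies should follow current guidance, which generally favours length over forced complexity:

```python
# Minimal sketch of a password-policy check. Rules are illustrative only.

import re

def check_password(pw, min_length=12):
    problems = []
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    return problems                      # empty list means the policy passes

print(check_password("hunter2"))                    # fails several rules
print(check_password("Correct-Horse-Battery-9"))    # passes -> []
```

A check like this belongs alongside, not instead of, multi-factor authentication and breached-password screening.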
Cybersecurity is indeed a very important concern for SMBs; however, due to certain limitations, many small and medium businesses (SMBs) cannot secure their organisations properly. In fact, according to a report, more than 50% of SMBs have experienced a cyberattack and a data breach. This, in turn, has led to the majority of SMBs understanding the […]
["AI Trends"]
["Cyberattacks", "Cybersecurity", "Cybersecurity India", "Data loss prevention", "Firewall hardware", "hardware firewall", "International Affairs", "network firewall", "SMBs", "SMBs india"]
Sejuti Das
2020-03-20T10:00:00
2020
901
["Cyberattacks", "network firewall", "programming_languages:R", "AI", "innovation", "International Affairs", "SMBs", "Cybersecurity India", "Data loss prevention", "Firewall hardware", "ViT", "SMBs india", "GAN", "Cybersecurity", "hardware firewall", "R"]
["AI", "R", "GAN", "ViT", "innovation", "programming_languages:R"]
https://analyticsindiamag.com/ai-trends/are-smbs-doing-enough-to-protect-themselves-from-cyber-attacks/
2
6
1
false
false
false
10,547
10 Most Sought After Big Data Platforms
For organizations today, the most important asset is ‘Data’. And to be successful and stay ahead in the race for customers and market share, it is important for them not just to collect this data but also to analyse it to drive business decisions and innovations. But how does an organization achieve all this, and with ease? Well, the answer is a ‘big data platform’. Big data platforms help store, manage and analyse big data to achieve the required business outcomes. But with so many big data platforms available today, which one is right for your business and meets your needs? So we compiled a list of ‘10 Leading Big Data Platforms’ and what they have to offer you. While dozens of big data platforms have propped up over the last few years, we created this list based on their relative popularity right now, gathered through mediums like Google insights. Cloudera Enterprise Cloudera Enterprise is a high-performance, low-cost platform for conducting analytics on data. Cloudera Enterprise has the only native Hadoop search engine, and the platform provides its users with an active data optimization feature. Cloudera Manager includes advanced features like intelligent configuration defaults, customized monitoring, and robust troubleshooting, which allow easy administration of Hadoop in any environment. With predictive maintenance included in its Support Data Hub, Cloudera Enterprise keeps you up and running. HPE Vertica Hewlett Packard Enterprise’s big data platform Vertica is one of the most advanced SQL database analytics portfolios for addressing today’s demanding big data analytics initiatives. It allows companies to manage and analyse massive volumes of structured and semi-structured data quickly and reliably, with no limits. Vertica allows organizations to leverage analytics by providing support for all leading BI and visualization tools, open source technologies like Hadoop and R, and built-in analytical functions.
Hortonworks Data Platform (HDP) HDP is a secure, enterprise-ready open source Apache Hadoop distribution based on a centralized architecture, YARN. This big data platform takes care of the entire data need: it addresses data-at-rest, powers real-time customer applications and delivers robust analytics to help organizations in decision making. YARN and the Hadoop Distributed File System (HDFS) are the two pillars of HDP. IBM Big Data Platform IBM’s big data platform caters to the full spectrum of big data business challenges, giving its users an integrated experience. The platform is a blend of traditional and modern technologies: it combines traditional technologies, which are best suited for structured, repeatable tasks, with new technologies that provide speed and flexibility and are ideal for ad hoc data exploration, discovery and unstructured analysis. Hadoop-based analytics, stream computing, data warehousing, and information integration & governance are the four core capabilities of the IBM Big Data Platform. Kognitio Analytical Platform The Kognitio Analytical Platform is best suited to running complex analytics on big data. It is designed for organizations with many users running complex queries that require short response times. This analytics platform provides ultra-fast, high-concurrency SQL for Hadoop platforms as well as for existing data warehouse implementations. It seamlessly interoperates with the user’s existing BI, analytics and visualisation applications, and Hadoop big data storage. MapR Converged Data Platform The MapR Converged Data Platform is a single platform for big data applications. Organizations using MapR’s platform do not need to move data to specialized silos for processing, as data can be processed in place. MapR’s platform is based on the concept of polyglot persistence, which allows users to leverage multiple data types and formats directly.
MapR, a converged data platform, integrates the power of Hadoop and Spark with global event streaming, real-time database capabilities, and enterprise storage, enabling its users to experience the enormous power of data. Pivotal Big Data Suite Pivotal Big Data Suite is a database technology that helps organizations accelerate their digital transformation. It is based on open source technologies and aims to provide a wide, comprehensive base for modern data architectures. Pivotal Big Data Suite can be deployed not only on-premise but also in public clouds, and it contains all the elements necessary for batch and streaming analytics architectures. Qubole Data Service Qubole Data Service (QDS) aims to make big data analytics more accessible for data stored in AWS, Google and Microsoft cloud accounts. QDS is an integrated interface used to perform ad hoc analysis, predictive analysis, machine learning and streaming for data-driven organizations. QDS’s workbench feature also lets users who cannot write SQL queries run them through its SmartQuery interface. QDS data engines are fully automated and optimized for the cloud. SAP HANA SAP’s big data platform HANA is designed for next-generation applications and analytics. It is an in-memory platform that focuses on running analytics applications in a smarter way, making business processes faster and creating simpler data infrastructures. It acts as the foundation for all data needs by integrating all types of data and using advanced analytical processing for deeper insights. Teradata Integrated Big Data Platform The Teradata Integrated Big Data Platform lets its users offload data and workloads from the Teradata IDW to a low-cost Teradata system. All this can be done without disturbing the execution of the Teradata analytics that the user wishes to carry out.
This feature of the Teradata Integrated Big Data Platform helps you avoid the complexity of adding Hadoop for low-cost storage. The integrated big data platform also boasts global hot spare drives, an optional hot standby node, and redundant power supplies for enhanced system availability.
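Several of the platforms above are built on Hadoop, whose core programming model is MapReduce. As a purely conceptual sketch (pure Python, not any vendor's API), a word count can be expressed as a map phase, a shuffle, and a reduce phase:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in an input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a single count.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data platforms", "big data analytics"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
print(counts)  # {'big': 2, 'data': 2, 'platforms': 1, 'analytics': 1}
```

Real Hadoop distributes the map and reduce tasks across a cluster and persists intermediate data in HDFS; the three-phase structure is the same.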
For organizations today, the most important asset is ‘Data’. And in order to be successful and be ahead in the race of getting customers and market shares, it is important for them to not just  collect this data but also analyze it to use for business decisions and innovations. But how does an organization achieve […]
["AI Trends"]
["big data platform c++", "cloudera", "IBM", "qubole", "SmartQ"]
Manisha Salecha
2016-08-06T05:58:54
2016
944
["Go", "cloudera", "machine learning", "AWS", "AI", "ML", "qubole", "RAG", "big data platform c++", "Aim", "IBM", "analytics", "SQL", "SmartQ", "R"]
["AI", "machine learning", "ML", "analytics", "Aim", "RAG", "AWS", "R", "SQL", "Go"]
https://analyticsindiamag.com/ai-trends/10-most-sought-after-big-data-platforms/
3
10
3
true
false
false
10,093,618
After Apple, Amazon, Google and Microsoft, Meta Now Builds Its Own AI Chips
Meta recently introduced its first in-house silicon chip designed for AI workloads, called MTIA (Meta Training and Inference Accelerator). The AI chip is a custom-designed ASIC built with next-generation recommendation-model requirements in mind. The accelerator is built on TSMC’s 7nm process at a 25W TDP and provides 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision. Meta believes that by designing it in-house, it can optimise every single nanometre of the chip so that no part of the architecture is wasted in terms of area, while also bringing down the chip’s power and thereby reducing the cost of the ASIC. The company said that the benefit of building its own ASICs is access to the real workloads used by its ads team and other groups at Meta, where it can run performance analysis on its design and fine-tune all the parameters that go into a high-performance solution by integrating the silicon with the software environment. With this, the team at Meta can speed up software development cycles, deploy models at a much faster pace, and help improve the user experience. Powered by PyTorch Meta said that it has also developed compiler technology that runs under the PyTorch environment. “MTIA executes on those workloads with the highest performance and lowest power,” added the team, saying that it achieved those efficiencies compared to today’s GPUs. Further, Meta said its new chip was designed in collaboration with many cross-functional teams that care about the chip, the board, the system, the rack, the data center, their constraints and optimisations, as well as the software parts of it: firmware, compiler, runtimes, PyTorch, and the application models. “So, all of this has come together to put a system that is optimised and tailored for Meta’s workloads, and MTIA is just one piece of it,” said Olivia Wu, design lead at MTIA.
She believes that with an in-house design, the team can take control of its destiny, specify the architecture of the design, and match it with the roadmap for the workloads coming in the future. Meta vs the world Meta isn’t the only one working on in-house AI chips. Recently, reports emerged that Microsoft has been working on its own in-house AI processor, called Athena, in partnership with the chip company AMD. Besides Microsoft and Meta, Google, Apple and Amazon have also been developing in-house AI chips. For instance, Google has built a supercomputer to train its models with its TPUs (Tensor Processing Units). Apple has been working on its M1 and M2 chips for quite some time now. Amazon, on the other hand, is working on its Trainium and Inferentia processor architectures.
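The published MTIA figures imply a straightforward efficiency calculation (performance divided by TDP); the arithmetic below simply restates the numbers quoted above:

```python
# Efficiency implied by the published MTIA figures: performance / TDP.
tdp_watts = 25.0
int8_tops = 102.4    # tera-operations per second at INT8
fp16_tflops = 51.2   # tera-FLOPs per second at FP16

print(f"INT8: {int8_tops / tdp_watts:.3f} TOPS/W")      # 4.096 TOPS/W
print(f"FP16: {fp16_tflops / tdp_watts:.3f} TFLOPS/W")  # 2.048 TFLOPS/W
```

Note that the FP16 figure is exactly half the INT8 figure, consistent with the common double-rate INT8 design of inference accelerators.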
Meta recently introduced its first in-house silicon chip designed for AI workloads, called MTIA (meta training and inference accelerator)
["AI News"]
["Meta AI", "tsmc"]
Tasmia Ansari
2023-05-19T14:49:52
2023
475
["Go", "Meta AI", "TPU", "programming_languages:R", "PyTorch", "AI", "programming_languages:Go", "ai_frameworks:PyTorch", "tsmc", "R"]
["AI", "PyTorch", "TPU", "R", "Go", "ai_frameworks:PyTorch", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/why-meta-built-its-own-ai-chip/
3
8
0
true
false
false
10,118,866
Oracle Integrates GenAI to Enhance Supply Chain and Automate KPIs
According to Gartner, 95% of data-driven decisions in supply chain operations are expected to be partially automated by next year, and 25% of key performance indicator (KPI) reporting will be powered by GenAI models by 2028. In line with this, Oracle recently announced the addition of generative AI capabilities within Oracle Fusion Cloud Supply Chain & Manufacturing (SCM). This was done to help businesses manage their supply chains more efficiently and effectively. “We are working with customers to help them optimise their supply chains, which has a direct impact on financial KPIs,” said Sunil Wahi, vice president JAPAC, Oracle, in an exclusive interview with AIM. “Every operational KPI is linked with financial KPIs.” “Supply chain is where a whole lot of costs and margins are hidden,” said Wahi, adding that a beautiful supply chain is one that is invisible. “If you are getting your orders and shipments on time, you would never be bothered about what supply chain is running behind.” Oracle has brought in prediction-driven supply chain command centres. Wahi said that using this tool, a company can plan its entire supply chain and make decisions on the placement of distribution centres, logistics hubs, RDCs, and so on, which then translates into a more visible supply chain for the company. “It functions like a mission control centre for your supply chain, bringing together data, intelligence, and recommendations to give you a holistic view and enable faster decision-making,” he added. Oracle is not Alone AWS Supply Chain recently added generative AI to simplify the data ingestion process and improve customers’ application onboarding and setup experience. Similarly, Microsoft is incorporating generative AI capabilities within Microsoft Dynamics 365 Supply Chain Management. For instance, the AI-powered Microsoft Supply Chain Center news module proactively flags external issues such as weather, financial, and geopolitical news that may impact key supply chain processes.
Meanwhile, Google Cloud is working with Accenture and Infosys to develop a suite of transformative AI platforms and industry solutions for a range of business scenarios, including optimising supply chains using generative AI. Oracle’s GenAI Prowess Generative AI within Oracle SCM is designed to automate and optimise various supply chain tasks, from inventory management to order fulfilment. “There are multiple generative AI use cases, such as capturing item descriptions in an AI-assisted authoring mode, fulfilling supplier contracts, and capturing manufacturing productivity enhancements with an AI-assisted mobile app,” said Wahi. Moreover, Oracle has introduced generative AI support in Oracle Product Lifecycle Management, which helps product specialists create SEO-focused product descriptions quickly. This saves time, reduces errors, and improves overall quality, leading to increased customer engagement and sales for organisations. OCI also uses generative AI to perform root cause analysis in supply chain operations. It ingests real-time data and uses a query-based system to provide actionable insights, helping to identify and resolve supply chain issues efficiently. “We are working with a large pharma customer in India that had just automated their entire supply planning processes on Fusion Cloud. For them, the whole objective was to allocate constrained resources to the right customer orders, which are most profitable,” said Wahi. Generative AI is changing how the supply chain manages sourcing and procurement. It handles purchase orders, negotiates deals, selects suppliers, and assists in contract preparation. For example, Walmart is using generative AI to automate supplier negotiations. According to a report, over 65% preferred negotiating with a generative AI bot instead of an employee at the company. There have also been instances where companies are using GenAI tools to negotiate against each other.
Wahi was positive that, with generative AI-powered supplier recommendations embedded in Oracle Procurement, organisations will be able to use information such as product descriptions and purchase categories to identify suppliers, improve sourcing efficiency, lower costs, and reduce supplier risk. “Generative AI itself will be a huge transformative space for us. The partnership with Cohere is extremely important because that’s where we are drawing upon the learnings, and I would say it will be an extremely important partnership which will continue to grow,” concluded Wahi, adding that the company will bring in about 100+ generative AI use cases within Fusion Cloud applications.
Generative AI within Oracle SCM is designed to automate and optimise various supply chain tasks, from inventory management to order fulfilment.
["Deep Tech"]
["Generative AI"]
Siddharth Jindal
2024-04-23T13:49:48
2024
690
["Go", "GenAI", "AWS", "AI", "data-driven", "Aim", "ViT", "generative AI", "GAN", "Generative AI", "R"]
["AI", "generative AI", "GenAI", "Aim", "AWS", "R", "Go", "GAN", "ViT", "data-driven"]
https://analyticsindiamag.com/deep-tech/oracle-integrates-genai-to-enhance-supply-chain-and-automate-kpis/
2
10
2
true
true
false
46,748
The Tech Whisperer: New Book Features AI-Authored Chapter About AI
We’ve read numerous books on artificial intelligence. And in this day and age, we’ve also seen “fiction” writing created by AI algorithms. But taking a new step forward and colliding these two genres is the new book titled The Tech Whisperer (Penguin, 2019) by Jaspreet Bindra. That is because The Tech Whisperer features a chapter on AI, written by an AI itself. “As I was writing about AI for this book, I started thinking of if and how an AI could write about itself. Would it ‘think’ of itself and its ilk differently than how I thought of it? Would it have an opinion, or a way of structuring the description in a more ‘logical’, ‘machinelike’ manner? Will there be emotion in the writing, will there be soul? To find out, I got in touch with Anand Mahurkar, the founder-CEO of Findability Sciences, a Boston, Massachusetts-based AI company. He immediately ‘got’ my crazy idea and jumped on the opportunity to quickly cobble together an AI, using Findability’s existing platforms, which could attempt to write a chapter on itself,” wrote Bindra in a blog. He also explained how they used a product called FP-Summary™, an unsupervised method for automatic sentence extraction that applies the graph-based ranking algorithms of FP-Cognition™ to create text summaries. Using this method, the Findability Sciences Platform ‘wrote’ a chapter for The Tech Whisperer. Bindra’s book, as the name suggests, demystifies and simplifies emerging technologies like AI, blockchain, the Internet of things, virtual reality, etc., and narrates how companies can employ these to drive their digital transformation. Published by Penguin, the book gives an engaging, forward-looking practitioner’s view that can help business leaders, entrepreneurs and anyone looking to understand digital transformation and technology.
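Findability's exact system is proprietary, but the graph-based sentence extraction described here is exemplified by the well-known TextRank algorithm. The sketch below is a minimal, self-contained approximation: the similarity measure and damping constant follow TextRank conventions, and it is not FP-Summary's actual implementation.

```python
import math
import re

def _tokens(sentence):
    return set(re.findall(r"[a-z0-9]+", sentence.lower()))

def _similarity(a, b):
    # Word-overlap similarity, length-normalised as in TextRank.
    overlap = len(_tokens(a) & _tokens(b))
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(_tokens(a)) + 1) + math.log(len(_tokens(b)) + 1))

def textrank_summary(text, top_n=1, damping=0.85, iterations=50):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    n = len(sentences)
    sim = [[_similarity(sentences[i], sentences[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0] * n
    for _ in range(iterations):
        # One PageRank-style update over the weighted sentence graph.
        scores = [
            (1 - damping) + damping * sum(
                sim[j][i] / sum(sim[j]) * scores[j]
                for j in range(n) if sim[j][i] and sum(sim[j])
            )
            for i in range(n)
        ]
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    # Return the top-ranked sentences in their original order.
    return [sentences[i] for i in sorted(ranked[:top_n])]

text = ("AI systems learn patterns from data. "
        "Neural networks learn patterns from data to make predictions. "
        "Good data makes predictions better.")
print(textrank_summary(text, top_n=1))
```

The middle sentence wins because it overlaps with both neighbours, so it accumulates the most rank in the similarity graph; that is the core intuition behind graph-based extractive summarisation.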
We’ve read numerous books on artificial intelligence. And at this day and age, we’ve also seen “fiction” writing created by AI algorithms. But taking a new step forward and colliding these two genres together is the new book titled The Tech Whisperer (Penguin, 2019) by Jaspreet Bindra. This is because The Tech Whisperer features a […]
["AI News"]
[]
Prajakta Hebbar
2019-10-03T12:47:43
2019
290
["Go", "artificial intelligence", "programming_languages:R", "AI", "digital transformation", "programming_languages:Go", "Git", "R"]
["AI", "artificial intelligence", "R", "Go", "Git", "digital transformation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/the-tech-whisperer-new-book-features-ai-authored-chapter-about-ai/
3
8
1
false
false
false
10,104,951
NVIDIA Debunks AMD’s Claim of Being the Fastest GPU
After AMD announced at its Advancing AI event that its MI300X is faster than NVIDIA’s H100 for inference, NVIDIA has rebuffed the claim. According to a blog post by NVIDIA, contrary to AMD’s presentation, the company contends that the H100 GPU, when benchmarked appropriately with optimised software, outpaces the MI300X by a substantial margin. The results, obtained using software predating AMD’s presentation, demonstrated performance that was twice as fast at a batch size of 1. Going even further, when applying the standard 2.5-second latency used by AMD, NVIDIA emerges as the undisputed leader, surpassing the MI300X by an astonishing 14-fold. NVIDIA attributes the disparity in results to AMD’s failure to utilise NVIDIA’s optimised software, specifically TensorRT-LLM. AMD employed alternative software that lacked support for Hopper’s Transformer Engine and missed critical optimisations present in TensorRT-LLM. NVIDIA emphasised that its software, freely available on GitHub, is designed to maximise performance on NVIDIA hardware, a factor overlooked by AMD. The implications of this revelation extend beyond benchmark numbers. NVIDIA reassures its investors that the company’s leadership in the GPU market remains unshaken. The blog post underscores the importance of maximising GPU performance through the latest inference software, a critical factor in reducing costs and fostering broader adoption of AI. As the AI hardware market evolves rapidly, NVIDIA remains at the forefront, continuously optimising its CUDA ecosystem. The H100, while currently holding the AI performance crown, is soon to be succeeded by the H200. Meanwhile, AMD’s MI300X, positioned as a strong alternative to NVIDIA’s AI chips, faces challenges in optimising its software to unlock the chip’s full potential. The competitive landscape is dynamic, with AMD acknowledging the need for further advancements in its ROCm software, which it announced at the event.
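NVIDIA's argument hinges on the fact that, under a fixed latency budget, better-optimised software can serve much larger batches, multiplying effective throughput. The toy calculation below illustrates only the mechanism; the batch sizes are hypothetical and are not NVIDIA's or AMD's published figures.

```python
# Illustrative only: why a fixed latency budget changes throughput rankings.
# The batch sizes are made-up, not vendor benchmark numbers.
def throughput(batch_size, latency_seconds):
    """Requests served per second at a given batch size and per-batch latency."""
    return batch_size / latency_seconds

# Within the same 2.5 s latency cap, software that sustains a larger batch
# multiplies effective throughput by the batch-size ratio.
baseline = throughput(batch_size=4, latency_seconds=2.5)
optimised = throughput(batch_size=56, latency_seconds=2.5)
print(f"{optimised / baseline:.0f}x")
```

This is why a batch-size-1 comparison (where NVIDIA claims 2x) and a latency-capped comparison (where it claims 14x) can tell such different stories.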
NVIDIA emerges as the undisputed leader, surpassing the MI300 by an astonishing 14-fold.
["AI News"]
["AMD", "amd mi300x", "NVIDIA"]
Mohit Pandey
2023-12-15T10:41:15
2023
289
["CUDA", "Go", "AMD", "API", "amd mi300x", "AI", "RPA", "Git", "Aim", "NVIDIA", "GitHub", "R"]
["AI", "Aim", "CUDA", "R", "Go", "CUDA", "Git", "GitHub", "API", "RPA"]
https://analyticsindiamag.com/ai-news-updates/nvidia-debunks-amds-claim-of-being-the-fastest-gpu/
3
10
1
false
false
false
10,118,215
Alexa Saves Young Girl from Monkey Attack, Aims to Aid Older Adults Too
Alexa’s ability to produce animal sounds through the Wild Planet skill recently helped save a 13-year-old girl and her 15-month-old niece from a monkey attack in Basti, Uttar Pradesh. By asking “Alexa, kutte ki awaz nikalo” (“Alexa, make the sound of a dog”), the girl was able to scare away the monkeys. “The option to access a number of useful kid-friendly experiences with simple voice commands makes Alexa a great addition for a family with young kids. Parents often tell us how Alexa has become a companion in their parenting journeys,” says Dilip R.S., Director and Country Manager for Alexa, Amazon India. From listening to Indian folktales to playing animal sounds, Indian households with young kids who use Alexa at home are two times more engaged than other users. Parents of young kids take Alexa’s help in managing their day-to-day parenting tasks and keeping their kids engaged by asking Alexa for rhymes, stories, games, GK-related questions, and more. Users enjoy the ease and convenience of giving simple voice commands to Alexa in Hindi, English, and Hinglish, making the AI a great aid for parents and a companion for kids. “While it is a great learning and entertainment tool for kids, Alexa can help parents manage their day-to-day tasks better. Whether it is controlling smart home appliances with voice while juggling numerous tasks or asking for a bedtime story as part of their child’s daily routine, Alexa’s right there to help them,” Dilip adds. Today, families across India are asking Alexa for information, games, quizzes, music, help managing day-to-day tasks, stories, and much more. In fact, weekends are family time with Alexa: last year there was a 15% increase in music requests to Alexa over the weekends, with many of them being for kids’ music. The top five most popular songs for kids on Alexa are: Baby Shark, Lakdi Ki Kathi, Johnny Johnny Yes Papa, Wheels on the Bus, and Twinkle Twinkle Little Star.
Indian folktales, like Akbar Birbal, Tenali Raman, and Panchatantra stories, see high interest from customers, especially in Hindi. In 2023, customers asked for these stories an average of 34 times every hour.
Kid-friendly features of Alexa include interactive games, quizzes, nursery rhymes, animal sounds, and its ability to answer questions about spellings, general knowledge, history and science
["AI News"]
["Alexa", "Amazon", "Amazon Alexa"]
Sukriti Gupta
2024-04-15T16:10:47
2024
353
["programming_languages:R", "AI", "Amazon Alexa", "Amazon", "RAG", "Alexa", "R"]
["AI", "RAG", "R", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/alexa-saves-young-girl-from-monkey-attack-aims-to-aid-older-adults-too/
2
4
0
false
false
false
10,024,293
MLOps, An Insider’s Perspective: Interview With Nikhil Dhawan
For this week’s ML practitioners series, Analytics India Magazine (AIM) got in touch with Nikhil Dhawan, Director of Engineering, MLOps at Dentsu International. He has previously led data engineering teams at KPMG. He has a Bachelor of Computer Science from Guru Nanak Dev University, Amritsar, and a Master of Information Technology from IIIT-Bangalore. In this interview, Nikhil shared his experiences from working with data for nearly a decade. AIM: Let’s settle this forever. What is the difference between a Data Scientist, Data Engineer and an ML Engineer? Nikhil: An entire article could be written on this topic and it still might not fit all the cases. I believe it varies between companies. Data engineers are responsible for setting up the environment for the movement and transformation of data, which can include storage of that data. Data scientists are people with knowledge of scientific and statistical methods for getting insights from that data, including predicting future behaviour using current trends. A few years back, software engineers were learning about operations and handling infrastructure for deployment. On the other side, Ops teams were learning development with infrastructure as code. These two streams led to the DevOps role. MLOps similarly lies between the data engineer and the data scientist. Data engineers are learning about infrastructure to support model lifecycles and building continuous training pipelines. Data scientists want to learn to deploy their models and use them to score incoming data. ML engineers create a production-grade data pipeline, using infrastructure, that converts raw data into the input required by a data science model, hosts and executes the model, and delivers the output as a scored dataset to downstream systems. An ML engineer can come from either a data engineer or data scientist background. AIM: What fascinates you about data? Nikhil: For me, it’s less about the data and more about the business use case.
Data is the new oil, and just as dirty; it needs to be cleaned first, depending on the fuel you want to extract from it. The fuel is decided by the business outcomes you want to get from it. We are past the initial big data era, where the focus was to build huge infrastructure and collect as much data as you can. Many businesses, especially in Australia, have realised that the overhead of maintaining all the infrastructure, the pipelines to Hadoop, and the highly qualified resources for its upkeep far outweighs the benefits unless you have the right use cases planned from the beginning. Fortunately, cloud vendors stepped in at the right time. They brought huge infrastructure with time-based billing, leading to small upfront investments and easier scaling decisions at later stages. This, combined with high availability, security, and compliance with regulations and local laws, led most clients, including financial institutions, to move completely to the cloud. AIM: What books and other resources have you used in your data engineering journey? Nikhil: Most of my knowledge has come from being hands-on and working on the tools/services, but certifications have provided me with a way to prove it to myself and the world. I had more than a year’s experience on Hadoop before I got certified as a Hadoop developer by Cloudera. I learnt the inner workings of the Hadoop ecosystem while studying. I regard certification as equal to getting theoretical knowledge of a subject, and hands-on experience as the practical method of applying it in real life; both are necessary for a comprehensive understanding. So far, I have certifications from Azure and GCP, and I’m on track to get an AWS certification. AIM: What does your data engineering approach look like? Nikhil: My approach to data engineering is to automate tasks as much as possible. The more we can make data engineering work with users’ existing ways, the easier it is for clients and businesses to continue using it.
Data engineering solutions should work for business users rather than the other way around. If the source data is generated and shared by email, add the trigger to the emails rather than forcing users to upload it to a portal. Suppose the data is generated regularly. In that case, the solution should handle duplication, archival, retention and so on, rather than relying on users to follow some new process to make sure their data updates the correct dashboard. AIM: A few words for those who want to get into data engineering roles? Nikhil: Traditional data engineering is moving to the cloud very quickly. Most transformation projects are also moving on-premise data to cloud services. I would recommend starting with any of the top three cloud platforms. All the cloud vendors already have managed big data products as part of their offerings, e.g. Azure HDInsight, AWS EMR etc. Other than cloud, a few things beginners should know are: Python, the most versatile language in the DE/DS world; Linux/bash scripting, which will always come in handy; Git for version control, a must-have; CI/CD tools, e.g. GitLab CI/CD or Jenkins; Docker/Kubernetes for containerization; and IaC, e.g. Terraform, to automate cloud builds. AIM: What does your machine learning toolstack look like? Nikhil: Python is the de facto go-to language for its ability with data wrangling and data science, along with frameworks like Flask/FastAPI that allow you to quickly build a PoC with an API to serve these capabilities. In the cloud, AWS is considered the most mature solution and has vast offerings, including AWS SageMaker, which we are exploring next. Many of our clients are already tied into the Azure ecosystem by having Office 365 or Dynamics CRM. Microsoft leverages this, and services like Blob Storage, Data Factory, Azure Functions, Databricks, AKS and Synapse Analytics are what we use to address the most common business use cases in our domain.
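The Flask/FastAPI pattern Nikhil mentions, wrapping a model behind a small HTTP prediction endpoint, can be sketched even with just the standard library. Everything here (the weights, feature names, and port) is hypothetical; a real PoC would use FastAPI and a genuinely trained model.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model: a toy linear scorer with made-up weights.
WEIGHTS = {"clicks": 0.7, "visits": 0.3}

def predict(features):
    """Score a feature dict with the toy model; unknown features score 0."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, score it, and return the score as JSON.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps({"score": predict(json.loads(body))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("localhost", 8000), PredictHandler).serve_forever()
# then POST e.g. {"clicks": 10, "visits": 2} to http://localhost:8000/
```

In a framework like FastAPI, the handler shrinks to a decorated function with request validation for free; the separation between the model call (`predict`) and the transport layer is the part worth keeping.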
Only one of our solutions is on GCP, where we use the serverless App Engine service to host the web app. AIM: As a Director of Engineering, MLOps, what does a typical day look like? Nikhil: My new role is Director of Engineering & MLOps at Dentsu International, a media, marketing and customer experience management company. This role covers ownership of the data moving from clients’ source systems into our data science capability, and providing output to clients’ decision systems. The raw data that flows in is used to build and train the models that predict future behaviour. The models, or the scored outcomes, are shared with clients to help them make business decisions. A typical day consists of getting business requirements from our partners and clients across Dentsu’s various agencies. I usually have meetings with the client’s IT team to understand the source systems and data sources. I spend time understanding the data models and setting up the infrastructure required to build automated pipelines to power the data science engine. AIM: MLOps is on the rise. As an industry insider, what is the ground reality? Is the hype real? Nikhil: I believe the hype is real. There is increasing demand for people who have experience in model lifecycle management, model deployment and so on. This is slightly different from the need for data scientists or data engineers; both are still required for full analytics capability in a team. At dentsu, we prefer anyone with MLOps or AI product exposure; data science knowledge is not a must. The best candidates come from software engineering backgrounds and have done a master’s or other relevant courses in data science. But they are either too hard to find or costly to hire and retain. Since we are using cloud-native pipelines and services (e.g. Azure MLOps), we tend to treat general cloud experience as a minimum and later train people on specific services. AIM: Why should companies invest more in MLOps?
Nikhil: Large tech firms have used data science and its various techniques to learn about consumer behaviour for a long time. They have optimised their recommendation engines, bundled products together, improved targeting for the right customers, increased the basket size and so on. They had the budget to dedicate resources to research and to partnerships with academic institutes that focused heavily on statistical knowledge and theory. They also had a significant engineering function to build the infrastructure and tooling required to build on research outcomes. Smaller or business-focused firms don’t have this luxury. There is a big task list on any data science project, ranging from data acquisition and data ingestion, to determining or starting with initial algorithms, testing multiple variants including tuning the model and hyperparameters, preparing the datasets for each experiment, and validating and comparing the outputs. Finally, once we get the best possible trained model, the engineering task is to deploy the model to score or predict on live data to improve business functions. These are very labour-intensive tasks, prone to manual errors. MLOps helps automate most of these functions to free up developer resources. Using best practices in MLOps, a company can save money and keep costlier data science resources focused on prediction and similar work. AIM: Is MLOps the beginning of the end for the data-scientist-as-a-career hype? Nikhil: It is very difficult to answer this question as a yes or no, but I think the roles and responsibilities are changing. There are not enough data scientists. Unfortunately, most data science resources spend too much time on data preparation work. There is also a big disconnect between what a junior data scientist wants to do vs what a company expects them to do.
Many data engineering tasks, like exposing the models as an API or feeding the outputs to existing IT systems, have started to creep into their task lists. Given that many data scientists do not have much experience in general programming, this becomes difficult to manage. MLOps is now splitting off from the data scientist role, focusing more on the productionising part of data science to support business decisions. We will still need data scientists to describe the outcome of each algorithm and the reasoning behind it. That is true even when AutoML is used to run the experiments by trying most of the possible algorithms and variants on a given dataset. We will also need data scientists to tackle an emerging field called “Explainable AI”. Both businesses and governing bodies can no longer rely on AI (being a black box) to decide an outcome that might directly impact a person. They want to know exactly why a decision was made and what data contributed to it. As data science has gained focus, it has raised questions about choosing the correct dataset, something that AutoML can’t do. Too many examples out there show clear bias against certain races or genders because less diverse data was used to train the models, and the responsibility lies with people rather than the machines. AutoML helps execute experiments at scale, and MLOps helps you build data pipelines around the models. While this might mean we won’t need as many data scientists as before, the science part of data science will still be left to experts in the field.
MLOps is now splitting from the data scientist role, where it focuses more on productionising part of the data science to support the business decisions.
["AI Features"]
["Interviews and Discussions", "MLOps"]
Ram Sagar
2021-04-20T12:00:00
2021
1,865
["data science", "machine learning", "AI", "ML", "MLOps", "RAG", "Aim", "FastAPI", "analytics", "Azure ML", "Interviews and Discussions"]
["AI", "machine learning", "ML", "data science", "analytics", "MLOps", "Aim", "Azure ML", "FastAPI", "RAG"]
https://analyticsindiamag.com/ai-features/mlops-an-insiders-perspective-interview-with-nikhil-dhawan/
3
10
4
true
true
true
10,171,515
Wipro Boosts Regional Commitment with New HQ in Riyadh
Wipro Limited moved its Middle East regional headquarters from Al Khobar to Riyadh, Saudi Arabia, on Monday. The new office was officially opened during a ceremony attended by Wipro’s leadership team, employees, and customers. The shift strengthens Wipro’s footprint in Saudi Arabia, where the company already operates offices in Riyadh, Al Khobar, Jeddah, and Jubail. “This also underscores the attractiveness of Saudi Arabia’s digital business environment. We value the company’s investment in developing national competencies, in line with the objectives of Saudi Vision 2030 and enhancing the Kingdom’s position as a global technology hub,” Mohammed Robayan, deputy minister for technology, Saudi Arabia, said. Wipro has also signed an MoU with Prince Mohammad bin Fahd University (PMU) to set up a centre of excellence in Riyadh. The centre will focus on training Saudi students in advanced technologies, giving them both academic knowledge and hands-on experience with Wipro’s resources. “This strategic move, combined with our ongoing involvement in supporting the goals of the Kingdom, aligns with our vision of driving sustained growth and a future-ready workforce in the region,” Vinay Firake, CEO of Asia Pacific, India, Middle East & Africa at Wipro, added. He also noted that Wipro’s regional push will be supported by the recent appointment of Mohamed Mousa as managing director for Wipro Middle East, who will be based at the new Riyadh headquarters. Wipro’s latest expansion highlights its ongoing investment in Saudi Arabia’s digital future and in building a skilled local workforce. The company has been operating in the Middle East for over two decades, employing thousands of professionals and supporting local talent. Earlier this year, Etihad Airways chose Wipro to lead a major technology modernisation project. The five-year contract aims to enhance operational efficiency and customer experience.
The shift strengthens Wipro’s footprint in Saudi Arabia, where the company already operates offices in Riyadh, Al Khobar, Jeddah, and Jubail.
["AI News"]
["Wipro"]
Shalini Mondal
2025-06-10T13:20:51
2025
289
["Go", "Wipro", "programming_languages:R", "AI", "programming_languages:Go", "Git", "Aim", "R"]
["AI", "Aim", "R", "Go", "Git", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/wipro-boosts-regional-commitment-with-new-hq-in-riyadh/
2
7
1
false
false
false
10,006,070
Why Is Differential Privacy Important For A Data-centric Organisation
For the first time in its 200-year-old history, the US Census Bureau has announced that this year’s survey will implement new standards to safeguard citizen data. The government body is implementing differential privacy for this. Differential privacy as a concept has been around since the early 2000s. Lately, differential privacy has seen great demand thanks to the increased adoption of data science techniques by organisations. The technology was also named in the 2020 Gartner Hype Cycle. With data comes responsibility. Protecting the privacy of data providers is crucial. Be it a population census or customer feedback on app stores, no company should be able to trace the source easily. Differential privacy offers a mathematical framework to anonymise data. It is a high-assurance, analytic means of ensuring that use cases like these are addressed in a privacy-preserving manner. Differential privacy aims to ensure that, regardless of whether an individual record is included in the data or not, a query on the data returns approximately the same result. Therefore, we need to know what the maximum impact of an individual record could be. This is determined by the highest and lowest possible values in the dataset and is referred to as the sensitivity of the data. The higher the sensitivity, the more noise needs to be applied. According to Microsoft, to protect personally identifiable or confidential information within datasets, differential privacy utilises two mechanisms: some statistical “noise” is added to each result to mask the contribution of individual data points, and the information revealed by each query is calculated and deducted from an overall privacy budget to halt additional queries. Here, noise works like a pixelated picture: it protects privacy, but there is a tradeoff with the accuracy of the algorithms. Top 10 Players developing Differential Privacy tools.
(Source: linknovate) Companies like Google have even rolled out differential privacy libraries. Let’s take a look at a few of these tools.
Microsoft’s OpenDP
OpenDP is a suite of open-source tools developed by Microsoft and Harvard. It was developed to provide privacy-protective analysis of sensitive personal data. The project is focused on algorithms for generating differentially private statistical releases. With OpenDP, the team mainly targets applications in government and institutions where the sensitivity of the data being shared must be safeguarded to enable seamless scientific research. Try OpenDP here.
IBM’s Diffprivlib
Developed by IBM, Diffprivlib is a general-purpose library. Developers can experiment, investigate and develop DP applications using it. There are a few key features of this library which, IBM claims, are absent in other popular ones: for accountancy, it can limit privacy spend across multiple operations; it offers a comprehensive collection of the basic building blocks of differential privacy, used to build new tools and applications; and for machine learning it offers pre-processing, classification, regression and clustering. Try Diffprivlib here.
Google’s Differential Privacy library
Google released its open-source library last year to meet the needs of developers. Among its key features: it supports the most common data science operations and can be used to compute counts, sums, averages, medians and percentiles, which are widely used in differential privacy; it has an extensible ‘Stochastic Differential Privacy Model Checker library’ to help prevent mistakes; and it comes with a PostgreSQL extension and a quick-start guide. Developers can add other functionality such as additional mechanisms, aggregation functions, or privacy budget management. Try it here.
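All three libraries implement variants of the noise mechanism described above. As a rough illustration of the underlying idea (not taken from any of these libraries’ APIs), here is a minimal Laplace mechanism in plain Python: the noise scale grows with the sensitivity of the data and shrinks as the privacy parameter epsilon grows.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return `true_value` plus Laplace noise with scale sensitivity/epsilon.

    Higher sensitivity (or smaller epsilon) means more noise, exactly
    as described above.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    # Inverse-CDF sampling of the Laplace distribution
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A count query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
rng = random.Random(42)
noisy_count = laplace_mechanism(1000.0, sensitivity=1.0, epsilon=0.1, rng=rng)
```

Each such query would also deduct its epsilon from the overall privacy budget; the libraries above handle that accounting for you.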
Privacy is a cornerstone of data sharing, and differential privacy provides a definitive guide to navigate through the digital realms. Apple too employs differential privacy techniques while collecting feedback from its users in a safe way.
For the first time in its 200-year-old history, the US Census Bureau has announced that this year’s survey will implement new standards to safeguard citizen data. The government body is implementing differential privacy for this.  Differential privacy as a concept has been around since the early 2000s. Lately, the use of differential privacy has seen […]
["AI Features"]
["data management providers", "differential privacy", "IBM"]
Ram Sagar
2020-09-04T10:00:43
2020
610
["PostgreSQL", "data science", "machine learning", "AI", "ML", "RAG", "Aim", "IBM", "differential privacy", "SQL", "data management providers", "R"]
["AI", "machine learning", "ML", "data science", "Aim", "differential privacy", "RAG", "PostgreSQL", "R", "SQL"]
https://analyticsindiamag.com/ai-features/why-is-differential-privacy-important-for-data-organisation/
3
10
0
true
true
true
10,170,046
AI is Changing Healthcare, But Can India Protect Patient Privacy?
India’s digital infrastructure in the healthcare industry has seen rapid technological advancements, reimagining good health along with equitable and efficient care. As per the World Economic Forum (WEF), AI has transformed the pharmaceutical research industry, driving 30% of new drug discoveries by 2025. According to the Global Outlook and Forecast 2025-2030, the AI in drug discovery market was valued at $1.72 billion in 2024 and is projected to reach $8.53 billion by 2030, a compound annual growth rate (CAGR) of 30.59%. Moreover, companies like IBM Watson, NVIDIA, and Google DeepMind are collaborating with pharmaceutical organisations to support AI-driven drug discovery. In another area of health tech, AI is digitising patient records, and decentralised AI models are helping improve diagnostic accuracy while safeguarding patients’ right to privacy. During an interaction with AIM, Rajan Kashyap, assistant professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), pointed out that government initiatives such as increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.
Growth of Healthtech
Kashyap believes that the country is making notable strides in the healthcare technology field through several initiatives, including the Genome India project, the Consortium on Vulnerability to Externalising Disorders and Addictions (cVEDA), and the Ayushman Bharat Digital Mission, which aim to improve understanding of India’s clinical health. He pointed out that work being done in areas like genomics, big data analytics, AI, and machine learning (ML) is actively redefining clinical outcomes and operational efficiency. Kashyap highlighted Bengaluru-based startup BrainSightAI, which is innovating diagnostics for neurological disorders.
Earlier this year, it raised $5 million in a Pre-Series A round, which it plans to use to expand to tier 1 and 2 cities in India and obtain FDA certification for access to the US and allied markets. Moreover, Niramai Health Analytics offers AI-powered breast cancer screening tools. Its Thermalytix device offers an affordable, portable and radiation-free method of detecting breast abnormalities, and works for women of all ages and breast densities. Meanwhile, Biocon, one of India’s largest biopharmaceutical companies, uses AI in biosimilar development, employing predictive modelling to understand the complexities of biologic behaviours, reduce formulation failures and speed up regulatory compliance. The company also introduced Semglee, the world’s first interchangeable biosimilar insulin for diabetes, and has expanded patient access through partnerships with Eris Lifesciences. The increasing costs of research and development in drug discovery have forced pharmaceutical companies to welcome innovative solutions, and AI has been a powerful enabler.
Is Sensitive Information Handled with Care?
While innovations are great for technology development in the healthcare industry, there are growing concerns about data security within healthcare organisations. Netskope Threat Labs reported that doctors have been consistently uploading sensitive patient information to unauthorised websites and cloud services like ChatGPT and Gemini. Kashyap believes patient confidentiality is often overlooked in the healthcare industry. “During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed,” he said. The risk of unintentionally exposing protected health information through AI platforms is high.
AI systems are vulnerable to data breaches, hacking, and the potential for re-identification even with anonymised data. According to the National Institutes of Health in the US, the risk increases with the growing use of cloud-based AI models, as healthcare organisations relocate patient data beyond traditional protective measures into these cloud-based solutions. Kashyap also warns that while anonymisation reduces risks, it does not fully protect against hacking or data breaches. He highlighted that research shows brain scans like MRIs can disclose personal details about a patient, and with further analysis, even sensitive information like financial data could be revealed. “I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he added.
Government Initiatives and the Healthcare Industry
According to Netskope’s report, organisations should deploy approved generative AI applications to centralise their use in a monitored and secured manner. This approach aims to reduce reliance on personal accounts and “shadow AI”. Although healthcare workers use personal GenAI accounts, the number has decreased from 87% to 71% over the past year as organisations adopt approved GenAI solutions. Moreover, the report calls for data loss prevention policies that define the type of data shared on these platforms, adding another layer of security for healthcare employees. “India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context,” Kashyap said.
He suggested that the government must prioritise developing interdisciplinary med-tech programs, particularly those integrating AI with medical education. “Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.
Healthcare employees have reportedly been uploading sensitive patient information to unauthorised websites and cloud services.
["AI Features"]
["ai in health sector", "ai in medicine", "healthtech", "patient privacy"]
Smruthi Nadig
2025-05-15T17:17:00
2025
886
["Go", "ChatGPT", "healthtech", "machine learning", "GenAI", "AI", "ML", "ai in health sector", "Aim", "ai in medicine", "analytics", "generative AI", "patient privacy", "R"]
["AI", "machine learning", "ML", "analytics", "generative AI", "GenAI", "ChatGPT", "Aim", "R", "Go"]
https://analyticsindiamag.com/ai-features/ai-is-changing-healthcare-but-can-india-protect-patient-privacy/
3
10
3
false
false
true
10,085,417
Self-driving Flights of Fancy Settle Down at CES 2023
After a dampened couple of years, the Consumer Technology Association returned to the Las Vegas Convention Centre in its archetypal form, with more than 115,000 attendees lining up to see offerings from more than 3,200 exhibitors at the show. As far as conferences go, CES 2023 fared well, but there was a marked difference in the mood around the autonomous driving space from a few years ago.
More practical, less fanciful
With time, CES had turned into a prime setting for car manufacturers to show what they’ve got. The excitement and hyperfocus surrounding autonomous driving simply weren’t the same now. Predictions had flown around that self-driving cars and cabs would be commonplace by now. Flying cars had even seemed like a closer reality than ever before. Now that the companies were making smaller promises, the products didn’t draw as many gasps from the audience and the chatter was more muted. The buzzwords had changed to become far less cool: sustainable, realistic and practical. Gary Shapiro, the president of the Consumer Technology Association (CTA), admitted to the change, saying, “There’s no question that there’s been a shift. The Biden administration has focused more on electric vehicles than they have on autonomous vehicles”. Several transportation vehicles displayed at the show also reflected a more grounded ethos: Stellantis NV showed an electric Ram pickup truck concept that would rival Ford’s F-150 Lightning pickup truck or even the Tesla Cybertruck. Electric truck maker Lordstown Motors showed an Endurance plug-in pickup truck. Volkswagen brought its electric sedan ID.7 to the event this year, while Volvo introduced the EX90, a seven-passenger electric SUV. When it came to autonomous vehicles, the stars of the show weren’t cars at all; the John Deere autonomous tractor instead emerged, promising utility for farmers by taking automated farming to the next level.
The tractor used sensors and robotics to help farmers plant seeds more precisely and fertilise their crops the moment they were planted. If we expected fully autonomous cars, CES gave us a fully autonomous stroller. The Ella stroller launched by Canadian company GluxKind was the one that grabbed the headlines. The stroller, however, isn’t meant to move independently with a baby inside; the automatic movement is rather meant for when your baby doesn’t want to ride and is being carried by you. The stroller may not have path-breaking technology, but its price is an eye-watering USD 3,300.
High prices, low demand, slow tech
The reckoning with ground realities for the self-driving segment wasn’t necessarily a surprise at CES. Towards the end of October last year, Argo AI, the autonomous vehicle technology company backed by Ford and Volkswagen Group, shut down unceremoniously. Given that cars were still far from Level 5, or complete autonomy, Ford had decided to shift strategy for its self-driving division, moving to profit instead of spending more money searching for potential. Ford had taken a USD 2.7 billion writedown after Argo shut down, shifting to semi-autonomous features like its BlueCruise hands-free driving system. Industry market leaders like Tesla are also suffering from stiff competition and high prices even as the technology isn’t advancing. Last week, Tesla slashed its US prices by between 6% and 20% for its Model 3 sedan, its Model Y SUV and a handful of other performance models. Tesla also lowered prices for the European market. Tesla’s price cuts sparked a price war in China for EVs, after which Chinese EV manufacturer XPeng made major price cuts. Other competitors like Mercedes-Benz also chopped prices. The lesson this segment had eventually learned was that the biggest EV manufacturers like Tesla were just at Level 2 autonomy and were too expensive for a market where demand was dwindling due to a recession.
Tesla also had a host of other problems besides the ones that Musk is dealing with while running operations at Twitter. Just today, Tesla’s director of Autopilot, Ashok Elluswamy, reportedly stated in a court deposition that a much-hyped video released by the company in 2016 falsely represented that the car was self-driven. Elluswamy stated that the video was ‘intended to portray what was possible to build the system’ and not what the car actually did. Musk had then tweeted a link to the video saying that a ‘Tesla drives itself’. The deposition is part of a full investigation being conducted by the National Highway Traffic Safety Administration (NHTSA) into the safety of Tesla’s celebrated FSD technology. But it’s not all doom and gloom, with Mercedes-Benz becoming the first automaker to receive certification for Level 3 self-driving in the US. It is, however, no mystery that companies just want a break from the self-driving dream.
The reckoning with ground realities for the self-driving segment wasn’t necessarily a surprise at CES.
["AI Features"]
[]
Poulomi Chatterjee
2023-01-20T10:00:00
2023
782
["Go", "API", "programming_languages:R", "AI", "data_tools:Spark", "programming_languages:Go", "BERT", "Ray", "llm_models:BERT", "R"]
["AI", "Ray", "R", "Go", "API", "BERT", "llm_models:BERT", "programming_languages:R", "programming_languages:Go", "data_tools:Spark"]
https://analyticsindiamag.com/ai-features/self-driving-flights-of-fancy-settle-down-at-ces-2023/
3
10
2
false
false
false
37,069
Reliance To Buy Mumbai-Based Haptik To Supercharge Its AI Services
Reliance Jio is poised to create more ripples in the artificial intelligence sector with the acquisition of Haptik. Mumbai-based Haptik, founded in 2013 by Aakrit Vaish and Swapan Rajdev, deploys state-of-the-art machine learning algorithms to perform NLP tasks. Haptik is funded by Times Internet and Kalaari Capital. Its conversational AI is built on more than 1 billion interactions, making its chatbots some of the best in the business, with clients like Samsung, ICICI Bank and the Tata Group, to name a few. Reliance Jio, on the other hand, is already pouring resources into its AI team to build recommendation systems for Jio TV and other services. With the acquisition of Haptik, it plans to go big in the natural language processing domain. NLP has garnered a lot of interest of late because of its innumerable applications, ranging from speech assistants to sentiment analysis to recommendation engines. Reliance is leaving no stone unturned, having already eyed the e-commerce space. Reliance has been on a shopping spree this year, having already acquired Mumbai-based hyperlocal delivery platform Grab. So far, the Indian e-commerce and online entertainment space has been dominated by the likes of Amazon and Netflix. Now Reliance plans to grab its own huge chunk of this lucrative market. Since Reliance doesn’t have to bother with financial or funding hiccups, it is extremely smart of the Jio team to equip their AI arsenal with companies like Haptik. This acquisition means they can now avail the services of a proven talent pool with domain-specific knowledge, and they don’t have to reinvent the wheel. According to Inc42, Reliance and the founders of Haptik have closed a deal of ₹200 crore, and the transaction is expected to be completed by the end of March 2019. With this, Reliance’s plan to reign over India’s digital turf is one step closer.
Reliance Jio is poised to create more ripples in the artificial intelligence sector, with the acquisition of Haptik. Mumbai-based Haptik, founded in 2013 by Aakrit Vaish and Swapan Rajdev, deploys state-of-the-art machine learning algorithms to perform NLP tasks. Haptik is funded by Times Internet and Kalaari Capital. Their Conversational AI is built using more than […]
["AI News"]
["Haptik", "Mumbai", "NLP", "reliance jio"]
Ram Sagar
2019-03-29T11:01:29
2019
306
["Mumbai", "Go", "API", "artificial intelligence", "machine learning", "AI", "chatbots", "recommendation systems", "Git", "NLP", "reliance jio", "R", "Haptik"]
["AI", "artificial intelligence", "machine learning", "NLP", "chatbots", "recommendation systems", "R", "Go", "Git", "API"]
https://analyticsindiamag.com/ai-news-updates/reliance-to-buy-mumbai-based-haptik-to-supercharge-its-ai-services/
3
10
2
false
false
false
10,169,496
Apple Working on New ‘Specialised’ Chips: Report
Apple is working on new chips for its upcoming smart wearable glasses, ‘more powerful’ Macs, and AI servers, according to a Bloomberg report. This signals progress in the company’s effort to compete with Meta’s Ray-Ban smart wearable glasses. The Cupertino tech giant is likely to begin mass production of these chips by the end of next year, or in 2027. The report further indicates that the processor designated for the glasses is derived from those used in the Apple Watch, requiring less energy than those found in the iPhone, iPad, and Mac. Furthermore, the AI server chips will be designed to process tasks related to Apple Intelligence, with the results then sent to users’ devices. This marks yet another attempt to improve Apple Intelligence, which has been subjected to significant criticism over the last few months. Eddy Cue, Apple’s senior VP of services, revealed that the company plans to integrate AI search in Safari, powered by either Perplexity, Google or OpenAI. Sundar Pichai also revealed that Google is planning to strike a deal with Apple to integrate Gemini into Apple Intelligence this year. CEO Tim Cook, in an interaction with Pichai, said that ‘more third-party AI models’ would ship to Apple Intelligence this year. This will add to the company’s efforts to build powerful, in-house hardware systems, building on the success of the Arm-architecture M-series chips released a few years ago, which were hailed for exceptional performance and power efficiency. The company recently unveiled its first ‘custom-designed’ modem chip as part of the C1 subsystem integrated in the iPhone 16e. The move is set to reduce the company’s reliance on Qualcomm. Recent reports also claimed that Apple plans to integrate cameras into the Apple Watch and AirPods. The cameras would help the devices visually observe the user’s environment and use AI to deliver appropriate information.
These chips will be integrated into the company’s upcoming wearable smart glasses and AI servers.
["AI News"]
["AI (Artificial Intelligence)", "Apple"]
Supreeth Koundinya
2025-05-09T11:35:54
2025
308
["Go", "OpenAI", "AI", "programming_languages:R", "Apple", "programming_languages:Go", "Ray", "Aim", "llm_models:Gemini", "R", "AI (Artificial Intelligence)"]
["AI", "OpenAI", "Aim", "Ray", "R", "Go", "llm_models:Gemini", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/apple-working-on-new-specialised-chips-report/
3
9
1
false
false
false
10,140,825
Alibaba Cloud Launches its New Qwen2.5 Series
On 11 November, Alibaba Cloud released a new version of its open foundation Qwen model: Qwen2.5-Coder-32B-Instruct. The company refers to it as a ‘family of coder models’ and not just a model that can code. The model is a significant upgrade from its predecessor, CodeQwen1.5. Ahsen Khaliq, the ML growth lead at Hugging Face, took to LinkedIn to say he had created a tic-tac-toe game with it, noting that the experience was similar to Claude’s Artifacts. Alibaba, in its official blog post, said this series will help promote the development of open code LLMs. Qwen2.5-Coder, the code-specific model series based on Qwen2.5, was recently uploaded to Hugging Face. The Qwen2.5-Coder family spans six model sizes: 0.5B, 1.5B, 3B, 7B, 14B, and 32B parameters. While all sizes share the same architecture in terms of head size, they differ in several other key aspects. Binyuan Hui, core maintainer at Alibaba Qwen, took to X to share an interesting game he created with Qwen2.5-Coder. I created something interesting with Qwen2.5-Coder-32B…I didn’t write a single line of code; it did everything on its own… pic.twitter.com/hBRG6ltLQF— Binyuan Hui (@huybery) November 11, 2024
Training and Data
According to the official research published earlier this year, high-quality, large-scale, and diverse data is necessary for building pre-trained models. With this in mind, the Qwen team at Alibaba Group developed a dataset called Qwen2.5-Coder-Data, which includes five primary data types: source code data, text-code grounding data, synthetic data, math data, and text data. Following the file-level pretraining, the company proceeded to repo-level pretraining to improve the model’s long-context capabilities. In this phase, the context length is expanded from 8,192 tokens to 32,768 tokens, and the base frequency of RoPE is adjusted from 10,000 to 1,000,000.
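The repo-level pretraining step above changes two knobs: the context window and the RoPE base frequency. A toy sketch of why raising the base helps, assuming the standard RoPE formulation (the dimensions below are illustrative, not Qwen’s actual configuration):

```python
def rope_inv_freq(dim, base):
    """Per-pair inverse frequencies used by standard RoPE:
    inv_freq[i] = base ** (-2i / dim), for i = 0 .. dim/2 - 1."""
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

# Toy head dimension of 8; real models use much larger dimensions.
short_ctx = rope_inv_freq(dim=8, base=10_000.0)
long_ctx = rope_inv_freq(dim=8, base=1_000_000.0)
# Raising the base lowers every non-trivial frequency, stretching the
# rotary wavelengths so positions up to 32,768 tokens stay distinguishable.
```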
Results
The Qwen2.5-Coder series has set a new standard in open-source coding models, particularly with its flagship, Qwen2.5-Coder-32B-Instruct. This model excels in code generation, matching the capabilities of GPT-4o on benchmarks like EvalPlus, LiveCodeBench, and BigCodeBench. (Source: Qwen official blog) Beyond generating code, Qwen2.5-Coder-32B-Instruct is adept at code repair, helping developers identify and fix errors efficiently. On the Aider benchmark, it achieved a score of 73.7, comparable to GPT-4o’s performance. The model’s strength lies in its understanding of code execution, enabling accurate predictions of inputs and outputs. It supports over 40 programming languages, scoring 65.9 on the McEval benchmark, and leads in code repair tasks, scoring 75.2 on the MdEval benchmark. Additionally, to see how well Qwen2.5-Coder-32B-Instruct matches human coding preferences, a test called Code Arena was conducted. Using a simple comparison with GPT-4o, the team measured which model performed better in each example. The results show that Qwen2.5-Coder-32B-Instruct is strongly aligned with human preferences. (Source: Qwen official blog)
Qwen2.5-Coder with Cursor
Even though code assistants are now widely utilised, most still depend on closed-source models. Alibaba and the Qwen team aim for Qwen2.5-Coder to offer developers a robust, open-source alternative. Below is an example of Qwen2.5-Coder in action within Cursor.
The results show that Qwen2.5-Coder-32B-Instruct is strongly aligned with human preferences.
["AI News"]
["Alibaba"]
Sanjana Gupta
2024-11-12T12:26:48
2024
491
["Hugging Face", "TPU", "synthetic data", "AI", "Alibaba", "GPT-4o", "ML", "GPT", "Aim", "R", "llm_models:GPT"]
["AI", "ML", "GPT-4o", "Aim", "Hugging Face", "TPU", "R", "GPT", "synthetic data", "llm_models:GPT"]
https://analyticsindiamag.com/ai-news-updates/alibaba-cloud-launches-its-new-qwen2-5-series/
4
10
0
false
true
true
10,061,374
Icertis CTO Monish Darda on how metaverse will transform SaaS in India
Icertis is the world's leading contract intelligence management company and India's second most valued SaaS unicorn. Its AI-powered Icertis Contract Intelligence platform manages global contracts for big firms. Analytics India Magazine learnt more about the start-up and its contract solutions from Monish Darda, CTO and Co-Founder. Darda's entrepreneurial journey started in 2005, leading up to the creation of Icertis in 2009 with his co-founder Samir Bodas. Icertis is his seventh start-up. His experience includes going through a couple of acquisitions and working in start-up environments building distributed computing for enterprise customers. In addition, he manages AI/blockchain and the technology choices at Icertis.

AIM: Tell us about Icertis' journey.

Icertis is a contract lifecycle management platform. Our AI-powered, analyst-validated Icertis Contract Intelligence (ICI) platform turns contracts from static documents into strategic advantage by structuring and connecting the critical contract information that defines how an organisation runs. When Samir and I started Icertis, we had no pain points of our own to work with. Both our companies had been recently acquired, and we wanted to take our learnings and build a consequential company. We discussed the values and culture of the organisation we wanted to create, and that was our starting point. In 2009, the cloud was just coming up; AWS and Salesforce were starting out, and Azure did not exist yet. So we decided to build our Applied Cloud Framework, a platform that allows us to build apps on top of it and on top of the cloud.

AIM: What problem does Icertis solve?

Samir used to work at Microsoft, and an ex-colleague of his once mentioned having a problem with their contract management system. So we essentially started with the pain point of a particular customer. We also realised there were several CLM companies in the market, but none had a cloud strategy. So we were the first with an API and a cloud service.
People have the notion of CLM as creating a contract and storing it, but that is just 20-30% of the journey. What happens after the contract is created is important, because it determines whether the contract is playing out in real life. We could see the power of the cloud here. And that was a pain point no customer had articulated, but we identified it in the market. No process in a business can happen without a contract, be it transactions between employer and employee, company and client, or even with vendors. Our definition of contract management was ensuring every rupee that goes out of the company is in sync with a contract; it reduces risk.

AIM: Tell us about your AI-powered CLM solutions. How do you customise them according to different use cases? Can you illustrate through use-case examples?

Google, Microsoft, Accenture, Wipro, Johnson and Johnson, Cognizant, Boeing, Daimler and more companies use Icertis to run their business, not just mind their contracts. These are across various sectors. The 2000s saw the CLM revolution, when companies started treating contracts as being as important as customers or manufacturing. When it comes to customising solutions, every customer and business is different. Customisations are complex, but we have not built a product; we have built a platform. So our answer is not customisation, because that stops the upgrade path and is not beneficial to our customers; we instead build solutions on top of our platform. We don't specialise; we generalise every new requirement. It is anathema in the company to say customisation; we use the phrase technical configuration instead. The effect for the customer is the same, but internally it is different. It helps us build configurations that are generalisable. For instance, one of our largest retail customers had the problem of short contracts.
To illustrate the problem: suppose that during the Christmas sale they pair up with Samsung under a complex rate and discount structure, showing 50 ads a day for 50,000 impressions, with Samsung paying x money for 10,000 sales through these ads. The rate structure increases with more sales. These are 15-day contracts that end once Christmas is done; they don't last long but are worth $100 million. So we built a rebates and discounts application on top of our platform for them. And once we built it, we realised this application could be used well in pharma. I remember the project sponsor actually sending us a note saying this was the first Christmas he was sleeping on the 26th. Applications like these are very heartening.

AIM: Could you elaborate on the tech stack powering your platform?

Our tech stack is pretty straightforward. We use Azure across the board. Additionally, we use SQL Server, SQL Server on VMs, elastic pools, cloud scale and more. For code, we use React and Python. Our code is written on top of Java. We use Azure Kubernetes for Docker containerisation.

AIM: How is the metaverse architecting the future of the Indian SaaS market?

Virtual and physical worlds are already colliding. So many business opportunities are already arising because of the metaverse. For instance, Wendy's has an event where people go there virtually and get tokens that can be used to buy real burgers. There are three fundamental building blocks: game players, hardware/software providing graphics power, and integrators. Even here, every dollar in and out of the virtual world has a contract behind it. Currency changes between the physical and virtual worlds are important. All commerce is going to be driven by contracts. In fact, one of the fundamental building blocks of blockchain is smart contracts. The metaverse will change the world as we know it; the pandemic has hastened it and shaved off eight years of transformation.
When it comes to CLM solutions for the metaverse, I guess it's still commerce, so we want to transform its foundation. It is probably going to be the same. However, the experience of signing, integration and negotiation will be different.

AIM: How do you plan to use your recent Series F funding of $80 million?

We've been entrepreneurs for quite some time and have gone through many funding rounds. For us, we take the companies we respect and identify their growth rates at our size and revenue. We aim to be in that band, not above or below; we try not to break that and grow too fast. Most of the money goes into sales, marketing, and research and development. These are the major areas for any fast-growing enterprise SaaS company. The other part is our aim to grow geographically. We are already strong in the US, Europe and APAC, but we don't have investments in Japan or China. That is the third bucket where some money will be going. Our main focus is on creating solutions and selling them.
When it comes to CLM solutions for Metaverse, it is probably going to be the same. However, the experience of signing, integration and negotiation will be different.
["AI Features"]
[]
Avi Gopani
2022-02-23T14:00:00
2022
1,099
["AWS", "AI", "ML", "distributed computing", "docker", "Python", "Aim", "analytics", "Azure", "kubernetes"]
["AI", "ML", "analytics", "Aim", "AWS", "Azure", "kubernetes", "docker", "distributed computing", "Python"]
https://analyticsindiamag.com/ai-features/icertis-cto-monish-darda-on-how-metaverse-will-transform-saas-in-india/
3
10
3
false
true
false
34,564
UK’s Talent Investor Company Entrepreneur First Enters India With $25M To Promote Deep-Tech Startups
Eyeing India's growing startup ecosystem, UK-based global talent investor Entrepreneur First has entered the Indian market with an investment of $25 million to promote deep tech startups in the country. The company, which helps individuals build globally important companies, announced that it has established a cohort of 50 founders, in an attempt to help them find co-founders, work on ideas and build startups. The founders will also receive $2,000 as a monthly stipend during the first three months while finding a co-founder and developing startup ideas. The final list of cohort founders was selected from 900 applicants after a round of interviews and shortlisting. Of the chosen candidates, 30% have PhDs across various science and technology streams like aerospace, neuroscience, machine learning and electrical engineering, the company said in a statement. Speaking about the development, Esha Tiwary, General Manager of Entrepreneur First, India, said, "We have remarkable talent available in India. Through Entrepreneur First, we are trying to change the narrative for ambitious people in India who have thus far hesitated to start companies due to several reasons such as lack of a co-founder, no big idea, job security." Apart from providing the stipend, Entrepreneur First will also invest in the selected companies, mentor the startups, and help them pitch their startup ideas to the company's investor community to raise seed funding. Established in 2011 in London, Entrepreneur First has created over 200 companies globally and has helped its incubated companies raise $350 million from investors. EF has an alumni base of 1,200 people across London, Paris, Berlin, Hong Kong and Singapore. India is its sixth location.
Eyeing India’s growing startup ecosystem, UK-based global talent investor, Entrepreneur First has entered the Indian market with an investment of $25 million to promote deep tech startups in the country. The company which helps individuals build globally important company announced that it has established a cohort of  50 founders, in an attempt to help them […]
["AI News"]
["Startups"]
Akshaya Asokan
2019-02-06T10:44:09
2019
271
["funding", "machine learning", "programming_languages:R", "AI", "Startups", "R", "startup"]
["AI", "machine learning", "R", "startup", "funding", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/uks-talent-investor-company-entrepreneur-first-enters-india-with-25m-to-promote-deep-tech-startups/
2
6
2
false
false
false
10,058,779
AI platform Pixis raises USD 100 Mn in Series C from SoftBank
Pixis, formerly known as Pyxis One, has raised USD 100 million in a Series C round led by SoftBank Vision Fund. General Atlantic, Celesta Capital, Premji Invest and Chiratae Ventures participated in the round. The California-based company claims to be the world's only contextual codeless AI infrastructure provider for complete marketing optimisation. Pixis is in the process of building a no-code solution with a deployment time of eight seconds. The solution includes options to tweak AI marketing models, corral a team of data scientists to develop models, or press a button to deploy the products and plug-ins with just 30 minutes of training. The company currently has 50 AI models of self-evolving neural networks for AI-based marketing campaign optimisation and is planning to scale up to 200 more models in the next six months. Pixis had raised $17 million in a Series B round in September last year. Since 2018, the company has seen 600 percent revenue growth and added up to 100 customers in the mid-to-large enterprise range. The fresh funds will help the company scale its AI platforms and plug-ins and expand its footprint in North America, Europe and APAC. Last year, the company launched its solution for B2C and DTC marketing. Now, it is planning to have a solution for B2B and software companies by the end of the first quarter this year. Shubham A Mishra, Vrushali Prasade, and Hari Valiyath founded Pixis in 2018.
Pixis is in the process of building a no-code solution with a deployment time of eight seconds.
["AI News"]
[]
Shraddha Goled
2022-01-19T15:16:06
2022
241
["API", "programming_languages:R", "AI", "neural network", "Aim", "R"]
["AI", "neural network", "Aim", "R", "API", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/ai-platform-pixis-raises-usd-100-mn-in-series-c-from-softbank/
3
6
2
false
false
false
10,004,930
Yellowbrick Hands-On Guide &#8211; A Python Tool for Machine Learning Visualizations
Yellowbrick is mainly designed to visualize and diagnose machine learning models. It is a visualization suite built on top of scikit-learn and Matplotlib. It helps in the model selection process, hyperparameter tuning, and algorithm selection. Yellowbrick exposes its API through a visualizer, which is a scikit-learn estimator; the visualizer learns from data and renders a visualization of the selected model's workflow. These visualizations allow us to draw insights into the model selection process. In this article, we will explore the different types of visualizations provided by Yellowbrick and how we can create them according to our requirements.

Implementation: Yellowbrick is built on scikit-learn and Matplotlib, so we need to install both before installing Yellowbrick. The commands for installing all three libraries are given below:

pip install scikit-learn
pip install matplotlib
pip install yellowbrick

1. Feature Analysis Visualization

We will import the functions defined in Yellowbrick and scikit-learn for model selection as and when required. We will start by visualizing an advertising dataset that contains three features and one target variable, 'Sales'.

a. Loading the dataset

import pandas as pd
df = pd.read_csv('Advertising.csv')
df

b. Defining target and feature variables

x = df[['TV', 'Radio', 'Newspaper']]
y = df['Sales']

c. Visualizing features

from yellowbrick.features import Rank1D
visual = Rank1D()
visual.fit(x, y)
visual.transform(x)
visual.show()

2. Linear Regression Visualization

We will create a linear regression model using scikit-learn and visualize it with Yellowbrick.

a. Creating the model

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)
model = LinearRegression().fit(x_train, y_train)
model_pred = model.predict(x_test)

b. Visualizing the model

from yellowbrick.regressor import PredictionError, ResidualsPlot
visual = PredictionError(model).fit(x_train, y_train)
visual.score(x_test, y_test)
visual.poof()

3. Model Selection Visualization

The model selection visualizer helps us inspect the performance of cross-validation and hyperparameter tuning. Let us visualize feature importance using a random forest classifier and Yellowbrick.

from sklearn.ensemble import RandomForestClassifier
from yellowbrick.model_selection import FeatureImportances
model = RandomForestClassifier()
viz = FeatureImportances(model)
viz.fit(x, y)
viz.show()

Similarly, we can visualize feature importance using logistic regression and Yellowbrick.

from sklearn.linear_model import LogisticRegression
model = LogisticRegression(multi_class="auto", solver="liblinear")
visual = FeatureImportances(model, stack=False, relative=False)
visual.fit(x, y)
visual.show()

4. Textual Data Visualization

Yellowbrick can also help us analyze the properties of textual data. We can read any textual data using the open function and visualize word frequencies using the frequency distribution visualizer.

a. Importing libraries and loading the dataset

from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import FreqDistVisualizer
corpus = open('text.txt', 'r')
vectorizer = CountVectorizer()
docs = vectorizer.fit_transform(corpus)
features = vectorizer.get_feature_names()

b. Visualizing the frequency of features (words)

visualizer = FreqDistVisualizer(features=features, orient='v')
visualizer.fit(docs)
visualizer.show()

5. Anscombe's Quartet

Finally, let us visualize Anscombe's Quartet, a collection of four datasets that have nearly identical summary statistics but look very different when plotted. Anscombe's Quartet clearly illustrates why we need to visualize data and why visualization is important for machine learning.

import yellowbrick as yb
import matplotlib.pyplot as plt
ans = yb.anscombe()
plt.show()

We can clearly see how different these four datasets are despite their similar statistical properties.

Conclusion: In this article, we have learned about Yellowbrick, a visualization library for visualizing machine learning models and algorithms. We saw how to create different visualizations for different purposes using Yellowbrick. This is just an introduction to its capabilities; it has many more helpful features and functions.
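The Anscombe point can also be checked numerically without any plotting library. The short sketch below uses the classic 1973 values for datasets I and II of the quartet to show that near-identical summary statistics can hide very different data:

```python
from statistics import mean, variance

# x values shared by Anscombe's datasets I and II (classic 1973 values)
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

# The summary statistics are practically indistinguishable...
same_mean = abs(mean(y1) - mean(y2)) < 0.01
same_var = abs(variance(y1) - variance(y2)) < 0.01

# ...yet plotted against x, dataset I is noisy-linear while dataset II
# follows a clean parabola, which is exactly why we plot before we trust.
```

Both checks pass, even though the two datasets have entirely different shapes.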
In this article, we will explore different types of visualizations that are provided by Yellowbrick and how we can create them according to our requirements.
["Deep Tech"]
["hands-on", "Machine Learning", "machine vision", "Python", "Python data visualization tools", "python machine learning", "scikit learn", "Visualization"]
Himanshu Sharma
2020-08-17T11:00:52
2020
582
["Go", "hands-on", "scikit-learn", "machine vision", "machine learning", "API", "AI", "programming_languages:R", "R", "RPA", "Machine Learning", "Python", "Matplotlib", "scikit learn", "Visualization", "python machine learning", "Python data visualization tools", "Pandas"]
["AI", "machine learning", "scikit-learn", "Pandas", "Matplotlib", "R", "Go", "API", "RPA", "programming_languages:R"]
https://analyticsindiamag.com/deep-tech/yellowbrick-hands-on-guide-a-python-tool-for-machine-learning-visualizations/
3
10
0
true
true
true
64,816
Top Programming Languages For Blockchain Development
Blockchain has been one of the biggest technology trends of the last few years. Be it cryptocurrency, smart contracts or supply chain tracking applications, there are multiple blockchain use cases in both the enterprise and public blockchain space. The revolution is being led by different developer communities. Developers across big tech companies and startups are building blockchain applications using different programming languages. Here we list the programming languages you can learn to start building blockchain applications.

Solidity

Solidity is an object-oriented, high-level language for creating smart contracts. Smart contracts are programs which dictate the behaviour of accounts within the Ethereum network. Solidity was inspired by C++, Python and JavaScript and is created to leverage the Ethereum Virtual Machine (EVM). With Solidity, developers can build smart contracts for applications such as voting, crowdfunding, blind auctions, and multi-signature wallets on the Ethereum public blockchain.

JavaScript

JavaScript is used everywhere on the web and is greatly popular and widespread. Large companies are using it for speed and security across a wide variety of devices. JavaScript has multiple libraries and frameworks, all the way from jQuery and React to Angular and Node, which have proven worthy for web applications. JavaScript is considered secure as apps built on JavaScript are on average less buggy, which is important for writing blockchain apps where transactions are irreversible.

C++

We know that Bitcoin was originally written in C++. It is an object-oriented language with generic and functional features, in addition to facilities for low-level memory manipulation. C++ has also been found useful in many other contexts, with key strengths in software infrastructure and resource-constrained applications, as it allows better control over CPU and memory usage.
Python

Python is one of the most popular programming languages in the world and enjoys a huge community of developers. Python's popularity has been growing for years, and even in blockchain it can help developers write decentralised applications and systems easily. Because the language is simple and easy to learn, it can be an excellent choice for blockchain projects.

Go

Go is a general-purpose language designed with systems programming in mind. It is strongly typed and garbage-collected, and has explicit support for concurrent programming. Used by Hyperledger, one of the biggest open-source providers of enterprise blockchain software, Go has become a popular language for creating blockchain applications. Due to the simplicity of the language, it is finding traction among blockchain developers.
Blockchain has been one of the biggest technology trends in the last few years. Be it cryptocurrency, smart contracts or supply chain tracking applications, there are multiple blockchain use cases both in the enterprise or public blockchain space. The revolution is being led by different developer communities. Developers across big tech companies and startups are […]
["AI Trends"]
["Blockchain", "Blockchain Technology", "Programming Languages", "simple python project", "Software Development", "supply chain analytics projects"]
Vishal Chawla
2020-05-08T15:52:00
2020
402
["Go", "funding", "Blockchain", "AI", "simple python project", "Programming Languages", "RAG", "Python", "C++", "JavaScript", "Blockchain Technology", "Software Development", "supply chain analytics projects", "R", "Java", "startup"]
["AI", "RAG", "Python", "R", "JavaScript", "Go", "Java", "C++", "startup", "funding"]
https://analyticsindiamag.com/ai-trends/top-programming-languages-for-blockchain-development/
2
10
2
true
false
false
32,180
It’s About Time Data Literacy Became A Part Of Indian School Curriculum
Data has become part and parcel of all our lives. When so many aspects of our lives depend on it, why isn't more importance given to data literacy? As much as maths and science are important in the present educational system, in the coming years data literacy will be crucial at workplaces. Indian schools, therefore, should consider how students can be taught to read, understand, create and communicate data as information. According to a study by Ericsson, around 30 million urban Indian teens and pre-teens own a mobile phone, and of these, 20 per cent are actually 11 years old or younger. Due to this high accessibility of mobile phones and other gadgets at a tender age, children's learning capacity has also expanded over the years.

Mending The Skills Gap

But this can often be a challenge, says a study by Cornell University. It claims that due to the increased vastness of information, the gap between our awareness of information and our understanding of it is growing. That's why experts are highlighting the importance of training the next generation in the art of deriving intelligent understanding from data. In India, the pertinence of data literacy to the present curriculum is very important, considering that of the 10-12 million fresh graduates joining the workforce each year, only 45% are digitally literate. With data creating more work at organisations, the challenge the workforce faces is finding the right people with the appropriate skill sets. Speaking on the matter, Arun Balasubramanian, country manager of Qlik, said, “This massive skills gap could be caused by the slower evolution of the Indian educational ecosystem. Professional courses which follow old syllabi may be out of sync with industry requirements.
Newer technologies such as AI and machine learning are still not a comprehensive part of academic curricula, while foundational concepts such as data literacy are not given the kind of importance they deserve in today's data-driven age.” Though premier institutes like the IITs and IIMs do offer courses in data science, and private institutions are mushrooming across the country, there is a great need to lay the foundation for data literacy at the school level.

Starting At A Grassroots Level

By helping students understand, use and communicate data effectively, schools will not only prepare them for the future workforce but also give them a perspective on how to use data effectively. With data being an integral part of everything we do, the scope for teachers to introduce data will not be restricted to one particular subject either. In an interesting use case, data literacy was introduced in a liberal arts classroom at Studio Education, a UK-based private institute. In a post highlighting the success of the programme, John Dietrich, the institute's head of STEAM, said, “In addition to the more tangible benefits of utilising data in the classroom, an explicit emphasis on data literacy in the classroom also encourages the development of extremely important soft skills. For example, students learn the importance of good organisation and develop strategies for information management when required to work with large datasets. Students also begin to understand the value of properly planning and preparing long-term assignments before fully progressing through them.”

How To Start Inculcating Data Literacy

For schools to start off, here are a few websites that will help students use data in an engaging way:

Biodiversity Atlas India: Is a species-based bioinformatics platform. It is designed for aggregating, displaying and analysing biodiversity data from tropical developing countries and other biodiversity hotspots such as India.
TableTop2: Will bring together foundational tools for data literacy to enable students and teachers to use data meaningfully across subjects and classes. It hopes to lay a strong foundation for students to understand and appreciate science and mathematics through data-based inquiry. Social Explorer: Is a suite of online tools and data that allow users to visually explore data indicators across demography, economy, health, religion, crime and more. Users can visualise and interact with data, create reports and downloads for offline processing. iNaturalist: The platform connects its users with a community of over 750,000 scientists and naturalists who can help students learn more about nature. The users are required to record and share their observations to create research quality data for scientists working to better understand and protect nature.
Data has become a part and parcel of all our lives. When so many aspects of our lives are dependent on it, why isn’t there more importance given to data literacy? As much as maths and science are important in the present educational system, in the coming years, data literacy will be crucial at […]
["AI Features"]
["analytics skills", "data literacy"]
Akshaya Asokan
2018-12-26T11:27:34
2018
726
["data science", "Go", "machine learning", "AI", "data-driven", "Git", "data literacy", "RAG", "Aim", "GAN", "analytics skills", "R"]
["AI", "machine learning", "data science", "Aim", "RAG", "R", "Go", "Git", "GAN", "data-driven"]
https://analyticsindiamag.com/ai-features/its-about-time-data-literacy-became-a-part-of-indian-school-curriculum/
2
10
1
false
true
true
10,015,838
8 Cool AI-based Technologies Released In 2020
The pandemic may have put a pause on our normal day-to-day lives, but when it comes to evolving technology, especially artificial intelligence, the world has been witnessing interesting and cool technologies being revealed by tech companies on an everyday basis. In this article, we cover some of the coolest AI-based technologies, in no particular order, released in 2020 amid the pandemic.

BrainBox AI

The AI system for HVAC, BrainBox AI, has been recognised as a TIME Best Invention of 2020. The technology uses deep learning, cloud computing and custom algorithms to proactively optimise the energy consumption of climate change contributors, including buildings. A fully autonomous solution, BrainBox AI empowers building owners to reduce their carbon emissions while generating significant savings along the way.

Tiger Lake By Intel

At CES 2020, Intel unveiled Tiger Lake, the code name for the 11th generation Intel Core mobile processors based on the new Willow Cove core microarchitecture. Built on Intel's 10nm+ process, Tiger Lake is designed to bring mobile computing to life. According to the official blog post, the processors are designed to deliver double-digit performance gains, massive AI performance improvements, a huge leap in graphics performance and 4x the throughput of USB 3 with the new integrated Thunderbolt 4.

Intel Xe DG1 GPU

At this year's CES event, Intel also announced its own graphics card, the Xe DG1 GPU, the chipmaker's first discrete graphics card in decades. Intel CEO Bob Swan announced in October that the AI-based DG1 GPU is shipping now and will be in systems from multiple OEMs in the fourth quarter. He also mentioned that, based on the Xe high-performance gaming architecture, this product would take the discrete graphics capability up the stack into the enthusiast segment.
Lightsaber for Coronavirus

Recently, Juganu, an Israel-based smart city lighting solutions company, announced the launch of an AI-driven light technology known as J.Protect. According to sources, it is said to be the first-of-its-kind circadian lighting system that inactivates SARS-CoV-2, the virus which causes COVID-19. The AI system is said to use a combination of high-quality surface light mixed with ultraviolet light A (UV-A) and C (UV-C) to inactivate bacteria and viruses, including SARS-CoV-2.

KODA Social Robot

In October, the AI robotic dog company announced the pre-release of KODA, a futureproof social robot dog. According to reports, the social robot is designed to be functional from both pragmatic and emotional perspectives. It also comes with KODA's blockchain-enabled decentralised AI infrastructure, which allows the robot dog to serve a multitude of purposes. Each KODA dog's brain is connected to a secure blockchain network, allowing for an industry-first decentralised AI mind. This network is used to share data points, process optimal solutions and learn skills, while superfluous data is forgotten.

Samsung Neon

Earlier this year at CES 2020, Samsung unveiled its lifelike artificial humans, known as NEON. NEON is an AI-powered avatar developed by Samsung subsidiary STAR Labs. Powered by STAR Labs' Core R3 and SPECTRA technologies, NEONs are lifelike computer-generated AI avatars. According to sources, these AI-generated virtual avatars are indistinguishable from humans, and each one has his or her own unique personality.

Embodied Moxie

In April this year, Paolo Pirjanian, co-founder and CEO of Embodied, announced a social robot known as Moxie, designed to assist kids with social and emotional learning. According to sources, the robot can understand and express emotions with emotive speech, believable facial expressions and body language, tapping into human psychology and neurology to create deeper bonds.
BIC Smart Shaver AI-Enabled Razor At CES 2020, BIC unveiled an AI-enabled prototype shaver that is designed in partnership with Invoxia. According to sources, the prototype razor is the first wet shaver with AI technology that captures data about the complete shaving experience, including temperature, humidity, hair density, shaving speed, number of strokes, time spent shaving, blade dullness, and more.
The pandemic may have put a pause in our normal day-to-day life, but when it comes to the evolving of technology, especially in the case of artificial intelligence, the world has been witnessing interesting and cool technologies being revealed by tech companies, on an everyday basis.  In this article, we will cover some of those […]
["AI Trends"]
["purpose of ai", "virtual invisible network"]
Ambika Choudhury
2020-12-23T18:00:00
2020
663
["Go", "artificial intelligence", "programming_languages:R", "AI", "cloud computing", "virtual invisible network", "Git", "RAG", "GAN", "deep learning", "purpose of ai", "R"]
["AI", "artificial intelligence", "deep learning", "RAG", "cloud computing", "R", "Go", "Git", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/ai-trends/8-cool-ai-based-technologies-released-in-2020/
3
10
1
true
false
false
59,165
Top 5 Books On AutoML To Streamline Your Data Science Workloads
AutoML tools are the need of the hour for data scientists looking to reduce their workloads in a world where data generation is increasing exponentially. Readily available AutoML tools make the data science practitioner's work easier and cover the foundations needed to create automated machine learning modules. And with the surge in data and the potential that this data holds, data scientists will benefit from using AutoML capabilities. As we approach the midpoint of 2020, it is slowly being recognised that this year will see an increase in the adoption of AutoML. With the massive potential of AutoML about to burst, non-data science professionals and data science practitioners alike will look to get a more comprehensive view of the technology. Below are some books that will help readers gain a better understanding of AutoML's features, applications, and what the technology is about: Automated Machine Learning: Methods, Systems, Challenges This book gives a comprehensive, tutorial-level overview of the methods underlying AutoML, giving readers a complete understanding of the key concepts. There are in-depth descriptions of AutoML systems and their implementation in the context of actual systems. Not only does this cover different kinds of AutoML approaches, but it also gives the pros and cons of each approach. Some topics included in this book are hyperparameter optimisation, meta-learning, neural architecture search, and Auto-WEKA: Automatic Model Selection and Hyperparameter Optimisation in WEKA. AutoML Models A Complete Guide – 2019 Edition This book helps one get a clear picture of AutoML and ask the right questions, which will make AutoML model investments better. AutoML Models A Complete Guide contains the majority of the tools one needs for an in-depth AutoML Model Self Assessment. The book features 900 case-based questions organised into seven core areas of process design, which will give one an idea of where AutoML models need improvement.
These questions will help one diagnose AutoML Models projects, initiatives, organisations and businesses better and implement evidence-based best-practice strategies aligned with overall goals. One also learns to better integrate recent advances in AutoML Models and process design strategies into practice in accordance with best-practice guidelines. Hands-On Automated Machine Learning This 282-page book is aimed solely at teaching how to automate different tasks in the machine learning pipeline. Authored by Sibanjan Das and Umit Merk Cakmak, Hands-On Automated Machine Learning gives a detailed description of how to work with machine learning pipelines covering data processing, feature selection, model optimisation, model training and more. It also demonstrates how one can use existing automated libraries, like auto-sklearn and MLBox, and create and extend custom AutoML components for machine learning. The book covers building automated models for different machine learning components, understanding each of these components, and learning to use different open-source AutoML and feature engineering platforms. Hands-On Artificial Intelligence on Google Cloud Platform This book is written by Anand Deshpande, Manish Kumar and Vikram Chaudhari. In it, one will understand the basics of cloud computing and explore GCP, along with learning to implement machine learning algorithms with Google Cloud AutoML. The book acts as a guide, showing how GCP tools can be used to build AI-powered applications with ease and to manage thousands of AI-powered implementations on the cloud. One will also learn how to use Cloud AutoML, demonstrate the use of streaming components to perform data analytics, and understand how DialogFlow can be used to create a conversational interface. Finally, one will be able to build and deploy AI applications to production with the help of a use case.
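At their core, tools like auto-sklearn automate the search over model configurations that practitioners would otherwise run by hand. As a rough, dependency-free sketch (not the auto-sklearn API; the search space and scoring function here are invented for illustration), a random hyperparameter search looks like this:

```python
import random

# Hypothetical search space for a single model family; real AutoML tools
# like auto-sklearn search over many model types and preprocessing steps.
SEARCH_SPACE = {
    "max_depth": [3, 5, 10],
    "learning_rate": [0.01, 0.1, 0.3],
    "n_estimators": [50, 100, 200],
}

def evaluate(config):
    # Stand-in for cross-validated model scoring; here a toy function
    # that rewards moderate depth and a mid-range learning rate.
    return -abs(config["max_depth"] - 5) - abs(config["learning_rate"] - 0.1)

def random_search(space, n_trials=20, seed=0):
    # Sample configurations at random and keep the best-scoring one.
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in space.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(SEARCH_SPACE)
print(best, score)
```

Real AutoML libraries replace `evaluate` with cross-validated model training and search far larger spaces, including preprocessing and model-family choices.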
Practical Automated Machine Learning on Azure This book was authored by Deepak Mukunthu, Parashar Shah and Wee Hyong Tok. It provides a mix of technical depth and hands-on examples, and includes case studies that show how customers are leveraging AutoML capabilities to solve real-world problems. The book covers how different industries use AutoML, offers a tutorial on AutoML using Azure, explores algorithm selection, auto-featurisation and hyperparameter tuning, explains how different professions can use and benefit from AutoML with the tools they are already familiar with, and finally, shows how to get started with AutoML for use cases including regression, forecasting and classification.
AutoML tools are the need of the hour for data scientists looking to reduce their workloads in a world where data generation is increasing exponentially. Readily available AutoML tools make the data science practitioner's work easier and cover the foundations needed to create automated machine learning modules. And with the surge in […]
["AI Trends"]
["Automl", "PowerBI"]
Sameer Balaganur
2020-03-20T12:00:00
2020
702
["data science", "Automl", "machine learning", "artificial intelligence", "AI", "cloud computing", "ML", "RAG", "PowerBI", "Aim", "analytics", "Azure"]
["AI", "artificial intelligence", "machine learning", "ML", "data science", "analytics", "Aim", "RAG", "cloud computing", "Azure"]
https://analyticsindiamag.com/ai-trends/top-5-books-on-automl-to-streamline-your-data-science-workloads/
3
10
2
false
true
false
67,268
Microsoft & Udacity Partner To Launch Machine Learning Scholarship Program
Online learning platform Udacity has collaborated with tech giant Microsoft to confer scholarships for its all-new Machine Learning Nanodegree program in Microsoft Azure. The new Udacity Machine Learning Scholarship Program for Microsoft Azure represents the first of several programs from Udacity and Microsoft to deliver training for Azure cloud services. Applications are open from June 10 till June 30, and the scholarship will be conferred in two phases. After reviewing the applications, around 10,000 selected candidates will be enrolled in the first phase of the scholarship. A unique feature of this program is Azure Labs, wherein students work on projects in live Azure environments directly within the Udacity classroom. The phase begins with a two-month-long foundation course, Introduction to Machine Learning on Azure with a Low-code Experience. This is a single course focused on completing prerequisites for the full Nanodegree program, where students will be supported by a community moderated by Udacity. The top 300 performers from the foundation course will then be awarded a scholarship to the Microsoft Azure Machine Learning Engineer Nanodegree program. Speaking about the scholarships, Gabriel Dalporto, CEO of Udacity, shared, “With artificial intelligence continuing to grow at a fast pace and AI engineers in high demand, especially as more enterprises build new cloud applications and move old ones to the cloud, Udacity’s training program for Azure machine learning presents an amazing opportunity for those looking to further expand their skill set. We’re excited to partner with Microsoft to create professionals with these specialized skill sets.” “AI is driving transformation across organizations and there is increased demand for data science skills,” said Julia White, Corporate Vice President, Azure Marketing, Microsoft.
“Through our collaboration with Udacity to offer low-code and advanced courses on Azure Machine Learning, we hope to expand data science expertise as experienced professionals will truly be invaluable resources to solving business problems.” Apply here.
Online learning platform Udacity has collaborated with tech giant Microsoft to confer scholarships for its all-new Machine Learning Nanodegree program in Microsoft Azure. The new Udacity Machine Learning Scholarship Program for Microsoft Azure represents the first of several programs from Udacity and Microsoft to deliver training for Azure cloud services. Applications are open from June […]
["AI News"]
["Azure Machine Learning", "Microsoft", "Udacity"]
Ambika Choudhury
2020-06-12T15:00:36
2020
311
["data science", "artificial intelligence", "machine learning", "cloud_platforms:Azure", "AI", "cloud_platforms:Microsoft Azure", "R", "Udacity", "Azure Machine Learning", "Julia", "GAN", "Azure", "Microsoft"]
["AI", "artificial intelligence", "machine learning", "data science", "Azure", "R", "Julia", "GAN", "cloud_platforms:Azure", "cloud_platforms:Microsoft Azure"]
https://analyticsindiamag.com/ai-news-updates/microsoft-udacity-partner-to-launch-machine-learning-scholarship-program/
2
10
3
false
false
false
10,080,858
The Rising 2023: 5th edition of India’s biggest gathering of Women in Tech to be held in Bengaluru on March 16 & 17th
Rising 2023, now in its fifth edition, promises to be even bigger and better! Scheduled for March 16th and 17th, 2023, in Bangalore, this two-day conference strives to be the biggest meeting of women tech leaders from across the industry, and of women professionals from industry and academia. The Rising 2023, hosted by Analytics India Magazine, provides a much-needed forum for exchanging ideas, serves as an inspiration for other women to participate in STEM, and highlights the achievements and career interests of women in the technology sector. This year, the momentum has grown five times, with a line-up of 50+ speakers, 30+ talks, and 200+ organisations participating in a two-day interactive summit. What’s more, 700+ attendees representing C-suite leadership, analytics practitioners, developers, IT architects, executives and enthusiasts will walk away with one core learning — how women can sustain a successful career in the ever-dynamic world of technology. There are live Q&A sessions, panel discussions, tech talks and keynote sessions, which will give the delegates more time to dig deeper. For beginners and enthusiasts, it will act as a unique platform to grow their community and network, as well as a place to meet more than 100 leading companies striving hard to build a more inclusive industry. WHEN & WHERE: – March 16-17, 2023 | Thursday – Friday – Hotel Radisson Blu, Outer Ring Road, Bangalore, India Address: 90/4, Dr Puneeth Rajkumar Rd, Marathahalli Village, Marathahalli, Bengaluru, Karnataka 560037 To learn more about the conference, visit: https://analyticsindiamag.com/ What to expect from The Rising 2023 While The Rising turns the spotlight on women in data science, it is open to all, from early-stage professionals to leading visionaries in tech. The event will feature a range of experts who will share their perspectives on building and sustaining a successful career in data science.
The conference is a huge opportunity for women from the field to expand their community and network better. The conference will have two separate tracks over the two-day course, with keynotes, tech talks, panel discussions and workshops. Apart from these, Rising 2023 will also hold mentoring sessions, hackathons, awards, and exhibitions. Women in Tech Leadership Awards The Rising also hosts the prestigious Women in Tech Leadership Awards, which celebrate tech leaders driving disruption and innovation in India. Recognising the importance of women in technology, the award honours the achievements of leaders who have made an incredible impact in the tech community. Nominees for the award are selected carefully based on their considerable experience leading innovation and becoming leaders and role models in the Indian tech industry. 2023 Nominations are now OPEN. Leading organisations and individuals can submit nominations for the highest achievers in tech leadership and nominate people who have made powerful contributions and demonstrated expertise in driving business value. We also invite nominations from individuals who believe they have been at the forefront of disruption and have played a key role in establishing tech as a driver of growth and business value. All submissions will be assessed by our panel of editors and industry veterans, and awardees will be selected after a careful review, benchmarked against best-in-class performance standards. Sponsors and Speakers Every edition of The Rising conference has more than ten organisations as sponsors. Past event sponsors include Fractal Analytics, Gojek, Publicis Sapient, Quantiphi Analytics, Rakuten, Wipro, The MathCompany, dunnhumby, Stryker, SAP, and ThoughtWorks, among others. Become a sponsor and join thousands of machine learning professionals who are active constituents and stakeholders in the Indian Women in AI ecosystem.
To know more, you can write to info@analyticsindiamag.com or visit us at: https://analyticsindiamag.com/sponsor/. Early Bird Passes expire on 3rd Feb. Registration and tickets The Rising 2023 will be an in-person event hosted at the Hotel Radisson Blu, Outer Ring Road, Bangalore, India. Attendees will have access to all keynotes, panel discussions and workshops; AIM will also publish the recordings of all the sessions after the event. In addition, a conference lunch will be held on both days. Group discounts are available. The schedule for the event, along with the list of speakers, will be announced soon. Keep an eye on this space for more information. The Rising 2023 offers a rare and exclusive opportunity for women developers and data scientists to network with experts from the biggest tech companies. We look forward to having you at The Rising 2023! Hurry up and book your seat now.
AIM is set to launch the 5th edition of the biggest women in tech conference, Rising 2023 – scheduled for March 16 and 17th, 2023 | Bangalore. Drawing hundreds of India’s women analytics leaders together to learn and do business.
["Deep Tech"]
["AIM Rising summit", "data science women", "international women's day", "Women in AI", "Women in AI Leadership awards", "Women in Analytics", "Women in Analytics India", "Women in Cybersecurity", "Women in Data Science", "Women in STEM", "Women in Tech", "Women Leaders in AI", "Women's Day"]
Poulomi Chatterjee
2022-11-28T13:47:21
2022
739
["API", "AIM Rising summit", "international women's day", "Women in Analytics", "R", "Women in AI", "data science", "Women in Tech", "analytics", "ViT", "Go", "machine learning", "Women in AI Leadership awards", "AI", "data science women", "Women Leaders in AI", "Women's Day", "GAN", "Women in Analytics India", "Women in Data Science", "Aim", "Women in STEM", "Women in Cybersecurity"]
["AI", "machine learning", "data science", "analytics", "Aim", "R", "Go", "API", "GAN", "ViT"]
https://analyticsindiamag.com/deep-tech/the-rising-2023-5th-edition-of-indias-biggest-gathering-of-women-in-tech-to-be-held-in-bengaluru-on-march-16-and-17th/
3
10
2
true
false
false
10,047,144
Cerebras Unveils World’s Largest AI Chip
In a bid to support the largest models, American semiconductor company Cerebras recently unveiled the world’s first multi-million-core AI cluster architecture. The new technology handles neural networks with up to 120 trillion parameters and is said to have the computing power of a human brain. Large language models like Microsoft’s Turing NLG, OpenAI’s GPT-3, NVIDIA’s Megatron, and BAAI’s Wu Dao 2.0 have grown exponentially in the last few years. To run these models at scale, companies require a cluster of graphics processors, megawatts of power, and dedicated teams to operate them. As a result, chips that support AI at scale have become more crucial than ever before, focusing on scaling massive memory, compute, and communication. (Source: Cerebras) The Rise of AI Chips According to Technavio, the artificial intelligence (AI) chip market is expected to grow by $73.49 billion at a CAGR of 51 per cent during the forecast period (2021-2025). Two years back, McKinsey reported that by 2025, AI-related semiconductors could account for almost 20 per cent of all demand, amounting to $65 billion in revenue. If this growth materialises as expected, semiconductor companies will be positioned to capture 40 to 50 per cent of the total market share. Some notable companies that make chips used to train AI models include Alphabet, Broadcom, Intel, NVIDIA, Qualcomm, Samsung Electronics, TSMC, and Graphcore. Recently, Tesla also unveiled its supercomputer called ‘Dojo’, which has a capacity of over an exaflop, which is one quintillion (10^18) floating-point operations per second. The chip is mainly used for camera-based computer vision for self-driving cars. Tesla has been collecting data from over 1 million vehicles to train the neural network using its in-house chip. Before this, Tesla used the NVIDIA Drive PX2 chip to implement its Autopilot and scale the production of autonomous vehicles.
The chip’s configuration consists of a mobile processor that can operate at 10 watts, converted to a multi-chip configuration with two mobile processors and two discrete GPUs, delivering 24 trillion deep learning operations per second. Another chip, NVIDIA PX Xavier, consumes only 20 watts of power while delivering 20 TOPS of performance. It is packed with 7 billion transistors. NVIDIA Drive Pegasus pairs two Xavier SoCs (systems on chips) with NVIDIA’s Turing architecture to deliver a capacity of 320 TOPS while consuming 500 watts. The company is designing this platform for Level 4 and Level 5 autonomy. On the other hand, NVIDIA’s GA100 Ampere SoC is at the top with 54 billion transistors. In April this year, NVIDIA also unveiled a new processor called Grace, named after computer scientist Grace Hopper. This chip is designed to accelerate high-performance computing and artificial intelligence (AI) workloads at data centres. It is said to deliver up to 30x higher aggregate bandwidth than today’s fastest servers and up to 10x higher performance for applications running terabytes of data. In the same month, Intel also launched its most advanced and highest-performing data centre processor — the 3rd generation Intel Xeon scalable processor, aka Ice Lake. Another chip, Intel Mobileye Q4, can deliver 2.5 TOPS while consuming 3 watts of power. It has generic multi-threaded CPU cores, making it a robust computing platform for ADAS/AV applications, and supports 40 Gbps of data bandwidth. Google has also been working on developing AI chips for years. In June 2021, in a paper, ‘A graph placement methodology for fast chip design’, the researchers revealed an upcoming version of Google’s own tensor processing unit (TPU) chips, which are optimised for AI computation. Recently, Google announced that it had developed a custom-built SoC, Tensor, to power Pixel phones.
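Since the paragraph above quotes both throughput (TOPS, trillions of operations per second) and power draw for several chips, a short calculation makes the efficiency comparison explicit. The figures are as quoted in the article:

```python
# Performance-per-watt for the chips mentioned above
# (TOPS and wattage figures as quoted in the article).
chips = {
    "NVIDIA Xavier": (20, 20),          # (TOPS, watts)
    "NVIDIA Drive Pegasus": (320, 500),
    "Intel Mobileye Q4": (2.5, 3),
}

for name, (tops, watts) in chips.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")
```

On these numbers, Xavier comes out as the most efficient at 1.0 TOPS per watt, with Mobileye Q4 at roughly 0.83 and Drive Pegasus, despite its far higher absolute throughput, at 0.64.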
Cerebras AI chip Built on the second-generation Cerebras wafer-scale engine (WSE-2), the company looks to address the fundamental challenges using a holistic systems approach for extreme scale. It has designed a purpose-built solution for each type of memory and compute that the neural network needs, to untangle and simplify the scaling problem. Cerebras calls this new execution mode ‘weight streaming’. It unlocks unique flexibility, allowing independent scaling of the model size and the training speed. For example, a single CS-2 system can support models up to 120 trillion parameters. “To speed up training, we can cluster up to 192 systems with near-linear performance scaling,” said the Cerebras team. Here’s How it Works In this mode, the model weights are stored in a new memory extension technology called MemoryX and streamed onto the CS-2 systems to compute each layer of the network, one layer at a time, as shown in the image below. (Source: Cerebras) On the backward pass, the gradients are streamed in the opposite direction back to the MemoryX, where the weight update is performed in time to be used for the next iteration of training. Cerebras has also introduced an interconnect fabric technology called SwarmX, which allows it to scale the number of CS-2 systems near-linearly for extremely large-scale models. What’s more? Besides scaling capacity and performance, Cerebras’ architecture enables vast acceleration for sparse neural networks. It uses fine-grained dataflow scheduling to trigger computations only for useful work, allowing it to save power and achieve a 10X weight sparsity speedup.
Further, researchers can use this architecture to compile the neural network mapping for a single CS-2 system, and the Cerebras software takes care of execution as the model scale, thus, eliminating the traditional distributed AI intricacies of memory partitioning, coordination, and synchronisation across thousands of small devices.
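The weight-streaming flow described above can be sketched in a few lines. This is a conceptual toy (scalar "layers", invented class and function names), not the Cerebras software stack: weights live in an external store standing in for MemoryX, are streamed out one layer at a time on the forward pass, and gradients are streamed back so the update happens at the store rather than on the compute system.

```python
class WeightStore:
    """Stands in for an external weight memory (e.g. MemoryX); illustrative only."""
    def __init__(self, layer_weights):
        self.weights = layer_weights   # one toy scalar weight per layer

    def stream_out(self, layer):
        # Weights are streamed to the compute system one layer at a time.
        return self.weights[layer]

    def apply_update(self, layer, gradient, lr=0.1):
        # The weight update is performed at the store, not on-chip.
        self.weights[layer] -= lr * gradient

def train_step(store, n_layers, x):
    # Forward pass: stream weights in, layer by layer.
    activations = [x]
    for layer in range(n_layers):
        w = store.stream_out(layer)
        activations.append(w * activations[-1])
    # Backward pass: stream gradients back to the store, layer by layer.
    grad = 1.0   # toy upstream gradient
    for layer in reversed(range(n_layers)):
        weight_grad = grad * activations[layer]
        grad = grad * store.stream_out(layer)
        store.apply_update(layer, weight_grad)
    return activations[-1]

store = WeightStore([0.5, 2.0])
out = train_step(store, n_layers=2, x=1.0)
print(out, store.weights)
```

The point of the pattern is that the compute system never has to hold the full model: model size (weights in the store) and training speed (number of compute systems) can then scale independently, which is the flexibility the article describes.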
A single CS-2 system can support models up to 120 trillion parameters.
["Global Tech"]
["Cerebras", "Intel", "oneAPI"]
Amit Naik
2021-08-27T16:00:00
2021
907
["Go", "artificial intelligence", "Cerebras", "OpenAI", "AI", "neural network", "TPU", "Scala", "computer vision", "deep learning", "R", "oneAPI", "Intel"]
["AI", "artificial intelligence", "deep learning", "neural network", "computer vision", "OpenAI", "TPU", "R", "Go", "Scala"]
https://analyticsindiamag.com/global-tech/cerebras-unveils-worlds-largest-ai-chip/
4
10
2
false
true
true
25,377
Genpact Acquires Commonwealth Informatics, Establishes New Standard For Pharmacovigilance AI
Genpact, a professional services firm focused on delivering digital transformation, announced that it has signed a definitive agreement to acquire Commonwealth Informatics, Inc. (CI), a noted provider of cloud-based drug safety analytics products and services for medical research and healthcare delivery headquartered near Boston, Mass. Terms of the deal were not disclosed. This transaction is not expected to be material to current year financial performance, announced Genpact in an official statement. With this acquisition, the Genpact pharmacovigilance artificial intelligence (PVAI) suite of capabilities is expected to be the first in the industry to establish a closed-loop, machine learning system across the entire pharmacovigilance information value chain. Incorporating the Commonwealth Vigilance Workbench (CVW) software and the deep expertise of the CI team, Genpact will enable life sciences companies to establish a new approach for pharmacovigilance – one which leverages the wealth of available data in an effort to better predict and prevent adverse effects of medicines and thereby protect patient safety, while at the same time improving data quality and operational efficiency. Genpact’s PVAI capabilities incorporate natural language processing and machine learning techniques to reliably and accurately extract and process adverse event data from unstructured and partially-structured source documents. The integration of CVW with Genpact’s PVAI offering for automated and intelligent case processing is expected to create a pharmacovigilance system that continually learns from the wider healthcare delivery ecosystem, accelerates and enhances signal detection and evaluation, and better protects patient safety. Commonwealth Informatics is led by a widely-respected service and development team whose products and services span signal detection, signal evaluation, risk assessment, benefit-risk assessment, and population health informatics. 
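As a loose illustration of the extraction step described above (not Genpact's actual PVAI pipeline, which uses trained NLP and machine learning models), even a toy dictionary lookup shows the shape of the task: turning free-text case reports into structured adverse-event mentions.

```python
import re

# Toy stand-in for adverse-event extraction from unstructured text.
# The term list is invented for illustration; real pharmacovigilance
# NLP uses trained models and medical dictionaries such as MedDRA.
ADVERSE_EVENT_TERMS = {"nausea", "headache", "dizziness", "rash"}

def extract_adverse_events(report_text):
    # Tokenise to lowercase words and keep those in the term list.
    tokens = re.findall(r"[a-z]+", report_text.lower())
    return sorted(set(tokens) & ADVERSE_EVENT_TERMS)

report = "Patient reported severe headache and mild nausea after dose 2."
print(extract_adverse_events(report))  # ['headache', 'nausea']
```

A production system adds the steps the article alludes to: handling partially structured source documents, grading severity, and feeding the extracted events into signal detection.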
CVW consists of a set of integrated modules that are delivered via a software as a service model. Commonwealth Informatics developed the modules in close collaboration with leading pharmacovigilance teams within pharmaceutical companies and regulatory agencies. The modules provide support for generating and assessing evidence from individual case safety reports, clinical trial data sets, and electronic health records. CVW is used by life sciences and biotech companies, healthcare providers, and government agencies such as the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare Products Regulatory Agency (MHRA). “With the integration of Commonwealth Informatics’ ground-breaking signal management solution into the Genpact PVAI suite of capabilities, we can better help life sciences companies achieve end-to-end transformations of their pharmacovigilance operating models, serving as their single go-to partner,” said Balkrishan “BK” Kalra, business leader for Life Sciences and Healthcare at Genpact. “We believe that our enhanced PVAI suite has the potential to be revolutionary in its impact across the industry, shifting resources from transaction processing to analysing the effects of medicines on public health. Genpact’s PVAI offering is currently being implemented by a number of large pharmaceutical companies, including a top-five global pharmaceutical leader. 
With the added capabilities from CI, we are excited to provide a comprehensive solution capable of transforming pharmacovigilance.” “Commonwealth Informatics is thrilled to join forces with Genpact and combine our software and expertise in close collaboration with our customers to create the future of pharmacovigilance systems,” said Geoffrey Gordon, founder and president, Commonwealth Informatics. “We are also excited to use this strong foundation as the basis for future work with healthcare organisations to close the loop and realise the full potential of a learning healthcare system.” Genpact currently serves the majority of the top global life sciences companies, helping pharmaceutical and medical devices companies pursue global growth, achieve cost reduction, increase speed to market, and improve regulatory compliance by providing a range of digital solutions, analytics services, and business process transformation expertise.
Genpact, a professional services firm focused on delivering digital transformation, announced that it has signed a definitive agreement to acquire Commonwealth Informatics, Inc. (CI), a noted provider of cloud-based drug safety analytics products and services for medical research and healthcare delivery headquartered near Boston, Mass. Terms of the deal were not disclosed. This transaction is […]
["AI News"]
["AI (Artificial Intelligence)"]
Prajakta Hebbar
2018-06-12T12:55:48
2018
594
["Go", "machine learning", "artificial intelligence", "AI", "Git", "RAG", "analytics", "data quality", "GAN", "R", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "analytics", "RAG", "R", "Go", "Git", "data quality", "GAN"]
https://analyticsindiamag.com/ai-news-updates/genpact-acquires-commonwealth-informatics-establishes-new-standard-for-pharmacovigilance-ai/
2
10
3
false
false
false
10,082,791
It’s Time for ‘Swachh Antariksh Abhiyan’
Thousands of rockets and satellites are launched into space every year; sadly, quite a few of them live on as debris. Dead satellites are polluting our space, and India is among the few countries in the world thinking along the lines of space sustainability, calling for a plausible ‘Swachh Antariksh Abhiyan’. Earlier in March, NASA issued its Orbital Debris Quarterly News, which reported a total of 25,182 space objects, including spacecraft and spent rocket bodies. India has 217 of these, which includes 114 space debris objects. In the same month, ISRO built on its ‘Project NETRA’ (Network for Space Objects Tracking and Analysis) and deployed new radars and optical telescopes with a range of 1,500 km to detect objects and debris as small as 10 cm in size. India is not new to space sustainability. ISRO initiated Project NETRA, its early warning system for detecting debris and other space objects to protect Indian satellites, back in 2019. In December, Union Minister Jitendra Singh informed the Lok Sabha that ISRO was putting appropriate measures in place for managing the increasing debris in low Earth orbit (LEO). These included tracking and monitoring of space objects along with collision avoidance mechanisms for satellites. The need for debris detection and removal has grown since 2021, when ISRO had to perform 19 collision avoidance manoeuvres (up from 12 in 2020), 14 of which were in LEO. ISRO also monitored 4,382 events in which space objects closely approached Indian assets, with 84 of them coming as close as a kilometre away. Ever since the Indian government threw the space sector open, ISRO has seen over 60 startup registrations up to July. Many of these deal with space management and research innovations for space debris removal.
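Conceptually, the close-approach monitoring mentioned above boils down to screening catalogued objects against a distance threshold around a protected satellite. The sketch below is purely illustrative: static positions rather than propagated orbits, and invented numbers, where real screening systems like Project NETRA propagate orbital states over time.

```python
import math

# Toy conjunction screening: flag catalogued objects that come within a
# threshold distance of a protected satellite. Positions are illustrative
# offsets in km relative to the satellite, not real orbital data.
def close_approaches(satellite, objects, threshold_km=1.0):
    flagged = []
    for name, pos in objects.items():
        d = math.dist(satellite, pos)   # Euclidean distance in 3D
        if d <= threshold_km:
            flagged.append((name, round(d, 3)))
    return flagged

sat = (0.0, 0.0, 0.0)
catalogue = {
    "debris-A": (0.3, 0.4, 0.0),   # 0.5 km away -> flagged
    "debris-B": (5.0, 8.0, 0.0),   # far away -> ignored
}
print(close_approaches(sat, catalogue))  # [('debris-A', 0.5)]
```

An event flagged this way is what would trigger further analysis and, if the risk persists, a collision avoidance manoeuvre of the kind ISRO performed 19 times in 2021.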
In the same month, Jitendra Singh inaugurated the ISRO System for Safe and Sustainable Space Operations Management (IS4OM) at the ISRO Control Centre in Bengaluru for monitoring and mitigating collision threats. Read: ISRO’s Upcoming Space Missions 2023 The UN’s Committee on the Peaceful Uses of Outer Space issued Guidelines for the Long-term Sustainability of Outer Space Activities, highlighting the importance for organisations and governments of taking action to mitigate space debris. The purpose of IS4OM is to aid India in meeting the targets put forward by the UN and achieving Space Situational Awareness (SSA) goals. ISRO has also been an active member of the Inter-Agency Space Debris Coordination Committee (IADC), which coordinates and develops technology for sustainable space operations, and IS4OM is India’s bid towards outer space development and SSA. In addition to ISRO, in August, Bengaluru-headquartered startup Digantara Research and Technologies began building India’s first SSA observatory in Uttarakhand. Apart from tracking debris in LEO, the facility will be capable of monitoring geosynchronous Earth orbit (GEO). International contributions NASA has been actively addressing the orbital debris issue. In September, it announced funding for research proposals analysing the economic, policy, and social issues related to space sustainability. Its expert committee finalised three research proposals from three different universities across the globe. On December 16, NASA partnered with AST & Science to sign a spaceflight safety agreement to benefit from AST’s BlueWalker 3, which is highly equipped for manoeuvring through space and avoiding spacecraft and debris in orbit. Earlier in December, NASA Johnson Space Center also designed the Active Debris Removal Vehicle (ADRV) to remove large orbital debris from LEO by deorbiting it. The ADRV is a single-use, low-cost vehicle that can capture tumbling debris objects and reposition or deorbit them.
The small form factor of the vehicle allows NASA to launch eight of them in one payload. One of the world’s first space sweepers, developed by a private Japan-based company, Astroscale Inc, was announced in August 2021 and is planned to declutter space by the onset of 2024 by dragging debris into the planet’s natural incinerator – the oxygen-rich atmosphere. In September, the company also received funding from the UK Space Agency to harness its Rendezvous and Proximity Operations (RPO) capabilities for the mission COSMIC (Cleaning Outer Space Mission through Innovative Capture) to remove defunct British satellites by 2026. Last week, ThinkOrbital, a US-based company founded by former SpaceX VP Lee Rosen, also announced a space infrastructure to enable in-space manufacturing for businesses and the military, focusing on removing or recycling debris. Apart from the risk of collision and environmental damage, the presence of orbital debris can make it more difficult and costly to access and use outer space. Debris can interfere with the launch and operation of satellites, as well as the planning of future space missions. Read: Big Tech Loves Space, But Not Enough
Out of 25,182 space objects and debris globally in 2022, India has 217.
["AI Features"]
["Isro Satellites"]
Mohit Pandey
2022-12-20T11:00:00
2022
777
["Go", "funding", "programming_languages:R", "AI", "innovation", "RAG", "Isro Satellites", "ViT", "GAN", "R", "startup"]
["AI", "RAG", "R", "Go", "GAN", "ViT", "innovation", "startup", "funding", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/its-time-for-swachh-antariksh-abhiyan/
3
10
2
false
true
true
10,167,545
DataSwitch’s DataMaps: A Unified Approach to Data Traceability and Transformation
DataSwitch has unveiled a new product called DS DataMaps, which builds traceability of data and transformation flows across data pipelines, helping teams re-engineer their data platforms and giving data producers and consumers an explainable view of data flow to enhance adoption and trust. Challenges arise when transitioning to modern data platforms like data lakehouses, which necessitate significant data restructuring and technological change. DataSwitch CEO Karthikeyan Viswanathan explained that there are three types of data migration and transformation: rehosting, replatforming, and reengineering. Reengineering, the most intricate of these, involves a comprehensive overhaul of infrastructure, technology, and data structure. “When changing infrastructure, I’m moving from on-premise to cloud. I’m changing from one technology to another technology. However, during reengineering, I’m changing the infrastructure, I’m changing the technology, and also changing the data structure,” he added. He explained that this allows customers to feed more data to AI systems for analysis and other applications. The Role of DataMaps in Data Restructuring DataMaps provides essential data traceability and lineage, which are indispensable for successful reengineering. “I need to find the traceability of data. Then only I can restructure my data in the new world,” Viswanathan said. The tool operates through a series of key functionalities, including metadata extraction, which consolidates metadata from various data pipelines into a common format. It also provides end-to-end traceability, identifying links between data elements to offer comprehensive job-level and end-to-end lineage. 
Furthermore, DataMaps utilises predictive mapping, using pre-configured data models and automatic mapping to generate code for the new platform. Its polyglot code generation capability allows the creation of code for diverse platforms, such as Snowflake or Databricks, based on specific customer needs. Citing an example, he said that users can modify how data is structured: if a field called ‘Party’ in the old system needs to be renamed ‘Client’ in the new system, the tool makes this transformation easy. To facilitate testing and validation, the tool features synthetic data generation and live unit testing. Finally, it provides visual pipeline outlining, offering a clear representation of the data transformation process. “Think about the simple analogy of Google Maps,” Viswanathan said. “We apply the same concept to data.” He explained that just as Google Maps builds a knowledge graph to determine the best routes, DS DataMaps builds a knowledge graph to track data movement across systems. Moreover, DataMaps integrates with DataSwitch’s existing product suite, including DS Migrate and DS Citizen. DS Migrate manages infrastructure and technology migration, while DS Citizen handles data modelling and code generation. DataMaps serves as the central tool for data restructuring during reengineering. The comprehensive tool suite from DataSwitch maintains data quality and reduces mistakes by 70%, enabling more reliable strategic choices. Its plug-and-play solutions simplify the transformation process and accelerate project delivery by 60%. The launch of DataMaps highlights DataSwitch’s broader strategy of addressing challenges in data engineering and operations. As companies adopt cloud-native architectures, the demand for intelligent lineage and migration tools continues to grow.
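The Google Maps analogy maps naturally onto a small graph structure. The sketch below is purely illustrative and not DataSwitch’s implementation: the column names, the Party-to-Client rename edge, and the traversal are invented to show how column-level lineage edges gathered from pipeline metadata can answer an end-to-end traceability question.

```python
from collections import deque

# Hypothetical column-level lineage edges (source, target), as might be
# extracted from pipeline metadata. All names are invented for illustration.
EDGES = [
    ("crm.party", "staging.party"),
    ("staging.party", "warehouse.client"),      # reengineering rename: Party -> Client
    ("erp.order_amt", "staging.order_amt"),
    ("staging.order_amt", "warehouse.revenue"),
]

def build_graph(edges):
    """Build a forward adjacency map from lineage edges."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    return graph

def downstream(graph, field):
    """Breadth-first walk: every field derived, directly or not, from `field`."""
    seen, queue = [], deque([field])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.append(nxt)
                queue.append(nxt)
    return seen

graph = build_graph(EDGES)
print(downstream(graph, "crm.party"))  # ['staging.party', 'warehouse.client']
```

Running the same walk over the reversed edges would give upstream provenance – the other half of end-to-end lineage.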
DataMaps provides essential data traceability and lineage, which are indispensable for successful reengineering.
["AI Highlights"]
[]
Siddharth Jindal
2025-04-09T18:38:20
2025
520
["Go", "Snowflake", "AI", "data pipeline", "Aim", "data engineering", "Rust", "R", "data lake", "Databricks"]
["AI", "Aim", "Snowflake", "Databricks", "R", "Go", "Rust", "data engineering", "data pipeline", "data lake"]
https://analyticsindiamag.com/ai-highlights/dataswitchs-datamaps-a-unified-approach-to-data-traceability-and-transformation/
3
10
0
true
true
false
10,085,272
Delhivery acquires supply chain software firm Algorhythm Tech
Gurugram-based logistics firm Delhivery on Tuesday announced that it has completed its acquisition of supply chain software firm Algorhythm Tech. This development comes after the company announced in December 2022 that it would acquire Algorhythm Tech for INR 14.9 crore in an all-cash deal to enhance its integrated supply chain solutions offering. Following the deal, Algorhythm Tech will operate as a wholly-owned subsidiary of the logistics service provider. The acquisition has been funded by proceeds from Delhivery’s initial public offering in May 2022, when it raised INR 5,235 crore. The acquisition is seen as a strategic move to help the logistics firm enhance its supply chain business offerings, providing value-added services and optimising costs. Months before its IPO, the firm had bought California-based Transition Robotics in December 2021, and in August 2021 it had acquired Spoton Logistics. Founded in 2003, Algorhythm Tech was the brainchild of Abhaya Borwankar, Ajit Singh, and Sandeep Pendurkar. The company offers intelligent, connected planning and optimisation solutions for manufacturing, supply chain, inventory planning, and sales and distribution. Delhivery, for its part, is one of India’s fastest-growing logistics providers. It aims to build the operating system for commerce through a combination of world-class infrastructure, logistics operations of the highest quality, and cutting-edge engineering and technology capabilities. The company has built a nationwide network encompassing all states, servicing over 18,000 PIN codes, and has set up 21 automated sort centres, 96 gateways, 93 fulfilment centres and 2,948 direct delivery centres.
Gurugram-based logistics firm Delhivery on Tuesday announced that it has successfully acquired supply chain software firm Algorhythm Tech. This development comes after the company announced in December 2022 that it would acquire Algorhythm Tech for INR 14.9 crore in an all-cash deal to enhance its integrated supply chain solutions offering. Following this deal, Algorhythm Tech will […]
["AI News"]
["delhivery", "Mergers and Acquisitions"]
Aparna Iyer
2023-01-17T15:00:33
2023
248
["Go", "programming_languages:R", "AI", "IPO", "programming_languages:Go", "Aim", "ai_applications:robotics", "delhivery", "Mergers and Acquisitions", "R"]
["AI", "Aim", "R", "Go", "IPO", "programming_languages:R", "programming_languages:Go", "ai_applications:robotics"]
https://analyticsindiamag.com/ai-news-updates/delhivery-acquires-supply-chain-software-firm-algorhythm-tech/
3
8
1
true
false
false
66,222
Wrapping Up- Top ML Talks From Day 2 At AIM’s Plugin 2020
One of the biggest online conferences brought to you by Analytics India Magazine, “Plugin” aims to bring the best brains together from around the world to talk about cutting-edge innovations in a distinctive virtual setting. The two-day virtual event gave attendees direct access to the brilliant minds of the AI and data science ecosystem. The second and last day of the online conference started with a number of parallel tracks and plenty of knowledge sharing through demos and workshops from top analytics leaders of the industry. Below is a quick glance at all the talks of Day 2 at Plugin 2020. A Glance from the Tech Talks How to Develop Credit Risk Models (Scorecards) Using Machine Learning Techniques and Their Use in Underwriting Strategy Development The first tech talk of the day was delivered by Sanjay Kar, Head of Analytics at Equifax India, who talked about how the banking and finance domain became an early adopter of data analytics. He discussed various topics including credit risk management, approaches and steps to develop credit risk models, a brief on underwriting strategy, how to develop strategies and prepare data, as well as validation and calibration. Identifying Model Drift Before It Is Too Late In this talk, Jacqueline Long, Principal Solutions Architect, Global Tech Practice, discussed how contactless commerce is reshaping consumer behaviour and how the “new normal” has brought about an immediate need to adjust to the global COVID-19 situation. Jacqueline explained the phases of the governed ModelOps process, including its prototype, deployment and production environments. She further discussed the meaning of model drift and how business decisions that rely on analytical models could be suffering from it. 
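The model drift idea summarised above is commonly operationalised with a distribution-shift statistic on incoming features. The Population Stability Index (PSI) below is one widely used choice, shown here as a generic sketch rather than anything presented in the talk; the thresholds and sample data are illustrative only.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and new data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values above the baseline range

    def share(sample, i):
        # Fraction of `sample` falling in bin i, floored to avoid log(0).
        n = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(n / len(sample), 1e-6)

    return sum(
        (share(actual, i) - share(expected, i))
        * math.log(share(actual, i) / share(expected, i))
        for i in range(bins)
    )

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # scores at training time
shifted  = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 1.0, 1.1]   # scores in production
print(psi(baseline, baseline))  # 0.0 (identical distributions)
print(psi(baseline, shifted))   # well above 0.25 -> drift alarm
```

In practice the same check would run on each model input feature on a schedule, flagging drift before the decisions built on the model degrade.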
The Value of Data Analytics in the Smart Factory Sudhir Padaki, Director of Business Development, Data & Analytics at Altair, discussed how to access data from the shop floor through OT/IT convergence solutions and build useful analytics for asset monitoring and predictive maintenance, and how one can leverage analytics on a shop floor in real time to identify hidden indicators of future downtime or spot anomalies. He walked through these scenarios with a demo showing real-time asset monitoring and real-time scoring, among others. Approaching (Almost) Any NLP Problem This talk was delivered by Abhishek Thakur, the world’s first 4x Kaggle Grandmaster and Chief Data Scientist at Boost.ai. Abhishek discussed Natural Language Processing (NLP), one of the popular areas of machine learning, along with various other machine learning techniques. He talked about the uses and applications of NLP, how to pre-process text data and solve a complex problem, and various machine learning and deep learning models, along with a demo on Quora duplicate question identification. Leveraging Game Theory for Explainable AI (XAI) Shashank Shekhar, Head of Advanced Analytics & Data Sciences at Subex, started the talk with his journey at Subex. He explained the definition of explainable AI and why it is important, the taxonomy of interpretability that includes pre-modelling, in-modelling and post-modelling explainability, the interpretability-completeness trade-off and the interpretability-accuracy trade-off, among others. He further discussed dominance analysis and how it is similar to the Shapley approach, though the latter has additional features. 
How to Use Predictive Analytics in Cricket In this talk, Netali Agrawal, Lead Business Analyst at Infosys, talked about a predictive analytics model in cricket whose goal is to predict the winner of the ICC Men’s Cricket World Cup 2019 based on historical, player-wise match data. She started from the problem statement and walked through the steps of the model, including feature engineering, data preparation, prediction on the World Cup dataset and validation, among others. During the talk, Netali showed how she calculated team strength based on the individual batting and bowling strength of the players who appeared in the respective matches. Production Machine Learning & MLOps Lavi Nigam, Data Scientist at Gartner, talked about the difference between the current state of the machine learning pipeline and the new-age machine learning pipeline. Lavi shed light on the popular machine learning pipeline by Microsoft and explained that in the current scenario, developers talk mostly about model development rather than deployment. Lavi also pointed out that the current generation stresses the data science and algorithm part more than the engineering part of machine learning and artificial intelligence. A Black-Box Approach to Data Science to Focus on What Really Matters to the Business In this talk, Gianluca Gindro, Head of Data Science at Kuoni, discussed the importance of managing a black box from the business point of view. Treating data science as a ‘black box’, the speaker focused on the key contact points of data science with the business that should never be neglected, and showed in practice how the same machine learning model can take different shapes depending on changing business needs. A Glance from the Knowledge Talks Developing a Product-Centric, AI-Driven Blueprint for the Enterprise Johnson Poh, Head, Group Enterprise AI at UOB Bank, discussed the essentials of artificial intelligence and its role in this new paradigm. 
In this talk, Poh covered several fundamental topics, including AI, big data, analytics and its broad field with the help of a Venn diagram, the relevance of emerging technologies, how to implement a data-driven strategy through the lens of people, process and technology, and how AI has changed the financial industry, among others. Leveraging Data & Analytics for Social Goodness In this talk, Anirban Nandi, Head of Analytics at Rakuten, covered how data science can help solve some of the pertinent problems of society, and sought help from fellow professionals to contribute to this cause as well. Anirban mentioned some of the available data sources, including social media, mobile, biometric and satellite images, and explained the Multiple Indicator Cluster Survey (MICS) by UNICEF and how MICS plays a central role in the new 2030 Agenda for Sustainable Development data landscape. My Experiences with Managing/Modelling/Analysing Data for Fast Growth Nikola Sucevic, Head of Advanced Analytics at Smartfren PT, talked about the R&D efforts that exist on the operator side nowadays and how to manage teams of talented data scientists. He discussed how to model the future for real-time strategic decisions for investors and decision-makers in the telecom industry. He showed a demo of a planning board where he discussed churn rates, growth rates, customer churn variations and more. Artificial Intelligence Essentials for Business Leaders In this talk, Bhagirath Kumar Lader, Chief Manager, Business Information Systems at GAIL, discussed the essentials of AI, deep learning and machine learning, how to distinguish between the hype and reality, and applications of AI in business, among others. Bhagirath explained the modern analytics journey that includes descriptive, diagnostic and predictive analytics, as well as the wall of analytics that needs to be broken through prescriptive and predictive analytics. 
Impact of Data on Bringing Efficiencies in Logistics Vikram Khurana, Head of Analytics and Business Intelligence at Delhivery, talked about the abstraction of problems in running a logistics business and explored some analytics case studies in logistics that are helping organisations in this modern era of data. Vikram explained that most solutions in the logistics industry are defined around three important variables – space, location and time – and discussed some use cases on network design and projections. Role of Data Strategy in Data Science and AI Initiatives In this talk, Sateesh Rai, Head of Analytics at Orient Electric, talked about the importance of data in an organisation. Sateesh explained the components of a data strategy and noted that CXOs need to understand why and how a data strategy would make a difference, and how data is created and used. He discussed various challenges in organisational growth and business strategy, including the lack of a clearly articulated business strategy, lack of cross-business alignment and lack of priority, and argued that business strategy comes before data strategy. The Artificial Intelligence as a Service (AIaaS) Opportunity Anish Agarwal, Director of Data & Analytics at RBS India, talked about a number of topics related to Artificial Intelligence as a Service (AIaaS), including its benefits, importance and applications. He discussed how this service model aspires to become just as widely adopted based on its potential to drive business outcomes with unmatched efficiency, how AIaaS can serve customers better, and how the platform allows individuals and companies to experiment with AI for various purposes without large initial investment and with lower risk. 
Cognitive Digital AI Platform for Telecom In this talk, Sanjeev Chaube, EVP & Head of Big Data & Advanced Analytics at Vodafone Idea, discussed how to improve customer experience and increase profitability in telecom using data science, AI and advanced analytics technologies. Sanjeev started the talk by explaining the telecom market, how telecom as a horizontal supports multiple domains including hospitals, airports and retail, the intuition behind a cognitive digital AI platform, how to derive actionable insights, and more. Role of Data & AI in Customer Centricity Vijay Balakrishnan, Group Chief Data Officer at Michelin, talked about customer centricity and how it has become the heart of many organisations globally. He discussed the capabilities of a customer-centric organisation, which include understanding unique problems, the context of needs and delivering consistent experiences, and explained that a customer-centric organisation must build customer empathy into processes and policy, deliver value across customers’ lives, motivate employees to stay engaged, and much more. Digital Fluency: Creating a Data-Driven Organisation In this talk, Michael Ferrari, Global Head of Climate & Agronomic Decision Sciences at Syngenta, explained topics like quantamental – a fusion of quantitative and fundamental approaches – digital fluency, and how data science is related to digital fluency. He discussed how the notion of the ‘one size fits all’ data scientist has evolved into a fictitious character, the benefits of focusing on beta instead of alpha, and the importance of domain expertise for data science to continue to evolve and provide value to organisations. Data Literacy – It Is Not a Math Skill. It Is a Life Skill The last talk of the day was delivered by Kirk Borne, one of the top AI influencers and Principal Data Scientist at Booz Allen Hamilton. 
In this talk, Kirk discussed the meaning of data literacy, how it is more than just numbers and the measurement of things, and the complexity of big data. He further discussed various important topics related to data that people often overlook or misunderstand, such as data awareness, data relevance, data literacy, data science and the data imperative.
One of the biggest online conferences brought to you by Analytics India Magazine, “Plugin” aims to bring the best brains together from all around the world to talk about cutting-edge innovations in a distinctive virtual setting. The two-day virtual event indeed gave the attendees direct access to the brilliant minds of AI and data science […]
["AI Trends"]
["plugin online conference", "plugin virtual data science event", "retail bi prescriptive"]
Ambika Choudhury
2020-05-29T20:23:28
2020
1,764
["plugin online conference", "plugin virtual data science event", "data science", "artificial intelligence", "machine learning", "AI", "retail bi prescriptive", "ML", "MLOps", "NLP", "deep learning", "analytics", "xAI"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "NLP", "data science", "analytics", "xAI", "MLOps"]
https://analyticsindiamag.com/ai-trends/wrapping-up-top-ml-talks-from-day-2-at-aims-plugin-2020/
3
10
5
false
true
true
10,109,589
7 Must-Read Generative AI Books
Generative AI has gained significant attention in 2023. As everyone is busy experimenting with it and building innovative applications and tools for the betterment of humanity, it becomes increasingly important to understand the basics and technical nuances, and not just fall prey to the hype. Here, AIM has listed the top seven must-read generative AI books of 2023 for machine learning engineers and data scientists, to enhance your understanding and skills in the field. Table of contents: Generative AI with Python and TensorFlow 2; Generative Deep Learning; Generative AI with LangChain; Generative AI on AWS; Artificial Intelligence & Generative AI for Beginners; Generative AI in Practice; The Equalizing Quill. Generative AI with Python and TensorFlow 2 by Joseph Babcock and Raghav Bali This book gives you a glimpse of the evolution of generative models, from Boltzmann machines to VAEs and GANs, teaches TensorFlow model implementation, and keeps you updated on deep neural network research. Access the book here. Generative Deep Learning by David Foster (author) and Karl Friston (foreword) Generative Deep Learning teaches machine learning engineers and data scientists how to create generative deep learning models using TensorFlow and Keras, including VAEs, GANs, Transformers, normalizing flows, energy-based models, and denoising diffusion models. It covers deep learning basics and advanced architectures, providing tips for efficient learning and creativity. Access the book here. Generative AI with LangChain by Ben Auffarth Generative AI with LangChain explores the functions, capabilities, and limitations of LLMs like ChatGPT and Bard, and how to use the LangChain framework for production-ready applications. 
It covers transformer models, attention mechanisms, training and fine-tuning, data-driven decision-making, and automated analysis and visualization using pandas and Python, along with heuristics for model usage. The goal is to provide a comprehensive understanding of LLMs and their potential for enhancing our understanding of the world. Access the book here. Generative AI on AWS by Chris Fregly, Antje Barth, and Shelbee Eigenbrode You’ll learn the generative AI project life cycle, including use case definition, model selection, model fine-tuning, retrieval-augmented generation, reinforcement learning from human feedback, and model quantization, optimization, and deployment. You’ll also explore different types of models, including large language models (LLMs) and multimodal models such as Stable Diffusion for generating images and Flamingo/IDEFICS for answering questions about images. Access the book here. Artificial Intelligence & Generative AI for Beginners by David M. Patel For those eager to delve into the world of AI, particularly the buzz around generative AI, and seeking practical ways to harness tools like ChatGPT, Midjourney, or RunwayML for both business and personal advancement, this comprehensive guide is an invaluable resource. It begins with an exploration of AI’s history and its key components, delves into types of machine learning, and discusses the crucial roles of data and algorithms. The guide further elucidates the major fields of AI, including NLP, computer vision, and robotics. In its deep dive into generative AI, it explains the concept and types, and offers business case studies, alongside a step-by-step approach to building and developing generative AI models. The final part focuses on practical applications in fields like copywriting and graphic design, presenting the best AI tools of 2023 and addressing ethical considerations. Access the book here. 
Generative AI in Practice by Bernard Marr In Generative AI in Practice, renowned futurist Bernard Marr offers readers a deep dive into the captivating universe of GenAI. This comprehensive guide not only introduces the uninitiated to this groundbreaking technology but also outlines the profound and unprecedented impact of GenAI on the fabric of business and society. It is set to redefine our jobs, revolutionise business operations, and question the very foundations of existing business models. Beyond merely altering, GenAI promises to elevate the products and services at the heart of enterprises and intricately weave itself into the tapestry of our daily lives. Access the book here. The Equalizing Quill by Angela E. Lauria As AI technology rapidly advances, AI-assisted book writing is becoming increasingly accessible to writers of all backgrounds. Learning how to unlock the potential of large language models is critical for communities who have been disenfranchised and are ready to make a bigger impact on society’s thinking. It is time to read The Equalizing Quill and finally make your voice heard. Access the book here.
Here AIM has listed the top seven must-read generative AI books of 2023
["AI Trends"]
["AI (Artificial Intelligence)", "Machine Learning"]
Arya Vishwakarma
2023-12-28T11:35:46
2023
720
["GenAI", "artificial intelligence", "machine learning", "AI", "neural network", "ML", "Machine Learning", "computer vision", "NLP", "deep learning", "generative AI", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "neural network", "NLP", "computer vision", "generative AI", "GenAI"]
https://analyticsindiamag.com/ai-trends/7-must-read-generative-ai-books-for-unleashing-your-technology-prowess/
3
10
3
true
true
true
10,073,330
Could Astro Compete with Next.js to Become the Next Big Framework?
Developers have fallen into a problem-solution loop. There is a continuous effort in the developer community to solve one problem or another. Ironically, there are now so many solutions that another solution is needed for the existing ones. The same issue has emerged in website development. End users love fast-loading websites, but developers love to build them with big, heavy frameworks. The key to better performance is to use less JavaScript. Astro allows users to build a site with their favourite framework – or several frameworks at once – and renders it to static HTML at build time. It takes a different approach from the rendering patterns used by many other frameworks such as Gatsby, Next.js and Remix.run. Astro web pages are entirely static, with no JavaScript shipped by default. When a component (for example, an image carousel or a dark/light mode toggle) requires JavaScript to run, Astro loads only that component and any necessary dependencies. Other site components remain static, lightweight HTML. The getting-started guide from Astro is an excellent introduction for acquainting yourself with the framework. While other frameworks like Angular, Svelte and Vue are focused on developing dynamic sites, Astro has been able to find the sweet spot between static and dynamic sites. Solution to key problems in websites Managing hydration is one of the major issues in websites. Static web pages are hydrated before a user can interact with them, which decreases performance: the longer the hydration process takes, the longer users have to wait to interact with a website or web application. Astro overcomes this issue by loading certain page components only as needed and leaving the remainder of the page as static HTML. This is known as partial hydration. Astro-created websites are static by default; no JavaScript is served, and all of it is stripped during the build. 
Since users do not have to wait for the complete page to load before interacting with it, partial hydration is what lets the islands architecture load faster than a single-page application architecture. The islands design allows components to load independently of one another and render in isolation. With code splitting, a minimal amount of JavaScript is required for a route, while the remaining components are lazy-loaded. Astro supports code splitting by default, based on page routes. Additionally, it includes routing that may be used to create new pages. After the beta release, Astro announced experimental support for server-side rendering (SSR). This is crucial, since server-rendered applications render faster and are naturally SEO-friendly. Divided opinions Several developers believe that Astro might completely alter how websites are built, particularly those that don’t use a lot of client-side JavaScript. Performance would be excellent by default in such a configuration, and owing to the templating system and MDX-like capabilities, maintaining a codebase would be simple enough for everyone – even for users who are unfamiliar with client-side libraries. However, many others believe that Astro might not be a replacement for all frameworks; it is most likely to co-exist with them. Astro has released version 1.0, which is still in the beta phase, so it is tough to situate the platform in the larger framework universe as of now. Next.js is Astro’s biggest competitor; Astro is still new to the market and widely considered nascent in comparison to the formidable Next.js.
Astro lets developers find the sweet spot between static and dynamism; build fast-loading sites.
["AI Trends"]
[]
Tausif Alam
2022-08-23T13:00:00
2022
570
["ELT", "programming_languages:R", "AI", "ML", "programming_languages:Java", "JavaScript", "R", "Java", "programming_languages:JavaScript"]
["AI", "ML", "R", "JavaScript", "Java", "ELT", "programming_languages:R", "programming_languages:JavaScript", "programming_languages:Java"]
https://analyticsindiamag.com/ai-trends/could-astro-compete-with-next-js-to-become-the-next-big-framework/
3
9
1
false
true
false
10,115,381
When to RAG, and When (Not) to XGBoost
“If you had asked AI experts what an LLM was before [the launch of ChatGPT in] 2022, many probably would have answered that it’s a law degree,” quipped Oliver Molander in his post on LinkedIn, adding how many find it extremely difficult to accept that AI is much more than just LLMs and text-to-video models. The real winner when it comes to tabular data and making sense of spreadsheets is XGBoost (aka Extreme Gradient Boosting). It excels on all fronts, amid all the hype around other deep learning techniques, LLMs, and the more recent Retrieval Augmented Generation (RAG). XGBoost 2.0, launched in October last year, performs even better on several new classification tasks. Though techniques like XGBoost, deep learning and RAG are not directly comparable, their function is the same – to retrieve information, make sense of it, and generate outputs. “Many such cases. Reject FOMO, embrace tradition,” tweeted Bojan Tunguz (@tunguz) on March 10, 2024. Heard of the New XGBoost LLM? Despite the advancements in generative AI and the proliferation of LLMs, the practical utility of XGBoost remains unparalleled, particularly in domains reliant on tabular datasets. XGBoost’s interpretability, efficiency and robustness make it indispensable for applications ranging from finance to healthcare. The hype around LLMs and RAG has made people forget the importance of other ML techniques such as XGBoost. VCs are so hell-bent on hopping onto the GenAI and LLM bandwagon that every new term is often mislabelled as a new type of LLM. But in reality, a huge chunk of the return on investment is concentrated around predictive ML techniques such as XGBoost and Random Forest. The majority of business use cases for AI/ML involve proprietary tabular business data. When dealing with tabular datasets, efficiency is paramount. XGBoost’s versatility extends beyond classification to regression and ranking tasks. 
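The gradient-boosting idea behind XGBoost can be illustrated in miniature: fit weak learners one after another, each to the residuals the ensemble has left so far. The toy below uses one-feature decision stumps and squared loss; it is a conceptual sketch of the technique, not the xgboost library, and the data and learning rate are made up.

```python
def fit_stump(x, residuals):
    """Find the single split on x minimising squared error of two leaf means."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda xi: lm if xi <= split else rm

def predict(stumps, xi, base, lr=0.3):
    """Ensemble prediction: base value plus shrunken stump corrections."""
    return base + lr * sum(s(xi) for s in stumps)

def boost(x, y, rounds=50, lr=0.3):
    """Squared-loss gradient boosting: each stump fits the current residuals."""
    base = sum(y) / len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - predict(stumps, xi, base, lr) for xi, yi in zip(x, y)]
        stumps.append(fit_stump(x, residuals))
    return base, stumps

# Toy tabular target: a step function of a single feature.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [10, 10, 10, 10, 20, 20, 20, 20]
base, stumps = boost(x, y)
print(round(predict(stumps, 2, base), 1), round(predict(stumps, 7, base), 1))  # 10.0 20.0
```

Real XGBoost adds second-order gradients, regularisation, and clever split-finding on top of exactly this loop, which is why it stays fast and interpretable on tabular data.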
Whether you need to predict a continuous target variable, rank items by relevance, or classify data into multiple categories, XGBoost can handle it with ease. Its interpretability, efficiency and versatility make it the preferred choice for many predictive modelling endeavours, particularly those reliant on tabular data. Conversely, the evolving capabilities of LLMs and the augmentative potential of RAG offer tantalising prospects for knowledge-intensive applications. RAG is Too Good, But Not So Much A study conducted in July 2022, analysing 45 mid-sized datasets, revealed that tree-based models such as XGBoost and Random Forests continued to exhibit superior performance compared to deep neural networks when applied to tabular datasets. RAG burst onto the scene in 2020 when the brainiacs at Meta AI decided to jazz up the world of LLMs. It’s a game-changer. Designed to give LLMs the information they need, RAG swooped in to fix the problem that haunted its predecessors – the dreaded hallucinations. With RAG, customers can add another dataset and give the LLM fresh information to generate the answer from. Some call this “fancier prompt engineering”. This is what enterprises need to generate insights from their own data. But even then, the technique has not completely fixed the hallucination issue within LLMs, and has arguably made it worse, since people started trusting these models even more. Moreover, the deployment of RAG is not without its challenges, particularly those concerning data privacy and security. Instances of prompt injection vulnerabilities underscore the need for robust safeguards in leveraging RAG-enabled models. Traditionally, there have been two distinct groups in the ML ecosystem: the tabular-data-focused data scientists who use XGBoost, LightGBM and similar tools, and the LLM group. The two groups have used separate techniques and models. “I have always been a big fan of XGBoost! 
There was a time I was more of an XGBoost modeller than an ML modeller,” said Damien Benveniste from The AiEdge on LinkedIn. Everyone is focussed on developing LLMs. — Marc (@marccodess) August 29, 2023 LLMs produce textual output, but the focus here is on using the internal embeddings (latent structure embeddings) generated by LLMs, which can be passed to traditional tabular models like XGBoost. While Transformers have undoubtedly revolutionised generative AI, their strengths lie in unstructured data, sequential data, and tasks that involve complex patterns. Krishna Rastogi, CTO of MachineHack, said, “Transformers are like the H-bombs of machine learning, and XGBoost is the reliable sniper rifle. When it comes to tabular data, XGBoost proves to be the sharpshooter of choice.”
“What’s your thoughts on XGBoost 2?” “I’ve never heard of that LLM.” This was a dialogue Molander recently had with a VC who proclaimed themselves an “AI-first investor”.
["AI Features"]
["XGBoost"]
Mohit Pandey
2024-03-12T14:30:00
2024
723
["ChatGPT", "GenAI", "machine learning", "Meta AI", "AI", "neural network", "ML", "deep learning", "XGBoost", "generative AI"]
["AI", "machine learning", "ML", "deep learning", "neural network", "generative AI", "GenAI", "ChatGPT", "Meta AI", "XGBoost"]
https://analyticsindiamag.com/ai-features/when-to-rag-and-when-not-to-xgboost/
3
10
2
true
false
false
10,019,888
Meet Kautilya, The Youngest AI Programmer In The World
The lockdowns in the aftermath of the COVID-19 pandemic have made all of us homebodies. The smart alecs among us saw the opportunity in difficulty. Meet Kautilya Katariya, who took up computer programming and built AI applications when the times got tough. The computer whiz-kid from the UK became the Guinness World Record holder for the youngest AI programmer at six years old after completing a series of computer lessons from IBM. According to an IBM blogpost, Kautilya began reading IBM course materials to understand computer programming and the concepts of coding languages like Python. By November 2020, he had completed five different courses from IBM in Python and AI, including ‘Foundations of AI’, ‘Python for Data Science’, and a course from IBM Cognitive Class. Analytics India Magazine caught up with the seven-year-old, and his mother Trupti and father Ishwari, to understand what made him fall in love with coding. AIM: What got you into computer programming? Kautilya: I read some books and watched a few videos on YouTube about technology and AI. And everything pointed me to the fact that cool things are either run by a computer programmer or made using programming. I got fascinated by how things actually work. AIM: How did your parents help you when you showed interest in learning programming and AI? Kautilya: When I started asking lots of questions about computers, maths, robots and artificial intelligence, my parents provided some nice books about basic computers, science and technology concepts. I read them all and started asking more questions as I got extra time at home due to the COVID lockdown since the schools shut down. My parents gave me a laptop with an internet connection to explore the computer world on my own. My parents understood my curiosity and helped me by providing all the necessary means to explore the world of technology and AI. AIM: What are the sources you would suggest for young people to start learning computer programming?
Ishwari and Kautilya: In our opinion, young people should start with some basic books about computer programming available to them, written in a language they can understand or are most comfortable with. Young people should first learn basic logic through block-level programming, and MIT Scratch is one of the best platforms available to learn. Once they are comfortable with logic and basic algorithms, Python is the next computer language to learn, along with JavaScript and HTML. There are plenty of free videos and educational material available online for this. For AI and machine learning, lots of free material is available online, but IBM’s Machine Learning for Kids is a good place to start. AIM: What’s next for you? Kautilya: Next, I am learning new and advanced concepts about applied artificial intelligence from free IBM courses, for example, building a chatbot for my website, which can answer questions about programming and AI. I have not decided yet about the long term, but maybe in the future I will try to learn and work in the cognitive computing field. AIM: What message do you have for young people who find coding difficult? Kautilya: I think computer programming is really fun and is similar to solving puzzles. If we think that we are just trying to solve puzzles, coding won’t feel that difficult and you may start enjoying it. AIM: When Kautilya showed curiosity about computers, what was your approach to help him? Trupti and Ishwari: Kautilya is a keen reader and reads all kinds of books on various topics, along with making mischief in and out of the house. He showed a special interest and curiosity in computing. At the age of six, along with swimming and cycling, computing and puzzles became his favourite topics to discuss. So we provided him books of his interest to satisfy his curiosity. Being a good reader, he was absorbing knowledge like a sponge. He always finished his schoolwork quickly.
When his curiosity became more demanding, we provided him with a laptop to explore the world of technology and let him utilise his extra time. Initially, he was exploring by watching various free YouTube videos on computing, algorithms, coding, programming and AI. Then, he enrolled in some courses available online for free from universities like Stanford and MIT, and tech giants like IBM, on various platforms like edX and IBM Cognitive Class, to gain some structured knowledge about various concepts related to computer programming, machine learning, AI and data science. AIM: What advice do you have for parents who are considering coding lessons for their kids? Trupti and Ishwari: We think every kid is special and has lots of potential and calibre in them. As parents, we need to identify and provide all the necessary means for them to excel in the area of the kid’s interest. Age should not be a bar to start learning coding, just like any other sport or language. Consider coding as another mental exercise to develop logical thinking and problem-solving capability in kids, rather than an additional subject to teach them. We think every child is curious by nature and young minds are open to learning new things. We need to give them the right exposure, and they would love to get involved and solve problems like playing with puzzles. Coding is one of the best tools to channelise that curiosity.
The lockdowns in the aftermath of COVID-19 pandemic have made all of us homebodies. The smart alecs among us saw the opportunity in difficulty. Meet Kautilya Katariya, who took up computer programming and built AI applications when the times got tough. The computer whiz-kid from the UK became the Guinness World Record holder for the […]
["AI Features"]
["Interviews and Discussions"]
Kashyap Raibagi
2021-02-09T16:00:00
2021
882
["data science", "machine learning", "artificial intelligence", "AI", "ML", "Python", "Aim", "analytics", "JavaScript", "R", "Interviews and Discussions"]
["AI", "artificial intelligence", "machine learning", "ML", "data science", "analytics", "Aim", "Python", "R", "JavaScript"]
https://analyticsindiamag.com/ai-features/meet-kautilya-the-youngest-ai-programmer-in-the-world/
3
10
0
false
true
false
10,066,753
After 20 years, Apple pulls the plug on iPod
Two decades after the launch of the first iPod, Apple has decided to discontinue the product that revolutionised the way we consumed music. Apple has shipped more than 450 million iPods, the first of its kind device to ‘hold 1,000 songs in your pocket’. “Today, the spirit of the iPod lives on. We’ve integrated an incredible music experience across all of our products,” said Greg Joswiak, Apple’s senior vice president of Worldwide Marketing. The popular iPod models include the iPod Nano, iPod Shuffle and the latest iteration, the iPod Touch. The Touch was released in 2007 and last updated in 2019. The model will remain available “while stocks last”. “Music has always been part of our core at Apple, and bringing it to hundreds of millions of users in the way iPod did impacted more than just the music industry — it also redefined how music is discovered, listened to, and shared,” said Joswiak. iPod fans have taken to Twitter to bid adieu. Say goodbye to iPod. pic.twitter.com/0WXYeOwiPJ — Joe Rossignol (@rsgnl) May 10, 2022 I've had my ipod classic since 2006. Last year it finally stopped working so I replaced the very scratched up face, the battery and replaced the HDD with an SD card adapter. Now a full charge lasts a month with daily use and it holds almost 500GB of music. I love this thing pic.twitter.com/pl9f1HPnz1 — Jeff Kelley (@iamjeffkelley) May 10, 2022 To me, the IPOD is the most innovative product of our lifetimes. It led to the changes in the way we communicate, function, and collaborate. It was not 1st in class but it was the most efficient. Heck of a run IPOD! @Apple — Bhrett McCabe, PhD (@DrBhrettMcCabe) May 10, 2022
The popular iPod models include iPod Nano, iPod Shuffle and the latest iteration, iPod Touch.
["AI News"]
[]
Avi Gopani
2022-05-11T12:16:50
2022
281
["Go", "programming_languages:R", "AI", "IPO", "programming_languages:Go", "R"]
["AI", "R", "Go", "IPO", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/after-20-years-apple-pulls-the-plug-on-ipod/
3
6
2
true
false
false
66,211
Artificial Intelligence Essentials for Business Leaders
AI has become the need of the hour, and all industries are now integrating analytics and AI to drive the decision-making process. Bhagirath Kumar Lader, Chief Manager (Business Information System) at GAIL, led us through a session on artificial intelligence essentials for business leaders in today’s age. Lader is one of the key members of the digital transformation team at GAIL and has deep knowledge of how AI, ML and DL are crucial to businesses. He gave us a quick overview of the motivation for AI, AI essentials, and AI hype vs reality, while taking us through use cases. Motivation For AI While AI is a crucial part of businesses, one of the key drivers of its implementation is its ability to make decisions, which is usually considered the forte of humans. Integrating decision-making systems into work takes off the workload, and AI makes it that much easier as it relies on data for its decision-making capabilities. “Where humans would have typically relied on gut feelings, the availability of data and using AI on it makes it more efficient and reliable,” said Lader. Taking us through the early journey of analytics, from its first use in 1880 by the US Census Bureau to today’s age where it has drastically reduced the time taken in every process, he spoke about how the analytics journey has evolved from descriptive and diagnostic analytics to predictive and prescriptive analytics. It now looks at current and past data to find trends and patterns that help forecast the probability of a situation occurring again in the future. He also spoke about how analytics suggests decisions, actions and implications from predictive models to improve decision-making. Wall In Analytics While predictive and prescriptive analytics are now widely used, Lader shared how a wall comes up as a hindrance when moving from descriptive to predictive analytics.
“While Business Analytics has many domains, it is highly important to understand each one of them to be able to break this wall and move into analysing correlations, root causes, forecasting and optimisation, which essentially are a part of predictive and prescriptive analytics,” said Lader. How AI Differs From Humans Once the above challenge is overcome, another crucial factor while designing an AI system is to bring cognitive ability into the system, which is nothing but the ability to do reasoning, problem solving, planning, abstract thinking, complex idea comprehension and learning from experience — all of which are unique to humans. “Intelligence is the measure of cognitive capabilities, which, if machines show, can be considered showing artificial intelligence,” said Lader. Lader addressed some of the commonly faced questions while adopting AI, such as: what are the basic cognitive operations; what necessary conditions should a formal language fulfil in order to be an adequate tool for describing the world in a precise and unambiguous way; can reasoning be automated; and how can we construct an AI system. “While AI systems should be able to emulate human thinking, they should also be able to learn from experience, arrive at conclusions, understand complex real-world use cases, participate in natural-language dialogues with people, and have cognition abilities, among others,” he said. He shared that AI can be created in four ways — thinking humanly, thinking rationally, acting humanly and acting rationally. While explaining how machines can be made to act humanly, he gave the instance of the Turing test approach, in which Alan Turing developed an operational test for intelligent behaviour. “This test predicted that a machine might have a 30% chance of fooling a person for five minutes,” he shared. It also suggested knowledge, reasoning, language understanding and learning to be the major components of AI.
The thinking humanly approach is the cognitive science approach, meaning making machines think like humans, for which we need to get inside the actual workings of human minds. Coming to the rationality part of it, he said that an AI system needs to have rational thinking, where it knows the ‘right thing’. “While talking about autonomous vehicles, we often talk about whether the system would be able to differentiate between an old woman and a tree. This is rational thinking, which many systems currently do not possess,” he said. Talking about the goal of AI, Lader shared that the goals are driven by two groups — expert systems, which should demonstrate intelligent behaviour regardless of their resemblance or non-resemblance to human intelligence, and models of human intelligence, made with the aim of someday replicating or surpassing human-level intelligence, which is still a distant reality. Machine Learning & Deep Learning Another aspect of intelligent machines is machine learning, a category of algorithms that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. Lader explained that for machines to learn, there is a need for a generous sample of data for the algorithms to learn from. The kinds of machine learning models are: Supervised learning: some examples are linear and logistic regression, multi-class classification, neural networks, support vector machines, etc. It can be classified into two categories of algorithms – classification and prediction. Unsupervised learning: a class of ML techniques to find patterns in data without any labels. Some of the most common unsupervised learning methods are cluster analysis, autoencoders and GANs. Reinforcement learning: all about taking suitable action to maximise the expected reward in a particular situation. The reinforcement agent decides what to do to perform a particular task. In the absence of a training dataset, it is bound to learn from experience.
On the other hand, deep learning is part of a broader family of ML methods based on artificial neural networks with representation learning. Some of the common DL algorithms are CNNs and RNNs, used in tasks such as image recognition, among others. “All these techniques are extremely crucial in facilitating the intelligent machines that we have today and are used to improve the performance of a system beyond that provided by other analytics techniques,” he said on a concluding note.
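The split Lader draws between the learning paradigms can be illustrated on toy data; a minimal sketch, assuming scikit-learn is installed (reinforcement learning is omitted, since it needs an interactive environment rather than a fixed dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of 2-D points: class 0 around (-2, -2), class 1 around (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Supervised learning: labels are available, so a classifier is fitted to them.
clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)

# Unsupervised learning: labels withheld; cluster analysis finds the structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(f"supervised accuracy: {acc:.2f}")
print(f"cluster sizes: {np.bincount(clusters)}")
```

On well-separated data like this, the clustering recovers roughly the same two groups the classifier was told about, which is the essence of the labels-vs-no-labels distinction.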
AI has become the need of the hour and all the industries are now integrating analytics and AI to drive the decision-making process. Bhagirath Kumar Lader, who is the  Chief Manager (Business Information System) at GAIL led us through a session briefing Artificial Intelligence essentials for business leaders in today’s age. Lader is one of […]
["AI Features"]
["AI (Artificial Intelligence)", "AI Algorithms", "Business Intelligence", "business intelligence use cases", "how to implement business intelligence", "Intel"]
Srishti Deoras
2020-05-29T16:07:01
2020
1,001
["how to implement business intelligence", "artificial intelligence", "business intelligence use cases", "machine learning", "AI", "neural network", "ML", "image recognition", "predictive analytics", "AI Algorithms", "Aim", "deep learning", "analytics", "Business Intelligence", "Intel", "AI (Artificial Intelligence)"]
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "neural network", "analytics", "Aim", "predictive analytics", "image recognition"]
https://analyticsindiamag.com/ai-features/artificial-intelligence-essentials-for-business-leaders/
3
10
1
true
true
false
10,132,928
AMD to Acquire ZT Systems for $4.9B for Expanding AI Data Centres Ecosystem
AMD, in a strategic move to strengthen its AI ecosystem, announced the acquisition of ZT Systems for $4.9 billion. The deal, involving both cash and stock, includes an additional contingent payment of up to $400 million based on performance metrics. ZT Systems, a New Jersey-based company specializing in compute design and infrastructure for AI, cloud, and general-purpose computing, will be integrated into AMD’s computing infrastructure design business. AMD plans to sell ZT Systems’ data center infrastructure manufacturing arm to a “strategic partner.” ZT Systems also works closely with NVIDIA and Intel. AMD, which has already invested $1 billion in its broader ecosystem, sees this acquisition as pivotal in enhancing its expertise in AI systems design, encompassing silicon, software, and systems. “Our acquisition of ZT Systems is the next major step in our long-term AI strategy to deliver leadership training and inferencing solutions that can be rapidly deployed at scale across cloud and enterprise customers,” stated AMD chair and CEO, Dr. Lisa Su. ZT Systems’ CEO, Frank Zhang, will lead AMD’s manufacturing business, while President Doug Huang will oversee design and customer enablement teams, reporting to AMD’s executive vice president, Forrest Norrod. The deal is expected to finalize in the first half of 2025. “We are excited to join AMD and together play an even larger role designing the AI infrastructure that is defining the future of computing,” said Frank Zhang, CEO of ZT Systems. “For almost 30 years we have evolved our business to become a leading provider of critical computing and storage infrastructure for the world’s largest cloud companies.
AMD shares our vision for the important role our technology and our people play designing and building the computing infrastructure powering the largest data centers in the world.” Last month, AMD announced that it had signed a definitive agreement to acquire Silo AI in an all-cash transaction valued at approximately $665 million, a deal that was completed just last week. That acquisition had been expected to close in the second half of 2024, subject to regulatory approvals. The acquisition of ZT Systems is the latest move by AMD to bolster its AI capabilities. Over the past year, alongside ramping up organic R&D efforts, AMD has invested over $1 billion to grow its AI ecosystem and enhance its AI software expertise.
AMD plans to sell ZT Systems’ data center infrastructure manufacturing arm to a “strategic partner.”
["AI News"]
["AI Data Center", "ai funding", "AMD", "Mergers and Acquisitions"]
Mohit Pandey
2024-08-19T17:33:54
2024
376
["AI Data Center", "AMD", "API", "programming_languages:R", "AI", "RAG", "ai funding", "GAN", "Mergers and Acquisitions", "R"]
["AI", "RAG", "R", "API", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/amd-to-acquire-zt-systems-for-4-9b-for-expanding-ai-data-centres-ecosystem/
2
6
2
false
false
false
10,164,724
ElevenLabs Unveils Scribe, a Speech-to-Text Transcription Model to Rival Otter, TurboScribe, and Others
ElevenLabs has launched Scribe, a new speech-to-text tool that promises the highest accuracy in the field. This positions the company among notable competitors like Google, Otter, Fireflies, and TurboScribe, all of which are established in speech-to-text technology. ElevenLabs is popularly known for its text-to-speech and AI voice generation technologies. With Scribe, users get a product that does the opposite, built on the company’s expertise in speech synthesis. Scribe transcribes speech in 99 languages, with features like word-level timestamps, speaker diarisation, and audio-event tagging. Transcriptions are delivered as structured responses for seamless integration. On accuracy, ElevenLabs states that it tested Scribe using the FLEURS and Common Voice benchmarks across all supported languages and found that it consistently outperformed models like Gemini 2.0 Flash, Whisper Large V3, and Deepgram Nova-3. “Whether it’s meeting summaries, movie subtitles, or even song lyrics, Scribe delivers the lowest automated transcription word error rate in Italian (98.7%), English (96.7%), and 97 other languages,” said ElevenLabs. They emphasise that their technology addresses languages such as Serbian, Cantonese, and Malayalam with low word error rates. Developers can integrate Scribe using the Speech-to-Text API to get structured JSON transcripts with non-speech event markers, speaker diarisation, and word-level timestamps. Scribe is priced at $0.40 per hour of input audio, and for the next six weeks it carries an extra introductory discount. Creators and businesses can access Scribe directly via the ElevenLabs dashboard to upload audio or video files and generate formatted transcripts. Currently, the offering focuses on higher accuracy; a low-latency version for real-time applications will be released soon, according to ElevenLabs.
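For orientation, the shape of such a speech-to-text API call can be sketched as below. Nothing is sent over the network here; the endpoint path, header name, and field names are assumptions for illustration only – consult ElevenLabs’ official API reference before relying on them:

```python
def build_scribe_request(api_key: str, audio_path: str) -> dict:
    """Assemble the pieces of a hypothetical Scribe transcription call.

    This only shows the shape of a multipart speech-to-text request;
    all names below are assumed, not confirmed by the article.
    """
    return {
        "url": "https://api.elevenlabs.io/v1/speech-to-text",  # assumed endpoint
        "headers": {"xi-api-key": api_key},                    # assumed auth header
        "data": {
            "model_id": "scribe_v1",          # assumed model identifier
            "diarize": True,                  # speaker diarisation
            "timestamps_granularity": "word"  # word-level timestamps
        },
        "files": {"file": audio_path},        # audio or video upload
    }

req = build_scribe_request("YOUR_API_KEY", "meeting.mp3")
print(req["url"])
```

A real integration would pass these pieces to an HTTP client (for example `requests.post`) and parse the structured JSON transcript, with its event markers and timestamps, from the response.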
ElevenLabs claims to have launched the most accurate speech-to-text and transcription model on the market.
["AI News"]
["Speech Recognition"]
Ankush Das
2025-02-27T11:38:52
2025
275
["Go", "API", "Gemini 2.0", "programming_languages:R", "AI", "ML", "programming_languages:Go", "Aim", "llm_models:Gemini", "Speech Recognition", "R"]
["AI", "ML", "Gemini 2.0", "Aim", "R", "Go", "API", "llm_models:Gemini", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/elevenlabs-unveils-scribe-a-speech-to-text-transcription-model-to-rival-otter-turboscribe-and-others/
3
10
1
false
false
false