Dataset columns (name, dtype, min–max value or string length):

hackathon_id         int64          1.57k – 23.4k
project_link         stringlengths  30 – 96
full_desc            stringlengths  1 – 547k
title                stringlengths  1 – 60
brief_desc           stringlengths  1 – 200
team_members         stringlengths  2 – 870
prize                stringlengths  2 – 792
tags                 stringlengths  2 – 4.47k
__index_level_0__    int64          0 – 695
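Rows like the ones below can be checked against this schema when loading the dataset. The sketch below is a minimal, hypothetical validator; the sample row and its field values are illustrative, not taken from the dataset itself.

```python
# Hypothetical validator for rows matching the column schema above.
# Column names and dtypes come from the schema; the sample row is made up.
SCHEMA = {
    "hackathon_id": int,
    "project_link": str,
    "full_desc": str,
    "title": str,
    "brief_desc": str,
    "team_members": str,
    "prize": str,
    "tags": str,
    "__index_level_0__": int,
}

def validate_row(row: dict) -> list:
    """Return a list of problems; an empty list means the row fits the schema."""
    problems = []
    for col, expected in SCHEMA.items():
        if col not in row:
            problems.append("missing column: " + col)
        elif not isinstance(row[col], expected):
            problems.append(col + ": expected " + expected.__name__)
    return problems

sample = {
    "hackathon_id": 10440,
    "project_link": "https://devpost.com/software/example-project",
    "full_desc": "Inspiration ... What it does ...",
    "title": "Example Project",
    "brief_desc": "A short summary.",
    "team_members": "['Jane Doe']",
    "prize": "[]",
    "tags": "['javascript']",
    "__index_level_0__": 0,
}
print(validate_row(sample))  # []
```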
10,440
https://devpost.com/software/ment-heal-mental-health-online-services-hfxsg2
MENT-HEAL PROTOTYPE. DIGITAL WORLD. YOU ARE NOT ALONE. REACH OUT. PEER TO PEER LENDING. SELF CARE. MENTAL HEALTH.

Our Inspiration
Historically, economic downturns bring a long tail of mental, behavioral and often medical challenges: well-documented increases in depression, anxiety, substance abuse and domestic violence. The COVID-19 situation, though, is far different. We are talking about really isolating a population of people: a significant percentage of unemployment, and family members co-locating again without outlets to help manage the stress of this ordeal. We expect substantially more challenges in the behavioral health space coming out of this than we saw in the Great Recession, which itself produced profound changes in behavioral health and mental illness. We believe this platform can break the barrier and reduce the gap in mental illness care, moving more people from a reactive to a proactive mindset about mental wellness, just by the click of a button.

What it does
Ment-Heal is a mental health service designed to be accessed through USSD, a mobile app and web browsers. It has several categories of services:
Mental wellness: an app for sleep and breathing, plus education tools and games that train you to have better mental health.
B2B tools and sourcing: search engines, provider tools, back-office resources, and B2B/corporate mental health programs and services.
Measurement and testing: assessment, passive and active tracking or measurement of mood and other subjective indicators, and remote monitoring.
Telehealth: remote engagement, treatment or interaction with a clinician.
Digital therapeutics: a resource used in conjunction with medication or behavioral intervention (digital medicine).
Peer to peer: fighting isolation, bringing people together to be vulnerable with each other and creating connection for the purpose of well-being.

How we built it
We built this platform through a variety of steps. It started with the idea of incorporating different categories of mental health services under one roof; we wanted to make a platform that can make a change in the world.

Challenges we ran into
Because we wanted the platform to be diverse, the biggest challenge was getting different services incorporated into one platform. Programming was another ordeal: we decided to fill the app with games and educational tools that would draw people across different age brackets onto our platform. Sometimes we got stuck and never knew how to continue, so teamwork, passion and resilience are what we dedicated to the project, though it is still a learning experience. A shortage of mental health specialists to offer the service was also challenging, as was getting people to embrace that mental health is real; finally, inadequate funds during the piloting session were another nightmare.

Accomplishments that we are proud of
This is the future of the medical industry, just by the click of a button. Getting all those categories together in one platform is a success from our end. We are mostly proud because the platform cuts across different age brackets, which is a plus.

What we learned
Our minds just need a little twist to be in a position to face the reality of life, though it is easier said than done, and mental illness is a silent epidemic.

What's next for Ment-Heal (Mental Health Online Services)
After creating this platform, we plan to work hand in hand with like-minded organizations and the National Youth Council to do marketing and more benchmarking on how to reach different communities in less boring and more exciting ways, through mental health games and more interaction with our platform. In future, we will add more doctors to enhance the telehealth and digital therapeutics, improving the turnaround time and precision of service delivery. This will help more people complete their mental illness sessions.

Built With: bootstrap, javascript
Try it out: emilyakoth440093.invisionapp.com
MENT-HEAL (Mental Health Online Services)
MENT-HEAL is a startup national COVID-19 platform accessible to the public and partners through a mobile app, USSD and web browsers.
['EMILY AKOTH', 'Johnstone Mutua']
[]
['bootstrap', 'javascript']
16
10,440
https://devpost.com/software/ukulima-bora-apk
My inspiration: My inspiration for this project has been the hard work that my fellow Kenyans have put into farming since the COVID-19 pandemic started. Many people saw their livelihoods threatened, and as a last resort many Kenyans ventured into farming, since the basic need for upkeep and health is food. Others took it up as an income-generating activity. Over the last few months many people have been selling their produce even on the roadsides in the hope of making money, and often some are unable to sell because they cannot market their produce due to COVID-19. Furthermore, many people are afraid of leaving their houses during the pandemic, so the seller–customer relationship has been compromised. I therefore saw a need to bridge this gap by introducing the UKULIMA BORA apk, a mobile application built to accomplish this.

What it does: This application will provide a channel where farmers and customers connect on a virtual scale and do business through mobile banking, with the application acting as a middleman between buyer and seller.

OBJECTIVES.
To grow the agricultural sector in this tough COVID-19 period.
To promote farming, whether large scale, small scale or even garden farming, as a final resort to provide food during this COVID-19 period.
To reduce the spread of COVID-19 in the community by cutting down movement from place to place in search of food.
To provide an easier way of getting agricultural products from the seller to the buyer.
To improve the lives of local farmers, who will earn an income to help with their day-to-day upkeep.
To teach the agricultural aspects of life through tutorial videos made by farmers and posted on platforms such as YouTube.
To provide a channel for local barter trade where farmers can exchange produce they have in surplus.
To advise farmers on better farming methods by incorporating other organizations, with the help of the National Government in conjunction with the County Governments.
To bridge the gap between technology and farming in a way that reaches even local people far away from the city.
Most importantly, to help those in need during this tough corona period by creating a channel through which people can donate food via the app to the COVID-19 relief fund program.

THE PLATFORM. The platform will be composed of:
1. TUTORIALS. With the aid of this program, established farmers will have a chance to show their prowess in farming by making videos on how to grow crops they are good at, and posting them online through various media platforms. By posting these videos online, farmers can reach a wide range of upcoming farmers and also make money from the videos. Farmers can also use this platform to share professional advice through webinars and seminars (held under strict COVID-19 regulations), for which they can also be paid.
2. CHAT BOX. A forum where established and upcoming farmers get the chance to interact and advise one another on farming and other related activities.
3. BUSINESS PLATFORM. Here, with the help of the app, farmers post their produce online, where interested customers buy it wholesale or retail with the guidance of the application. Farmers register with the application and avail their produce to the concerned people, who make sure it is posted online; once it is bought, the money is wired to the farmer through the relevant mobile banking platforms.
4. BARTER TRADE PLATFORM. As farming is an activity in which one can produce more than the intended amount, the surplus can be put in a section designated for registered farmers, who can interact to exchange agreed-upon produce, which also diversifies crop production.
5. DONATION PLATFORM. Aware of the harsh economic period that has hit our great nation, the application will also have a section for donating foodstuffs to those hit by the pandemic, as a sign of great unity, for we are all Kenyans. The foodstuffs shall be supplied to different parts of the country.

How I built it: This is an idea in the making, and I hope it can pass this stage so that more work can be done on it, as I believe it will benefit a lot of people.

Challenges I ran into: The greatest challenge on my side has been communication about the challenge itself, as it has not been easy to afford the data bundles needed to go online and follow it; most of my time has been dedicated to finding work that would lessen the burden of the pandemic. Seeking information from local people was also hard, as some would ask why you are doing this, fearing bad intentions; they want to see something recognized before they believe, so my hope is that when we go into the field again with the #fursavsvirus team, better results will be achieved. Help is required in developing the app for the prototype, and finding partners willing to help with the project at this time was also hard.

Accomplishments that I'm proud of: This challenge has given me a chance to think outside the box about ways of improving different communities, their relationships, and the unity of the nation altogether, especially during this hard period.

What I learned: There is a need to sensitize the public to working smart rather than working hard.

What's next for Ukulima Bora apk: I intend to include the grocery dealers ('mama mboga') in this initiative, bringing in organizations which I believe would be happy to boost this project.
Ukulima Bora apk.
To improve the agricultural sector and also help people during this tough Covid-19 period.
['Brian Kimuya']
[]
[]
17
10,440
https://devpost.com/software/food-security-for-all
Food security for all, at the green shade of the pyramid, is our goal.

Inspiration~ I grew up in a village in a close-knit family, with both my parents farmers. To my teenage self, I thought we were poor because we ate arrowroots, cashew nuts, ground nuts, ghee, milk, fish, purple potatoes, Mexican marigold, and millet/sorghum porridge. I hated this! I loved bacon and fries in my compositions at school 😊. Little did I know that we were the richest and strongest in immunity and health: we hardly ever got sick, and still don't. Food is medicine, I learnt. I moved to the city, where I would take my favorite fries and the likes of indomie because it was convenient to cook after a tiresome day. I started to experience belly fat and bloating. My best friend Christine lost her beautiful, friendly mother to cancer. I could see families stressed out on Facebook, Twitter and WhatsApp, struggling to raise money for treatment of cancer, diabetes, obesity or high blood pressure. In the same year my uncle and cousin were diagnosed with cancer and diabetes respectively, and we have suffered the treatment burden ever since. So I said to myself: let me create a ready market for smallholder farmers to earn a living from their hard work, while consumers access natural, nutritious food for their households based on their dietary needs and health conditions, so that they can build and strengthen their immunity and keep away nutrition-related chronic diseases. People with these chronic diseases are at high risk of contracting COVID-19. This needs addressing.

What it does~ MediFood avails dietary-specific food to our customers based on their health needs, and offers diet counseling and monitoring of customers' adherence to their diets, through a web of:
MEDIFOOD CLINIC~ Profiling customers' dietary health needs, offering diet counseling and monitoring customer adherence.
MEDIFOOD KITCHEN~ Preparing and delivering food ordered by customers, at their convenience, offline and online.
MEDIFOOD STORE~ Receiving supplies of food from smallholder farmers and selling online and offline to our customers.

How I built it. Built from research, personal experience, people's food wishlists, and the prevention and containment measures for COVID-19.

Challenges I ran into. Partnering with a mobile app delivery service to make things easier and safer, as well as transporting produce from the grassroots over poor road transport systems.

Accomplishments that I'm proud of. I started by raising awareness to change the perception that agriculture means being poor or backward, as most people, especially youths, think. It made sense to most of my audience (youth), and I'm glad. I have also managed to make my audience understand the relationship between nutrition-related chronic diseases and the foods we consume.

What I learned. Anything is possible with GRIT and inclusion. Achieving food security is possible using local resources sustainably.

What's next for Food security for all. Quality food security, free from harmful chemicals and substances, that is consistently available, affordable and accessible to all, globally.

Built With: experience, indegenousknowledge, research
MediFood+ Kenya.
Create a platform that collects and pools various food produce from neglected rural farmers in Kenya. Profile consumers based on dietary factors and deliver food that fits their dietary needs.
['Akoth Victorine']
[]
['experience', 'indegenousknowledge', 'research']
18
10,440
https://devpost.com/software/eat-more-greens-keep-healthy-campaign-gki1a7
Inspiration; Makueni County is classified as a food-deficit and high-poverty area. It does not produce enough food to feed its large population due to erratic rainfall. This project is a response to the local community's desire to mitigate the acute and frequent food shortages in the area. In response, CYENI has developed, ready for scaling, a creative and innovative kitchen gardening venture to enable its members to meet their financial and nutritional needs. The scaling shall be through its network of 500 members, with the aid of a team of consulted experts providing technical support in a participatory model. This is intended to build the participants' capacity to manage the vegetable production venture.

What it does The proposed kitchen gardening project is an agribusiness social venture planned to help over 500 low-income households replicate a proven, innovative, water-efficient modified hydroponics farming model developed by CYENI YOUTH GROUP. The project aims to coordinate a network of mini gardens run by households in the semi-arid Mtito Andei division of Makueni County to grow organic vegetables for home consumption, with surplus for sale. The income from this will help households fight COVID-19, for example by purchasing soap and sanitizers.

How we built it We use recycled polythene bags, recycled water bottles and soil to build water-efficient, high-yield kitchen gardens for vegetables such as spinach, sukumawiki, tomatoes and cabbages. We have successfully established a prototype training farm which we use to train and give practical demonstrations, helping other households replicate the model at home with ease. About 50 members have been trained and are ready to replicate it. Based on our experience with existing models and techniques for implementing community farming activities in the area, and with the support and knowledge base of our technical support and expert networks, the group will run the project in a two-stage process.
a. PILOT DEVELOPMENT. The first stage of the project is already done. It involved setting up a successful demo site and establishing the basic structures to enable production at the farm. The participants in this exercise included the experts advising on best production practices.
b. SCALE-UP AND REPLICATION. The second stage will commence once the members are able to sustainably operate the production process, and will constitute a 3-month pilot production period. The participants shall be assisted by the consulted experts in running the garden. During this stage more resources are provided for them to scale up what was learned in the first stage. The second stage will also focus on giving the members the equipment and tools that will make the operation sustainable, ensuring they can run it with minimal or no external supervision and assistance.

Challenges we ran into Lack of adequate funds to scale our pilot project to reach the thousands of needy households in Makueni County and beyond. Limited marketing and information to potential beneficiaries of our project. Lack of vital implements, tools and inputs such as green nets. Lack of funds to support the training of our interested groups.

Accomplishments that we're proud of Established a successful, locally made hydroponics kitchen garden that conserves water and produces high yields of fresh vegetables. Mobilized 100 households willing to train in replicating our successful model. Trained 50 members of the youth group, ready for replication of the pilot.

What we learned There is high demand for fresh green vegetables in the area. Hydroponic farming is low-cost, high-yield and more water-efficient than rain-fed agriculture. Hydroponics is suitable for areas with erratic rainfall, such as semi-arid Makueni County. Large production will be required to achieve economies of scale in marketing; this could be done by helping more farmers replicate the project and selling surplus production jointly as a cooperative.

What's next for ORGANIC VEGETABLES HYDROPHONICS KITCHEN GARDEN (EAT MORE GREENS KEEP HEALTHY CAMPAIGN) Train and support 500 households in replicating our proven kitchen garden project. Form a cooperative society for joint selling of excess production. Acquire vital tools, implements and inputs for smooth production and marketing logistics and operations. Popularize our innovation for increased replication in wider Makueni County.

Built With: farming, hydroponics, kitchen-garden, nutrition, vegetation
Try it out: www.facebook.com
WATER EFFICIENT HYDROPONICS KITCHEN GARDEN PROJECT.
We train households to replicate a successful, low-cost, water-efficient, high-yield model of a homemade organic vegetable hydroponic kitchen garden, through training and practical demonstration.
['John Muli']
[]
['farming', 'hydroponics', 'kitchen-garden', 'nutrition', 'vegetation']
19
10,440
https://devpost.com/software/farm-for-life
Inspiration. To see the income that farmers get when they have a market.

What it does. To ensure that farmers are empowered by getting direct market access from different parts of the country.

How I built it. By building a network of farmers and consumers. I have joined different groups of farmers in this country, where I have been sharing the idea of working as a network.

Challenges I ran into. The main challenge I have faced is a lack of resources to reach more farmers.

Accomplishments that I'm proud of. I am proud to have started a store which can work as an outlet for our farmers, and to have supported a number of farmers from different parts of the country.

What I learned. If farmers can be supported, we can sort out the issue of food security.

What's next for farm for life. The next phase is to reach as many farmers as we can. I am working on a website to link farmers and the market directly, without brokers, and to train farmers on the importance of using technology. Even with corona, as Freshchoice I am able to get products from farmers and connect them with consumers.

Built With: html5, javascript, mysql, php
farm for life
To empower the farmer through networking, by use of technology.
['Mash Peter']
[]
['html5', 'javascript', 'mysql', 'php']
20
10,440
https://devpost.com/software/plant-signal-lwnmxp
[Screenshots: testing the app locally with a farmer; the team testing the application at a local farm; the portal dashboard managed by the agrovets and Plant Signal, where farmers order chemicals; app homepage; app menu; remedy recommendation section; app market section; cart section; checkout section.]

Inspiration Remember what caused part of the downfall in the agricultural economic state in 2017? 2018? 2019? 202... no, wait, we are stopping the trend. We realized that many farmers lack knowledge of what affects their crops. This leads to huge losses in total farm output; for example, 40% of total food production in the country is affected by pests and diseases. Enormous amounts of toxic chemicals are dumped on land every year, making industrial agriculture and food production in general unsustainable. This also hurts the country's economy at a large scale. We aim to rectify this. With the COVID-19 pandemic looming, farmers cannot get help from agricultural extension services due to the World Health Organization's social distancing regulations, so they suffer random attacks by pests and diseases without knowing what measures to take. Farmers get less money for their produce because of unchecked plant diseases, and the cost of educating farmers about farm inputs is quite high. We therefore came up with a solution to cater to all of this: Plant Signal.

What it does Plant Signal is a free, offline, interactive, easy-to-use smartphone app for our farmers. The farmer detects pests and diseases just by taking a picture of the suspicious crop, gets a real-time diagnosis of the plant, and receives a recommendation of harmless agrochemicals to apply to resolve the issue and get a plentiful harvest.

The main features of the application are:
Camera for pest and disease detection
Recommendation of plant remedies
Connection to agrochemical stores for remedy purchases
Live chat with remote agricultural extension officers
E-commerce platform to obtain farm utilities
Deliveries of ordered chemicals/farm inputs

How I built it The app is powered by a strong artificial intelligence network that delivers a diagnosis in seconds. The app depends mainly on machine learning algorithms; we have trained on close to 25,000 images of diseases that commonly affect crops. The application is constantly updated for high accuracy. It also works offline, enabling farmers to diagnose plant diseases without an Internet connection, which makes it usable in extremely remote areas. The application is built using the Java programming language, integrated with machine learning. The portal through which farmers interact with agrovets and agrochemical stores is built using PHP, HTML and JavaScript.

Challenges I ran into Some of the challenges I faced: getting finances to keep the machine learning model running and to keep updating the datasets; enrolling farmers onto the platform, as most of them do not have smartphones; and eliminating counterfeit chemicals in the market.

Accomplishments that I'm proud of We have 100+ users who have already downloaded the app from the Google Play store, and over 200 beta testers on the platform. The prototype works just as I expected. The solution provides an offline mode, so the farmer incurs no data-bundle cost.

What I learned It takes two to tango to make a society realize the worth of tech. We are ahead in tech as a team, but the country is still a step behind in embracing it. The agriculture sector is moving very fast toward adopting tech, and we are in the right and best time to make this happen. This is the very first startup addressing farmers' challenges from the grassroots.

What's next for Plant Signal Integration of a WhatsApp bot where the farmer can interact with our virtual agent and get fast responses immediately.

Built With: html, java, javascript, machine-learning, php
Try it out: play.google.com 1drv.ms drive.google.com
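The diagnose-then-recommend flow Plant Signal describes (photo in, disease label and remedy out) can be sketched as below. This is a minimal illustrative sketch, not the app's actual model: the disease labels, the remedy text, and the trivial stand-in classifier (a rule on average greenness) are all assumptions, standing in for the real trained image network.

```python
# Illustrative sketch of a diagnose-then-recommend pipeline.
# Labels and remedies are hypothetical; the classifier is a stand-in
# for a trained image model, not the one Plant Signal actually uses.

REMEDIES = {
    "healthy": "No action needed.",
    "leaf_rust": "Apply an approved low-toxicity fungicide; remove infected leaves.",
}

def classify_leaf(pixels):
    """Stand-in classifier: returns a disease label from RGB pixels.

    A trivial rule on average 'greenness' keeps the pipeline runnable
    end to end without a real model.
    """
    avg_green = sum(g for _, g, _ in pixels) / len(pixels)
    return "healthy" if avg_green > 128 else "leaf_rust"

def diagnose(pixels):
    """Return the diagnosis together with its recommended remedy."""
    label = classify_leaf(pixels)
    return {"diagnosis": label, "remedy": REMEDIES[label]}

# A mostly-green "photo" reads as healthy; a dull one as diseased.
print(diagnose([(10, 200, 10)] * 4)["diagnosis"])  # healthy
print(diagnose([(90, 60, 40)] * 4)["diagnosis"])   # leaf_rust
```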
Plant Signal
A solution for early detection of pests and diseases, using artificial intelligence to eliminate pre-harvest losses, and for provisioning healthy farm inputs from agrochemical stores.
['melody tangus', 'Shadrack Kiprotich', 'Maria Njoroge', 'Delivce Mwas', 'Aaron Kipkoech']
[]
['html', 'java', 'javascript', 'machine-learning', 'php']
21
10,440
https://devpost.com/software/digisomo-learning-leasing-digital-study-stations-and-books
[Photos: successfully made prototype of the solar-powered mini-PC digital learning station; piloting the solution with students in Kibera slums; reaching out to students in need of the digital learning station and access to virtual tuition; Johnstone Mutua, co-founder at Digisomo; Oliver Omondi, co-founder at Digisomo.]

Inspiration; According to Usawa (May 2020), 82% of children cannot access any form of digital learning in Kenya. This is mainly due to the affordability of digital learning equipment, despite the numerous digital platforms that became available after schools closed due to COVID-19. On March 15, 2020, the Kenyan government abruptly closed schools and colleges nationwide in response to COVID-19, disrupting nearly 17 million learners countrywide. The social and economic costs will not be borne evenly, however, with devastating consequences for marginalized learners. This is especially the case for students in slums and rural areas. Also, many schools cannot afford to install and maintain a computer lab, both because of inadequate funds and because they lack a grid connection; this mostly affects children in poor rural and informal areas.

What it does We lease and maintain solar-powered mini study computers for individual school children and teachers, and computer labs for schools, for a monthly fee, helping students access digital content in a pay-as-you-go arrangement. Users do not need a huge upfront financial outlay to purchase computer equipment, and a lease-to-own plan helps training facilitators own their personal laptops and desktops. This will help children access digital education both at home and in school. We provide a robust mini computer with massive offline content such as Khan Academy, Wikipedia, and thousands of pre-downloaded classroom resources, running on a stable, virus-free Linux platform. We also provide virtual classes, either pre-recorded or live-streamed over the Internet through our virtual tutors. We provide the free offline Kytabu app, which has almost all Kenya Government accredited primary and high school curriculum books from grade 1 to form 4. We also provide subscriptions to over 15 online learning platforms such as Kusoma, and daily internet bundles so students can access the online content. The study stations come with access to a web-based school management system that can also be used for students' self-administered continuous assessment tests and final exams. All equipment is insured against fire, theft and floods, with free preventive and curative maintenance servicing and IT product support for all leased equipment. The units come with a solar or mains power bank and lights to help students study when there is no light, e.g. during a blackout when clean reading light is not available. For students who cannot afford to lease digital learning stations, we provide the option of leasing hard-copy textbooks and revision materials, delivered to the user's doorstep.

How we built it We acquire and equip a full learning station comprising a study desk, chair, mini desktop computer, solar power system, and power bank. We have 3 working prototypes, and we charge Ksh 50 per day, a total of Ksh 1,500 per month, for use of the station at home.

Challenges we ran into Lack of adequate funds to scale our project to reach the thousands of students, pupils, teachers, and schools in need of digital learning equipment. Limited marketing and information to potential beneficiaries of our project. The need for an e-commerce platform to enable wider sale and distribution of our solution.

Accomplishments that we're proud of Established 3 successful prototypes of solar-powered, fully equipped digital learning stations. Acquired a license for the Kytabu offline app, which has almost all K.I.C.D. accredited textbooks from grade 1 to form 4. Successfully piloted the solution with 3 students from Kibera slums for 1 month. Participated in the installation of a solar-powered thin-client mini desktop computer lab for an organization in Kibera, sponsored by Uber.

What we learned There is high demand for digital learning in Kenya. Numerous online digital learning portals are available, but most students do not have digital equipment to access them. Parents and students do not have the large upfront sums required to buy digital equipment, but they can afford monthly fees for use of the equipment. Our solar power backup is vital to ensure uninterrupted learning during blackouts and where there is no grid connection. Books and revision materials are also in high demand, but parents do not have enough money to buy all the books for all their children; leasing books helps students access all vital books affordably.

What's next for DIGISOMO LEARNING (LEASING DIGITAL STUDY STATIONS AND BOOKS) Fabricate and equip more digital study stations to meet demand, targeting 10,000 students, 2,000 teachers and 300 schools in rural and poor urban areas within 3 years. Acquire and lease K.I.C.D. accredited books, targeting 20,000 students within 1 year. Establish a studio for remote support, coordination and virtual teaching. Engage and train 10 IT-support-cum-virtual-class tutors to help clients with the installation of units, equipment maintenance, IT support, and conducting virtual classes. Establish a help desk with a hotline and a help desk system, e.g. TeamViewer, for product support. Establish an e-commerce platform enabling clients to order and purchase or lease digital learning stations and books, delivered to their doorstep.

Built With: e-learning, hardware, smis, virtual
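Digisomo's pay-as-you-go rates (Ksh 50 per day, Ksh 1,500 per month) can be sketched as a small lease calculator. The 30-day billing month is an assumption for illustration; at that length, 30 daily payments cost exactly one monthly payment (50 × 30 = 1,500).

```python
# Hypothetical lease-cost calculator using the rates quoted above.
# Assumes a 30-day billing month, which makes the two rates consistent.
DAILY_RATE_KSH = 50
MONTHLY_RATE_KSH = 1500

def lease_cost(days):
    """Charge whole 30-day months at the monthly rate and leftover days daily."""
    months, extra_days = divmod(days, 30)
    return months * MONTHLY_RATE_KSH + extra_days * DAILY_RATE_KSH

print(lease_cost(7))   # one week: 350
print(lease_cost(30))  # one month: 1500
print(lease_cost(45))  # a month and a half: 2250
```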
DIGISOMO LEARNING (LEASING DIGITAL STUDY STATIONS AND BOOKS)
We lease and maintain solar-powered mini study computers (and books) for school children, teachers & schools, for a monthly fee, to promote digital learning through stand-alone computers and computer labs.
['Johnstone Mutua', 'DIGISOMO EDUCATION', 'Oliver Omondi']
[]
['e-learning', 'hardware', 'smis', 'virtual']
22
10,440
https://devpost.com/software/majisafi-vc85rp
Inspiration Being able to wash hands using clean and affordable water, especially for those from poor backgrounds What it does A water filter system that is able to supply approximately 300 liters of clean water How I built it Using 90% natural resources and 10% recyclables Challenges I ran into Inadequate financing Negative attitude from the community members Accomplishments that I'm proud of Coming close to building a large-scale prototype that is able to filter water for use by the local residents What I learned Patience and the importance of putting the needs of the target audience first What's next for MajiSafi Supplying clean filtered water and reducing dependency on water services provided by the government, while empowering locals from poor backgrounds to be self-sufficient in providing water for themselves. Built With hardware
MajiSafi
Having clean water for use during this pandemic
['Emma Okello']
[]
['hardware']
23
10,440
https://devpost.com/software/stay-safe-stay-swagged-e7sqkb
Inspiration To sensitize the public and youth with regard to dressing decently What it does Sensitizes the public on safety measures. How I built it Simple pen-and-paper drawing Challenges I ran into Limited time. Accomplishments that I'm proud of Two complete art portfolios. What I learned Practice makes perfect. What's next for Stay safe stay swagged Becoming a brand name Try it out www.google.com
Stay safe stay swagged
To sensitize the public and youth on safety measures for the novel coronavirus through art.
['Erick Karanja']
[]
[]
24
10,440
https://devpost.com/software/masomo-hayakomi-initiative-bn5792
Access to textbooks Access to revision papers Revision books Inspiration After the sudden closure of all learning institutions in March 2020 due to Covid-19, primary and secondary school students have been finding it difficult to learn at home. Many cannot access digital learning that happens online through platforms like YouTube, Viusasa, websites and TV due to poor network coverage, expensive data bundles, and lack of access to electricity, smartphones and TVs. According to the Usawa Agenda May 2020 Learning Report, only 21 out of 100 primary school students and 29 out of 100 secondary school students can access online learning. Personal studies have also not been effective due to lack of access to adequate study/reading materials. This problem affects both the students who learn online and those who don't. However, if these students can be provided with a simple and affordable way to borrow reading materials such as textbooks, revision books and past papers from the school libraries near them, studying at home can be easy. What it does Masomo Hayakomi Initiative will ensure that students easily access, borrow, use and return reading materials such as textbooks, revision books and revision papers available in the schools near them. One of the key goals of education in Kenya is promoting social equality and responsibility by providing inclusive and equitable access to quality and differentiated education to all learners. MaHa, in line with this key goal, will grant all primary and secondary school students the opportunity to access reading materials from the schools within their locality, irrespective of whether they study in that school. To avoid congestion at the library premises as students search for books, the school will have a Facebook page under its name where the librarian will post, on a daily basis, a list and/or pictures of available study materials.
The students then visit the page, view all the available materials and request them either in the comment section or via a message. The librarian then replies with the time to collect the books from the premises. By allowing students to access the libraries near their residential area and by ensuring no congestion in the libraries, the risk of contracting COVID-19 will be minimized and learning will be enhanced. Since most students can access Facebook through their own gadgets or by borrowing from parents/guardians, relatives and close friends, the initiative will be successful. And with the free Facebook mode in operation, students can access the library services free of charge; even if they are to use the data mode, the highest cost they can incur for the purchase of data bundles is only ten shillings. Where returning takes longer than necessary, or materials have been damaged or lost, the school can liaise with the area chief and/or Nyumba Kumi leader(s) to help in recollection or replacement of the reading materials. How I built it I improved the idea by gathering information from different stakeholders - primary and secondary school students, parents, the area chief and Nyumba Kumi leaders - through a questionnaire. I also created a sample Facebook page to demonstrate how posts about material availability can be made and how students can borrow books through the comments section. The gadget one uses only has to access Facebook and need not be a smartphone. Challenges I ran into During data collection, some respondents took a long time to fill in and hand in the questionnaires. It also took a long time to reach the area chief and the Nyumba Kumi leaders. Accomplishments that I'm proud of I was able to collect responses from 5 secondary school students, 5 primary school students, 4 parents, 2 Nyumba Kumi leaders and one area chief.
The sample Facebook page also received responses during the period, which shows that if the initiative is implemented, it will be highly embraced and quite easy for users to operate. What I learned All the students I received responses from showed a great desire for good access to adequate reading materials while at home. For instance, a secondary school student who came home with a Mathematics textbook has been studying Maths only but doesn't have a Chemistry textbook. At the same time, there is a student in a similar situation with a Chemistry textbook but without a Maths textbook. A platform where these students can access the books they don't have will be of great benefit. All ten students responded in the affirmative concerning doing personal studies at home. They however pointed out the challenge of accessing adequate study materials. 9 out of the 10 students indicated that they haven't been taking any digital learning, citing challenges like lack of electricity, lack of TVs, expensive data bundles and poor connectivity. All the students indicated that they could access Facebook through a parent's, close friend's or relative's gadget. The parents offered to allow their children to use their gadgets to access library services through Facebook, or to help them access the services through a friend's or relative's gadget. The chief and Nyumba Kumi leaders also pledged their support in ensuring that reading materials are recollected or replaced. What's next for MaHa? The initiative is ready for implementation as it is. However, several additional features can be incorporated into the initiative on request or out of need. These include inter-library borrowing, availing SATs for the students, and involving teachers to assist and address specific student needs such as making clarifications, giving assignments and marking tests. Also, a way to hold online discussions can be devised.
There is also a great need for schools to work closely together in order to verify, where need be, the personal and academic details of the students. Built With facebook facebook-messenger images Try it out www.facebook.com
Masomo Hayakomi (MaHa) Initiative
Represents how primary and secondary school students can access reading materials from school libraries within or near their residential area during and after the Covid-19 pandemic through Facebook
['Mathew Mwangi']
[]
['facebook', 'facebook-messenger', 'images']
25
10,440
https://devpost.com/software/creating-employment-through-digital-work
Inspiration Having learned online work on my own and paid my university fees from online work, I realized that many other young people can easily be taught the skills to enable them to contribute to economic development and improve their lives as well. What it does It is a mobile learning platform where young people are trained and mentored in various online work skills remotely, but without the internet. This enables young people in rural areas, slum settlements and peri-urban communities, as well as urban dwellers, to learn on the same pedestal. How I built it I researched algorithms that could enable unlimited communication on a GSM network without the ordinary cost implication, so that while the communication is as clear as any GSM call, the cost is insignificant. I then ask the learners to have their own computers and phones (a bring-your-own-device philosophy). I also ensure that the trainers are individuals who are already doing online work, so that the learners get training and mentorship at the same time. With a trainer-learner ratio of 1:1, learning is more organic than mechanistic, so much so that after training the learner can easily start working. The training takes an average of one month, depending on the student's ability to conceptualize the learning concepts and content. Challenges I ran into Finding an algorithm that uses the GSM network while scaling down the cost of communication as much as possible was not easy, given that I have no background in telecommunications engineering. It took a lot of research work and reading to be able to piece together seemingly different components to make a value-laden compendium. Accomplishments that I'm proud of I have used online work to educate myself through university and am now awaiting graduation. I have also trained three individuals and they are now working online on their own. I have also registered a copyright for the innovation and I have the copyright certificate.
I have also registered a company and I have a company registration certificate. What I learned Character is made in the crucible of adversity What's next for creating employment through digital work Expansion of the program Built With a byod crypto gsm on over phylosophy voice
Remote work
creating digital work through training and mentorship
['Charles Akoth']
[]
['a', 'byod', 'crypto', 'gsm', 'on', 'over', 'phylosophy', 'voice']
26
10,440
https://devpost.com/software/jiranis-food-c3s7v1
2nd UNWTO Gastronomy Competition Winners in the social impact category 2nd UNWTO Gastronomy Competition Winners featured in a Ghana tourism magazine Inspiration 2 years back we were posted in Dar es Salaam for an official contract; it's while there that we wanted to access local food but could not - we had gotten tired of junk food. One of our colleagues was connected to a local woman who would come to our apartment every evening to cook good local food. It was amazing! She would tell us things about the city which we found interesting, and we paid her at the end of every month so she would earn money to support her family. That experience was the origin of Jiranis Food, which seeks to connect travelers with locals who can offer local food experiences; travelers can dine with them in-house or the locals can simply deliver. What it does Jiranis Food is a website that connects travelers with locals who are best at cooking local African food. It's an online marketplace where locals can sign up and list unlimited cuisines from their community. Travelers will in turn be able to book the experiences and have them at home or delivered, at a cost. How we built it When we came back to Nairobi we quickly registered a company and started working on it. We had initially started with a business model which, from user feedback, we noted could not work, so we settled on experiences as the value proposition to the end customers/travelers. Since we all had backgrounds in tech, we did the whole design and development of the system. We started with a small number of users who would test and give us feedback so that we could iterate, launch and relaunch. Late last year we were admitted into a program by CcHub in collaboration with iHub, under a project with the potential to impact the community, where we have been getting support. We relaunched early this year.
Challenges we ran into We have been having adoption challenges since the Jiranis Food business model is a new concept, but with iteration and user feedback we have found product-market fit. We also had no early investors, so cash flow was a challenge, but we have been bootstrapping ever since. Marketing strategies and forming strategic partnerships with tourism stakeholders have also been a challenge since we don't have the right networks. Accomplishments that we're proud of We were accepted into the CcHub/iHub program that was looking for startups with the potential to impact communities at scale We recently won 2nd place in the global UNWTO Gastronomy tourism startup competition under the social impact category. We have more than 200 hosts who have posted more than 400 cuisines to choose from. What we learned We have iterated several times as a result of getting feedback from users. Every product needs to be used by the actual users so that they can give feedback. What's next for Jiranis Food Our mission is to impact 1 million lives in the next 5 years by providing a platform where vulnerable groups like the youth and women can earn direct income from food tourism. This will impact their lives both socially and economically and support 2 UN Sustainable Development Goals. We are also looking for partners and investors who can support us as we prepare to scale to other markets, not just in Kenya. We are also working on our mobile app for Android and iOS to reach a wider audience, which will be released in the next 30 days. Built With amazon-web-services apache javascript mysql paypal php s3 yii2 Try it out jiranisfood.com
Jiranis Food
Jiranis Food connects travelers with local people offering local food experiences. Dine with locals the local way!
['peter muchemi', 'ISAAC GAKAMBI']
[]
['amazon-web-services', 'apache', 'javascript', 'mysql', 'paypal', 'php', 's3', 'yii2']
27
10,443
https://devpost.com/software/art-museum
inspiration The genesis of this skill dates back a few years to AWS re:Invent 2018. The Art Institute of Chicago had recently released a treasure trove of Creative Commons images (and audio tour snippets!) from their collection, which inspired a prototype at the Alexa hackathon that year. It was super fun to make and well received, but the idea never made it past that proof of concept. what it does Art Museum is a voice-first art museum. It lets you traverse a vast art collection with simple language. As a starting place, you can go broad: “I want to see a painting”. “Show me another one like that”. And as you explore the collection, you can drill down. “Show me paintings from France”. “Show ones with horses in them”. “Bring me to sculptures from India.” “Actually, show one from Germany”. Each item is accompanied by a short-form audio segment from the museum tour, bringing rich context to each piece as you view it. how we built it Of course none of this would be possible without the Art Institute of Chicago – a world-class museum with a world-class API (shout out to Nikhil Trivedi, the museum’s Director of Web Engineering & Experience Design, for his guidance along the way!). Their catalog is vast, so the first thing we did was filter records that were in the public domain AND included bonus audio content. This left us with hundreds of records – still a lot, but much more manageable than the full catalog. The API is full of rich information about each piece, but as with any voice project, the content is never just plug and play. To make this work, Katy and I built our own API in front of theirs, essentially designing a layer of conversational metadata to supplement their records so they would seamlessly integrate with our Alexa Conversations sample dialogs and custom slot values. We spent a ton of time on this, ultimately landing on category, origin, and detail as our three parameters. How would someone actually ask for a painting? They’d probably describe it!
So we ran the catalog of images through AWS Rekognition to bring some additional descriptive tags into the mix. Our dataset is a blend of existing metadata from their API, some supplemental descriptive tags from Rekognition and, of course, a lot of elbow grease to smooth it all out. APL for Audio was also clutch. In the past you would have to mix the ambient museum audio into the dialogue lines, which is time-consuming and often impractical. APL-A allowed us to mix a randomized assortment of ambient museum sounds to add some gallery vibe during the speech prompts. It also allowed us to serve the museum clips at full fidelity (it would have been a shame to crunch them for SSML). The other linchpin was Alexa Conversations – which we utilized for dialog management, context carryover and state management. Building that scaffolding by hand with intents and session attributes is possible, but it would be really hard and flimsy. Outsourcing the state management piece took a huge burden off the development process. challenges we ran into That being said, Alexa Conversations is crazy! It truly is a new paradigm for skill building – and it took a TON of experimentation with different model structures to achieve the experience we were hoping for. We had to scrap everything and start over four or five times. And each training data experiment can take many hours to design, build, debug, and observe. So working with this technology is a commitment. I’d have breakthrough moments where I thought I’d figured something out, then 5 minutes later I’d have no idea what was happening. Major shoutout to the Alexa Conversations team for rolling up their sleeves and getting in the trenches with us the last few weeks and especially all this weekend. Their collaboration and partnership is why we and so many others made it to the finish line. accomplishments that we’re proud of Katy and I have always been excited about the intersection of short-form media and voice.
We’ve explored this with other projects by creating the media and building an experience around it. With Art Museum, we took an existing collection of media and made it more accessible. what we learned A lot. what's next for Art Museum Hopefully a lot. Built With airtable alexa alexa-conversations amazon-dynamodb amazon-web-services apl apl-a art-institute-of-chicago-public-api jovo lambda node.js rekognition s3
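The initial curation pass described above (public domain AND bonus audio) boils down to a simple filter. The sketch below is illustrative only; the field names (isPublicDomain, audioClips) are placeholders, not the Art Institute of Chicago API's actual schema:

```javascript
// Hypothetical record shape -- NOT the museum API's real field names.
// Keep only artworks that are in the public domain AND have at least
// one audio tour snippet attached, mirroring the curation step above.
function curate(records) {
  return records.filter(
    (r) => r.isPublicDomain === true && (r.audioClips || []).length > 0
  );
}
```

In practice this pass would run over the full catalog once, producing the few hundred records that the conversational-metadata layer then annotates.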
Art Museum
Go to the museum. With your voice.
['John Gillilan', 'Katy B']
['Finalist', 'Grand Prize']
['airtable', 'alexa', 'alexa-conversations', 'amazon-dynamodb', 'amazon-web-services', 'apl', 'apl-a', 'art-institute-of-chicago-public-api', 'jovo', 'lambda', 'node.js', 'rekognition', 's3']
0
10,443
https://devpost.com/software/meeple-buddy
Meeple Buddy in Action Inspiration I am a tabletop game enthusiast and like to play tabletop games with my family and friends as often as I have opportunity. But the one challenge we always have is deciding which game to play. Nobody wants to commit to any specific choice and generally says "I don't know, you pick." But then when I pick a game, they're not sure that's what they want to play. I needed a better way. What it does Meeple Buddy solves the problem of what game we will play. Using Meeple Buddy, I can load games that I own into my Meeple Buddy "game collection". Then whenever we're ready to play a game, I can ask Meeple Buddy to find a game given the number of players and a complexity (easy, medium, difficult, or any). Meeple Buddy will pick a matching game from among the games in my collection and suggest it for us to play. How I built it I developed Meeple Buddy as an Alexa-hosted skill, using the new Alexa Conversations to drive the dialogs for adding games and picking games. Because I also intend to support conventional request handlers in this skill for other purposes later, I'm not using Conversations as the default dialog manager, which means I needed to delegate between standard intent handlers and Conversations. I did much of the work in the browser-based forms and code editor, despite the fact that I typically prefer working completely local with VSCode to edit all skill artifacts. I chose to work in the browser-based forms, however, for two reasons: To gain the experience of working in that model and because on at least one occasion my attempt to deploy my code via the ASK CLI broke my interaction model underlying my Conversations model. Challenges I ran into The biggest challenge I faced was that there is no way of defining a Conversations model in JSON or some other data structure outside of the developer console. I found working in the browser-based forms cumbersome. 
But even more troubling is that because there are no file artifacts associated with the Conversations models, there was no way to manage my work in source code control and thus no way to roll back my work to the last known working state when I made mistakes. Another huge challenge that I faced is that when there are two or more slots of the same type and similar utterances being gathered in the course of a dialog, there exists ambiguity and the answer to one prompt might populate the wrong slot. For example, in my skill a game can have a minimum and maximum number of players, both expressed as slots of type "AMAZON.NUMBER". While I can define utterances for each to avoid ambiguity (e.g., "at most {maxPlayers}" or "no less than {minPlayers}"), it's more likely that the user will simply answer with a number, requiring me to define utterances such as "{minPlayers}" and "{maxPlayers}". Since both are of the same type, the utterances are ambiguous. I found that when Alexa would request the maximum number of players, both the minimum and maximum slots would be filled with the same value. I worked around that issue by creating request-args slots that asked for both minimum and maximum to be given in a single utterance (e.g., "{minPlayers} to {maxPlayers}"). This works, but occasionally it will ask for the number of players twice (I presume that it's once when the dialog is requesting the minimum and once when requesting the maximum). This is non-ideal, but based on discussions in the Slack channel, there appears to (at least for now) be no other way. I also found it inconvenient that I couldn't test the Conversational part of the skill using ask dialog. And there were several other minor challenges, such as how to ensure that the session is closed after the dialog completes, but I found help for those things in the Slack channel.
Accomplishments that I'm proud of Quite simply, I'm proud that I was able to build a complete and useful skill using Alexa Conversations in spite of the aforementioned challenges. I'm also proud of the fact that as I was developing it, I demonstrated my work to my non-technical friends and family and I could see their faces light up---they "got it" and could see how this skill would be useful. What I learned The most basic thing I learned was how to work with Alexa Conversations and to see its potential for future projects. What's next for Meeple Buddy There are several things I have in mind for Meeple Buddy going forward that just didn't make it into the version that was submitted for the challenge. These include: Creating a database full of tabletop games to draw upon so that if the user wants to add a known game to their collection, they can bypass the dialog that prompts them for complexity and minimum/maximum players. The Conversations-based dialog created for the challenge would still come into play for games that are not known in the database. Although I did some minimal APL so that users with screen-enabled devices wouldn't be looking at a blank screen, I have bigger plans to make those screens more animated and interesting. Perhaps showing box artwork for the suggested games. I would like to add a way of recommending games to users based on the games in their collection and possibly offer them for purchase through the skill. This might be tricky, however, because as far as I know, there's no way to sell Amazon-fulfilled products through custom skills. (I could be wrong.) I'd like to consider a way of treating game expansions as separate concepts to the base games and provide some useful interactions for choosing which expansions to play with a game, if there are any available for the chosen game. As I created the video for submission, I discovered several opportunities to improve the interaction model that I didn't catch while testing. 
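The core pick-a-game behavior described above reduces to a filter over the owned-games collection plus a random choice among the matches. This is a minimal sketch under an assumed data shape (name, minPlayers, maxPlayers, complexity), not the skill's actual code:

```javascript
// Illustrative sketch of Meeple Buddy's suggestion logic. A game
// matches when the player count fits its min/max range and the
// requested complexity matches (or the user asked for 'any').
function pickGame(collection, players, complexity) {
  const matches = collection.filter(
    (g) =>
      players >= g.minPlayers &&
      players <= g.maxPlayers &&
      (complexity === 'any' || g.complexity === complexity)
  );
  if (matches.length === 0) return null; // nothing in the collection fits
  // Suggest a random match so repeat requests vary the suggestion.
  return matches[Math.floor(Math.random() * matches.length)];
}
```

Returning null for an empty match set gives the dialog a natural branch for "sorry, nothing in your collection fits" responses.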
Built With ask conversations javascript
Meeple Buddy
Meeple Buddy helps tabletop gaming enthusiasts decide which game to play. Add games to your collection, then ask Meeple Buddy to pick a game for you based on the number of players and complexity.
['Craig Walls']
['Finalist']
['ask', 'conversations', 'javascript']
1
10,443
https://devpost.com/software/refugee-restrooms
Email with restroom search results that gets sent to customers. It contains up to 10 results with clickable navigation URLs. Architecture Diagram What is new (Final Round)? Google / Apple Maps navigation using 'Alexa for Apps' On mobile devices, users now get an option to launch Maps applications with directions to the top restroom preloaded. Thanks to the 'Alexa for Apps (preview)' team for whitelisting the skill to integrate with their 'preview' software. Search by address, city and state. Previously, address-based search was limited to zip codes, which is neither natural nor granular enough. With the latest release, users can search using full addresses like '2121 Denny Way', '535 Pontius Avenue, California', etc. Personalized results on ambiguous addresses. 'Alexa, find restrooms at Union Street' is an ambiguous query because there are many 'Union Streets' across the US. The latest update leverages the user's current location and device registration address to personalize the search results. So, a user in Seattle will find restrooms near 'Union Street, Seattle' and a user in California will find restrooms near 'Union Street, California'. Note: If the user hasn't granted permissions to use their location, we perform a generic search. Such users can always try again by specifying city and state. Note: The signals are used only to influence the search results and not as a strict filter. If users are not satisfied, they will have to provide city and state. Robust input correction Alexa Conversations models were updated to allow for more robust input corrections and non-linear dialogs. Users can switch between location-based, zip-code-based and address-based searches and correct their inputs multiple times in the same session. Better error case handling Instead of ending the session when users provide an unsupported address, we now let them choose to provide an alternate address or end the session. How did Alexa Conversations help?
Heavy lifting outside code: In the most complex case, the skill collects 7 inputs from the user. That would have been a lot of cumbersome boilerplate code in a traditional skill, whereas in AXC I just had to configure dialogs and dialog acts, and AXC does most of the heavy lifting before delegating over to the developer's Lambda. Reliable input corrections: In one of the paths, the skill collects full addresses, where the likelihood of an ASR mis-recognition is higher. That makes input correction very important. I'm pleased with how easy and reliable AXC's input corrections are once the dialogs are modeled properly. I tried correcting all inputs several times in a single session and it works! Error handling: 'Request Alt' is a poorly documented but powerful dialog act. It helped me build dialogs to nudge the user away from unsupported addresses like '24th Avenue' towards supported addresses like 'Union Street' without having them start over. I could retain all other inputs and just get them to correct the invalid input without leaving AXC. AI predictions: While building the AXC model, I only used the word 'restrooms' in all my utterance sets. When I asked a friend to test the skill, they surprised me by asking for 'washrooms'. I was pleasantly surprised to see AXC handle utterances asking for 'washrooms' just fine. Even today, my utterance sets only contain 'restrooms', but AXC is able to handle other variations like 'washrooms' and 'toilets' just fine. APL / APL-A: My skill has 21 APL-A responses and 10 APL responses. Each of the APL-A responses has multiple variations to make the skill sound human. In a traditional skill, developers wouldn't create so many multimodal responses because it is too cumbersome. AXC made it significantly easier and decoupled from code, which encourages developers to create more multimodal responses. Key Test Cases: 1. search for restrooms near me (try this on a mobile device) 2. search for gender neutral restrooms by address 3.
search for accessible and toddler-friendly restrooms by zip code 4. switch context by asking for help after initiating a search 5. initiate a search and, when Alexa asks you to confirm the inputs, change the inputs 6. try the skill with email and location permissions enabled and disabled 7. try a non-existent zip code like 98100 8. try an unsupported address like 24th Avenue Inspiration In an ideal world, every individual could use the restroom that aligns with the gender they identify with. We do not live in an ideal world, though, and one of the biggest battlefields upon which the fight for transgender rights is taking place daily is restrooms (especially when people are traveling to a new place and don't know where to find a safe restroom). Despite legislative victories in recent years regarding restroom usage, many transgender individuals still face both verbal and physical harassment simply for using the restroom of their choice. Nobody should have to face that - and that is why I created the 'Refugee Restrooms' skill. What it does Refuge Restrooms is an Alexa skill that seeks to provide safe restroom access for transgender, intersex, and gender nonconforming individuals. The skill also supports users looking for accessible restrooms and restrooms with changing tables. The skill's primary goal is to help users who are traveling or planning their travel. For users who are traveling - The skill supports restroom search by proximity to their current location. We use their phone's location to search for restrooms nearby. If the user hasn't consented to the use of their geolocation, we request the user to grant location permissions or nudge them to search by zip code instead. Once a restroom is found, we also offer to launch Google Maps or Apple Maps on their phone with directions to the restroom preloaded. For users who are just planning their trip - The skill supports restroom search by full address or zip code.
A bug in Alexa Conversations prevents us from supporting numbered streets like 24th Ave, 1st Street, etc. I'm working with an Amazon solutions architect to fix the bug, and in the meantime we implemented an experience to let the user provide an alternate street address without numbers - for example, 601 Union Street. In either case, the criteria that we currently support are unisex restrooms (for gender non-conforming users), accessible restrooms (for differently abled customers) and parent-friendly restrooms (for parents traveling with a kid who needs diaper changes). Users get the results delivered through multiple means - Through voice (the top result) On their Echo devices with screens (the top result, uses APL) In their Alexa companion app (up to 5 results) Users also get an email with clickable Google Maps links to the top 10 restrooms, sorted by distance. Most importantly, users on mobile devices also get an option to launch either Google Maps or Apple Maps with directions to the top restroom loaded and ready to go. How I built it Architecture Diagram Alexa Conversations The skill requires the following inputs from the user - Whether they want to search by proximity to their current location or at a specific address or zip code. The street, city and state if they are searching by address. The zip code if they are searching at a specific zip code. One or more of the three supported search criteria (unisex, accessible, parent-friendly). Users can also convey that they don't have any search criteria and that any restroom works for them. (In future we plan to add more criteria like restroom ratings, how recently the record was updated, etc.) In the most complex case, we need to collect seven pieces of information from the user. Writing a skill that collects seven inputs from the user would be quite tedious using the traditional custom skill model. That is when I learned about Alexa Conversations.
I was able to use the power of AI to easily handle the entire conversation around gathering inputs using Alexa Conversations. Once all the inputs are collected and confirmed, I transfer control to the custom skill to actually search for and present the restrooms.

AWS Simple Email Service

As I built the skill, I realized that just searching for restrooms and giving a voice response isn't really sufficient. In most cases, people don't need the restroom right away and are just planning ahead. It is also very important that they have a clickable navigation link (like Google Maps), which is not possible through voice or the Alexa companion app. So, I decided to enhance the experience by sending an email to users with the search results (only if the user gave permission to use their email). The email lets us provide rich information that is prohibitive in a voice interface. The email contains up to 10 results, and each result has directions, notes, ratings, Google Maps links, the features of the restroom, etc. I used AWS SES to implement this. It was my first time using SES, and I was pleasantly surprised how easy it is to send templated and personalized HTML emails.

Alexa for Apps

'Alexa for Apps' is a preview technology by Alexa that is not yet generally available. It lets us launch third-party applications on a mobile device from an Alexa skill. I was convinced that 'Refugee Restrooms' is a great use case for 'Alexa for Apps', where we can launch Google Maps or Apple Maps with directions to the top restroom result. Thanks to the Alexa for Apps team, which was very responsive and helpful, first in approving my skill and second in providing technical help as I tried to get it working with the skill.

Challenges I ran into

Getting a hang of all the concepts of Alexa Conversations took some time. The cookbook code samples and the office hours helped a lot. I habitually rely a lot on ASK CLI based replay-testing while developing skills.
However, that is not yet supported for Alexa Conversations based skills. So, I had to double down on unit testing to keep my development turnaround time short instead of having to deploy the skill every time I made a minor change. At the end of the day, I ended up with as much test code as source code, giving me a lot of confidence as I make changes to the skill.

Figuring out the right technology to send emails to customers was challenging. Once I settled on AWS SES, it was quite easy.

In this skill, I also decided to try the multi-value slots feature recently introduced by Alexa. It made my skill model significantly simpler.

Figuring out how to launch Google Maps and Apple Maps on Android and iOS devices was a challenge. It was a technical challenge and also a UX challenge, because I hadn't designed a skill that launches other applications before.

Accomplishments that I'm proud of

Bugs I found

While building a skill on a new platform is quite fulfilling, I'm actually quite proud of the number of bugs / issues I found with the Alexa Conversations model. I submitted about 10 issues on the Amazon developer forum, and hopefully these reports help make the platform better as it grows out of beta. Here are some potential bugs in Alexa Conversations that I submitted -

Data binding not supported in Alexa Conversations APL responses
Poor error messages
Lists in API arguments are not being sent as slots
Wrong locale format in Alexa Conversations request objects
When an utterance set is bulk edited, slot types are lost

Incorporating APL

Incorporating APL in Alexa Conversations responses was a good accomplishment. It dramatically enhanced the Alexa Conversations experience on Echo Show devices and Fire TVs. I plan to invest in it even more.

Testing

At the onset, I decided to invest a lot in testing my code. I wrote as much test code as source code, and I can confidently say every branch has an integration test.
I'm very proud of this accomplishment, and it helped me iterate very quickly with my code and make changes confidently.

What's next for Refugee Restrooms

The skill currently supports searching at a specific zip code, but that is quite limited. Users planning to travel somewhere would be more likely to ask for restrooms by city or full address. I will add support to search for restrooms by full addresses. This is now implemented.
The Refugee Restrooms database backing this skill is international. I need to expand the skill to other locales. Arguably, safe restroom access is even more vital outside the United States.
The database has attributes like rating, recent usage, etc. Adding the ability to filter by these attributes will let users make an informed choice while choosing their restroom. This is an area where Alexa Conversations really shines, because I can add more search criteria without having to handle everything myself in the skill code.
The skill is currently one-way. There is no way for users to provide feedback. User feedback is vital for the crowdsourced restrooms database. I need to extend the skill to make it possible for users to provide feedback.
Currently, I send emails to customers with search results. Users on the go would probably prefer an SMS. I plan to add support for that.

Built With alexa-conversations alexa-skills-kit amazon-web-services ask node.js ses simple-email-service vscode

Try it out www.amazon.com github.com
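The emailed results above are "the top 10 restrooms, sorted by distance". As an illustration of that step (a sketch, not the skill's actual source code), the ranking could be done with a haversine great-circle distance over the candidate restrooms:

```javascript
// Sketch: rank restrooms by distance to the user and keep the closest N,
// as done for the emailed top-10 list. Field names are assumptions.
const EARTH_RADIUS_KM = 6371;

function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// restrooms: [{ name, latitude, longitude, ... }]
function closestRestrooms(restrooms, userLat, userLon, limit = 10) {
  return restrooms
    .map((r) => ({
      ...r,
      distanceKm: haversineKm(userLat, userLon, r.latitude, r.longitude),
    }))
    .sort((a, b) => a.distanceKm - b.distanceKm)
    .slice(0, limit);
}
```

Each ranked entry could then be rendered into the email template and into the companion-app card.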
Refugee Restrooms
Refuge Restrooms provides safe restroom access for transgender and gender nonconforming people. The skill also assists people looking for accessible and parent friendly restrooms.
['Babu Sabbavarapu', 'Appala Naidu']
['Finalist', 'Best Alexa Conversations Visual Skill']
['alexa-conversations', 'alexa-skills-kit', 'amazon-web-services', 'ask', 'node.js', 'ses', 'simple-email-service', 'vscode']
2
10,443
https://devpost.com/software/the-conpiracy-theory
Conspiracy Theory Voice Design Process Conspiracy Theory Dialogs

Fighting fake facts through play

In a world of fast information, with fake facts and fake news, we are all susceptible to narratives gone viral. We wanted to design a game to activate and sharpen the critical thinking skills necessary to survive the torrent of information and news we are faced with daily. The World Economic Forum ranks the spread of misinformation and fake news among the world's top global crises, and a 2019 study by the University of Baltimore found it costs the global economy $78 billion annually. We as citizens have a responsibility to weed out fact from fiction. Working against us, though, is the human tendency to believe any information repeated often, as our brains cognitively prefer processing shortcuts. Luckily, our brains also love puzzles, problem solving and play. We created a game that combines the intrigue of conspiracies with fact-based information.

What the skill does

When you say, "Alexa, open the Conspiracy Theory", you enter an escape room game of puzzles and clues built around a conspiratorial narrative. The first game of the series, Moon Landing, revolves around the space explorations spearheaded by the Russians and Americans in the 1950s-1960s. Our story begins at a federal building where a tipster has informed you that the Apollo 11 moon landing tapes, missing for over 40 years, are hidden inside. You are an investigator who wants to recover and analyze these tapes. A security guard will let you in to the building, but the rest is up to you. You can use commands like "inspect" and "look at" to explore the space while trying to complete your mission. This skill is designed for teens and adults, with a focus on reinforcing true information, with future customizable options to return to where the player left off and the ability to add more games following conspiracy scenario narratives.
How we built it

We combined conversation design with developer prowess to create this custom Alexa game skill using:

Alexa Conversations
Alexa Presentation Language (APL)
Alexa Presentation Language for Audio (APLA)
Speech Synthesis Markup Language (SSML)
Alexa Emotions

Audio assets: Original theme music was designed using Amper Music's artificial intelligence composer. Open-source audio was sourced through the Alexa Skills Kit Sound Library, NASA and other open banks.
Product management and communication: Trello and Slack
Dialogue scripting: Google Sheets, inspired by Hillary Black's script template
Flow design: Miro
Visual design: Figma
Video creation and editing: Filmora 9

Challenges we overcame

We faced a series of obstacles which we are proud to have solved together.

We are a six-person team with members in three countries, four different time zones and working in two languages. Work was often started by one team member and completed by another, requiring excellent communication to avoid bottlenecks and duplication of work. Our team had never worked together before, and for the majority of the team this was their first voice project and first hackathon.
We worked with many new processes and tools, both on the design side and the developer side. Many of these processes are not documented by the larger community, requiring strong improvisation and recovery skills as we went along.
Alexa Conversations is in beta, so a lot of code needed to be created from scratch. Blogs with examples and information don't yet exist in English, and much less in Spanish, the language our developers work in. This added a double workload to our developers, as this project was limited to the US version of Alexa.
Conversation design tools such as Voiceflow and Botmock aren't yet able to be integrated with Alexa Conversations (beta), which meant we needed to recreate their functions with other design tools.
There were a large number of conspiracy theories that needed to be organized and investigated before choosing one and developing a natural flow of puzzles around it.

Accomplishments that we're proud of

We are very proud to have worked so well together as a team, and a very international team at that. We learned to use Alexa Conversations and improved on the development and design side through the process. We followed a scrum methodology, with daily standups to make the deadline, working around different time zones where so many of us were "in the future". We finished our MVP with a very short turnaround time and are excited to continue moving the game forward.

What's next for The Conspiracy Theory

Our future involves further developing the script and puzzles for the Moon Landing episode. In the short term we plan to:

Develop two additional rooms that are ready to be coded
Create a multimodal version, prototyping and testing images and texts throughout the game to make it accessible for the deaf community

And in the long term we would like to:

Translate the game into Spanish
Explore using virtual reality for the images in the game
Develop future conspiracy-based episodes

Built With alexa-conversations amazon-alexa creativity
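The "inspect" / "look at" interaction described in "What the skill does" amounts to a lookup from (room, object) to a clue. A minimal sketch of that mechanic - the room, objects and clue text here are invented for illustration, not taken from the actual game:

```javascript
// Hypothetical sketch of the escape-room mechanic: each room maps
// inspectable objects to the clue text Alexa would speak back.
const rooms = {
  lobby: {
    description: 'A federal building lobby. A security desk sits by the elevator.',
    objects: {
      'security desk': 'A visitor log lies open. One entry is circled in red.',
      elevator: 'The elevator requires a four-digit code.',
    },
  },
};

// Handle a player command like "inspect the security desk".
function inspect(roomId, objectName) {
  const room = rooms[roomId];
  if (!room) return "You can't go there.";
  const clue = room.objects[objectName.toLowerCase()];
  return clue || "You don't see that here.";
}
```

In the real skill, the spoken command would arrive as an intent (or Alexa Conversations API call) whose slot value plays the role of `objectName` here.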
The Conspiracy Theory
Where does the cover up end and the conspiracy begin?
['Vicente Guzmán', 'Amy Oliver', 'Dannon Tabing', 'Jessi Willey']
['Finalist', 'Best Alexa Conversations Audio Skill']
['alexa-conversations', 'amazon-alexa', 'creativity']
3
10,443
https://devpost.com/software/date-a-voice
date.a.voice splash date.a.voice logo

What's New!!

Over the past few weeks our team has been working hard on making Date.A.Voice even more interactive and engaging by adding new visuals, immersive background audio, and male dating options. On top of that, we've improved the flow of the dialogue to make the dates smoother and more conversational. To learn about the updates in more detail, check out the recaps below. A quick video showcasing the new features:

Visuals

Date.A.Voice now has a unique avatar for each of your favourite dating companions, each with their own personality and design! Whether you're someone who enjoys rocking your favourite pair of jeans or a fancy top, we've got you covered with someone who matches your style. We've also included dynamic visual responses in addition to our custom audio responses. You'll be able to see your date's questions and responses in speech bubbles, as well as a unique visual background depending on the setting of the date. Your date will also react to each of your responses, so you'll be able to tell if your date is going well or if the love ship has sailed 🚢.

Male Voices

Date.A.Voice now allows users to pick between male and female dating options, with custom Amazon Polly voices for each character. We hope that this addition will allow all users to find enjoyment in our skill.

Immersive Audio

Date.A.Voice now supports immersive background audio for each date! Whether that's the sound of waves crashing along the sandy beach 🏖 or the clinking of utensils in a restaurant 🍷, we've added sounds to make the conversation feel even more real.

Inspiration

Amazon Alexa has always struggled with making real, meaningful conversations. However, the new Alexa Conversations API promises to deliver exactly that: smart, real conversations. Meanwhile, dating simulators have always been heavily text-based, primarily visual experiences.
Naturally, my curiosity led me to consider the possibility: is it possible to design a voice-based virtual date experience with Alexa? This is the challenge we decided to take on.

What it does

Date.A.Voice allows you to go on a date with multiple possible voice assistants. The personality of each partner determines how they will react to your actions and words. Try to make your partner happy, and you may find success! Love is just one invocation away!

How we built it

Overall, a lot of effort went into researching and planning the architecture of the entire app so that, when it all comes together in the Alexa Developer Console, it runs smoothly. We used Alexa's new Conversations API to build the main model of the skill and design real, fluid conversation. We used Amazon Polly and Speech Synthesis Markup Language (SSML) to simulate different voices. We also experimented with different tones for each voice in order to reflect the mood of your date partner during the conversation. If your answers to the questions fit the personality of your date, you'll notice that their tone reflects that. You'll also find that during the date you're able to change your answers to date questions. This was an important feature we wanted to include, because a conversation should feel seamless and flexible. The Alexa Conversations API made that even easier with built-in functionality to repeat previous questions.

Challenges we ran into

Because the Conversations API is very new, there is a lack of developer experience with the technology. This required us to really get creative and learn from existing Alexa skills, great documentation and tutorials, as well as other sources of inspiration to create this ground-breaking app. Even so, the Conversations API beta is incredible in the scope of its features, and we were impressed by what it could do and what we could make out of it.
We encountered some source control issues, but we were able to resolve them by communicating well and backing up our code on GitHub.

Accomplishments that we're proud of

We are incredibly proud of the fact that we were able to leverage the strengths of Alexa Conversations while still putting in all the features that make for a creative and interesting dating simulator game. We were able to utilize our unique and diverse strengths as a team by giving ownership of specific flows to each team member so they would feel committed to their role and could make for seamless experiences. For example, we had a script lead to focus on making the lines for each character sound consistent and fluid, a database lead to handle the storage and retrieval of data, a technical lead to own the functions and handlers, etc. The most crucial step was the synthesis of our ideas, which we did by having weekly meetings to bridge the different parts of the skill we were working on. We are proud that we were able to successfully combine our skills to create an incredible skill experience.

What we learned

We learned a lot about Amazon's APIs and developer tools for Alexa skills. Before, we had never thought of creating our own Alexa skill, let alone one as complex as this one, with 5 unique characters to interact with. We also learned about important elements of game design and script writing, as equal effort was spent architecting the solution as well as actually building the interaction model and writing the Node.js codebase.

What's next for Date.A.Voice

Next steps for Date.A.Voice include incorporating male voices to create a more expressive and inclusive experience. This may involve tweaking some lines for the characters, but in general, we have written the characters to be gender-neutral.
As well, due to our success, we feel capable of taking on more complex interactions that leverage further Alexa Conversations tools, such as context carryover, to make a more realistic conversation experience. We also want to incorporate rich audio soundscaping using the APL for Audio beta feature to manage and mix audio files. By including the sound of waves crashing on the beach, or ambient sound at a restaurant, we hope to create a more immersive experience that we can customize to each date's experience.

Built With alexa amazon-polly conversations-api node.js ssml

Try it out www.amazon.com
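The Polly-voice-plus-tone technique described in "How we built it" boils down to wrapping each character's line in SSML. A sketch of that idea - the character names and voice pairings are invented for illustration, and note that Alexa's `amazon:emotion` tag is only supported for a subset of voices and locales:

```javascript
// Sketch: wrap a character's line in SSML so Alexa speaks it with a
// distinct Amazon Polly voice and, optionally, an emotional tone.
// The character/voice pairings here are illustrative, not the skill's cast.
const characters = {
  alex: { voice: 'Matthew' },
  dana: { voice: 'Joanna' },
};

function speakAs(characterId, line, emotion) {
  const { voice } = characters[characterId];
  const inner = emotion
    ? `<amazon:emotion name="${emotion}" intensity="medium">${line}</amazon:emotion>`
    : line;
  return `<voice name="${voice}">${inner}</voice>`;
}
```

The returned string would then be passed to the response builder's `speak()` call, e.g. `speakAs('dana', 'So, tell me about yourself!', 'excited')`.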
date.a.voice
date.a.voice presents the first ever "voice assistant" dating simulator, now accessible on Amazon Alexa. Lockdown is the perfect time to practice your dating skills and try wooing a virtual partner!
['Jeffrey Zhang', 'Jack Yao', 'Jeffrey Liu', 'Richard Yang', 'Danny Lan']
['Finalist']
['alexa', 'amazon-polly', 'conversations-api', 'node.js', 'ssml']
4
10,443
https://devpost.com/software/the-great-gatsby-review-session
Inspiration

One of my favorite school memories was of my literature class, where we had intimate discussions on books and short stories together. Not only was it a fun way to learn, it was a smart way to learn. "[T]he dual action of speaking and hearing oneself... has the most beneficial impact on memory," a study by the University of Waterloo found. This project hopes to nurture discussion between Alexa and students.

What it does

Provides review sessions on the first three chapters for free
Goes over multiple-choice questions on the story and literary devices (symbolism, polysyndeton, etc.)
Converses with the user in forming an answer to a free response question
Lets customers review the rest of the book (chapters 4-9) through in-skill purchase

How it's built

Uses an intent-based dialog manager to handle chapter selections and multiple-choice question answers
Uses Alexa Conversations to handle answers to free response questions; this allows:
- collection of multiple parts of a free response answer - who, what, where, when, why, and how
- conversational memory without complicating the back-end
- less time spent on dialog creation
Uses a Node.js back-end to delegate between the two dialog managers

Challenges I ran into

Learning how to use Alexa Conversations was a big challenge that I could not have overcome without the help of Justin Jeffress, Sam Ingbar, Nathan Grice, and the other wonderful people of the Alexa team.

What's next

A time when we can ask Alexa, "start a review session on Romeo and Juliet"
A time when we can ask Alexa, "start a review session on Night by Elie Wiesel"
A time when we can ask Alexa, "start a review session on Midnight's Children"

What I learned

When dealing with an input set as infinite as the English language, Alexa Conversations is a must-have tool.

Built With alexa amazon-alexa node.js

Try it out www.amazon.com
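The "Node.js back-end to delegate between the two dialog managers" pattern can be sketched as a request router: Alexa Conversations invocations arrive as `Dialog.API.Invoked` requests, while the intent-based manager receives ordinary `IntentRequest`s. A simplified illustration (the specific API and intent names are hypothetical):

```javascript
// Sketch: route incoming Alexa requests to either the Alexa Conversations
// API handlers or the classic intent-based handlers.
function routeRequest(requestEnvelope) {
  const request = requestEnvelope.request;
  if (request.type === 'Dialog.API.Invoked') {
    // Free-response answers flow through Alexa Conversations.
    return `conversations:${request.apiRequest.name}`;
  }
  if (request.type === 'IntentRequest') {
    // Chapter selection and multiple-choice answers use plain intents.
    return `intent:${request.intent.name}`;
  }
  return 'other';
}
```

In an ASK SDK skill, this same branching would typically live in each handler's `canHandle` predicate rather than one central function.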
A Review Session On The Great Gatsby
An Alexa Skill that allows students to review F. Scott Fitzgerald's novel, The Great Gatsby, through multiple-choice and free response questions - powered by Alexa Conversations.
['Ace Don Adriatico']
['Finalist']
['alexa', 'amazon-alexa', 'node.js']
5
10,443
https://devpost.com/software/the-great-expedition
First step of an expedition Note that bringing the tool box unlocked a new option Potential result of an expedition

tl;dr

The Great Expedition is essentially a set of non-linear adventures in a persistent environment, meaning that every expedition is something like a "Choose your own adventure" game, but the choices you make influence future games. In addition, you have a pool of characters and items from which you select a subset for each expedition, which can open up even more options. Successful expeditions can provide you with new items, characters or even new regions. Alexa Conversations makes the selection of region, characters and items much easier than traditional intent-based dialog management.

Inspiration

Even as a child, I found 'choose your own adventure' books fascinating. I remember trying to reverse-engineer a particularly hard puzzle by going through each and every section. With The Great Expedition I wanted to bring that feeling of exploration and progress to Alexa. The beta release of Alexa Conversations finally made it possible to build a robust and user-friendly way of selecting items, characters and locations.

What it does

In 'The Great Expedition' you start in London at the turn of the century. The Royal Exploration Society has tasked you to find long-lost artifacts, rare animals and priceless works of art. You will need to mount expeditions to far corners of the world. Fortunately for you, you already have some competent companions and helpful items at your disposal. However, you cannot take everything with you on an expedition at the beginning. Your points in leadership determine how many companions you can take with you. The same goes for knowledge and your items. Lastly, there is morale, which determines how long you can motivate your crew on an expedition. Should morale drop to 0, your crew will mutiny. Expeditions are essentially 'Choose your own adventure' stories.
Meaning, during your expeditions you often have multiple options, and you will need to decide which to pursue. Note that different characters and items might unlock new options. Successful expeditions can unlock new characters, items, and even new locations. Returning rare animals and treasures will also earn you prestige with the Royal Exploration Society.

How I built it

'The Great Expedition' is an Alexa-hosted Node.js skill using DynamoDB for persistence.

Challenges I ran into

As a software developer, I initially found it difficult to use the developer console to create and annotate dialogs, responses, utterances, etc. But after a while, it actually became easy and enjoyable due to the validation and pre-selection of variables and options, among other things.

Accomplishments that I'm proud of

Submitting a working and hopefully enjoyable skill. I was very proud the first time my Alexa Conversations dialog finally worked using context carryover, list items and corrections.

What I learned

Time management and focusing on the core idea. Designing and developing the skill as a single person meant that I needed to focus on the core experience in order to deliver something valuable at the deadline. See the next paragraph ;)

What's next for The Great Expedition

Regarding the content, I have a lot of ideas in my head about new locations, expeditions, characters and items. Technically, I would like to polish the presentation using more APL features, e.g. adding a morale tracker and enabling character/item selection via on-screen buttons. In addition, I would really like to play around with APL for Audio to add effects or background sounds.

Built With alexa amazon-alexa amazon-web-services node.js
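The stat system described in "What it does" - leadership capping companions, knowledge capping items, and morale as an endurance clock that triggers mutiny at 0 - could be sketched like this. The field names and rules here are an illustrative reading of the description, not the skill's actual code:

```javascript
// Hypothetical sketch of the expedition loadout rules: leadership limits
// companions, knowledge limits items, and morale ticks down during the
// journey until the crew mutinies at 0.
function validateLoadout(stats, companions, items) {
  const errors = [];
  if (companions.length > stats.leadership) {
    errors.push(`Leadership ${stats.leadership} allows at most ${stats.leadership} companions.`);
  }
  if (items.length > stats.knowledge) {
    errors.push(`Knowledge ${stats.knowledge} allows at most ${stats.knowledge} items.`);
  }
  return errors;
}

function advanceExpedition(morale, cost) {
  const remaining = morale - cost;
  return { morale: Math.max(0, remaining), mutiny: remaining <= 0 };
}
```

With the loadout validated up front, each branch of the choose-your-own-adventure story only needs to charge a morale cost and check the `mutiny` flag.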
The Great Expedition
Can you explore the far corners of the world and find long-lost artifacts, rare animals and priceless works of art? But beware, dangers lurk at every corner
['Ben Freiberg']
['Finalist', 'Best Game Skill']
['alexa', 'amazon-alexa', 'amazon-web-services', 'node.js']
6
10,443
https://devpost.com/software/smartnest-home-consumption-tracker
UI medium size start Water report example UI medium size Electricity report example UI medium size Water report small size UI small size Ultrasonic smart water meter bought from China for the tests Testing communication of water meter Electricity meter bought from Germany for the tests Bulk production of water meters for a block of apartments in the Czech Republic Block of new apartments to use smart metering devices that can be connected to the skill

Inspiration

I live in the Czech Republic, where water and electricity are charged with a monthly fixed payment, and at the end of the year people pay for the extra water and energy they used. One year the water bill was almost 6 times the normal value, and it was because one of the water valves was broken and had been supplying water to a water heater without stopping for almost a year. We realized that it would be really useful if Alexa could give short water and electricity reports to let us know that everything is normal, or warn us about higher consumption in time.

What it does

Smartnest Plus allows Smartnest users to keep track of their water and energy consumption. Users can ask Alexa to tell them water or electricity reports, and these can be a day, week, or month report depending on the period the user is interested in. Alexa starts by telling them whether the consumption for the selected period is normal, or is higher or lower than the average. This would be an example of a basic report:

USER: Tell me my weekly electricity consumption report.
ALEXA: Your electricity consumption this week has been higher than your weekly average by 15%.

Users can also ask for a detailed report, in which Alexa tells them the exact consumption value for the selected period. For example:

USER: Tell me a detailed electricity report.
ALEXA: Your electricity consumption this week has been 123 kWh, 15% higher than your weekly average.
Users can use Alexa Conversations to switch between report types, periods, and average or detailed reports.

How I built it

Smartnest Plus was built using Node.js and Amazon Lightsail for the back end. Smartnest provides an MQTT broker that allows different types of smart home and IoT devices to connect and interact with other devices. Two new device types were added to the service: water meter and electricity meter. After users create these virtual devices, they have to connect them to real devices using any compatible board that can communicate using MQTT. The devices can then send daily updates about water or electricity consumption; this data is stored and processed to generate the user reports.

The skill was built using the new Alexa Conversations as the core component of the functionality. This saved us a lot of development time and let us focus on the experience we wanted to offer to the users. We also had time to build a user interface using the Alexa APL language. This UI was inspired by futuristic UIs like those from the Iron Man movies; it was necessary to build vector graphics, add transparencies, and make the whole interface responsive to adapt to all screen sizes. We also added special animations to some components to create a better futuristic feeling.

Challenges I ran into

The first challenge was to find a way to get the data from the metering devices. After extensive research we found that some water and electricity meters can be read using an external microcontroller, which allowed us to start sending data to the Smartnest cloud. Another challenge came after receiving the data from the devices: it was necessary to store it in an optimized way and to calculate the right consumption in order to create reports in the future. Another challenge was getting used to the new Alexa Conversations feature, but the Slack channel and the Alexa developer console helped me through it.
Another challenge was to make the user interface responsive, to make sure it would work on all Alexa devices no matter their screen size, so I learned to resize and reorganize components depending on the screen size. A huge challenge was certification, because I was not aware of some requirements my skill had to fulfill, but after some attempts the certification team helped me fix the issues and get my skill ready for publishing.

Accomplishments that I'm proud of

After overcoming all the mentioned challenges, we were really proud of the user interface and how it can show a lot of useful information about the consumption. The user is able to compare the current consumption with up to three previous periods; it is also possible to see the average consumption, and if the consumption is higher than the average, the bar turns red. The communication with the real devices is something we are also proud of, as well as the algorithm to receive and process consumption data. We are also proud to be able to offer this service free of charge to anyone interested in having control over their home resources. The Smartnest cloud service has more than 2000 users and more than 4000 connected devices, and we expect that in the future many more devices will be added because of this new feature.

What I learned

I have learned many useful techniques to properly build Alexa skills and how to work with the new Alexa Conversations feature, as well as how to create user interfaces using Alexa APL and create consumption reports.

What's next for Smartnest Plus - Water and Energy Consumption Tracker

We will send an update to the web, Android, and iOS apps to allow users to input consumption data manually. There is a project of 1000 apartments in the Czech Republic where these smart meters will be installed; we will provide this skill so the residents of these flats can keep their consumption under control.
We will improve the service for this project in the Czech Republic by adding the European units of water and electricity consumption. We will provide YouTube tutorials to teach everyone how to keep track of their water and electricity consumption using this skill. We would like to start offering the service of installing these smart metering systems in any house. And as a long-term objective, we would like to start selling smart meters all over the world.

Built With javascript react

Try it out www.smartnest.cz
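The basic report in "What it does" ("Your electricity consumption this week has been 123 kWh, 15% higher than your weekly average") reduces to comparing a period's total against a historical average. A sketch of that classification step; the ±5% band treated as "normal" is an assumption for illustration, not Smartnest's documented threshold:

```javascript
// Sketch: classify a period's consumption against the historical average,
// as in "123 kWh, 15% higher than your weekly average". The +/-5% band
// treated as "normal" here is an illustrative assumption.
function consumptionReport(currentKwh, averageKwh, normalBandPct = 5) {
  const deltaPct = Math.round(((currentKwh - averageKwh) / averageKwh) * 100);
  let verdict = 'normal';
  if (deltaPct > normalBandPct) verdict = 'higher';
  else if (deltaPct < -normalBandPct) verdict = 'lower';
  return { deltaPct: Math.abs(deltaPct), verdict };
}
```

The basic report would speak only the `verdict` and `deltaPct`, while the detailed report would additionally include `currentKwh` itself.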
Smartnest Plus - Water and Energy Consumption Tracker
Smartnest Plus allows you to keep track of your water and energy consumption, avoid surprises, and detect leaks or high consuming devices fast, get your day, week, or month consumption reports.
['Andres Sosa']
['Finalist']
['javascript', 'react']
7
10,443
https://devpost.com/software/school-quiz
First version of the skill icon Skill icon and logo samples from Hana Sharratt Hana Sharratt, our fantastic graphic artist First version of our happy path conversation scripts 1 First version of our happy path conversation scripts 2 Alexa Conversations setup

Finalist Skill Improvement Period Updates 18/10/20

We welcome Professor Learnaloud to the team, who will run you through your study plan. Listen to hear more about the audio improvements we've made in preparation for the finals.
Improved the experience with Alexa Conversations so you can change your study plan more easily.
Implemented APL for Audio to provide a richer sound experience with new sound effects and voices.
Implemented the Reminders API to create a reminder schedule unique to your study plan.
Increased the number of questions available by 50%.

Known Issues

We're working with Amazon to address a bug with the Conversations experience.

Submission information

Our submission is a team effort by two companies: Rabbit & Pork (John Campbell & Jamie Poole) and The Audio Tailors (Josiah Smithson).

Inspiration

Our inspiration for this skill came from the many children around the world that have been missing out on key time in school. Over the next year many children are having to catch up with their studies, and some are retaking exams. We wanted to create a skill that would enable them to prepare for exams in a non-screen environment and in a fun way. At the same time, we wanted to bring a rich audio experience to the user, something they might be familiar with when playing other games.

What it does

Learning Out Loud has two elements. First, using Alexa Conversations, we gain vital information from the player to customise the playing experience. We ask them what year they are in, what subjects they are studying and when their exams will take place. This then allows us to calculate what type of questions to ask from our database of questions. We then move into quiz mode, where the user is asked 10 questions a round.
Each question is related to the subjects they are studying and adjusted to their year (harder questions for older students). Each question is multiple choice; the player can answer by saying A, B, C or D, or by saying the answer just read to them. As the user progresses we keep track of their score. In future plays of the same subject we'll give them feedback on their previous highest score. At the end of the round, the user is given their score and can then take another subject or more questions from the same subject.

How we built it

Ideation

We had a number of ideas at the start of the project, including some quite different from Learning Out Loud. We held a video meeting where team members pitched their ideas. Each idea had to meet the following criteria: a) could we use Alexa Conversations in the idea? b) could this be a voice-first experience? c) could the skill help in the current pandemic in some way? d) would we be able to use rich audio in the experience? e) has this exact idea been done before? After deliberation we decided that Learning Out Loud was the best option moving forward. We felt that its customisable nature was something new, not seen before.

Testing the conversation tool

With the idea settled, Jamie tested the new Alexa Conversations to see how it would work alongside our existing knowledge of Alexa skills. This made sure we were confident the skill would be able to take advantage of the new Alexa Conversations.

Happy path scripts

John created a number of happy path scripts (see images below) covering a first-time user, a returning user, correct answers, repeat requests and incorrect answers, to see how the conversation flowed. Annotations marked out dynamic slots, language changes for singulars and plurals and, most importantly, where rich audio could be added.

Briefing The Audio Tailors team for audio

We know from previous experience that skills are much more engaging and memorable with rich audio.
Jamie and John sent a brief over to Josiah to create the 13 pieces of audio to be included in the skill. We decided we needed the following pieces of audio:

- Opening and exiting of the skill
- Calculation sound for after the Alexa conversation
- Intro to the round for the different subjects (Maths, English, Science, Geography, History)
- Correct, pass, incorrect and countdown-to-answer
- End of round

The Audio Tailors

Listen to the Audio Story of Learning Out Loud: https://soundcloud.com/josiah-smithson-603020330/the-audio-story-of-learning-out-loud

All of the sounds we created had specific purposes to fulfil. We knew that Learning Out Loud's efficacy would rely on a smooth and fun user experience, so we took the time to find the most relevant and recognisable sounds to ensure the intention and purpose of each sound would be absolutely clear to the user. As specialists in audio branding we were not only eager to use sound to improve the overall efficacy of Learning Out Loud but also to highlight the incredible opportunity that Alexa skills present to companies. It was clear to us that an Alexa skill is not only a means to provide a service or experience but also a revolutionary new brand asset whereby sound can be strategically utilised to achieve a brand's goals. So we set out to demonstrate that intelligently used sound can connect more emotionally with users, create positive associations and embed powerful audio cues that significantly boost brand recall in a number of far-reaching contexts. To achieve this we first needed an audio signature to serve as a sound logo for the skill. Instead of using a melody or a specific instrument, we decided that the basis of the audio signature should be a rhythmic pattern. By using a rhythm we were able to remain open and flexible for future unknowns, giving us larger scope for audible adaptability later down the line.
“Learning Out Loud” is a skill for young people looking to learn whilst having fun, so we chose a rhythm that is progressive and motivational in nature. We also used the same number of beats in the rhythm as there are syllables in the skill's name (Learning Out, Loud - 1-2-3, 4). When the name of the skill is spoken in time with this rhythm, its memorability drastically increases. This was a priority, as the user's ability to successfully open the skill depends on their ability to remember its name. We will also be able to synthesise our visual logo with our sound logo via animation later down the line, which will further improve brand recall and memorability of the skill's name. To create the sound logo we used the relevant sounds of a school bell ringing, a computer-generated powering-up sound and the scratching of a pen to assert the signature rhythm. This sound logo is then fortified in the user's mind via repetition across many of the other audio chimes we created, which play throughout the skill. The signature rhythm is played with a variety of alternative instruments that correlate to the purpose of each individual chime, e.g. Maths - a punched calculator, Science - clinking test tubes, History - an Age of Empires-inspired piece with old drums, and so on. It was very important to us that before the user's first session was over, the “Learning Out Loud” audio signature/logo would be firmly embedded in their mind. The more the skill is used, the more ingrained it will become.

Development

Development started after the happy path scripts were complete. The skill is built with the Alexa Skills Kit SDK for Node.js, with S3 bucket storage for the audio files and DynamoDB for saving users' previous scores and questions. We wanted to be able to add new questions once deployed, and also to let students say the answer, e.g. “The San Andreas Fault”, as well as “B”.
To do this we used the dynamic entities feature, which allows us to add new answer slots without needing to update the language model each time.

Testing

We tested by sharing access with all members of the team, along with close family and friends in the target audience. We asked people to record the audio of their session and send it back to allow us to fix any errors. Once the amends were made we were able to submit to live.

Icon design

We wanted to give the skill an eye-catching icon that would include an educational reference, be easy to recognise and, importantly, fit well with the format of Alexa skill icons. We were able to work with Hana Skarratt, who created several versions of the skill icon along with a bigger logo.

Challenges we ran into

There were a number of challenges we ran into with this project. First of all, as the skill had to go live in the US, we needed to adjust the Alexa Conversations element to take the years of US high schools (9th freshman, 10th sophomore, 11th junior, 12th senior) rather than UK high schools (years 7, 8, 9, 10 and 11). The UK would have been easier, as players would just say "year 7", whereas for the US we needed to handle both "9th grade" and "freshman year".

The Audio Tailors

With no visual aid to cue cognition, the greatest challenge we faced was finding sounds for the school subjects. The subject sounds had to be instantly recognisable and easily identifiable. Geography, being such a broad subject with its three main branches (human, physical, environmental), proved to be the most challenging piece to conceptualise. After some trial and error playing around with the sounds of maps opening, globes spinning, volcanoes erupting and a whole range of atmospheric sounds, all the way from the jungle to the sea, we were finally able to combine all three branches into one coherent sound that could be readily associated with the subject.
The final sound: human footsteps (human) on loose earth (physical) in the rain (environmental), followed by the sound of falling rocks in a landslide (physical), in the rhythm of “Learning Out Loud's” audio signature.

Accomplishments that we're proud of

We were proud to work together as a team for the first time, putting together a very neatly packaged skill with room for plenty of enhancements in the future. It was great to use Alexa Conversations and APL for Audio for the first time; the majority of our projects are UK based, so we wouldn't otherwise have been able to use these features.

The Audio Tailors

We were proud to work together with John and Jamie, two immensely talented and competent people in this exciting new field. We were also incredibly proud to be involved in the creation of something that will benefit so many young people affected negatively by Covid-19. We believe in this project and hope that it will provide them with a helpful tool to aid in securing their academic futures during this pandemic.

What we learned

Adding audio made a huge impact on the skill: it makes it much more engaging for the user, which should result in increased playing time and a higher return rate. We learned the ins and outs of Alexa Conversations and now have a great understanding of how it can be used in future skills.

What's next for Learning Out Loud

There are a number of areas we know we can expand on to make Learning Out Loud even more engaging and increase adoption.

1. Account linking - link up with schools' existing software systems so feedback can be given to teachers: which students are using the skill, which questions students are struggling on, and which subjects students are performing well in. We would look to add account linking with popular platforms such as smarttech, Schoology and Edmodo.
2. ML/AI-generated questions - reduces the need for teachers to add questions, which is time consuming; instead we feed a tool with raw text which then creates questions for us.
3. Leaderboard systems - allowing users to compete for the top places overall and per subject.
4. Reminders API - allowing users to set a reminder to play the game according to when their exam is, e.g. weekly, and then daily nearer to their exam date.

Built With alexa amazon-alexa amazon-dynamodb conversations node.js s3 Try it out www.amazon.com
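The dynamic entities approach described in the development section can be sketched as follows. This is a minimal illustration of the Alexa Skills Kit `Dialog.UpdateDynamicEntities` directive; the slot type name `AnswerType` and the answer values are hypothetical, not taken from the actual skill:

```javascript
// Sketch: registering answer text (e.g. "The San Andreas Fault") as dynamic
// slot values at runtime, so new questions don't require a language model rebuild.
// The slot type "AnswerType" and the example answers are hypothetical.
function buildAnswerEntitiesDirective(answers) {
  return {
    type: 'Dialog.UpdateDynamicEntities',
    updateBehavior: 'REPLACE',
    types: [{
      name: 'AnswerType',
      values: answers.map((text, i) => ({
        id: `answer_${i}`,
        // Letter synonyms let the player say "A", "B", ... instead of the full answer.
        name: { value: text, synonyms: [String.fromCharCode(65 + i)] }
      }))
    }]
  };
}

const directive = buildAnswerEntitiesDirective([
  'The San Andreas Fault', 'The Great Rift Valley',
  'The Mariana Trench', 'The Ring of Fire'
]);
```

In a handler, the directive would be attached to the response (e.g. via `responseBuilder.addDirective(directive)`) just before asking the question.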
Learning Out Loud
With many children still learning from home or months behind the planned curriculum, Learning Out Loud is a fun way to create a quiz schedule tailored to the student's study plan.
['https://wearerabbitandpork.com/', 'John Campbell', 'www.theaudiotailors.com', 'Josiah Smithson', 'Jamie Poole']
['Finalist']
['alexa', 'amazon-alexa', 'amazon-dynamodb', 'conversations', 'node.js', 's3']
8
10,443
https://devpost.com/software/job-search-kbm7z2
[Image gallery: Job Search splash intro/outro; phone number prompt; job type query screen; job found view]

Inspiration

The process of job searching can be tedious and slow; we wanted to create something that speeds up this process and makes it more intuitive. By searching through voice, the user avoids having to navigate a website, click tiny checkboxes and scroll through annoying drop-down menus. We also found that when searching online, jobs can get lost in a clutter of emails and tabs, so we wanted to avoid this by sending messages to the user by SMS.

What it does

Job Search will query you for these four parameters:

- Job type
- Location
- Salary
- Full time or part time

Once these details have been given, Job Search will find jobs meeting those criteria and prompt you for your cell number; it will then send a text message to your cell with the job results!

How I built it

We developed the skill mostly in the Alexa developer console, using the interface provided by the Conversations API. We created lots of dialogs to handle the skill flow, and we built an API definition for the job search, which is managed through an AWS Lambda function written in Node.js and deployed through the ASK CLI. We also created utterance sets in the console representing different ways users could interact with the skill; these utterance sets were built off the values in our interaction model, which contained both custom slot types (our job list slot) and built-in slots such as AMAZON.PhoneNumber. The views were built using APL (we mostly used features available in 1.0), and for the audio in the skill we used APLA. We used SerpAPI to perform our job search, and the Twilio Node package to send our SMS messages.

Challenges I ran into

We initially found debugging difficult since the interactions within the Conversations API weren't hitting our Lambda endpoint, but following the advice offered during the Amazon office hours on Twitch really helped us find the best approach.
We initially had trouble finding an appropriate job search API, since the public job searching APIs were too specific, offering information far too granular for most users, e.g. APIs purely for careers in engineering. Eventually we found the Search Engine Results Page API (SerpAPI), which supports a wide range of jobs.

Accomplishments that I'm proud of

We're proud of the way we planned out the flow of the skill, thinking about the different ways users may approach searching for certain parameters. For example, when gathering phone numbers we came across the following friction points:

- User says the number wrong
- User gives an invalid number
- User gives the number in varying lengths

Solutions: We built an intent which allows the user to edit misheard numbers, so this flow...

Alexa: Please read out your phone number
User: 800 499
Alexa: 800 495
User: No, that's 800 499!
Alexa: OK, that's 800 499

...is possible! We used npm packages that validate phone numbers based on region, and we save the previous utterances in a session variable and concatenate them on each user request.

What I learned

In testing we found that the flow users take in a skill is rarely linear; they will often deviate off the "happy path" to change something or add a new parameter. The Conversations API has taught us to have a more lateral mindset when it comes to developing skills.

What's next for Job Search

We'd like to further enhance our conversational flow, building more dialogs to ensure excellent coverage of all potential conversational paths. We'd like to expand the breadth of parameters that Job Search can accept to allow for more intelligent job searching, and to add more APIs that provide more depth in certain careers. We also want to enable permissions to request and access user email addresses, so we can send emails about jobs that match the user's criteria.

Built With amazon-web-services ask-sdk conversationsapi node.js Try it out vocala.co
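The session-variable concatenation and correction flow described above can be sketched like this. The helper names are hypothetical and the grouping/validation is simplified:

```javascript
// Sketch: accumulating a phone number across several user turns, and letting
// the user correct the last misheard chunk. Names here are hypothetical.
function addDigits(session, utterance) {
  const digits = utterance.replace(/\D/g, '');     // keep digits only
  session.phone = (session.phone || '') + digits;  // concatenate per turn
  return session.phone;
}

function correctLastChunk(session, wrongChunk, rightChunk) {
  // "No, that's 800 499!" -> replace the misheard tail with the corrected one
  if (session.phone && session.phone.endsWith(wrongChunk)) {
    session.phone = session.phone.slice(0, -wrongChunk.length) + rightChunk;
  }
  return session.phone;
}

const session = {};
addDigits(session, '800');                // user says the first part
addDigits(session, '495');                // Alexa mishears the second part
correctLastChunk(session, '495', '499');  // user corrects it
```

A region-aware validation step (via an npm phone-number package, as the write-up mentions) would then run on the accumulated `session.phone` before sending the SMS.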
Job Search
Find the perfect job through Alexa and get results sent to your cell!
['Richard Matthews']
['Finalist']
['amazon-web-services', 'ask-sdk', 'conversationsapi', 'node.js']
9
10,443
https://devpost.com/software/mexican-history-for-dreamers
Wolfie the Sommelier, Wine Expert Plus Ambassador, is here to help!

Inspiration

I come from a wine culture. My grandparents used to grow grapes and prepare wine at home. It was not the refined experience we have today, but food wasn't as varied either, and people were more focused on nutrition than enjoyment. The first time I walked a supermarket aisle chock-full of wine bottles, I almost got dizzy with so much choice. I only knew white and red. Where to begin? Little by little, through many years of joyous trial and error, and some wine tasting courses, I learnt to pair food and wine. But my case is special: I would not be put off wine by a bad experience – wine runs through the veins of my ancestors and, on some evenings, through mine as well! I have many friends, however, who said they hated wine … until I introduced them to the stuff gradually. It's time to share this knowledge with the world, and Alexa is here to help.

What it does

Wine Expert Plus is a simple skill that recommends what wine to have based on what you will be eating. It takes broad categories (red meat, fish, vegetables…) as well as more precise descriptions of your meal. The recommendation is twofold: it suggests a wine category (red wine, white wine, sparkling wine…) as well as a geographical denomination (Champagne, Cava) or a grape type (Cabernet Sauvignon, Merlot…) that will be recognized by any wine merchant, even supermarket staff, making wine shopping a far less daunting experience.

How I built it

I built it fast! Thanks to Alexa Conversations, the creation and curation of likely dialogs covering the possible scenarios in a natural language conversation is a breeze. The bulk of the work is in the creation of content. I created a relational database (Microsoft Access) to hold the list of wines and foodstuffs and to document the wine and food matches. Then I used SQL queries to generate the JSON files that serve as the data repository for the skill's back end.
As this content is static and not too large, I decided against using a database (Dynamo, Aurora): this would have been overkill. Challenges I ran into I had created Alexa skills before and that was both a blessing and a curse. I had a good sense of what I wanted to do and how to design the interaction model, but the shift of core concepts took me a little while to get used to. The UX of a beta service is of course evolving and it could be unstable. Most changes were for the better and I experienced just a couple of issues. Accomplishments that I'm proud of Well, certifying this skill in less than 48 hours would be a tremendous achievement! Also check out the logo. Wolfie the Sommelier deserves his minute of fame! What I learned Witnessing how Amazon has brought AI/ML to the creation of the interaction model is pretty rad. Especially if I consider how it was just two years ago. I remember having to write code to automatically generate the possible string combinations for the sample utterances, intent by intent. Now AXC takes care of that. What's next for Wine Expert Plus More synonyms for the food categories so that more dishes are understood by the skill. Eventually, being able to purchase wine directly from the Skill is an obvious evolution, but we shall see. I would also like to add visual responses with pictures of the regions where the recommended wine comes from, or snapshots of the grape types. Skill Id, Skill Name amzn1.ask.skill.cc83f55a-7486-45a5-a2cc-90969d1b7e9c Wine Expert Plus Built With axc javascript lambda node.js
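The static-JSON lookup described above might look something like this in the skill's back end. The data shape and the pairing entries are illustrative only, not the skill's actual repository:

```javascript
// Sketch: looking up a twofold recommendation (wine category plus a grape or
// denomination) from a static JSON pairing table. Entries are illustrative.
const pairings = {
  'red meat':  { category: 'red wine',       grapeOrRegion: 'Cabernet Sauvignon' },
  'fish':      { category: 'white wine',     grapeOrRegion: 'Albariño' },
  'shellfish': { category: 'sparkling wine', grapeOrRegion: 'Champagne' }
};

function recommend(food) {
  const match = pairings[food.toLowerCase()];
  if (!match) return 'I have no pairing for that yet.';
  return `With ${food}, try a ${match.category} such as ${match.grapeOrRegion}.`;
}
```

In production the table would be the JSON generated from the Access database via SQL queries, loaded once at Lambda cold start since the content is small and static.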
Wine Expert Plus
First impressions count. And wine pairing really is a thing – get it wrong and you'll be put off wine. That's where Wine Expert Plus comes in: get recommendations tailored to your meal.
['Eva Sanchez Guerrero']
['Best Food and Drink Skill']
['axc', 'javascript', 'lambda', 'node.js']
10
10,443
https://devpost.com/software/ease-my-trip
Inspiration

COVID-19 has been killing all the fun we can have in travelling and visiting places. But now that we are seeing fewer COVID-19 cases, everyone has started making plans. I was having a cup of coffee with my wife and just randomly asked Alexa, "Alexa, what are the best places to visit in Dubai?" and I was left with very limited options. I could not even check the weather for the month I was travelling, the type of activities, or how long it would take to visit each place. I needed some kind of app to help me make an itinerary for my trip to Dubai considering all these parameters. And what would make it an outstanding application is if I could get all of this done just by using voice commands, and send the final plan to my email by voice as well. That is when I decided to work on this idea and make it a reality.

What it does

"Easy trip planner" helps a user plan a trip to any destination. The user provides a destination and travel dates, which can be phrased in different ways. Once provided, Alexa shares a customized day-to-day plan for the mentioned dates. Alexa shares a list of activities considering the time of day, the type of activity and the duration of each activity. Users can swipe through dates and activities. If the user does not like an activity, they can check the "Activities to do" section and swap it. Once the plan is prepared as per the user's requirements, it is sent to the user via email.

How I built it

We are currently using Google Sheets and AWS DynamoDB as the backend. The skill delivers the best experience on display devices such as the Echo Show. The skill is built in Node.js with code deployed on AWS Lambda.

Challenges I ran into

Building a trip plan around time is the most critical part. Allowing users to swap an activity was also critical considering the time limits. Working with intents and Conversations in parallel was very challenging; code builds failed many times.

Accomplishments that I'm proud of

First of all, I finally have what I was searching for.
I am very happy that people can plan their trips using Alexa devices. Our future development cycles include integration with flight booking APIs and ticket info/purchase APIs.

What I learned

Making conversations human-like is very difficult. To some extent we can design for it, but the practical implementation is very complex.

What's next for Ease my trip

We are in talks with the "Inspirock" and "Tiqets" teams for content collaboration. We have finally made a deal with the "Tiqets" platform for ticketing integration and content API integration. Because of the time crunch we could not implement it in time for the competition, but it will be implemented in the second phase of the skill, with more features and more places to visit. We will also integrate flight search and booking features, activity reviews and place reviews.

Built With alexaskillkit amazon-alexa amazon-dynamodb amazon-web-services amazonalexa apl awslambda dynamodb googlesheet googlesheets lambda node.js
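The duration-aware day-by-day planning described above could be sketched as a simple first-fit packing of activities into day slots. The activity names, hours and day budget here are all illustrative, not the skill's actual data or algorithm:

```javascript
// Sketch: packing activities into days by duration, in the spirit of the
// day-by-day itinerary described above. All names and numbers are illustrative.
function buildItinerary(activities, days, hoursPerDay = 8) {
  const plan = Array.from({ length: days },
    () => ({ activities: [], hoursLeft: hoursPerDay }));
  for (const activity of activities) {
    // first-fit: put each activity into the earliest day with enough time left
    const day = plan.find(d => d.hoursLeft >= activity.hours);
    if (day) {
      day.activities.push(activity.name);
      day.hoursLeft -= activity.hours;
    }
  }
  return plan.map((d, i) => ({ day: i + 1, activities: d.activities }));
}

const plan = buildItinerary([
  { name: 'Burj Khalifa', hours: 3 },
  { name: 'Desert safari', hours: 6 },
  { name: 'Dubai Mall', hours: 4 }
], 2);
```

A real planner would also weigh time of day and activity type, as the write-up notes, but the time-budget constraint is the core of it.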
Ease my trip. The new way to plan your next trip !!
Create a fully customized day by day itinerary for free, just using Voice commands !!
['Angadveer Singh', 'Tapan Chauhan']
['Best Travel and Transportation Skill']
['alexaskillkit', 'amazon-alexa', 'amazon-dynamodb', 'amazon-web-services', 'amazonalexa', 'apl', 'awslambda', 'dynamodb', 'googlesheet', 'googlesheets', 'lambda', 'node.js']
11
10,443
https://devpost.com/software/coffee-club
Video title page

Inspiration

I chose a coffee club for the Amazon Alexa Conversations Challenge because it requires something to be configured, and that's a perfect fit for Alexa Conversations. In this case, customers use one conversation to configure the club initially, then another conversation to configure each individual coffee order.

What it does

The skill uses Alexa Conversations to gather information about people's coffee preferences, and stores it for later use. You only need to add a member once, and you can delete them at any time. Later, you can start a "coffee run". In the real world, this is where you say "who'd like a coffee?". Different people will say yes on different days, so the actual coffee order is different each time. The skill lets you specify a list of up to five club members who can be included. (If you need more you can always create a second coffee run!) From the names you specify, the skill looks them up in persistent storage (the club), and adds details for each person to your coffee run (the order). It posts the resulting order to a skill card in your mobile app so that you can take it with you, or screenshot it and send it to someone else.

How I built it

I built the skill as an Alexa-hosted skill in Node.js. This was my first Alexa-hosted skill, so I had to learn about that as well as the new (beta) Alexa Conversations.

Challenges I ran into

Alexa Conversations is brand new, and in beta. So I struggled with the all-new UI at times, and with the fact that it's a black-box service, so it's not always clear what's gone wrong. In most cases I was able to work through this using the Alexa team's examples, but in some cases I had to start fresh. In fact Coffee Club was my backup idea - my original plan was a far more ambitious skill using several external APIs, but the complexity of my dialogs proved too much for me, or for Alexa Conversations in its beta state, or both.
Accomplishments that I'm proud of

Turning round this Alexa Conversations skill in just a few days, after spending two weeks struggling with a far more complex one (which I still plan to publish after the challenge).

What I learned

Ask for help - the Alexa team is very responsive, but so are all the other participants. I help as much as possible, and appreciate the leg up when others help me!

What's next for Coffee Club

I plan to add more options for drinks - I've really only scratched the surface, so additions like you'd find in a coffee shop would be on the list. For Alexa Conversations I will add lots more things you can say to the skill, to make for a more natural conversation.

Built With alexa amazon-web-services javascript node.js
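The club-to-order flow described above (look up each named member in the stored club, cap the run at five members) can be sketched like this. The member names, preference shape and formatting are illustrative, not the skill's actual storage:

```javascript
// Sketch: assembling a "coffee run" order from stored member preferences.
// The club data and the formatting are illustrative.
const club = {
  Dave:  { drink: 'flat white', milk: 'oat' },
  Priya: { drink: 'espresso',   milk: 'none' },
  Sam:   { drink: 'latte',      milk: 'whole' }
};

function formatOrder(name) {
  const pref = club[name];
  return pref.milk === 'none'
    ? `${name}: ${pref.drink}`
    : `${name}: ${pref.drink} with ${pref.milk} milk`;
}

function coffeeRun(names) {
  const MAX_MEMBERS = 5;               // the skill limits a run to five members
  return names.slice(0, MAX_MEMBERS)
    .filter(name => name in club)      // ignore names not in persistent storage
    .map(formatOrder);
}

const order = coffeeRun(['Dave', 'Priya', 'Unknown']);
```

The resulting lines are what would be posted to the skill card in the mobile app.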
Coffee Club
Coffee Club is a productivity skill that manages a "coffee club" for friends and co-workers, taking the guesswork out of ordering the right coffee for everyone.
['Dave Curley']
['Best Productivity Skill']
['alexa', 'amazon-web-services', 'javascript', 'node.js']
12
10,443
https://devpost.com/software/abhyas
Inspiration

India is one of the youngest countries, with an average age of 29 years and 37.1% of the population in the 0-19 year bracket. It will be a disaster if it doesn't manage its young generation properly. A lack of useful information, and hence motivation, combined with teachers overworked every day on unnecessary and lengthy paperwork instead of teaching, is further fuelling this downfall. I wanted to do something to avoid this disaster, and using Alexa devices in the classroom seemed like an effective and economical way of doing that.

What it does

In rural India, no one speaks English, including teachers in English-medium schools, and this can greatly affect a student's ability to listen to, understand and speak English later. The main problem we are trying to solve is improving students' communication skills by giving them someone (Alexa) to talk to in English in their classroom. Abhyas Teacher Assistant tackles this problem by providing spoken modules, such as a conversation with a friend or a doctor. The assistant also helps with reading out lessons with proper pronunciation, with speeches related to motivation, concentration and academic guidance, and with getting timetable details. We are further planning to support taking attendance, oral tests, school-wide announcements and ringing class bells, among other things.

How I built it

Abhyas Teacher Assistant is built using Alexa Conversations with the Alexa Skills Kit for Node.js and the Amazon developer console, with Alexa-hosted code, AWS S3 for persistence, and DynamoDB for storing and retrieving classroom and student details.

Challenges I ran into

With Alexa Conversations still in beta and improving, I came across several challenges related to building dialogues and managing the delegation of dialogue management between Alexa Conversations and the interaction model. With the help of the Amazon Alexa team on Slack, Twitch live streams, the documentation and the community, I successfully sailed through these challenges.
What I learned

A lot. Having worked in the NLP area, I am in awe of what the Alexa team has accomplished so far. There is still a long way to go, but Alexa Conversations is like building another Alexa from the ground up, and I respect that. I learned new tech, had fun trying to decipher how Alexa Conversations works, and enjoyed meeting and helping new people in the Slack channel.

What's next for Abhyas

After running some demos in rural schools before the pandemic and getting positive reviews from teachers, students and their parents, we are in talks with the local government in India about launching the project as a pilot in some more schools and taking it from there. Whatever we might win in this challenge will go into further improving the project. Thank you for this opportunity.

Built With alexa-conversations amazon-developer-console ask-sdk node.js
Abhyas Teacher Assistant
Use the power of Alexa conversations for assisting teachers to empower the students of government schools in rural India.
['Ruthvik Reddy SL']
['Best Education and Reference Skill']
['alexa-conversations', 'amazon-developer-console', 'ask-sdk', 'node.js']
13
10,443
https://devpost.com/software/drink-enough-water
[Image gallery: Water is Life; Drink enough water; skill demonstration; calculating the ideal water consumption; Alexa Conversations with 6 parameters]

Inspiration

Drinking enough water every day is essential for life. This is true for all of us, and especially for older people. We have experienced this ourselves with our own parents! While searching for a simple and affordable solution, we noticed that the current skills for healthy water intake in the German and US Amazon skill stores are missing important functions or do not work properly. This motivated us to develop the "best skill in class" for drinking enough water daily.

What it does

The skill ensures healthy hydration for its user by calculating the ideal drinking quantity, providing individual reminders to drink water, and giving hints on the amount of water drunk and still to be drunk. With its combination of functions that are essential for users, the skill clearly stands out from existing "drink water" skills:

- Calculator: it determines the user's ideal, highly individual drinking quantity per day.
- Reminder: it reminds the user to drink water whenever they want.
- Tracker: it records and tracks the amount of water the user has drunk and informs them about the amount still to be drunk that day.

Calculation of the individual drinking quantity with Alexa Conversations

The skill determines the individual drinking quantity from 6 parameters: gender, age, height, weight, degree of activity and the prevailing climate. With this many parameters, the variance of input formulations and correction possibilities increases - the ideal application for the capabilities of Alexa Conversations! With Alexa Conversations we have elegantly succeeded in handling these many possible cases in a flexible way. The Alexa Conversations dialog actively asks for missing inputs and manages corrections made by the user.
With Alexa Conversations the skill is easier to use and therefore more suitable for everyday use. As a second basic function, the user can easily and conveniently be reminded by the skill to drink water at self-selected times. After a reminder time is set, the skill reads out all defined reminder times so that the user gets an overview and can make changes. Afterwards the user is reminded to "drink water" at the specified times. As a further basic feature, the skill can record and update the daily amount of water drunk. In addition, the user can ask the skill for their drunk water quantity; the skill then tells them the amount of water they have drunk up to that point and gives an indication of the remaining amount for the day. So the user knows how much they have already drunk and how much more they need to drink to reach their ideal daily amount. These 3 basic functions of the "Drink enough water" skill enable the user to achieve healthy hydration: the ideal drinking amount, personal drinking reminders, continuous recording of the drunk water quantity, and hints about the amount of water still to be drunk.

How we built it

We built it with the standard Alexa/AWS technology stack using Lambda, S3, DynamoDB, CloudWatch and SNS. For the backend (which is implemented but, for the sake of simplicity, not activated) we use React and QuickSight with machine learning features.

Challenges we ran into

The biggest challenge was integrating the Alexa Conversations part with the rest of the skill.

Accomplishments that we're proud of

There are already quite good drinking water skills in the skill store. We are proud that we were able to reach a new and better level of usability and functionality.

What we learned

Alexa Conversations is a great technology with huge potential, but it needs to be tested through multiple trials.
What's next for Drink enough water Additional premium features are not only planned but already created in a backend (see AWS account). Without giving too much away: soon it will be possible to offer a warning function that indicates when there is a danger that a person has not drunk enough by a certain time. In addition, it will be possible to track and document people's drinking behavior over long periods of time, and to ensure that different people, such as families, can use "Drink enough water" in parallel. So stay tuned! Built With amazon-dynamodb amazon-web-services cloudwatch lamdba ml node.js quicksight react s3 sns
Drink enough water
Water is Life! Do you drink enough water? Many people don't! This skill enhances quality of life through healthy hydration. The Skill ensures that its users drink enough water and are protected.
['Matthias Kose', 'Marcus Kuehn']
['Best Wildcard Skill']
['amazon-dynamodb', 'amazon-web-services', 'cloudwatch', 'lamdba', 'ml', 'node.js', 'quicksight', 'react', 's3', 'sns']
14
10,443
https://devpost.com/software/evriskon
Inspiration Customers need personalized meal suggestions that are in line with both their health goals and their food tastes. Whether the customer is on a low-sodium, low-cholesterol, or fully personalized DNA-profile-driven diet, they can get the right suggestions. Does your customer need only recipes that are vegetarian, contain less than 600mg of sodium per serving, and are high in Vitamin A but low in fiber? The PHILIA Alexa Conversations skill can help with this. What it does PHILIA is an Alexa Conversations skill that provides over 2 million recipes that are indexed, normalized, and contain full nutrition information. The skill allows search by nutrient quantity and by 40 diet and health labels, as well as keyword searches by cuisine type (Chinese, Italian, Indian, French, ...), meal type (lunch, dinner, breakfast, snack), or dish type (soup, salad, pizza, sandwich, ...). How I built it It is not a basic or easy skill, as it is developed with innovative technologies such as: Amazon Alexa , Amazon's cloud-based voice service available on hundreds of millions of devices from Amazon and third-party device manufacturers. Edamam , the Recipe Search API, Food Database API, and Nutrition Analysis API. Python 3.7 , the programming language. AWS Route 53 , a highly available and scalable cloud Domain Name System (DNS) web service. AWS Lambda , a computing service that runs code in response to events and automatically manages the computing resources required by that code. AWS CloudFront , a fast, highly secure, and programmable content delivery network (CDN). AWS API Gateway , to create, maintain, and secure APIs at any scale. Challenges I ran into Every task or MVP brings new challenges and tests the efficiency of individuals and teams. Completing any MVP successfully requires a lot of effort and hard work. Along with technical and development skills, it also demands team-based skills: coordination and communication. 
It is very difficult to recommend food based mainly on a person's nutrient requirements, because different people like different types of dishes. Taste varies with a person's place of origin and where they currently live, and a person's budget also matters a lot in the food they prefer, beyond nutrition. Still, we optimized our results to get closely related recommendations. Accomplishments that I'm proud of We are proud to say that we finally completed our task successfully with decent results; as the saying goes, "hard work pays off". The MVP is complete, lots of additional features will come soon, and we will keep updating it with better results. What I learned We learned a lot of development as well as team-based skills: Voice dialog design. Building a skill using Alexa Conversations. Training Alexa Conversations to collect information. APIs we did not know before, such as the Recipe Search API, Food Database API, and Nutrition Analysis API. How to develop a recommendation system. How to avoid overfitting in machine learning models. Team-based skills: how to break a complex task into parts and steps, refining our understanding of challenges through discussion and explanation, time management, team leadership (the most important skill needed to complete any task), and good communication with teammates. How to apply development skills in the real world for the benefit of the world. What's next for PHILIA We plan to enhance the accuracy and make the skill more user friendly with the following steps: Merge best-suited recommendations based on origin and nutrition amounts into the diet recommendation system to get optimized results that customers like. Increase the variety of dishes and user-specific recommendations. 
Currently, the MVP covers food data from only a few regions, but we will soon add a much wider variety of foods to cover the whole world. Built With amazon-alexa amazon-api-gateway amazon-cloudfront-cdn amazon-lambda amazon-route-53 edamam-nutrition python
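The search behaviour described above (a diet label plus per-serving nutrient bounds) can be sketched as a plain filter. The recipe data and field names below are invented for illustration; the real skill queries the Edamam APIs instead:

```python
def search_recipes(recipes, diet=None, max_nutrients=None, min_nutrients=None):
    """Filter recipes by diet label and per-serving nutrient bounds (sketch)."""
    max_nutrients = max_nutrients or {}
    min_nutrients = min_nutrients or {}
    results = []
    for r in recipes:
        if diet and diet not in r["diet_labels"]:
            continue
        n = r["nutrients_per_serving"]
        if any(n.get(k, 0) > v for k, v in max_nutrients.items()):
            continue
        if any(n.get(k, 0) < v for k, v in min_nutrients.items()):
            continue
        results.append(r["name"])
    return results

# Tiny hand-made sample catalog (illustrative only)
recipes = [
    {"name": "Lentil soup", "diet_labels": {"vegetarian"},
     "nutrients_per_serving": {"sodium_mg": 420, "vitamin_a_iu": 5200, "fiber_g": 9}},
    {"name": "Carrot salad", "diet_labels": {"vegetarian"},
     "nutrients_per_serving": {"sodium_mg": 180, "vitamin_a_iu": 11000, "fiber_g": 3}},
    {"name": "Beef stew", "diet_labels": set(),
     "nutrients_per_serving": {"sodium_mg": 900, "vitamin_a_iu": 3000, "fiber_g": 4}},
]

# The example query from the description: vegetarian, under 600 mg sodium,
# high in Vitamin A, low in fiber.
print(search_recipes(recipes, diet="vegetarian",
                     max_nutrients={"sodium_mg": 600, "fiber_g": 5},
                     min_nutrients={"vitamin_a_iu": 10000}))  # → ['Carrot salad']
```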
PHILIA
PHILIA is an Alexa Conversations skill that provides over 2 million recipes and meal recommendations for special diets. The skill allows customers to search by amount of nutrients, diet, and health labels.
['Phil Jayz Odinga', 'Pankaj pankaj']
[]
['amazon-alexa', 'amazon-api-gateway', 'amazon-cloudfront-cdn', 'amazon-lambda', 'amazon-route-53', 'edamam-nutrition', 'python']
15
10,443
https://devpost.com/software/alexa-read-me-a-tea-time-story
I love telling and listening to stories on my Amazon Alexa, almost as much as I also love drinking tea. My experience ‘Alexa Tell me a Teatime Story’ is a great way for me to combine these passions in a new platform that can transport tea drinkers to another world for a short time. I hope to become the first of many ‘Alexa’ authors. Listeners can put their 3-5 minutes of tea brewing to better use by listening to a single scene of around 800-1000 words of a longer serialized story of the kind Dickens used to write. They can also set up a teatime routine every day to make teatime special. I hope in the future to also bring new soap operas and on-going serials similar to The Archers and EastEnders to Alexa. The user is limited to just one segment per day. In a future update, I will include an in-skill purchase of $1 for listeners to hear one extra scene of the story per day. I hope very soon to also introduce alternative endings and the ability of users to bring back characters from the dead or kill them off as an in-skill purchase. I’d also like to integrate smart home tech into the telling of the stories; during scary moments the blinds could drop automatically and the lights could dim or flicker, doors in the house could mysteriously bang shut and the air conditioner could come to life bringing a chill to the reader and making it a truly interactive experience. I imagine working closely with tea brands. A brand could work with Teatime stories as part of their content marketing strategy to craft a story specifically for its customers; an offering that could be advertised on the back of their packaging. For example, customers buying Indian Chai could be offered an 'Indian romance story' centred around a railway, free with their tea. Users who wanted to hear the next part would have to buy the next box, with a complete story spanning several boxes. 
As the technology becomes available, I’d like Alexa to be able to order tea bags in for users once they have run out, debiting their Amazon account automatically. As RFID develops, a weight-sensitive tea caddy would be able to tell once the user only had 1 teabag left and would remind them to order to get the next part of their story. I see RFID as being crucial to the medium- and long-term development of the experience, as it would negate the need for users to set a routine: the stories would start playing automatically as soon as they opened the individual packaging of an RFID teabag. As a creative person rather than a techie (my first novel, 'The Talk Show', will be published by Bloodhound Books in March 2021), this has been a really fascinating and exciting project for me, and as such I've worked closely with other developers and the LinkedIn community to help make this a reality. The biggest challenge has been working out how to write an original story that works for the platform rather than simply converting a story written for print. I've also sought to keep the dialogue tree and the experience as simple as possible, and this has required some intensive revision, but my writing experience has come in useful here. Built With apl apl-a asksdk javascript json node.js ssml Try it out youtu.be
Alexa, Tell me a Tea Time Story
Put those 5 mins waiting for your tea to brew to better use: listen to an exclusive story bound to keep you hooked. Set up your teatime routine & never miss a segment of your favorite story.
['Harry Verity']
[]
['apl', 'apl-a', 'asksdk', 'javascript', 'json', 'node.js', 'ssml']
16
10,443
https://devpost.com/software/planetary-commander-role-playing
Inspiration This is the beginning of an ambitious project to recreate an old BBS-style game called Tradewars 2002, in which your ships travel around the universe trading. What it does In this very early release of Tradewars 2002, I have leveraged Alexa Conversations in a way that is scalable. The game starts with one advisor and a few ships but can easily grow to several advisors. The advisors are silos of Alexa Conversations delegates, each good at a specific task. The early trial game comes with a defense advisor that recommends a spaceship based on a few criteria. How I built it The ASK SDK is used, coupled with Alexa Conversations technology. Challenges I ran into Alexa Conversations does not work in non-en-US locales, so I needed a few workarounds and hacks to make it work on the device. Accomplishments that I'm proud of Mastering Alexa Conversations! I am now quite confident in implementing it and leveraging it for any other skills that require it. What I learned Sometimes the devil is in the details. Alexa Conversations can be a very powerful and scalable tool. What's next for Planetary Commander Role Playing As mentioned, this is a very early version. I have lots of work left to add more advisors, more ships, and multiplayer capability. Built With alexa amazon-web-services apl apla ask lambda s3
Planetary Commander Role Playing
Planetary Commander leverages Alexa Conversations technology to create a game template that is scalable in a very organised manner. You buy a spaceship that can then be launched to trade.
[]
[]
['alexa', 'amazon-web-services', 'apl', 'apla', 'ask', 'lambda', 's3']
17
10,443
https://devpost.com/software/color-with-me
Inspiration We are a team of women passionate about creating technologies that impact the social and emotional well-being of children. Let's Color Together is not just a coloring activity. The goal is to engage with the child so that they feel as though they have someone to color with, even if their family is busy. What it does The skill presents pictures the child's parent can print out. Once a picture is chosen, Alexa keeps track of how the child colors the picture and presents the final masterpiece at the end. Along the way, Alexa shares a few jokes, riddles, and facts. How I built it It was built within the Alexa Developer Console (ADC) using the Conversations features. Challenges I ran into We ramped up and developed a skill within two weeks, while juggling full-time lives (i.e., jobs, families). Our entire team was new to developing skills within the ADC. We had to become familiar with skill development and the Conversations feature. This forced us to make a few limiting decisions (i.e., not using delegation, creating an Alexa-hosted skill). We were able to identify a few bugs within the system and suggest a few features. One aspect that was particularly challenging is that the Conversations examples were only written in Node.js, not in Python. Accomplishments that I'm proud of We are glad to have an opportunity to develop a skill that gives kids something fun to engage in and helps build motor skills. What I learned Conversations is a great feature that helps minimize the brainpower required to think of every possible scenario when coding. What's next for Color With Me We plan to make the skill more engaging by adding intents and dialogs, in addition to adding more pictures to color. It's important to us that the skill promotes reciprocal communication. This reciprocal communication is critical to the emotional intelligence skill development that we consider our mission. 
So, we plan to capture and reflect different responses from the kids. We'll also be adding some additional testing and additional mediums for how kids might 'color' our pictures (e.g. markers, crayons, pastels). Built With apl apla python s3 ssml
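The progress tracking described above (Alexa keeping track of how the child colors and presenting the masterpiece at the end) could be held in session attributes. A hypothetical sketch, with region and color names invented for illustration, not the skill's actual code:

```python
def color_region(session_attributes, region, color):
    """Record one colored region in the session and return the running state."""
    progress = session_attributes.setdefault("progress", {})
    progress[region] = color
    return progress

def masterpiece_summary(session_attributes, total_regions):
    """Build the spoken summary of the final masterpiece."""
    progress = session_attributes.get("progress", {})
    parts = [f"the {region} is {color}" for region, color in progress.items()]
    return f"{len(progress)} of {total_regions} regions colored: " + ", ".join(parts)
```

In a handler, `session_attributes` would come from the request envelope; here it is just a dict that persists across turns.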
Let's Color Together
What if you could have someone to color with, anytime you want!
['Melissa Smith', 'Monique Howard', 'Leah Erb']
[]
['apl', 'apla', 'python', 's3', 'ssml']
18
10,443
https://devpost.com/software/1940-s-boxing-simulator
Inspiration A few years back, I joined a group heavy-bag class. I fell in love with it, but struggled to find a practical way to practice on my own away from class. There are several other apps/videos out there, but they lack a sense of fun and playfulness. What it does This skill coaches you through intense, 3-minute rounds of the 4 most basic punches. How I built it Before I knew about APLA, I actually generated these levels using NodeJS along with the SoxJS library. I contracted a voice actor to take care of the audio to make things a bit more authentic. Challenges I ran into Originally, I built this skill using Voiceflow. So, it was a bit difficult to rebuild it using the Alexa developer console. Accomplishments that I'm proud of Writing my own code to randomly generate each level. The inclusion of background music was another thing I'm pretty happy with. What I learned Forums and StackOverflow are life savers. Shoutout to them. What's next for 1940's Boxing Simulator I want to add a difficulty setting, other punches, feints, and sound effects. Built With firebase node.js sox
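The author generated the levels with NodeJS and the SoxJS library; a comparable random round generator is sketched here in Python, with punch names from the description and illustrative timing assumptions:

```python
import random

PUNCHES = ["jab", "cross", "hook", "uppercut"]   # the 4 most basic punches

def generate_round(seconds=180, seed=None):
    """Randomly generate one 3-minute round as (time_offset, combo) cues.

    Combo lengths and rest intervals are illustrative assumptions.
    """
    rng = random.Random(seed)
    cues, t = [], 0
    while t < seconds:
        combo = [rng.choice(PUNCHES) for _ in range(rng.randint(1, 4))]
        cues.append((t, "-".join(combo)))
        t += rng.randint(3, 8)                   # a few seconds between combos
    return cues
```

Each cue could then be rendered to audio (the author used a voice actor plus Sox for mixing) and stitched into a level.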
1940's Boxing Simulator
Imagine a time machine brings back a boxing coach from the 1940's. That's pretty much this game in a nutshell
['Solomon Ilochi']
[]
['firebase', 'node.js', 'sox']
19
10,443
https://devpost.com/software/devpost-skill
Inspiration As an avid hackathon participant, I noticed that there is no Alexa skill that lets you know when hackathons are about to happen. I hope this skill can help me and others better manage their hackathons. What it does Lets users check the latest Devpost hackathon challenges and add them to their calendar. How I built it Built with NodeJS and the Amazon Alexa Skills Kit. Challenges I ran into Finding a way to scrape consistent data from Devpost, since there is no official API. Accomplishments that I'm proud of What I learned I learned a lot about using Amazon APL What's next for Devpost Skill Still thinking Built With amazon-alexa google-calendar javascript Try it out github.com
Devpost Skill
Alexa Skill that helps you view and manage Devpost Hackathons.
['Derrick Wilson-Duncan']
[]
['amazon-alexa', 'google-calendar', 'javascript']
20
10,443
https://devpost.com/software/healthy-meals-j73ub9
Inspiration This is my second Alexa skill and the first one I created using the Conversations API; it is very helpful and saves a lot of time when creating the back end. I joined the hackathon since I enjoy learning new things and buying gadgets I can develop for. What it does A tool that generates full meal plans in less than a second, fully customizable based on your preferences. How I built it I used Node.js and a lot of math to solve equations that generate a menu based on the user's age, height, and weight. Challenges I ran into It was kind of confusing to understand how to create Alexa Conversations dialogs, but watching the Twitch videos helped a lot. Accomplishments that I'm proud of I'm proud that I was able to finish a very functional MVP and the video 10 minutes before the deadline!! I was also able to create 3 Alexa Conversations APIs: one to generate the meal plan, one to swap food using context carry-over, and the last one to generate a list. What I learned I learned how to use Alexa Conversations and create dialogs for it. What's next for Healthy meals For the next version I will store the user session in a database, so I can delegate the dialog and not ask for the data again. I will also add APL to show the menu in a list. And finally, I will create meal plans in advance, like the whole week, so the user can have a full shopping list and order it online. Built With alexa node.js
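The author doesn't state which equations size the menu from age, height, and weight; the Mifflin-St Jeor formula is a common choice for that kind of estimate, so here is a sketch along those lines (the activity factor and the meal split are invented assumptions, and the skill itself is in Node.js, not Python):

```python
def daily_calories(age, height_cm, weight_kg, gender="male", activity=1.4):
    """Basal metabolic rate (Mifflin-St Jeor) scaled by an activity factor."""
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age
    bmr += 5 if gender == "male" else -161
    return round(bmr * activity)

def split_into_meals(calories):
    """Rough illustrative split: 25% breakfast, 40% lunch, 35% dinner."""
    return {"breakfast": round(calories * 0.25),
            "lunch": round(calories * 0.40),
            "dinner": round(calories * 0.35)}
```

A meal-plan generator would then pick dishes whose calories roughly match each slot.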
Healthy meals
Customized meals based on your preferences
['Jose Agustin Granados Jimenez']
[]
['alexa', 'node.js']
21
10,443
https://devpost.com/software/opt-health
We started Opt Health in 2019 as a telemedicine platform for optimizing men’s health through proactive and preventive medicine protocols, including hormone therapy and sexual health. When the Covid-19 pandemic landed here in California in mid-March, we were getting close to launch but decided to put Opt Health on hold because the onboarding process requires an in-person visit between our licensed practitioners and our clients, and we felt it both tone deaf and logistically challenging to launch during those times. The same week, a physician friend of mine in San Diego who knew about Opt Health called to inquire whether it might be possible to use our platform to see patients at a social distance. Like many physicians and other healthcare workers, he was very concerned about his own, and his staff’s, direct exposure to patients who may have an active Covid-19 infection. Within days, several of his physician friends and colleagues also wanted to use our platform for the same reason. We saw a massive opportunity, not only from a business perspective to repurpose portions of our technology to create a new product line, but more importantly, to meaningfully contribute to the global fight against the coronavirus pandemic. We decided to build a Covid-19 telemedicine solution called StreamMD that allows licensed physicians to screen, treat, and test (we have access to FDA EUA home-collection test kits) patients while minimizing community spread, mitigating person-to-person infection, and reducing unnecessary and inefficient overflow of patients in emergency departments at hospitals. The addition of Alexa Skills to our telemedicine platforms is impactful for both our doctors and our clients. From a client perspective, the Alexa Skills will facilitate better health outcomes by encouraging compliance with medical instructions and guidelines. 
With the use of HIPAA-compliant Alexa Skills, we will have further capability to communicate protected health information (PHI) including messages from providers, specific medication reminders, patient journal entries that can be read by their doctors, medical suggestions based on their blood work panels, discuss sensitive issues such as ED and hormone imbalance. In the current iteration, not using HIPAA-compliant Alexa Skills, we created conversational dialogues using natural language for our clients to become educated on health tips relating to their specific wellness goals that were formulated by our physicians. From the doctor/provider perspective for both Opt Health and StreamMD, we learned that improved patient compliance with the doctor’s orders not only improves patient health outcomes, but impacts the overall health system in positive ways. Non-compliance is expensive because patients have to come back to the doctors without achieving positive results. It creates time waste and inefficiency in a system where resources are already stretched thin. Our technology is also in the process of being whitelabeled for use with internal medicine and family medicine practices, where the Alexa Skills will provide additional value. In these practices, patients are often rushed out with a pile of paperwork and a couple months later they're still in the same spot. They are overwhelmed by the documents and often do not read them. An Alexa Skill would facilitate the digestion of this information through the use of natural language and conversation, which many patients prefer over reading and will also save the providers’ time from repeating information that was already communicated. Our doctors will be recommending Alexa to their patients after the consultations or visits for this educational information, as well as reminders for follow-up instructions. 
Typically, patients want to follow their doctor’s orders; however, they do not if they are overwhelmed or the information is overly complicated. We believe the Alexa Skills solve this issue by simplifying and clarifying that information. Further, in the internal medicine arena, there typically is no insurance reimbursement to providers for these follow-up communications. The more they can be streamlined, the better. In sum, we believe that Alexa Skills will help generate better health outcomes in our double-sided network of patients and doctors. We see a world where physicians use Alexa as another tool in their toolkit to improve care and outcomes, and patients are happy to comply because of ease of use and the enjoyment of better health. We built our Alexa Skill by creating a slot for patient goals and mapping various tips to each patient goal. We used four sample annotated dialogs to create a model that drives the conversation toward understanding the patient’s goals and the number of tips they’d like to receive. One of the challenges we faced was that our desired invocation name “Opt Health” would not be recognized by Alexa. As a solution, we updated the invocation name to “Health Optimizer.” We also needed to add a fourth dialog with all of the API arguments collected in one utterance, which allowed our model to work! Built With alexa alexa-conversations javascript
Opt Health
Opt Health Alexa Skills improve overall health outcomes by enhancing patient compliance and streamlining post-clinic patient education.
['Greg Tidwell']
[]
['alexa', 'alexa-conversations', 'javascript']
22
10,443
https://devpost.com/software/halloween-at-home
Inspiration Curious to learn how Alexa Conversations works and brainstorming ideas with the current pandemic situation in mind, I created Halloween at Home to help families get some fun movement in with Halloween characters! What it does Halloween is always fun for all! Though we are at home, let's try to make it more fun and get into action with some special Halloween creatures. It's so simple: let's create characters just by voice!! Remember, get ready to move and have fun with some of our all-time favorite Halloween characters!! Users can make choices about the creature's size, how active it is, and whether it should be silly or happy! Finally, they reveal their mystery character and special details about it, and can have fun pretending to be that character! How I built it I learned Alexa Conversations using the Pet Match tutorial and created my own version as Halloween at Home! Challenges I ran into I tried adding images and audio effects for every character, but I couldn't resolve the issues and make them work, so I finally ended up creating a text-only version. Accomplishments that I'm proud of Finally, I was able to reveal a mystery character by having a conversation with Alexa! What I learned How to create dialogs, arguments, and their respective APIs and responses, and make Alexa Conversations work. What's next for Halloween at Home! Incorporating sound effects and APL for the characters and movements. Built With alexa amazon-alexa conversations node.js
Halloween at Home!
Just use your voice, have a little conversation with Alexa, make choices, create a mystery Halloween character and get energized by doing some actions!
['Lakshmi Priya Venkatesan']
[]
['alexa', 'amazon-alexa', 'conversations', 'node.js']
23
10,443
https://devpost.com/software/cell-phone
Inspiration I love using Alexa for controlling my smart home, listening to music, and playing quizzes. But I was always missing features that would help me stay connected with my friends while I am at home alone. I wish I could say "Alexa, I want to talk with my friend", but none of my friends have an Alexa-enabled device yet, and calling phone numbers is not available in my country. If you are an Alexa user in the US or Great Britain, you always get the latest features and can freely call any person's mobile. But all of that becomes unavailable outside those countries, for most people in the world. Even within them, Alexa can't help you stay connected with parents and friends living in other countries. You can't send text messages to people who haven't signed up for Alexa Messaging. This makes it harder to convince friends to use smart speakers alongside me and enjoy all the possibilities we could have. I thought I could actually solve this problem, and solve it for others too. What it does The Cell Phone skill helps you call or text your friends in any country via mobile phone, no matter whether they have an Alexa device or not. Tell it your phone number and the phone number of your friend. The skill will call you and your callee over the phone and create a joint conference so you can talk to each other. You can also use this skill to ask another person to call you back if you miss them, need attention, or just want to talk. The skill will call or text that person and kindly ask them to call you back. How I built it The skill is written in Python and lives on AWS as a Lambda function. For cellular connectivity, I use Twilio. For data storage, the skill uses PostgreSQL deployed in RDS. Challenges I ran into Initially, I started building this skill using the old-fashioned approach, with intents. I had to write tons of code to handle all the possible paths of a conversation. 
But after the announcement of Alexa Conversations, I tried it and instantly loved how simple and fast it became to build complex skills. In this skill, I combined the two approaches to serve the initial goal of making phone calls available for Alexa users in all countries (since Alexa Conversations, for now, is available in the US locale only). Another challenge was gathering user input for sending text messages. The built-in slot type "AMAZON.SearchQuery" doesn't allow phrases with other words, making it impossible to say a natural "text {message} to my mom". I tried making a custom slot type and writing down possible variations for all use cases, but this didn't work well either. So I finally decided to switch back to "AMAZON.SearchQuery" and split the conversation flow into 2 steps: "Text my mom", then saying the message. The same problem occurred when I tried to use this slot type with Alexa Conversations, so for the dialogs-enabled version I had to use a custom slot type. The third challenge I faced was integrating in-skill purchases. For ordinary mobile phone operators, it's natural to offer clients several plans or packages that include a limited number of minutes and SMS, and it's natural for clients to easily switch between these plans depending on their calling needs. But with Alexa skills, it is not possible to easily "upgrade" or "downgrade" an existing subscription: the user has to cancel one first and then purchase another, and it is also impossible to offer discounts to some users. To overcome this barrier I developed the following monetization scheme. The user purchases a basic plan and then has the option to purchase upgrades to increase limits: Medium plan = Small plan + upgrade 1 Big plan = Small plan + upgrade 1 + upgrade 2 Technically it's three different subscriptions, but they work together. 
This greatly increases the number of ISPs that must be defined (especially if we want to offer a discount) but gives some flexibility and personalization. Accomplishments that I'm proud of I love that many people will now have more ways to connect with each other and to request someone's attention when they need it, using Alexa. It wasn't an easy task to build this skill while holding a full-time job during an ongoing revolution in my country (Belarus). I faced many challenges and had to keep many small nuances in mind, but I am proud that I did this alone within the hackathon and even submitted it before the deadline :) Also, when I was choosing the right pricing model to keep it running and pay the Twilio bills, I found out that people will actually be able to save money on phone calls using my skill. In most countries, you're not charged for incoming calls, and roaming calls are much more expensive than the subscription required to use the skill. What I learned I learned how to use Alexa Conversations and went deeper into designing engaging and easy-going voice user interfaces. I have mastered building complex skills that combine several approaches and work with a relational DB, webhooks, and third-party services. What's next for Cell phone I want to add the ability to enter phone numbers using a dial pad on screen-enabled Alexa devices and add some more visuals with APL. I will definitely add localizations and launch the skill in more countries. If more people learn about this skill and use it, I will be able to launch it even in non-monetizing locales to make it available to a larger audience. Built With amazon-rds-relational-database-service amazon-web-services ask isp lambda postgresql python twilio
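The plan-composition scheme above (Small as the base, Medium = Small + upgrade 1, Big = Small + upgrade 1 + upgrade 2) amounts to summing limits over the user's active entitlements. A sketch with illustrative limit numbers, not the skill's real plans:

```python
# Illustrative minute/SMS limits per in-skill product (not the real plans)
PLANS = {
    "small":     {"minutes": 30,  "sms": 10},
    "upgrade_1": {"minutes": 60,  "sms": 20},   # Small + upgrade 1 = Medium
    "upgrade_2": {"minutes": 120, "sms": 40},   # + upgrade 2 = Big
}

def effective_limits(entitlements):
    """Sum limits over all active ISP entitlements to get the effective plan."""
    totals = {"minutes": 0, "sms": 0}
    for isp in entitlements:
        for key, value in PLANS[isp].items():
            totals[key] += value
    return totals
```

The entitlement list itself would come from the Alexa Monetization Service; this sketch only shows how the three subscriptions compose into one effective plan.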
Cell phone
Skill helps you call or text your friends in any country via mobile phone no matter if they have an Alexa device or not. No need to install additional applications.
['Pavel Litvinko']
[]
['amazon-rds-relational-database-service', 'amazon-web-services', 'ask', 'isp', 'lambda', 'postgresql', 'python', 'twilio']
24
10,443
https://devpost.com/software/escape-paradise
Inspiration With the new rules imposed in 2020, travel and escape rooms are some of the things we have missed the most. Therefore, we decided to bring the joy of both into the comfort of our home. Roughly at the same time we found out about Alexa Conversations, and we realised that it was the missing element to build our game. The story of the game was inspired by the adventures of Robinson Crusoe, but we've given it a modern twist. What it does Escape Paradise is like a maze where you need to make decisions at every step. The aim is to escape the Paradise Island whilst avoiding all of its dangers. In this respect, the game and the experience are very similar to an escape room. You have to use your logic and take advantage of what's available, while working towards your main goal: getting back home. How I built it Leveraged Alexa Conversations and Lambda functions. Challenges I ran into Understanding how Alexa Conversations works was probably the biggest challenge. It's quite a different experience from building Alexa skills the normal way, where you rely more heavily on coding. What made this even more difficult is that, it being quite a new feature, there wasn't much material available for us to look at, so it was a lot of trial and error. The second challenge worth mentioning was coming up with the content. It's surprisingly difficult to make an engaging game about escaping an island while also limiting the duration of the adventure to only 2 days (in virtual time). For now, we've decided to limit the duration of a round to 2 (virtual) days, as we believe this gives players the ideal session length. Accomplishments that I'm proud of Being able to deliver a fully functional skill on quite short notice, without compromising on quality. What I learned How to use Alexa Conversations and how to write content for strategy games. 
What's next for Escape Paradise There are many improvements that we’re considering. The quick wins are to add more possible paths to the game. Depending on the adoption rates and the feedback that we’ll get, we will also extend the duration of the journey. Last but not least, we hope to include SFX (sound effects, and maybe images for some devices) in the game, in order to enhance the user experience and make the game more enjoyable. These will be very useful for the longer sessions, as they will provide diversity and keep the player engaged. Built With amazon-alexa amazon-web-services node.js
Escape Paradise
Escape Paradise is a strategy game, in which you need to find your way out of an abandoned island. To succeed, you need to explore the surroundings, find the right items and make good decisions.
['MSIM Consulting']
[]
['amazon-alexa', 'amazon-web-services', 'node.js']
25
10,443
https://devpost.com/software/solo-project
Inspiration Young minds learn best when taught in an interactive fashion. Also, they love to hear animated voices more than ordinary human voices or even Alexa's voice. Our skill considers these factors and presents a character, Finn, for them to explore countries and learn about festivals with. The skill is targeted towards 3-9-year-old children. The skill is designed so that children can learn on their own, or parents can invoke the skill and walk through it with their children. What it does Once the user launches the skill by saying, "Open Adventure Time," they can use it in two ways: an interactive mode that helps them learn about festivals, and an interactive mode for exploring countries and learning about them. If the user wishes to explore a country, Alexa randomly selects a country for the user to explore, or explores one of the user's choice. If the user wishes to learn about festivals, Alexa randomly selects a festival to learn about, or starts with one of the user's preference. Our primary goal is to teach children while they play an interactive game, because we believe that's how they learn best. How I built it We built the backend using AWS services and the VUI (Voice User Interface) using the Alexa Skills Kit SDK. All the data consumed in this project is hosted on the AWS S3 cloud service, and all the testing/monitoring is done using AWS CloudWatch. Challenges I ran into Extracting content and structuring it in a way that makes our skill flow naturally and keeps users interested was a significant challenge. What's next for Adventure Time We are planning on adding more interactive chapters, so the user has a more extensive range of countries and festivals to choose from, and we want to expand the skill to include sections that similarly teach basic maths and science. Also, visuals are more appealing than plain audio, so we are planning on incorporating APL (Alexa Presentation Language) in our skill.
Built With alexa amazon-alexa amazon-cloudwatch amazon-web-services node.js postman
Adventure Time
A conversational story based Alexa skill for engaging users in an interactive fashion to explore countries and learn about festivals.
['Somil Gupta']
[]
['alexa', 'amazon-alexa', 'amazon-cloudwatch', 'amazon-web-services', 'node.js', 'postman']
26
10,443
https://devpost.com/software/noodle-oracle
Inspiration My mom could always put something together with whatever we had at the moment. What it does Asks 3 simple questions on noodle type, broth, and toppings, and gives you a recommendation for a noodle soup and tips for making it yum! How I built it Alexa Conversations to fill the options for noodles, broth and toppings, with AWS on the backend. Challenges I ran into The Alexa Conversations UI is a tough slog and sometimes buggy. Accomplishments that I'm proud of Managed to figure out the Alexa Conversations UI to a certain degree. What I learned Dialog automation is hard. What's next for Noodle Oracle More data and better conversation flows. Built With alexa amazon-web-services conversatoons node.js
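The three-question flow described above could be sketched roughly like this (the recipe table, function name, and tip text are all invented for illustration, not the skill's actual logic):

```python
# Hypothetical sketch: map the three answers (noodle, broth, topping)
# to a soup recommendation plus a cooking tip. Data is invented.
RECIPES = {
    ("ramen", "miso"): "miso ramen",
    ("ramen", "tonkotsu"): "tonkotsu ramen",
    ("udon", "dashi"): "kake udon",
}

def recommend(noodle, broth, topping):
    # Fall back to a generic "broth noodle soup" for unknown combinations.
    base = RECIPES.get((noodle, broth), f"{broth} {noodle} soup")
    return f"Try {base} topped with {topping}. Tip: taste the broth before salting."

print(recommend("ramen", "miso", "scallions"))
```

In the real skill, Alexa Conversations would fill the three slots over several dialog turns before a lookup like this runs in the backend.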
Noodle Oracle
Now that people are self-isolating, many are picking up a new hobby of cooking at home. Some do it better than others. Noodle Oracle helps everyone be at least mediocre at cooking ramen, udon and such.
['Maxim Makatchev']
[]
['alexa', 'amazon-web-services', 'conversatoons', 'node.js']
27
10,443
https://devpost.com/software/roulette-game
Inspiration I was inspired by the Pizza reference skill. I already had an idea in the past of making a roulette game, but Alexa Conversations made it easy to ask for arguments in any order and, with a confirming API, to fix mistakes. What it does You can simply play roulette for fun. There is a weekly leaderboard for the luckiest players. All regular bets are available. How I built it I built it with Alexa Conversations, Node.js, Lambda, and Amazon GameOn for the leaderboard. Challenges I ran into Some issues with Alexa Conversations, such as how context is reset at the end of a dialogue; debugging was not very easy, as this is a beta feature. Accomplishments that I'm proud of I was able to complete this first version of the game and integrate it with other functionality, like GameOn. What I learned I've learned how to use Alexa Conversations! It could be useful for some of the skills I have live, but I will wait for the beta to be done. What's next for Roulette Game Add a re-bet capability when you lose; improve the user experience (i.e. variation of Alexa's answers); integrate with ISP if I find a way of making it fair for all players. Built With hosted-skill lambda node.js
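The "regular bets" mentioned above follow the standard roulette table odds; a hedged sketch (not the author's actual code) of how a resolved bet could adjust a player's coin balance:

```python
# Standard European roulette payout multipliers (winnings per unit staked).
# The settlement function itself is an illustrative sketch.
PAYOUT = {"straight": 35, "split": 17, "street": 11,
          "corner": 8, "dozen": 2, "column": 2,
          "red": 1, "black": 1, "even": 1, "odd": 1}

def settle(bet_type, stake, won):
    """Return the balance change for a resolved bet: winnings if won, else the lost stake."""
    return stake * PAYOUT[bet_type] if won else -stake

print(settle("straight", 10, True))   # 350
print(settle("red", 50, False))       # -50
```

A weekly leaderboard would then track each player's highest balance and reset the scores at the start of each week, as the skill's description states.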
Roulette Game
You can play roulette with Alexa Conversations! Participate in a weekly competition: you start with 1000 coins and your highest balance stays on the leaderboard. Score and balance reset every week.
['gautier fauchart']
[]
['hosted-skill', 'lambda', 'node.js']
28
10,443
https://devpost.com/software/amgen-pipeline-for-biotech
Submitted for certification Inspiration I am interviewing for a job at the firm, and my inspiration was the need for employment! However, when I started working with Alexa Conversations, it turned out to be a great experience. I haven't been able to stop. What it does The skill simply gives you a drug recommendation based on your requirements. However, these are drugs that are in the pipeline, which means that the world is yet to see them. This skill tells you what's coming in biotech. How I built it I followed the Pet Match tutorial and used Node.js. I had a lot of help from the Alexa developer community, especially Kevindra. Challenges I ran into I was unable to get the description added to my audio response after I get the drug recommendation. I also ran into multiple build issues in the beginning, since I am new to this, but was able to solve them using the Slack community support. Accomplishments that I'm proud of Just being able to complete one piece of functionality has been a good experience for me. Of course, there is so much more to do! What I learned I am fairly clear on how to incorporate Conversations in every skill I will work on. What's next for Amgen Pipeline for Biotech Adding the description functionality and automating data reloads for the drug updates. Built With node.js
Amgen Pipeline for Biotech
Amgen Pipeline uses Alexa Conversations to give you a drug from the leading biotech company's pipeline, based on your specific criteria - clinical trial phase, therapeutic area, and condition.
['Dilip Merala']
[]
['node.js']
29
10,443
https://devpost.com/software/healthycare
Inspiration My response to COVID-19. COVID-19 has overwhelmed the hospital care system, so why not allow voice conversations to ease the burden? Once a patient has added their medication, it's easy for our API to cross-reference data on that medication's side effects, offering peace of mind to a user who in other circumstances would have to call or go and visit their local doctor. What it does Empowers patients through voice chat to improve and promote wellness by simply sharing a conversation. Our data scientists collaborate with patients and caregivers to help streamline medical knowledge and data. How I built it Alexa's new Conversations feature allowed me to leverage the latest secure HIPAA medical collaboration APIs, machine learning technology, advanced natural language processing, and proprietary predictive contextual messaging to help improve patient adherence outcomes and lower cost. Challenges I ran into The speed of getting a response from an external API whilst making sense of a wide variety of our health data and tailored user knowledge; however, the attendant can now be used to ask questions about medication directions or for general advice on drugs and side effects. Accomplishments that I'm proud of Identifying what customers are doing, when and where, with our intelligent product. Different knowledge data schemes will help us quickly build experiences and make page adaptation easy and simple, whatever your tech. What I learned Encouraging more active health care conversations - powering the overall trend in tailored interventions, understanding the patient's language and everyday activities to deliver relevant information faster and more accurately. What's next for Healthycare Cognitive customer service messaging: AI algorithms, machine learning technology, advanced natural language processing, and a proprietary clinical contextual bot algorithm. Built With amazon-alexa amazon-web-services api context-voice
Conversational voice health diary for recording daily meds
With COVID-19 on the rise, half of all medication is not taken as prescribed. Through conversational engagement layers and improved communications we tackle patient wellness head on.
['David Suter']
[]
['amazon-alexa', 'amazon-web-services', 'api', 'context-voice']
30
10,443
https://devpost.com/software/willow-mood-tracker-ym8zwq
Inspiration Stress (induced by cortisol) is a factor in 70% of diseases, including cancer. Mindfulness-based stress reduction techniques such as yoga and meditation are proven to calm the fight-or-flight response and restore the immune system. De-stress Me aims to make healthy coping skills for stress more accessible to people. What it does The user tells Alexa what he or she is feeling. Alexa asks questions to gauge how the user wants to de-stress - i.e. by recommending physical activities, or by talking it out. Alexa recommends short or long activities per user input. How I built it I built the app using Node.js with an AWS Lambda endpoint. Challenges I ran into This is my very first Alexa Skill project, so learning the features of the Alexa Developer Console and connecting to AWS Lambda was a learning curve. Accomplishments that I'm proud of Getting the basic dialog flows set up. What I learned My biggest takeaway from this project was the difference between the basic Alexa Skills functionality vs. the Alexa Conversations functionality. Setting up dialog flows with Alexa Conversations was definitely more challenging! What's next for Willow Mood Tracker Adding more activity recommendations; fleshing out the "Talk it Out" functionality, where users can use Alexa to talk about what's making them stressed out; getting the skill AWS certified. Built With amazon-web-services javascript lambda
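The short-or-long activity recommendation described above could look roughly like this (the activity list and function name are hypothetical, not the skill's actual data):

```python
# Illustrative sketch: pick a de-stress activity matching the user's
# preferred style ("physical" vs. talking it out) and session length.
import random

ACTIVITIES = [
    {"name": "box breathing", "style": "physical", "minutes": 5},
    {"name": "neighborhood walk", "style": "physical", "minutes": 30},
    {"name": "gratitude journaling", "style": "talk", "minutes": 10},
]

def suggest(style, max_minutes):
    options = [a for a in ACTIVITIES
               if a["style"] == style and a["minutes"] <= max_minutes]
    return random.choice(options)["name"] if options else None

print(suggest("physical", 10))  # box breathing
```

The real skill gathers `style` and length through Alexa dialog turns before a lookup like this runs in the Lambda endpoint.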
De-stress Me
De-stress Me is a digital coaching app that recommends activities to help you relieve your stress in the moment.
['C Tadina']
[]
['amazon-web-services', 'javascript', 'lambda']
31
10,443
https://devpost.com/software/formula-1-calendar
Inspiration Having been fond of Formula 1 for a long time, I thought it would be a good idea to make a conversational skill with the Alexa Skills Kit that would provide me the event details whenever I want. What it does It tells me where and when the next race is, according to the current date (dynamic). It tells me which races are left and when they are scheduled (dynamic). Also, it gives me the whole calendar of races for the 2020 season (static). How I built it I built it using Python, starting from a scratch skill template. I built the intents myself, wrote the logic for the dynamic content, and gathered the data myself. Challenges I ran into The challenge I ran into was that it was my first time making a proper skill and I had to learn by myself how to use everything. Also, I had to design the logic for the dynamic content. Accomplishments that I'm proud of I am proud that, on my first try, I could make such a skill and get it published. What I learned I learned about intents, making the skill conversational, programming for dynamic content, and how to handle requests and outputs to make it more natural. What's next for Formula 1 Calendar The next update will be support for the Hindi language, and also other things like wins and news. Built With amazon-alexa python Try it out github.com
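The dynamic "next race" logic described above might be sketched like this (a minimal sketch; the calendar entries are illustrative and the function name is invented):

```python
# Sketch of next-race lookup: given today's date, return the first
# race on or after it, or None if the season is over.
from datetime import date

CALENDAR = [
    (date(2020, 7, 5), "Austrian Grand Prix"),
    (date(2020, 7, 19), "Hungarian Grand Prix"),
    (date(2020, 8, 2), "British Grand Prix"),
]

def next_race(today):
    upcoming = [(d, name) for d, name in CALENDAR if d >= today]
    return min(upcoming) if upcoming else None

print(next_race(date(2020, 7, 10))[1])  # Hungarian Grand Prix
```

The same filtered list also answers "which races are left", and serving the full `CALENDAR` covers the static whole-season request.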
Formula 1 Calendar
Get your F1 session calendar updates with this skill
['Aadil Tajani']
[]
['amazon-alexa', 'python']
32
10,443
https://devpost.com/software/bird-tour
Inspiration Our inspiration came from talking to people who enjoy birdwatching from the comfort of their own homes. Due to the pandemic, they have been unable to travel much, so the types of birds they see are limited. We wanted to provide an opportunity for people to still listen to and learn about new types of birds without requiring them to travel. What it does Bird Tour is an interactive birdwatching tour in the Rocky Mountains. Users can choose which trails to take, and their decisions impact the types of birds they will encounter. Users can listen to bird calls, learn new facts from the guide, and even attempt to answer questions about the birds. How we built it Based on our inspirations, we created a persona who is curious, likes birdwatching, and is open to technology. However, our persona had limited bird knowledge and was stuck at home. They were looking to apply bird knowledge to real-life birdwatching. We had Alexa take the role of a guide to teach about birds and lead users on an interactive journey. We imagined what the user would hear and see during the birdwatching tour. This allowed us to make a more conversational design. To create a lifelike experience, we collected public domain bird call recordings from the National Park Service and used Speech Synthesis Markup Language (SSML) to combine the bird calls, environment soundscapes, and the guide's voice. Also, there are small quizzes to create more interaction. Challenges we ran into The challenge was that we wanted to create a lifelike experience. The environment soundscapes and bird calls needed to align perfectly with the guide's voice. There were many different sentence lengths and bird calls that needed to be adjusted. This meant that the SSML formed a huge hierarchical structure. Accomplishments that we're proud of It was difficult to get the tone of the answers correct.
Since we used a synthetic voice, the dialogue needed to be written in a way that didn't sound snarky when a user gave an incorrect answer. We are proud of the final dialogue we wrote, which keeps the friendly and encouraging voice and tone we were aiming for. What's next for Bird Tour We plan to expand the library of bird sounds and facts so that users can encounter a greater variety of birds on their tours.
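The interleaving of narration and bird-call audio described above could be sketched as follows (the helper function and clip URL are placeholders, not the team's actual code; SSML `<audio>` tags embed a pre-recorded clip in Alexa's spoken response):

```python
# Illustrative sketch: build an SSML string that mixes the guide's
# synthesized narration with a recorded bird call.
def guide_step(narration, clip_url):
    return (
        "<speak>"
        f"<p>{narration}</p>"
        f'<audio src="{clip_url}"/>'
        "</speak>"
    )

ssml = guide_step("Listen for the mountain chickadee.",
                  "https://example.com/chickadee.mp3")
print(ssml)
```

Nesting many such steps, plus soundscape clips, is what produces the "huge hierarchical structure" the team mentions.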
Bird Tour
Bird Tour
['Isaac Gutjahr', 'Ya-ching Tsao', 'Oliver Greive', 'Kae-Yang Hsieh']
[]
[]
33
10,443
https://devpost.com/software/lara-persons
Inspiration The inspiration for this challenge was a need observed in the current scenario, where many of us find ourselves working remotely and it is important to enable new communication channels between companies and their workers. Seeing that every day more users around the world have an Echo Dot in their homes, it is a great opportunity to use this device to interconnect work teams and provide them with the most up-to-date information possible. What it does LARA Persons connects the members of a work team with their organization. It is initially designed for the human resources area, with the expectation of expanding it to other departments of the organization. LARA Persons allows employees to make inquiries and requests to the human resources department, such as checking vacation days and salaries, generating requests, and receiving notifications from the company. For requests, documents are generated and subsequently sent to the email address registered by the employee. How we built it LARA Persons was built with Alexa Conversations, which provided the corresponding interactions for the demonstration; it also uses an example API hosted on Heroku and built with Node/Express.js, which is responsible for sending the example emails. Challenges we ran into One of the main challenges we faced when building the skill was to achieve interactions as realistic as possible, similar to working in a production environment. To solve this problem, we decided to create an API that provides different endpoints and allows the sending of emails. Another problem arose when modeling the skill: it had originally been designed using the interaction model, and there was an adaptation process to use Alexa Conversations, which allowed us to model the dialogues in a more natural way, resulting in a more fluid conversation.
Accomplishments that we're proud of Validating our knowledge of the Amazon Alexa development suite, and learning about and integrating the new capabilities that the suite offers, such as Alexa Conversations. These capabilities stand out for their easy learning curve and the simplicity of modeling and testing conversations. What we learned In the process of creating LARA Persons we learned how the new Alexa Conversations system works: creating dialogs and intents, modeling different paths depending on the user's response, using variables, publishing the skill, and integrating it with an Echo Dot. What's next for LARA Persons The next steps are to package the solution into a production version that integrates with a human resources management system developed by Gat-Blac, which will unlock the features and advantages offered by Alexa, especially considering the current scenario in which many companies are working remotely, where an Amazon Echo Dot will facilitate communication between companies and their workers. Built With alexa amazon-alexa express.js lamba node.js
LARA Persons
LARA Persons connects the members of a work team with their organization; it is initially designed for the human resources area.
['Jorge Garcés', 'Jonathan Hechenleitner', 'Nicolás Ávila Tapia']
[]
['alexa', 'amazon-alexa', 'express.js', 'lamba', 'node.js']
34
10,443
https://devpost.com/software/mental-care-therapy
Inspiration Mental health is one of the most important issues in the world at the moment, be it for teens or adults. Mental health awareness in India is very limited, and the social stigma attached to it makes it an almost ignored topic. By using our app, anyone and everyone can track their mental health by monitoring their daily behavior and mood. The most important yet most ignored aspect of mental health is awareness, which our app triggers. It will help people improve their mental health and prevent bad behavior, anxiety attacks and anger pangs, and also prevent suicidal thoughts through the inputs and suggestions our app gives them. What it does Mental Care Therapy has these unique features: Meditate - allows the user to meditate in a peaceful environment by providing relaxing meditation audio. Anger Control - allows the user to manage their anger by providing peaceful audio. Help - allows the user to get the location of the nearest hospital in case of emergency. Motivate - plays motivational speeches or songs. Counting - helps people with dyslexia learn to count numbers. Breathing Exercise - provides step-by-step breathing exercise guidelines and helps people do the exercise in the right way and with the right mind. Challenges we ran into Integrating the MapmyIndia API with the Alexa skill, and connectivity with Amazon Web Services. What's next for Mental Care Therapy A habit analyser to track good and bad habits. The journal feature is a great way to practice self-care by reflecting on the day, noting any distressing thoughts, and documenting how you overcame them. Built With amazon-alexa amazon-web-services mapmyindia-api voiceflow Try it out alexa-skills.amazon.com
Mental Care Therapy
Mental Care Therapy is an Alexa Skill that helps you cope with depression, anxiety, and stress, and find the nearest hospital in an emergency.
['Atishay Srivastava', 'Neel Kukreti']
[]
['amazon-alexa', 'amazon-web-services', 'mapmyindia-api', 'voiceflow']
35
10,443
https://devpost.com/software/movie-finder
Inspiration My wife and I often struggle with the same question on many evenings: what movies haven't we seen yet, or which are worth watching again? Streaming services offer tons of movies, but it takes a lot of time to navigate through all the categories and pages, and in the end you can't decide... Imagine a friend tells you about a good movie. Chances are high that you will consider watching it, because someone directly addressed you and your brain's auditory center was involved. That's where Alexa comes in handy and can make a difference compared to the tons of movie recommendation websites and streaming services: simple, easy to use and without any distractions - a couple of movie recommendations matching your mood, plus the ratings of millions of others. This Alexa skill makes use of the new Alexa Conversations features. What it does It asks you what genre (or list of genres!) you're up to today, and also whether you'd like to see blockbusters only or are less picky today. Alexa will then look up movies matching your criteria and prepare a list for you. She will also tell you if others are loving the movie or if it's an acquired taste. Movie Finder knows over 550,000 movies and you can pick from 28 different genres: Romance, Documentary, News, Sport, Biography, Drama, Crime, Adventure, Fantasy, Comedy, War, Family, History, Sci-Fi, Western, Thriller, Mystery, Horror, Action, Music, Animation, Musical, Film-Noir, Adult, Reality-TV, Game-Show, Talk-Show, Short. How I built it I made use of a free but reliable data source for movie metadata. The data is stored in an RDS that can be queried efficiently by my Alexa skill. I built the skill with a combination of Alexa Conversations (for asking what the user likes to watch) and traditional skill invocation to get the best out of both worlds. I tested it on the simulator and on an Echo Dot.
Not to forget to mention that I was happy to have some great debugging assistance from Kevindra of the Alexa team when it got tricky... Let's get started... It's always a good plan to first draw a picture of what it should look like before getting the hands dirty and the tools warmed up. So I used a virtual whiteboard to draw the dialogs and states I wished to have in the end, which would provide simplicity, meaningful results and a great UX. For example, by what criteria should the user be able to filter? If it's only one, the results will be fuzzy. If there are too many filter questions from Alexa, the result list will become more precise, but the user will become annoyed by the many filters before any results are delivered. I also conducted a user interview with my wife, asking about her strategy when looking for movies. So I ended up drawing an Alexa conversation asking for only two things: genre and the ratings of others. Then I checked out the Pizza reference example and watched some Q&A sessions on Twitch first to get me started with Alexa skills. When it comes to a skill like this, it's all about the data. So I checked if there are any free-to-use APIs or databases out there. I personally found IMDb to be the source for movie-related questions. I always check it out before watching a movie (but need to fiddle around with my laptop or mobile...). IMDb offers a free database for personal and non-commercial use. The raw data about movies, titles, genres, average ratings and number of votes from others is available as a zipped CSV and was uploaded by me to S3. It's then imported into an Aurora RDS. Aurora has a nice feature: the statement LOAD DATA FROM S3 loads data from CSV files stored in an Amazon S3 bucket into the database, without the need to write scripts or perform millions of INSERTs from your laptop. I then optimized the data and table structure for efficient searchability when asking for one or many genres (see the section Challenges I ran into below).
A separate M:N table in combination with indexes does the job. At start, my Alexa skill explains how to get the search started (i.e. what utterance the user can use to start a search). The skill then hands over to Alexa Conversations, which handles the back-and-forth, asking the user what genre they're up to and whether they care about the ratings of others. An RDS query is performed and the result stored in session attributes. Then Alexa Conversations hands back to the skill, which lets the user hear about the results. Each movie result contains the English title, genre (in case the user gave a list of genres) and the average rating of other watchers. If it's a match, fine. If the user has seen the movie recently or doesn't like it for some reason, an utterance like "next" steps forward to the next item on the list. Challenges I ran into Database: First, I considered using DynamoDB for holding my data, because of its high availability but also because I wanted to do a project with it. Also, many rows in the data source differ in completeness, so a key-value store could be an elegant solution without assuming what values are known for a movie. Another requirement was that a search should be performed in the range of milliseconds. But after taking the first steps, I quickly learned that indexes can only be put on top-level attributes, so no lists like genres (each movie can be tagged with zero, one or multiple genres). So I chose Aurora. After the raw data was imported elegantly from S3, I optimized the tables and data for search efficiency. I wanted to be able to search by genre, but the tricky part is that a movie can be tagged with multiple genres (e.g. Thriller and Action) and the data source just provided a comma-separated string for each row. So I needed to parse that string, dissect it into its list of values and then put that information into an M:N table allowing a fast and efficient lookup for particular genre(s).
An RDS also has the elegance of allowing complex join operations and conditions over multiple tables like ratings, title information and genres. I also created the necessary indexes for fast lookups, especially when doing complex joins over three of my tables. In the end, a search for the complete user-tailored movie list is performed in about 100 ms, which is totally okay in terms of user experience. Debugging: I used a non-US Alexa developer account, and I lost a lot of time during debugging wondering why certain things appeared in CloudWatch and others didn't. I started some skill sessions without a log being written, and at that time I didn't realize what went wrong. Together with the Alexa support team I figured out that, when using a non-US developer account, the CloudWatch logs in the North Virginia region contained only the Alexa Conversations logs. On a hunch I looked into the Ireland region and... tada, there were the skill's logs. It's a bit cumbersome to always open two regions and merge logs together by timestamp during debugging, but finding this out solved it for me. Accomplishments that I'm proud of Well, of course, building my first Alexa skill ever. Then, playing around with the beta of Alexa Conversations and seeing what it's capable of, making it easier for developers to concentrate on the content and not on state management. And last but not least, having something which helps us with our evening struggle of deciding on a movie to watch! What I learned Learning about some more features of Aurora, DynamoDB, Lambdas, some Node.js libraries and of course the Alexa SDK and the APL-A editor was a good time for me. What's next for Movie Finder One ideal extension would be to send a link to the IMDb webpage, but unfortunately the Alexa app doesn't allow this. Hint: feature request ;) A user could also want to hear about movies their favorite actor or actress has starred in, so an extension to search by actor could be added. And of course i18n for other regions.
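The M:N genre table and indexed joins described above can be illustrated with SQLite standing in for Aurora MySQL (schema and sample rows are invented, and Aurora's LOAD DATA FROM S3 is replaced by plain INSERTs here):

```python
# Sketch of the M:N title/genre schema: one row in title_genre per
# (movie, genre) pair, with a composite index for fast genre lookups.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE titles (id INTEGER PRIMARY KEY, name TEXT, rating REAL, votes INTEGER);
CREATE TABLE genres (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE title_genre (title_id INTEGER, genre_id INTEGER);
CREATE INDEX idx_tg ON title_genre (genre_id, title_id);
""")
db.executemany("INSERT INTO titles VALUES (?,?,?,?)",
               [(1, "Heat", 8.3, 600000), (2, "Clue", 7.2, 90000)])
db.executemany("INSERT INTO genres VALUES (?,?)",
               [(1, "Action"), (2, "Thriller"), (3, "Comedy")])
db.executemany("INSERT INTO title_genre VALUES (?,?)",
               [(1, 1), (1, 2), (2, 3)])

# Three-table join: well-rated movies tagged with any requested genre.
rows = db.execute("""
    SELECT DISTINCT t.name FROM titles t
    JOIN title_genre tg ON tg.title_id = t.id
    JOIN genres g       ON g.id = tg.genre_id
    WHERE g.name IN ('Action', 'Thriller') AND t.rating >= 7.5
""").fetchall()
print(rows)  # [('Heat',)]
```

The comma-separated genre strings from the IMDb dump are split into one `title_genre` row per tag during import, which is what makes the multi-genre WHERE clause above both simple and index-friendly.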
Built With amazon-alexa amazon-web-services imdb mysql node.js
Movie Finder
Hanging around tonight on your couch, not knowing what good movie to watch? Just say, "Alexa, open Movie Finder" and let Alexa recommend movies loved by others. Discover & enjoy!
['Norbert Baumann']
[]
['amazon-alexa', 'amazon-web-services', 'imdb', 'mysql', 'node.js']
36
10,443
https://devpost.com/software/evox-ikigai
The Ikigai model for living a purpose-filled life. Questions to provide thoughtful self-inquiry; the answers give us data to create customized programs to support Ikigai Journey. Inspiring visuals for on-screen backgrounds. The Ikigai Journey starts with the "compass" to support our users in knowing where they are and where they are going next! Inspiration We see that with Alexa voice we have an amazing opportunity to support people in living happier, healthier, more fulfilling, and satisfying lives. By using AIML, and with the help of our incredible team of behavioral experts including psychologists, coaches, and personal development experts, our Ikigai Journey skill on Alexa empowers discoveries and choices to create new behaviors and habits that will support our subscribers to create, design and achieve lives they love. What it does Version 1 of our skill enters the cycle of exploration areas for finding purpose in life and invites people to share specific information about themselves and their beliefs, which gives us the ability to design a life roadmap that helps them achieve their goals. We use various AWS services and other services that help us understand patterns and behaviors to better direct our subscribers to video content, articles, coaches, and other services that help them become authors of their own lives. How we built it We work with a team of neuroscience and achievement experts to help write the conversations and to plan out those dialogs in an order that has been proven to help people make changes in their lives and build new, positive, supportive habits. Challenges we ran into No lie... it's a lot of work to build a skill designed to empower and support people with varied interests, issues, and goals. Version 1 of Ikigai Journey focuses on helping people understand where they are in the process of "re-inventing" themselves and their lives.
It will take significant effort to create a safe environment on Alexa for people to feel comfortable to share potentially delicate and personal information. Creating a secure, transparent experience for users to take advantage of all the opportunities we can build on Alexa will take trust-building at each step. Accomplishments that we're proud of We've published our first draft of our app and our preliminary Alexa Conversations have been successful. With the new features we saw launched on Alexa Live (7/22/2020), we now have even more opportunities to add more depth to our conversations. We've been able to send a secure link to our subscribers via their mobile devices and allow them to record a private journal in their own voice. Here we then use tools to understand the context and content of their entries and give them insights into how their "journaling" applies to the goals they have set-up in our Ikigai Journey skill. It is pretty cool for the user to see patterns of speaking and how some of the words and phrases used either support or detract from life-goals. What we learned Having intimate conversations on Alexa is going to take a lot of finesse, grace, and patience. Asking people to share themselves vulnerably in a time when many are skeptical about inappropriate "listening" and data use requires us to be extra cautious about how we ask and what we ask our subscribers while working toward supporting them with their life goals. What's next for evox Ikigai We're working on a larger set of conversations which will include many more features, including videos, potentially live Amazon Chime opportunities to speak with experts or group meetings, or online conferences. The UX will be a 30-day trip "around the wheel" of Ikigai Journey inquiries and prompts to examine areas of passion and purpose. 
With the addition of Amazon Comprehend sentiment analysis, pitch analysis, and Amazon Rekognition, curated content, courses, and coaching will be personalized for the user (using Amazon Personalize and Amazon Kendra as well, for supportive reading materials, video, and healthy lifestyle products). Built With amazon-cloudwatch amazon-ec2 amazon-rds-relational-database-service amazon-web-services angular7 ask-sdk aws-java aws-java-sdk comprehend elastisearch java kendra lambda oauth personalize postgresql rekognition springboot
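The journaling insight described above (words that support or detract from the user's life goals) could be sketched roughly as below. This is purely illustrative: the function name and word lists are hypothetical, and the actual skill relies on Amazon Comprehend rather than simple keyword matching.

```javascript
// Hypothetical sketch: score a transcribed journal entry against a user's
// goal keywords vs. a list of detracting words. Not evox's real pipeline.
function journalInsight(transcript, goalKeywords, detractors) {
  const words = transcript.toLowerCase().match(/[a-z']+/g) || [];
  const supporting = words.filter((w) => goalKeywords.includes(w)).length;
  const detracting = words.filter((w) => detractors.includes(w)).length;
  return {
    supporting,
    detracting,
    leaning: supporting >= detracting ? 'supportive' : 'detracting',
  };
}
```

A real implementation would replace the word lists with Comprehend's sentiment and key-phrase output, but the shape of the insight returned to the user would be similar.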
Ikigai Journey - a personal development & empowerment skill
An Alexa Skill that supports people in living lives of their own design, lives that they love to live. We help people focus on what they really want in life and how to skillfully get there.
['Andrew Mersmann', 'Xavier Dubois', 'suraj prakash sahu', 'Debabrata Patra']
[]
['amazon-cloudwatch', 'amazon-ec2', 'amazon-rds-relational-database-service', 'amazon-web-services', 'angular7', 'ask-sdk', 'aws-java', 'aws-java-sdk', 'comprehend', 'elastisearch', 'java', 'kendra', 'lambda', 'oauth', 'personalize', 'postgresql', 'rekognition', 'springboot']
37
10,443
https://devpost.com/software/parent-s-librarian
Inspiration As a parent, I'm constantly faced with the dilemma of giving my kids suggestions on what books they should read. Like any parent, I always want my kids to read more, but not knowing what content my children will enjoy creates hurdles for the discovery process. Generally speaking, titles come from other parents, children's friends, teachers, librarians, book stores, and web sites. But even then, how do I know these books will pique the interest of my kids? And more importantly, are they appropriate for their reading level? Wouldn't it be great to have a trusted source of recommendations that takes into account key factors such as the child's age, reading level, interests, and their favorite authors? Essentially, a single destination for book discovery, but one that is also extremely intuitive to use and readily accessible. Thus, Parent's Librarian was born. An ideal marriage of a leading-edge content discovery platform with Amazon Alexa technology and tools! What it does Parent's Librarian allows parents to discover children's literary content that is appropriate to their interests and reading level, based on the child's age, previously read books, and preferred authors. And it takes place in a frictionless environment by using natural language voice interactions via Alexa-enabled devices. Parent's Librarian also supports multiple recommendations so users are provided with more choices for their children. How we built it We built Parent's Librarian using Alexa Conversations to manage the dialog model and combined it with our proprietary software and associated algorithms (lambda functions). We use AWS-hosted services for our back end, including a relational database. Challenges we ran into Learning how to construct a dialog model using Alexa Conversations was challenging. The tutorials and sample skills are a big help, but it takes some time to learn the nuances of the system. 
Building and training the model for the skill can be time-consuming and requires patience. Often, cryptic build-failure messages, including the dreaded "Build model failed" with no accompanying info, make it challenging to track down and debug issues. Alexa Conversations can be prone to corrupted models (e.g. following changes to dialog variables and arguments) which trigger error messages that don't reflect the surface values. Sometimes, the only solution is to rewrite large portions of the dialogs (I believe this is a known problem and best practices exist now). The lack of a revision/code management system for Alexa Conversations makes it difficult to track changes and increases the risk of regression errors and build breaks. Accomplishments that we're proud of Providing book recommendations relies on a large amount of data, which can be extremely intensive to work with. We're proud of our platform's algorithms, which can analyze the data and produce suggestions that are both tailored to the interests of the child and appropriate for their reading level. Even more impressive is our ability to generate recommendations in a timely manner. What we learned Testing your skill in the console is quick and easy but doesn't reflect the nuanced interactions that happen with an actual device (e.g. time-outs, accuracy, natural conversation, etc.). The bottom line is it can be both a frustrating and a highly rewarding experience working with Alexa Conversations (AC). In its current beta state, AC isn't without its warts, but as a skills development and dialog model tool, it is definitely a game changer! What's next for Parent's Librarian The Parent's Librarian platform and its algorithms are constantly evolving to improve the suitability of its results for the target readers. Combined with a growing library of titles, the platform continues to provide better results drawn from an expanding content set for parents and kids. 
The team at Chatter Learning is extremely excited about the future of Parent's Librarian and a plethora of features and enhancements are planned for the skill. In the near term, support for Amazon display devices (e.g. Echo Show) as well as expanded invocation options are a priority. Built With alexa amazon-rds-relational-database-service amazon-web-services conversations node.js
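As a sketch of the kind of scoring a recommendation engine like this might use (purely illustrative; Chatter Learning's proprietary algorithms are not public, and all field names here are assumptions), one could rank titles by age fit, interest overlap, and preferred authors:

```javascript
// Illustrative ranking of children's books, NOT the skill's real algorithm.
// Age fit, interest overlap, and a preferred-author bonus are summed.
function rankBooks(books, child) {
  const score = (b) => {
    const ageFit = child.age >= b.minAge && child.age <= b.maxAge ? 2 : 0;
    const interests = b.topics.filter((t) => child.interests.includes(t)).length;
    const authorBonus = child.favoriteAuthors.includes(b.author) ? 1 : 0;
    return ageFit + interests + authorBonus;
  };
  // Sort a copy so the input list is left untouched.
  return [...books].sort((a, b) => score(b) - score(a));
}
```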
Parent's Librarian
A voice-enabled platform enabling parents to discover tailored and engaging literary content for kids
['Tony Lam']
[]
['alexa', 'amazon-rds-relational-database-service', 'amazon-web-services', 'conversations', 'node.js']
38
10,443
https://devpost.com/software/salazar-the-spark-bot
Diagram of Salazar the Spark Bot Alexa Demo: https://www.youtube.com/watch?v=bLcePe85mSo Inspiration I thought about what a sales manager looks for, and it boiled down to revenue. Seeing what leads are open, getting their value, and knowing what the month will look like are critical business operations. I was also inspired to make something visually appealing so it gets the point across in a way that words do not. Since the premise is that the user should stay in the Cisco Spark platform, I wanted to make it easy for users to access their Pardot information either by voice or by typing a quick command. What it does The user authenticates with Cisco Spark and Pardot. They are then able to get a monthly forecast of the estimated value of open leads, see the top 5 open leads, and know the probability of those leads closing within the month. As an added benefit, the user can also have Alexa read the messages aloud, create new rooms, and add new team members, all hands-free. The information is also shown as an infographic with real-time data from Salesforce Pardot. There are two prerequisites: You must have a Cisco Spark account. You must have a Pardot account. Go to https://emilytlam.com/pardot.html to authenticate your Cisco Spark account and Pardot information. You can type the following commands: /opportunities to get your top 5 leads for this month based on the probability of those leads, or /forecast to get the total number of leads and the sum of their value for this month. Or you can type Tell me about my opportunities and What's my forecast? Then view the infographic to see the pie chart and bar graph that are dynamically rendered. Easter egg: type yay! To use with Alexa: You must link your account with the Sal the Bot Skill in the Alexa app. You can say the following: Alexa.... "list my rooms." "read my messages." "create a new room." "get my teams." "create a new team." If you get lost at any time, you can say "menu", "help", or "stop". 
How I built it I used an OAuth flow to authenticate the user with Cisco Spark and Pardot. Their email and access tokens are stored in a DynamoDB database. API Gateway is used to receive POST messages from Cisco Spark. The webhook is registered to an AWS Lambda function that is exposed through API Gateway. This Lambda function handles requests from Cisco Spark when new messages are created. The function fetches the user's information from DynamoDB, makes the necessary calls to the Pardot API, and sends a POST message to Cisco Spark using HTML markdown. From there, the user can click on a link that renders the infographic of the information. Challenges I ran into I would have liked to render the SVG file directly into the Cisco Spark platform, but it was not one of the file types supported by the API. I also had difficulty setting up a bot, but once I understood the concept of webhooks it clicked for me. Then I wasn't entirely sure I needed a bot after all because of the limitations (the bot has to be in the chat room, has to be mentioned, etc.). The concept of an integration, application, or a bot was definitely blurred for me as I traversed between Alexa and my integration that turned into an application but acted like a bot. As a result, it was difficult debugging access tokens because in some cases I wasn't sure if it was Pardot, Cisco, API Gateway, or Lambda, or if an access token had simply expired. Accomplishments that I'm proud of I'm proud of building an application with many moving parts and calling multiple APIs to not only process information but to render it in a visually pleasing manner. I also focused on a vision and implemented a solution around that concept rather than cobbling together what I could based on what I knew. I wanted my application to be practical, but also visual. And I'm happy to say that when you type in yay!, you will see a smiley face. 
It was to remind me that in the end, the effort is worth it because I learned to become a better developer. What I learned I learned a great amount about webhooks, setting up an API to handle POST requests from Cisco Spark via API Gateway, debugging authentication errors (Pardot tokens are only valid for one hour...) and how to use DynamoDB to read and write table entries. Using CloudWatch to debug log messages was crucial. I also learned to deploy Lambda functions with Apex, which was a huge time saver. What's next for Salazar the Spark Bot *Making the design flow look and feel more similar to the Cisco Spark authentication flow. *Having Salazar make Pardot requests using Alexa. I would also like to incorporate more natural language processing with the Salazar application, especially adding more commands and intents. I also have a Watson sentiment analysis for the Alexa skill that is in development mode. Built With amazon-alexa amazon-dynamodb apex api-gateway aws-lambda cisco-spark-api google-chart Try it out emilytlam.com www.amazon.com github.com github.com www.youtube.com
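The command handling described above (slash commands or natural phrases arriving in the webhook's POST body) could be routed with a small dispatcher like this sketch. The command strings come from the project description; the returned handler names are placeholders, not the actual implementation:

```javascript
// Sketch of routing an incoming Cisco Spark message's `text` field to a
// handler name. Handler names ('topLeads', etc.) are illustrative stubs.
function routeSparkMessage(text) {
  const t = text.trim().toLowerCase();
  if (t === '/opportunities' || t.includes('tell me about my opportunities')) return 'topLeads';
  if (t === '/forecast' || t.includes("what's my forecast")) return 'monthlyForecast';
  if (t === 'yay!') return 'smiley'; // the Easter egg
  return 'help';
}
```

In the real skill, the Lambda behind API Gateway would call the matching Pardot API and POST the formatted reply back to the Spark room.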
Salazar the Spark Bot
Sales Analytics In Cisco Spark
['Emily Lam']
[]
['amazon-alexa', 'amazon-dynamodb', 'apex', 'api-gateway', 'aws-lambda', 'cisco-spark-api', 'google-chart']
39
10,443
https://devpost.com/software/be-a-superhero
Inspiration I am very fond of video games and superhero movies, and I always wanted to make a game of my own. But being a backend developer, it is difficult for me to create a normal mobile app. So I chose an Alexa skill to create a fictional superhero game. What it does In this game you create your own superheroes with your desired super powers and fight super villains to save the world. You can also practice and master your super powers for future battles. There are 3 modes: 1) Battle Mode - Solo and One on One 2) Arcade Mode 3) Practice Mode. It lets you select any 3 super powers in each game from a pool of 40+ super powers. It also provides an option to choose a battleground or open an attack, as choosing the battleground can be crucial when it comes to winning. Overall, more than 12,000 combinations of super powers can be made, and it is fun to play with. How I built it I used the best of both the traditional Alexa interaction model and Alexa Conversations. For the part with 12,000+ combination paths to choose 3 super powers, Alexa Conversations came in handy. Challenges I ran into As Alexa Conversations is still in its beta phase, at times it became frustrating when errors appeared and you didn't know the reason. But the Alexa support team and community were super helpful. Accomplishments that I'm proud of I actually started building the skill in the last week before the submission date and completed it just a couple of hours before submission. (I was a solo member ;)) What I learned I learned so many things about Alexa Conversations: what to do and what not to do while designing skills with it. What's next for Be A Superhero Arcade mode and One on One (offline two-player) mode are still in beta. Beyond that, I need to focus on the audio-visuals. Built With amazon-alexa node.js
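The "12,000+ combinations" figure is consistent with basic combinatorics: choosing 3 powers from a pool of 43 (the exact pool size is an assumption here; the skill only says "40+") gives C(43, 3) = 12,341 distinct power sets.

```javascript
// n-choose-k, computed incrementally so each intermediate division is exact.
function choose(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) {
    result = (result * (n - k + i)) / i;
  }
  return result;
}
```

With a pool of exactly 40 powers the count would be C(40, 3) = 9,880, so the quoted figure implies a few extra powers beyond 40.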
Be A Superhero
"Be A Superhero" is a fictional voice-first game skill where users can create their own superheroes with desired super powers and battle super villains who try to harm the world.
['Sahil Kanani']
[]
['amazon-alexa', 'node.js']
40
10,443
https://devpost.com/software/travel-bug-fsam1q
Inspiration Just asking other travelers for a recommendation is so much better and faster than online research. This skill attempts to approximate the experience of getting a recommendation from a fellow traveler. What it does Provides recommendations on travel destinations and transportation based on budget, preferred activities, and number of people. How I built it I used Alexa Conversations and an AWS backend. Challenges I ran into The AC UI is buggy and hard to use. Accomplishments that I'm proud of Learned AC quite a bit in a short amount of time. What I learned Annotating dialogs is hard, no matter the tools. What's next for Travel Bug Provide my own data-driven APIs. Built With alexa amazon-web-services conversations noje.js
Travel Bug
Traveling will never be the same. Old routes don't work, new routes are unknown. Travel Bug helps you discover new travel destinations based on a few simple questions like your budget and activities.
['Maxim Makatchev']
[]
['alexa', 'amazon-web-services', 'conversations', 'noje.js']
41
10,443
https://devpost.com/software/my-africa-safari-tour-planner
INSPIRATION Since Covid-19 has further devastated human beings, animals, nature and businesses, I thought it's imperative for us human beings to keep exploring, preserving and supporting people, nature and sustainable businesses by taking Africa safaris to cool off and learn about the preservation of the world's largest animals and fresh water resources before we destroy them with our reckless exploitation of the world. Africa is home to many of the world's most famous fauna in human culture, such as lions, rhinos, cheetahs, giraffes, hippos, leopards, zebras and African elephants, among many others. The Covid-19 pandemic and climate issues like drought, floods and high temperatures have wiped out many of them. A Few Important Facts 99% of the world's water sources are unfit for human consumption, leaving a paltry 1% to sustain over 7 billion people across the planet. Overall, Africa has about 9% of the world's fresh water resources and 16% of the world's population. Africa's Lake Tanganyika is the second deepest freshwater lake, and holds the second largest volume of fresh water in the world. It's the longest lake, and extends across Burundi, Zambia, Tanzania, and the Democratic Republic of Congo. It's impossible to deny: humans are destroying the natural environment at an unprecedented and alarming rate. Nearly 21,000 monitored populations of mammals, fish, birds, reptiles and amphibians, encompassing almost 4,400 species around the world, declined an average of 68% between 1970 and 2016, according to the World Wildlife Fund's Living Planet Report 2020. The number will increase dramatically from tragedies like the bush fires in California - USA, the recent Australia bush fires that killed or harmed three billion animals, etc. 
Without the Preservation of Nature - Human beings will perish THE EXPERIENCE, MOTIVATION AND CHALLENGES My first encounter with Alexa and How it can help preserve Humans & Nature So this Hackathon or Competition is my first encounter with Alexa. I believe that sensors, visuals, virtuality and voice are the future of human interaction and development, which is the reason I was so eager to participate in this competition. The UNFORTUNATE FACT about AMAZON ALEXA in AFRICA Alexa is NOT AVAILABLE to 2 billion people in Africa. I can't even access the Alexa app on Google Play or the App Store. I can't even order the Alexa devices from Amazon.com and have them shipped to Africa. Very unfortunate indeed. The Fun encounter with Alexa and my first Skill During this competition I built my first Alexa Skill with the help of very insightful and skillful participants like @Dave and others. I built a quiz game around my passion for cloud computing - https://www.amazon.com/dp/B08GJK5Y5V My Ugly Encounter with Alexa Conversations I came into the competition with a lot of excitement and motivation, and built and published my first skill (Cloud Computing Dummy) without Alexa Conversations. But that excitement was short-lived due to Alexa Conversations' buggy nature and its ugly habit of displaying never-ending GENERIC ERROR messages that say absolutely nothing but breed frustration and anxiety. You could almost see a shrink after Alexa Conversations was done messing with your mind. My Heartbreak with Alexa Conversations After 5 days straight of little sleep and pounding the Slack channels for help, I couldn't get my skill working because Alexa Conversations wouldn't diagnose the problem, and the guys (the Alexa team) did their best to help but to no avail; I'm sure they were also mentally brutalized by Alexa Conversations' buggy nature. 
Amazon Alexa - Part of the Future of Communication I believe that sensors, visuals, virtuality and voice are the future of human interaction and development, which is the reason Amazon Alexa and other technologies like it will greatly enhance communication in the near future. Finished my Alexa Conversations but not done with Her until she Whispers to 2 Billion Africans It's no secret I'm in love with everything Amazon, and even though Alexa Conversations shattered my heart into a billion pieces, this love affair ain't over until Alexa talks to 2 billion Africans. What it does My Africa Safari Tour Planner helps you plan future Safari Tours and Vacations in Africa. This initial release will be improved to add visuals and virtuality in the future. How I built it With Alexa Conversations, Amazon S3, JavaScript, and AWS Lambda. Accomplishments that I'm proud of The competition gave me the opportunity to learn about Alexa and build my first skill - Cloud Computing Dummy - https://www.amazon.com/dp/B08GJK5Y5V What's next for My Africa Safari Tour Planner = Alexa & 2 BILLION AFRICANS To build a world where nature talks and whispers back to 2 billion Africans about nature's gifts and their key role in the preservation of humanity, using Alexa. Built With alexa-conversations amazon-alexa amazon-web-services javascript
My Africa Safari Tour Planner
My Africa Safari Tour Planner helps you plan future Safari Tours and Vacations in Africa.
[]
[]
['alexa-conversations', 'amazon-alexa', 'amazon-web-services', 'javascript']
42
10,443
https://devpost.com/software/czeus-maths-challenger
cZeus Maths Challenger: Amazon Alexa Skill Inspiration Are you tired of hearing 'Millennials are lazy', baffled by your six-year-old's homework, or maybe wondering if maths skills will come naturally once you finish school? Have you started to lose those essential maths skills? Are you looking for a simple anti-ageing brain exercise? It's time to take action with cZeus Maths Challenger! cZeus Maths Challenger Website What it does cZeus is a patented game and a fun way to excel at mathematics fundamentals. It is a refreshing way to boost numeracy, times tables, logic and working memory! The aim of each puzzle is to find the answer to a set of mystery numbers by using the given clues. A puzzle a day will increase your numeracy skills and boost your confidence before you know it! Many games are available for improving individual skills separately, e.g. their focus is either on basic numeracy or pure logic and problem-solving. cZeus is a puzzle game that fuels different parts of the brain at once: short-term memory, numeracy, agility, problem-solving and logic. Interestingly, solving a cZeus puzzle is like an alternative way of finding answers to a set of non-linear equations, without the player needing any knowledge of algebra. The cZeus puzzle is presented in a grid form similar to Sudoku and can be solved on paper. How we built it We built this very quickly using the great Alexa skills examples. Challenges we ran into To run the game concept only through voice interaction and natural conversation. Accomplishments that we're proud of cZeus, patented app: cZeus Puzzles are patented with U.S. Patent No. 9,649,552 cZeus is a kid-friendly app: cZeus is designed keeping children's safety in mind. cZeus is 5-star rated: cZeus has received 5 stars from the Educational App Store. A clever app that teaches mathematics and number skills through a puzzle-based game, an alternative approach to practising and using basic number skills and finding common factors. 
cZeus is endorsed by UK university professors: "cZeus is a thoroughly absorbing game based on a smart idea and has the potential to take the player further along the path to a deeper appreciation and enjoyment of mathematics." Partnership with Imperial College: Imperial College Computing partners with cZeus to launch a competition for schools. What we learned We learned how naturally we can interact with our players through a much easier method: conversation. What's next for cZeus Maths Challenger To build the UI interactions for devices with a display. Built With conversation math natural-language-processing Try it out www.amazon.com
cZeus Maths Challenger
cZeus Maths Challenger is a refreshing way to boost numeracy, times tables, logic and working memory! Adults would never lose these essential skills again by solving cZeus puzzles daily.
['Shohreh Blank', 'Mo Zoualfaghari']
[]
['conversation', 'math', 'natural-language-processing']
43
10,443
https://devpost.com/software/my-radio
icon Inspiration Flash briefings and Volley FM. What it does Allows a personalized selection of podcast-like short audio shows. How I built it I used Alexa Conversations, AWS, and node.js. Challenges I ran into The AC UI is quite immature. Accomplishments that I'm proud of I am glad I could master AC to the extent I did in this short period of time. What I learned AC. What's next for My Radio Populate content and provide more AC training data to handle a better variety of flows. Built With alexa amazon-web-services conversations node.js
My Radio
Navigating complex menus via voice is hard, but necessary for personalized programming of podcast-like content. We use Alexa Conversation to provide ease of adding new shows from multiple categories.
['Maxim Makatchev']
[]
['alexa', 'amazon-web-services', 'conversations', 'node.js']
44
10,443
https://devpost.com/software/foodie-meal-planner
Inspiration Coming up with and maintaining our weekly meal schedule was becoming such a chore at home for us. We would always forget what meals we were having at various points throughout the week, which meant sometimes we couldn't cook our scrumptious dishes as well as we'd wished. We thought we could harness the power of Alexa Conversations to develop a meal plan organizer that would feel natural and easy for people to use, and that's where the idea for Foodie Meal Planner came to life for us. What it does Foodie Meal Planner allows you to set up and query your meal plan for the week ahead and also set meal preparation reminders so that you never forget to soak your chickpeas overnight ever again! With Foodie Meal Planner you can: Add breakfast, lunch and dinner meals to every day of the week ahead. Ask what's on the menu for any given day. Set Alexa reminders for those important prep steps that have to be done very early in advance. When setting a reminder, try saying "Remind me to soak the chickpeas in water" or "remind me to defrost the chicken". How we built it Foodie Meal Planner is an Alexa-hosted skill backed by a Node.js lambda and an Amazon DynamoDB table as its storage backend. And it's obviously fully powered by Alexa Conversations! Challenges we ran into We were completely new to Alexa skill development when we started developing this skill barely a few weeks ago. That in and of itself was a big challenge for us, as we had never built a voice-driven app before. We had to research how to build an Alexa skill app from scratch, using Alexa Conversations, which is a very powerful and new piece of technology on top of that. Nevertheless, the journey has been very rewarding and we're looking forward to using Alexa Conversations for more skills in the future. Accomplishments that we're proud of We have definitely come a long way since that first brainstorming session we had a few weeks ago. 
The fact that we managed to build a working, useful Alexa skill that we've been able to make a part of our daily routines has been a major accomplishment for us. What we learned Too many things to count! Alexa skill development and Alexa Conversations would definitely be the highlights. Building Alexa skills using Conversations has been such a fun and interesting experience for us. What's next for Foodie Meal Planner We would like to enrich the dialogue options available in the skill and train Alexa Conversations further to make it easier and quicker for our users to plan out their week. We'd also love to tap into the power of AI/ML at some point to develop a meal suggestion feature that helps people decide what they're having throughout their week based on their meal history and other preferences. Built With amazon-alexa amazon-dynamodb amazon-web-services javascript jest lambda node.js
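The storage model might be sketched as follows, with an in-memory Map standing in for the skill's DynamoDB table. The userId + day key shape and the slot names are assumptions based on the description, not the actual schema:

```javascript
// Sketch of a meal-plan store. A real version would back `items` with a
// DynamoDB table keyed the same way; this in-memory Map keeps it runnable.
class MealPlan {
  constructor() {
    this.items = new Map();
  }
  // slot is e.g. 'breakfast', 'lunch', or 'dinner'
  addMeal(userId, day, slot, dish) {
    const key = `${userId}#${day}`;
    const entry = this.items.get(key) || {};
    entry[slot] = dish;
    this.items.set(key, entry);
  }
  // "What's on the menu for Monday?"
  menuFor(userId, day) {
    return this.items.get(`${userId}#${day}`) || {};
  }
}
```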
Foodie Meal Planner
Plan out your meals for the week and be reminded of the time-consuming yet crucial food prep steps that are so often overlooked at home!
['David Jimenez Sequero', 'Julio Márquez Castro']
[]
['amazon-alexa', 'amazon-dynamodb', 'amazon-web-services', 'javascript', 'jest', 'lambda', 'node.js']
45
10,443
https://devpost.com/software/u-lectric
Inspiration For an EV driver, it is very important to find exactly the charging station they are looking for, because there are different types of charging stations, different connectors, and different operators. On a low-power station it takes the whole night to charge the battery, while on a fast one (a supercharger) you can charge 80% of the battery in 20 minutes. What it does: U lectric helps people who drive electric vehicles find charging stations. You can search on different criteria: nearest chargers, filter by power, filter by connectors, find chargers in other cities, filter by operators. U lectric will find all the information about a charging station: name, connector types, number of connectors, location, pricing and other useful information that helps to choose the charging station the user was looking for. And if you are using U lectric from your phone, tablet, or in-car device (a device that supports geolocation and a navigation system), U lectric not only finds the charging station but also builds a route to it and shows it on your navigation system. How I built it U lectric was built with the help of Alexa Conversations, intents and the Java SDK. Challenges I ran into There were challenges with delegation and with the navigation part. Accomplishments that I'm proud of I'm proud that navigation is working: because some objects were missing from the SDK, I had to implement it myself on the back-end side to be able to show the user navigation to the charging station. What I learned I learned how to use Alexa Conversations. What's next for U lectric Next, we will try to add an integration with a new service to increase the number of charging stations that U lectric has access to.
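The search criteria above can be pictured as a simple filter over station records. This is an illustrative sketch only; the field names are assumptions, not U lectric's actual data model (which is also written in Java, not JavaScript):

```javascript
// Hypothetical sketch: filter charging stations by minimum power,
// connector type, and operator. Omitted criteria match everything.
function findStations(stations, { minPowerKw = 0, connector, operator } = {}) {
  return stations.filter(
    (s) =>
      s.powerKw >= minPowerKw &&
      (!connector || s.connectors.includes(connector)) &&
      (!operator || s.operator === operator)
  );
}
```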
U lectric
U lectric helps you find the nearest EV charging station and builds a route to it with the navigation system. You can also customize the search and find a specific (power, connector type, operator, city) EV station.
['Iryna Kokhanchyk', 'Oleg Kokhanchyk']
[]
[]
46
10,443
https://devpost.com/software/you-call
Designing while Alexa builds the conversation model :) Dialog0 - Table reservation sample Demo table reservation SSML for responses. Welcome APL template Inspiration Some calls involve long waits on the phone and are repetitive, such as calls for claims, calls to reserve a restaurant, calls to renew car insurance, etc. Also, I recently saw the news that Alexa will be able to make voice calls soon. So I thought it would be great if a skill could ask you a few questions and then speak for you on those long phone calls. What it does This Alexa skill doesn't dial the phone number to call yet, but if you put a smartphone nearby, this Alexa skill can speak for you to reserve a table for dinner on Friday with your friends. To achieve this, the skill first asks you for some information (day, time, number of diners) ... and then it asks you to call the restaurant and put the phone nearby, because from that moment on, this Alexa skill will negotiate for you to get the table reservation. How I built it I've used Alexa Conversations (beta) to implement this Alexa skill. I have also added audio and visual templates (Alexa Presentation Language - APL). In addition, the skill requests permission to access the user's name and phone number, in case the restaurant asks for a name and phone number to record the table reservation. I've used SSML (Speech Synthesis Markup Language) when Alexa tells the waiter the phone number, so that it sounds like a phone number and not like a number in the millions. Challenges I ran into This has been my first Alexa skill using Alexa Conversations, which has been the most important challenge. Afterwards, working on a dialogue in which the first part is the user talking to Alexa and the second part is Alexa talking to the waiter has sometimes been a bit complicated. Building the conversation model is very slow compared to "interaction model" skills, so I've taken the opportunity to make some drawings to explain the skill. 
Accomplishments that I'm proud of I'm convinced that the idea is good and has potential. It's easy to scale the skill to new conversations such as car insurance renewal, disputing an internet bill, making an appointment at the hairdressing salon, etc. The result obtained works :) What I learned I've learned to build skills with Alexa Conversations and all that it involves: API definitions, annotating dialogs, new slot types with properties, etc. What's next for You call I want to add new dialogues: to renew the insurance, to ask for a better price on the internet service, or to make an appointment at the hairdresser. And try it in the real world! I have yet to investigate how to use data-binding in the APLs used by Alexa Conversations. I also want to add Speech Synthesis Markup Language (SSML) in Alexa Conversations responses. Built With alexa apl conversations javascript
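The SSML detail mentioned above, wrapping the digits in a telephone say-as element so Alexa reads them digit by digit instead of as one large number, can be sketched like this (the helper function and its wording are illustrative, not the skill's exact responses):

```javascript
// Build an SSML response where the phone number is read digit by digit.
// <say-as interpret-as="telephone"> is standard Alexa SSML.
function phoneSsml(number, name) {
  return (
    `<speak>The reservation is for ${name}. ` +
    `The phone number is <say-as interpret-as="telephone">${number}</say-as>.</speak>`
  );
}
```

Without the say-as wrapper, Alexa would read "5550123" as "five million, five hundred fifty thousand, one hundred twenty-three".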
You call
Alexa skill that speaks for you on your phone calls.
['Javier Campos']
[]
['alexa', 'apl', 'conversations', 'javascript']
47
10,443
https://devpost.com/software/ice-cream-pronto
Inspiration Have you ever missed the Ice Cream Truck as it passed by your house? No more! With Ice Cream Pronto, Alexa will notify you when the truck is near. You can also request to be added to their route or notify the truck while it is en route. I drove an ice cream truck last summer for the first time with my kids and realized times have changed since I was a kid. I noticed very few kids playing out in the street. In fact, very few kids outside at all. While driving through a neighborhood, we found very few customers. Through my rear-view mirror as we would leave the neighborhood, I would notice customers walk out. There were customers, but it took them longer to come out. They had much less warning because they could not hear the ice cream truck music indoors until it was too late. The simple solution seemed to be to circle the neighborhood to give them more time, but this also had the side effect of annoying those that didn't want to hear ice cream music. I myself have also fallen victim to missing the ice cream truck because of the short notice. There had to be a better way. And that's what inspired the creation of Ice Cream Pronto. What it does Ice Cream Pronto allows you to get a notification on your Alexa device when an ice cream truck is near. You can also request the ice cream truck to stop by your neighborhood. The notifications are like the traditional ice cream truck music for the digital age, at a personal level. The ice cream truck driver can then turn on broadcasting. The driver uses the web app with GPS enabled on a mobile device. When the truck drives near a house with an Alexa subscribed to the notification, it will trigger their notification. The customer can also request that they be included in the truck's route. This can also work to ping a truck that may have just passed by. Say "Open Ice Cream Pronto". It will ask for your address, confirm it, and then ask if you want notifications. 
Once done, it will send the data to the Ice Cream Truck to include them in their route. Video Demo link How I built it VS Code as the developer environment. I used node.js and the Alexa SDK. The dialog was controlled by the Alexa Conversations API. Most of this work was done in the Alexa web console. After getting the dialog working, I refactored the code to listen to the Conversations API handler. This refactor from the traditional way with multiple intent handlers reduced my code base by 70%. The Alexa handlers are coded in node.js and hosted using the Alexa self-hosting provided with new projects rather than a separate AWS project. This simplified the lambda setup process. The node.js code was written in VS Code and pushed to production via git. The git repo was also set up by the Alexa dev web console. The db service behind Ice Cream Pronto is done in .net core and ms sql. The Conversations API handler called the back-end service once the dialog was complete, rather than at every step of the dialog as would be needed without the Conversations API. Challenges I ran into I planned on configuring everything in VS Code, but many parts of the config for the Conversations API had to be done in the Alexa web console. It took a bit to understand how to set up a Conversations dialog, so I just followed the lesson walkthrough they provide. After finishing it, it started to make much more sense, but until then I was walking blind. The notifications also challenged this solution. The notifications are very structured in their wording, not allowing free text for notifications. In the end I found a notification template that mostly fit my needs. Accomplishments that I'm proud of This is the first time I got notifications working on Alexa. I had something misconfigured with the dialog conversation and it was talking out of order and repeating itself, and I almost gave up. Once the dialog bugs were worked out, the flow turned out much smoother than I expected and it all made sense. 
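The refactor described above, from one intent handler per dialog turn to a single Conversations API handler, can be sketched roughly as follows. This is a hedged illustration, not the project's code: the API name `SubscribeToTruck` and the argument shape are hypothetical, and a real skill would build the response through the ASK SDK's responseBuilder rather than returning raw objects.

```javascript
// True when the envelope is an Alexa Conversations API call with this name.
// Conversations delivers the whole completed dialog as one Dialog.API.Invoked
// request, instead of one intent per turn.
function canHandleApi(requestEnvelope, apiName) {
  const request = requestEnvelope.request;
  return request.type === 'Dialog.API.Invoked' &&
         Boolean(request.apiRequest) &&
         request.apiRequest.name === apiName;
}

// The handler only returns data; the Conversations dialog model owns the
// speech, so there is no per-turn prompt-building code here.
function handleSubscribe(requestEnvelope) {
  const args = requestEnvelope.request.apiRequest.arguments;
  // ...the real skill would persist args.address to the route service here...
  return { apiResponse: { subscribed: true, address: args.address } };
}
```

Because the dialog model collects and confirms the address before the API is invoked, the back end shrinks to data-only handlers like these, which fits the code-base reduction mentioned above.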
What I learned Conversation API, Notifications (Alexa Proactive API), and a cleaner work flow using Alexa ASK CLI. What's next for Ice Cream Pronto Promoting Ice Cream Pronto to local Ice Cream Trucks. Built With .netcore alexa alexaconversationsapi alexaproactiveapi echo node.js vscode Try it out icecreampronto.com
Ice Cream Pronto
Have you ever missed the Ice Cream Truck as it passed by your house? No more! With Ice Cream Pronto, Alexa will notify you when the truck is near. You can also request to be added to their route.
['jamesfdickinson Dickinson']
[]
['.netcore', 'alexa', 'alexaconversationsapi', 'alexaproactiveapi', 'echo', 'node.js', 'vscode']
48
10,443
https://devpost.com/software/focus-qropgc
Created Focus Card Inspiration On the news, we are hearing a lot about people having financial issues. We wanted to build a tool that would help people focus on their financials and accomplish their goals. What it does Creates a Focus Card with suggested tools based on what you want to do, what you are willing to do and who will help you. How we built it NodeJS, Vuejs, AWS, Alexa Conversations Challenges we ran into We had trouble getting the dialog built. Accomplishments that we're proud of We were able to get the suggested tools to match a large number of scenarios What we learned We learned about conversations What's next for Focus A full integration into the app Built With alexa alexa-conversations node.js
Focus
A skill that enables users to tell Alexa a financial goal. Alexa will then help them clarify their focus and provide a set of AI Tools that empower the user to accomplish their goal.
['Savalas Colbert', 'Zaid Shabbir']
[]
['alexa', 'alexa-conversations', 'node.js']
49
10,443
https://devpost.com/software/property-agent
Property agent Inspiration While helping my dad rent our old flat online, I found out we had to fill in a huge form with details of our flat. So I thought "how easy would this be if I used VUI to enable form filling" and boom, this idea came in. Also, I wanted to get my hands dirty with Alexa development. So long story short - reduce form filling time What it does It helps you sell, rent or find a property near your area. How I built it With Alexa Conversations :) and JS Challenges I ran into Everything was a challenge to me :) Accomplishments that I'm proud of This skill! Yeaaa! What I learned Not much But I am happy I built something What's next for Property Agent Make it work properly :P ! Skill ID http://amzn1.ask.skill.20d58811-75b6-4624-a81b-2d1c5e504fc3 Built With amazon-alexa apl javascript
Property Agent
Worried about how you can sell your property easily, or looking for an apartment? Be carefree and ask your Alexa device for help. Just ask "Alexa, open property agent"
['Aditya Sisodiya']
[]
['amazon-alexa', 'apl', 'javascript']
50
10,443
https://devpost.com/software/the-magic-card
The Magic Card! Welcome screen Design screen Inspiration We all love the holidays and we wanted to create something that people would love to engage with, something that changed every time you came back. The card provides users something to look forward to, the holidays! What it does The skill provides an audio-visual journey, helping users build and design a magical holiday card. We wanted to mix the Conversations API and APL to create a compelling audio-visual conversation for users. How we built it We used the Conversations API framework on Alexa and followed these steps We started with the concept; we wanted something users would love, and that's when we thought it had to be something for the Holidays, which are just around the corner. Then we spent some time deciding on how the dialogue would work and how we could make it simple while still exciting the user. This is when we decided to include APL and make it an audio-visual conversation. Once we had the building blocks, we got to work on the designs, sound and dialogue. Challenges we ran into We found it difficult to design the conversation at first, but with lots of attempts and practice we were able to get there in the end. Accomplishments that we are proud of Gelling music, visuals and conversation allowed us to build something we were all really proud of. What we learned Once you know the user problem you want to solve, focus on the dialogue and spend most of your time designing on paper before getting to the tools. What's next for The Magic Card Next we plan to add the ability to create cards for any occasion, and then work out a way to send these to users. Getting an audio or visual card via Alexa could be a killer app! Built With alexa conversations javascript lambda node.js
The Magic Card
A simple magical card, using the Alexa conversations API to guide the user and allow them to design their own magical holiday card!
['Francisco Torres', 'Susan Brett', 'Vytas Kancleris', 'Chetan Damani']
[]
['alexa', 'conversations', 'javascript', 'lambda', 'node.js']
51
10,443
https://devpost.com/software/working-hours
Inspiration Due to the COVID-19 crisis that affected the world as never seen before, and combining the facility of voice commands with the challenge of keeping track of your worked hours, especially nowadays during the health crisis, the Alexa Skill Working Hours came into play to facilitate how you keep track of how many hours you have worked. What it does With a simple voice command, you can let Alexa start to track your working hours, and as soon as you get everything done, you simply ask Alexa to close your hours and generate a report of your worked hours history. How I built it I used Serverless, Python and MySQL to build this skill Challenges I ran into Understanding the "new" way of doing things with Alexa Conversations and working through the bugs and limitations of its beta version, plus keeping up with the constant updates of the ASK SDK. Accomplishments that I'm proud of Making it easier to have a report of how many hours I have worked, and all of that being hands-free. What I learned That "beta" version really means "beta"! :) I had a few issues (even once my whole skill data was removed, but it got fixed after I contacted the Support/Forum) and some functionalities of Alexa Conversations still do not work as described. What's next for Working Hours Send the hours report to the customer email Built With mysql python serverless
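The open/close bookkeeping described above can be sketched as a small pure function. This is an illustrative sketch in JavaScript (the project itself uses Python and MySQL), and the event shape is an assumption:

```javascript
// Pair each "open" timestamp with the following "close" and sum the
// durations. Events are { type: 'open' | 'close', at: <epoch ms> }.
function totalHours(events) {
  let openedAt = null;
  let ms = 0;
  for (const e of events) {
    if (e.type === 'open') {
      openedAt = e.at;                 // start a new tracked block
    } else if (e.type === 'close' && openedAt !== null) {
      ms += e.at - openedAt;           // close the block and accumulate
      openedAt = null;
    }
  }
  return ms / 3600000;                 // milliseconds per hour
}
```

A report generator would then format the accumulated totals per day before sending them back through the skill (or, as planned above, by email).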
Working Hours
Do not let COVID-19 affect your productivity at work. With Working Hours you can easily keep track of how many hours you have worked and, thanks to the voice-enabled commands, it is all hands-free
['Guilherme Ferreira']
[]
['mysql', 'python', 'serverless']
52
10,443
https://devpost.com/software/learn-lingo
Inspiration In March 2020 I had already spent almost 3 months preparing and sorting everything to be able to backpack a few countries around the world for 30 days. Visas Check. Tickets Check. Itineraries Check. Hotel and other Bookings Check. Dreams Check. I planned an extensive trip and was ready to have perhaps the best time of my life, and then the pandemic happened. To get over the gloominess, I started learning a new language (Spanish) during the quarantine to interact with the locals more naturally the next time I get to travel. And then the Alexa Skills Challenge was announced! Ergo, my ideas aligned in the same direction & I decided to participate in it and build the Learn Lingo Alexa Skill! What it does With the Learn Lingo Alexa Skill you can: ✅ Take on a daily challenge to practice a new language using a personalized learning plan. 🎯 If you already have proficiency in a language we offer, you can take a fun language quiz to evaluate yourself. ⏰ In a rush? Revise new phrases or words quickly using the flash card mode. ⏰ Just exploring? Use our unique voice translator feature to hear how different phrases sound in different languages. Open the skill by saying "Alexa, open Learn Lingo" and you'll be easily guided through the different sections. We currently support learning Spanish, German, Italian, French, and Hindi with more to come! So which language would you like to learn? How I built it The Learn Lingo Alexa Skill has these main features - The Daily Challenge ✅ Every day you will be given certain words or phrases from your selected language, and from time to time there will also be a quiz to help you retain your learning. ⏰ We limit the daily lesson to five words or phrases. It makes it easier to form a habit and find a few minutes of time to practice daily. ✅ Also, once you complete the lesson, you have ample time to practice the words or phrases that you've just learned, and the next lesson of new words or phrases will be available the next day. 
The Language Quiz & The Flash Card Mode ✅ While learning at a stretch is appreciated, it's always better to keep a check on our progress and revise what we have learned. The Voice Translator ✅ Translate English words or phrases into 35 commonly spoken languages around the world. Voice Translator currently supports the following languages: Afrikaans Albanian Arabic Armenian Chinese Croatian Czech Danish Dutch Finnish French German Greek Hindi Hungarian Icelandic Indonesian Italian Japanese Korean Latvian Macedonian Norwegian Polish Portuguese Romanian Russian Serbian Slovak Spanish Swedish Thai Turkish Vietnamese Welsh ✅ To try just say, 'Translate, how are you in German', or try, 'Translate, good morning'. ✅ This is an experimental feature and might not be 100% accurate. If you have suggestions or feedback we request you to write back to us at - contact@thealexa.dev Challenges I ran into Building a contextual conversational experience in itself was a big challenge. Forming the mental model to tackle language learning via voice as well as multimodal technologies was a hefty task. Accomplishments that I'm proud of I was able to embrace the nitty-gritty of language learning and ended up building an Alexa skill that leverages the multimodal technologies to provide the best in class contextual conversational experience to the end-user. What I learned No matter how pro you are at something, tackling a new technology puts you into the same shoes as everyone and the challenges are far tougher than they initially seem. What's next for Learn Lingo I would love to add more language learning programs to the Learn Lingo Alexa Skill and also make it available globally. Built With alexa alexa-skills-kit amazon-alexa amazon-dynamodb amazon-ses amazon-web-services heroku lambda node.js Try it out www.amazon.com
Learn Lingo
A simple and fun way to learn new phrases in a language of your choice and practice pronunciation listening to human-like voices.
['Ashish Jha', 'Mohan Raj']
[]
['alexa', 'alexa-skills-kit', 'amazon-alexa', 'amazon-dynamodb', 'amazon-ses', 'amazon-web-services', 'heroku', 'lambda', 'node.js']
53
10,443
https://devpost.com/software/lighthouse-voice-conversations
Inspiration By 2030, over 80K+ seniors on Medicare will be navigating two or more chronic conditions...and at the same time, the AMA is forecasting a 20K physician shortfall for this group. Our initial program sat on smartphones, but we couldn't get first 60 day usage above 35%. We built a quick Alexa prototype and after three rounds of pilots have initial usage above 75%. What it does LIGHTHOUSE puts your doctor on your kitchen table. We connect a patient to their physician's care plan. How I built it Alexa Conversations + Lambda/nodejs + RDS + Cognito Challenges I ran into We had to restart a few times to get Conversations to work – we had errors we couldn't wrestle down and had to start from a blank slate. That was frustrating. Additionally, we needed some Conversations-led dialogs as well as old-school ones, and it took us a bunch of rounds of iteration to get those two models to work in concert. What's next for LIGHTHOUSE Voice Conversations More content, more conditions, more EMR integrations, more Medicare revenue.
LIGHTHOUSE Voice Conversations
LIGHTHOUSE Voice puts your doctor on your kitchen table. We help seniors build skills in diet, physical activity, taking their meds and writing stuff down. LIGHTHOUSE is reimbursed by Medicare.
['Dave Vockell']
[]
[]
54
10,443
https://devpost.com/software/having-fun
Inspiration I heard this story a long time ago and thought it would be great if Alexa shared this wisdom. Things seem so polarized, so this is a fun way to help people change their perspective on changing perspectives. What it does A simple app. Alexa tells you a story, you answer a question, Alexa shares the answer and repeats your answer back to you. How I built it Just used the Alexa developer console, Lambda and the simple Alexa Conversations example. Challenges I ran into Deciding whether to host on Lambda or to use Alexa-managed hosting on the free tier. Accomplishments that I'm proud of Getting my first published Alexa skill. I had only published skills as developer-only before. What I learned Alexa has already trained data types for Colors, TV shows and a lot of other inputs. There are a lot of conversations that are easier to build now than I realized before. What's next for Change your thinking riddle Perhaps add additional riddles and stories that are randomly selected. So far there is only one. Built With amazon-alexa node.js
Change your thinking riddle
A simple app. Alexa tells you a story, you answer a question, Alexa shares the answer.
['K Chatterjee']
[]
['amazon-alexa', 'node.js']
55
10,443
https://devpost.com/software/dine-in-directory
Dine In Directory The sausage Inspiration With a lot of dining at home during 2020, I wanted to make a skill that can help people find restaurants open for delivery or pickup. What it does Dine In Directory is an Alexa Skill that helps the user find restaurants open for delivery and pickup. How I built it Leveraging Alexa Conversations, I was able to put together a dialog flow that guides the user along while collecting the search criteria needed. The search criteria are processed to narrow down a latitude/longitude, and the skill then leverages the Yelp API for restaurant searching. The backend is an AWS lambda using c#. Challenges I ran into The biggest challenge was the learning curve in getting to know and use Alexa Conversations. Accomplishments that I'm proud of I was able to tame the beast known as Alexa Conversations, such a great way to add back and forth dialogs. What I learned I learned how to configure and harness the power of Alexa Conversations; this has opened up many more possibilities for my future skills. What's next for Dine In Directory Better graphics using the Alexa Presentation Language APL. Built With alexa alexa-conversations amazon-web-services aws-lambda azure-maps c# yelp
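The flow described above (dialog criteria narrowed to a latitude/longitude, then a Yelp search) might look roughly like this. The project's back end is C#; this JavaScript sketch uses Yelp Fusion's documented `/v3/businesses/search` parameters, while the `criteria` shape itself is an assumption:

```javascript
// Turn collected dialog criteria into a Yelp Fusion search URL.
// latitude/longitude/term/categories/open_now are real Yelp parameters;
// the criteria object here is illustrative, not the skill's actual model.
function buildYelpQuery(criteria) {
  const params = new URLSearchParams({
    latitude: String(criteria.lat),
    longitude: String(criteria.lon),
    categories: 'restaurants',
    open_now: 'true',
  });
  if (criteria.cuisine) {
    params.set('term', criteria.cuisine);   // e.g. "pizza", "thai"
  }
  return 'https://api.yelp.com/v3/businesses/search?' + params.toString();
}
```

The actual request would also need the Yelp API key in an `Authorization: Bearer ...` header; the skill's dialog layer only has to produce the criteria object.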
Dine In Directory
Want to spend a quiet evening dining in at home? Use Dine In Directory to find your next special meal.
['stevie V']
[]
['alexa', 'alexa-conversations', 'amazon-web-services', 'aws-lambda', 'azure-maps', 'c#', 'yelp']
56
10,443
https://devpost.com/software/ingredient-substitute
Welcome to Ingridi' hunt Ingredient replacement for almond butter with allergy preference Ingredient replacement for almond butter without allergy preference Ingridi' hunt Inspiration Cooking is a great hobby for most people and there are so many reasons behind it. The biggest reason would be that it’s super fun and it gives everyone a chance to experiment with a wide variety of choices. Another great reason is that we get an amazing dish at the end. Also, there are loads of recipes to try out. But many times, we drop the idea of cooking just because we don’t have an ingredient that’s needed for the recipe and we have no time to rush to the grocery store to buy the missing ingredient. When we heard about Alexa Conversations, we wanted to create something that would help people out there continue with the recipe using some alternative ingredients. Also, our hands will be pretty engaged while cooking, and we thought conversations would be a perfect fit to help users find ingredient replacements for recipes. That’s not all about it! Though such a skill already existed, what we personally thought was missing was this: all of us in today’s world want to focus on our diets and be fit. In addition, all of us do have some kind of food intolerance. We wanted to make this more personal to the user by considering the user's diet and allergy preferences, and also to make Alexa smart enough to recommend ingredients based on the kind of food the user is going to make. That was our starting point. What it does Finds an ingredient alternative based on the user's diet, allergy preferences, and the kind of food the user wishes to prepare. These are the different ways to find an alternative ingredient using our Ingridi' hunt skill. Search by ingredient and the specific food type The user can provide the ingredient name and the type of food to be prepared to get a more refined substitute. 
Refine your search with allergy & diet preferences The user can provide allergies and/or diet preferences so that the users' health is not negatively impacted by the substitute. Say you are allergic to peanuts and want a substitute for butter, you wouldn’t be recommended peanut butter. Also, you may be a vegetarian and wouldn’t want any ingredients with egg as a substitute. These choices are taken care of by the skill. Quick search The user can provide just the ingredient name to do a quick search so that we get a substitute without any custom preferences. A super busy mom trying to make a quick breakfast for herself before joining a zoom meeting is a typical use case we considered for this. General ingredient replacement tips We often like to hear some general tips while experimenting with interesting recipes. Ingridi’ Hunt also provides a way to hear exciting food tips when you want. Don't forget to try those out! The user need not worry if their preferences do not have an exact substitute recommendation, we always recommend the closest alternatives in case we do not have a perfect match. How we built it Alexa conversations Alexa conversations were the most interesting part for us. We worked on setting up conversations and dialogs for different scenarios that a user could search for an alternative ingredient or ask for tips. These conversations were then integrated with the APIs we created in the backend using AWS Lambda to fetch the recommendation for the user-specified ingredient based on the substitutes, diets, and allergies stored in DynamoDB. Alexa Presentation Language For devices which support APL, we included screens for the initial launch and for recommendation to present the ingredients in a better way. Data source Data being the crux of this conversational skill, we made use of different reliable data sources to fetch ingredient related data along with the diets and allergies. 
Currently, we support diets like Kosher, Vegan & Vegetarian, and allergies like peanut, tree nut, dairy, alcohol, sulphite, gluten. This data was loaded into our tables in DynamoDB so that we could leverage the interaction between lambda and DynamoDB. Community support The slack channel created for this hackathon motivated us a lot; we were actively getting our questions/issues sorted out with the Alexa team, and it was also very helpful for learning the resolutions for common issues that we often faced. This being the first skill we are building, the certification issues that the community actively shared helped us to a great extent. The live streams on Twitch introduced us to many aspects of Alexa Conversations skills. Challenges we ran into Collecting data for different types of ingredients and food types and consolidating them, and deciding on how to handle different food types. Handling different possible user inputs and conversations. Figuring out how to continue the conversations and clearing the previous conversational context was a bit challenging. Trying to figure out the different utterances that the user can make to get things done. Accomplishments that we're proud of Publishing our first-ever Alexa skill Allowing multiple conversations with the user to ask for tips or search for substitutes. We got a chance to explore different AWS services like Lambda, DynamoDB, S3, and integrate those with our custom hosted skill. What we learned Usage of AWS lambda functions, i.e., how easy it is to integrate them with our skill and load layers of packages. In addition, saving and deploying lambda functions was very convenient and time-saving. Setting up conversations in a sequence and understanding what the user wants. We also spent time understanding how the user will convey what they want and how they can switch context. Alexa Presentation Language and how it can be used with/without display enabled devices. 
What's next for Ingridi' Hunt Expand the choice we give for the diet preferences and refining the API to be able to provide results based on various food types. Bring more flexibility by adding more dialogs like updating the preferences during a conversation. Make better use of APL and get user inputs from APL. Get feedback from the users, find their pain points, and make the skill greater! Built With alexa-conversations amazon-alexa amazon-dynamodb amazon-web-services apl node.js
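The preference-aware filtering described above (say, no peanut butter recommended to someone with a peanut allergy) can be sketched as a pure function. Field names, scores and records here are illustrative, not the skill's actual DynamoDB schema:

```javascript
// Drop substitutes that conflict with the user's allergies or diet,
// then return the closest remaining match (or null if none survive).
function recommend(substitutes, prefs) {
  const ok = substitutes.filter(s =>
    !s.allergens.some(a => prefs.allergies.includes(a)) &&   // allergy-safe
    (!prefs.diet || s.diets.includes(prefs.diet)));          // diet-compatible
  // Highest-scored remaining substitute is the "closest alternative".
  return ok.sort((a, b) => b.score - a.score)[0] || null;
}
```

The "we always recommend the closest alternatives" behavior mentioned above corresponds to returning the best-scoring survivor rather than demanding an exact match.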
Ingridi' hunt
Quickly find ingredient alternatives based on your diet and allergy preferences along with the kind of food you wish to prepare
['Gayathiri Geetha', 'Sowmya Seshadri', 'Priyamvada Mukund', 'Dinesh Balaji Venkataraj', 'Madhankumar']
[]
['alexa-conversations', 'amazon-alexa', 'amazon-dynamodb', 'amazon-web-services', 'apl', 'node.js']
57
10,443
https://devpost.com/software/house-hunting
skill welcome screen Inspiration Whenever I'm in a new city or neighbourhood that has a great vibe, or see a home with great kerb appeal, I often wonder what it would cost to live there. House Hunting is the quickest way to satisfy that curiosity. What it does The skill uses Alexa Conversations to capture the customer's desired search location and property details. If permission for location services is granted, and available, the skill presents a 'search nearby' option. If not, it falls back to requesting a zip code. With location services enabled, the skill uses a geocoding API to get detailed location data, then a real estate service API to search local listings in that area. Pricing will always be up to date because the skill searches current listings for each session. I designed the skill with Alexa users 'on the go' in mind, so if you're using it on your mobile, car, ear buds or other wearable device, you don't even need to know what zip code you're in - the skill does that for you. How I built it I built House Hunting as an Alexa-hosted skill with Node.js and JavaScript, and it's connected to two third-party APIs using axios. The first API is Google's GMP for reverse geocoding when using location services. The second is Realtor.com's API for accessing property listings. Challenges I ran into I had to quickly learn not just Alexa Conversations but also Alexa-hosted skills, because I've always used the ASK-CLI but it's not supported yet for Alexa Conversations. But my main challenge was working with Alexa Conversations as a brand-new component in the Alexa ecosystem. This was my first planned skill for the challenge, and I temporarily abandoned it thinking I'd bitten off too much. After submitting my 'backup' skill, I had learned enough, and had such great support from the Alexa team, that I decided to push ahead and finish this one as well. 
Accomplishments that I'm proud of Getting the skill finished after much head-scratching while debugging my Alexa Conversations piece. What I learned Don't keep silent - use the challenge forums and support teams - they are amazing. What's next for House Hunting 1) Much richer Alexa Conversations dialogs for additional search filtering (e.g. number of beds, baths and price range). 2) More detailed property results, possibly offered as an email report with links to the listings and contact details for follow-up. This version of the skill was for the curious tire-kicker. The next version might be for the serious house hunter! Built With alexa gmp javascript node.js realtor
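The "search nearby, else fall back to a zip code" decision described above might be sketched like this. The geolocation fields follow Alexa's Geolocation interface in the request envelope; the zip-code handling is an assumption for illustration:

```javascript
// Decide where the property search should originate. If the device shared
// coordinates via Alexa's Geolocation interface, search nearby; otherwise
// use (or ask for) a zip code.
function resolveSearchOrigin(requestEnvelope, zipSlot) {
  const geo = requestEnvelope.context && requestEnvelope.context.Geolocation;
  if (geo && geo.coordinate) {
    return {
      mode: 'nearby',
      lat: geo.coordinate.latitudeInDegrees,
      lon: geo.coordinate.longitudeInDegrees,
    };
  }
  // No location services: fall back to a zip code, asking for one if needed.
  return zipSlot ? { mode: 'zip', zip: zipSlot } : { mode: 'ask-zip' };
}
```

A 'nearby' result would then feed the reverse-geocoding call, while a 'zip' result goes straight to the listings search.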
House Hunting
You know when you're in a new neighbourhood with a great vibe, wondering what it might cost to live there? That!
['Dave Curley']
[]
['alexa', 'gmp', 'javascript', 'node.js', 'realtor']
58
10,443
https://devpost.com/software/youtube-player-jxwr0u
Inspiration I heard that Alexa can't play YouTube directly, but you can use Bluetooth to pair a device with Alexa-enabled speakers and play YouTube through the device. While Alexa can make playing songs and other audio very convenient, you're limited to Amazon and certain services or radio stations for direct streaming. This was challenging, which motivated me to build it. What it does The purpose of this skill is to enable users to listen to audio from YouTube on their Alexa devices. The skill searches for a set of YouTube videos based on a search term provided by the user and then plays the audio from the most relevant video while enqueueing the next most-relevant tracks to be played after. There are also several options provided to manage playback, namely: previous, next, pause, resume, and repeat mode (which loops the audio). How we built it We built YouTube Player leveraging the Amazon Alexa SDK, an AWS Lambda function and an AWS web server, with the use of the YouTube API. Challenges we ran into Our main challenge was linking YouTube to the Alexa skill using the YouTube API. Accomplishments that we're proud of The accomplishment to date that we are truly proud of is being able to offer the basic service free of charge for people who can't afford to pay. We had this idea quite late on, so it was great to design, test, and build it. We're also super happy it's a skill that can be enjoyed by every music lover! What we learned This is the first time we built an Alexa skill. We learned a lot of things, from the basics of how to create a skill and name it, to how to use AWS services to build a complete skill, all the way through certification. What's next for YOUTUBE PLAYER Now that it's an audio player, we want to improve it so the skill can play the video too, making it useful on Echo Show and Echo Spot devices as well. Built With amazon-alexa amazon-dynamodb amazon-web-services google javascript Try it out github.com
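The play-now/enqueue-next behavior with repeat mode, as described above, reduces to a small piece of index bookkeeping. A minimal sketch, not the project's code:

```javascript
// Given search results ordered by relevance, return the track to enqueue
// after the one at currentIndex. With repeat on, wrap back to the start;
// otherwise return null to let playback end.
function nextTrack(results, currentIndex, repeat) {
  const next = currentIndex + 1;
  if (next < results.length) return results[next];
  return repeat ? results[0] : null;
}
```

In a real skill, the returned track would be handed to an AudioPlayer play directive with ENQUEUE behavior while the current track is still streaming.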
YOUTUBE PLAYER
Amazon Alexa skill to play audio from YouTube
['Guru Prasad']
[]
['amazon-alexa', 'amazon-dynamodb', 'amazon-web-services', 'google', 'javascript']
59
10,443
https://devpost.com/software/alexa-make-me-a-cocktail
The underlying machine learning mechanism behind Alexa Conversations was what drew me in. This Alexa skill suggests cocktail recipes based on the ingredients provided by the user. Sometimes you might wonder what kind of cocktail you can make with the ingredients you've got. It acts like a bartender who listens to your favorite ingredients before starting to make a cocktail. Built With amazon-alexa node.js
Alexa, make me a cocktail
Alexa, suggest a cocktail with the ingredients I've got
['Anastasios T']
[]
['amazon-alexa', 'node.js']
60
10,443
https://devpost.com/software/indian-traveler
Indian Traveler Indian Traveler Alexa Skill Inspiration The travel and tourism industry is undoubtedly one of the most affected industries due to the pandemic. But that shouldn’t stop us from innovating new products and solutions on the tourism front. The transition to Working from Home is here to stay and it will exponentially increase the digital nomad culture in the coming time. My solution is going to address the challenges pertaining to potential tourists planning to visit India. It will address some of the key pre-travel challenges along with the challenges that the tourist might face during his visit. What it does The Indian Traveler Alexa Skill broadly addresses these themes - Ease of Plan: Travel details are never so easy to search. It usually consumes several hours of searching and planning. Travel Requirements: People traveling to a place for the first time can’t finalize the itinerary or the packing list for their trip easily. Foreign Travelers: Travelers planning to visit a new country for the first time generally don’t have the details about visas and other documents required and end up searching multiple websites for answers. How I built it Welcome to Indian Traveler, your travel companion for India! Indian Traveler Alexa Skill can help you at each step of your next trip to India. With this skill, you can: * Some features of the skill will require access to your Alexa Lists and your email for an enhanced experience. ✅ Try "Alexa, open Indian Traveler" Pre Trip Features If you're yet to plan your trip, you can ask Alexa for the visa policies and other requirements for your nationality to visit India. ✅ Try "Alexa, What are the visa requirements" The next step is to pack your bags. We know it can be hard to know what to pack sometimes. To help you out Alexa has got the ultimate packing list for you. 
Whether you're going to the serene beaches of South India or traversing the dense cities of the North, you should start packing some must-haves to make this trip memorable. ✅ Try "Alexa, What should I pack for India" If you are not sure of where to start your trip and want recommendations on places to visit in India, Alexa has got you covered. Alexa will ask you a few questions and recommend a place that will suit your preferences. ✅ Try "Alexa, Recommend a place" Want to know more about a city or a monument in India before you add it to your itinerary? Just ask Alexa and learn a bit of history, along with the best times to visit, the average temperatures, and travel routes. ✅ Try "Alexa, Tell me about Agra" or "Alexa, Tell me about Taj Mahal" Finally, zeroed in on the cities that you would like to visit? The next step is to find out what are the must-visit attractions that place offers. ✅ Try "Alexa, Which places should I visit in Delhi" During The Trip Features Got the recommendations, but don't have time to make it to all the cities or monuments on your list? Confused on whether to visit the Agra Fort or the Taj Mahal? Don’t worry, Alexa will help you choose by giving some top reviews and ratings for each place, so you can decide easily. ✅ Try "Alexa, Pick one between Agra Fort and Taj Mahal" or "Alexa, Where should I visit between Delhi and Bangalore" Ran into a mishap during the trip and need help? Just ask Alexa for the emergency contact number. ✅ Try "Alexa, What is the emergency contact in India" Need to reach your country's embassy? Alexa will provide you the phone number, email, and address of your country's embassy in India. ✅ Try "Alexa, Where is my embassy" Trying to impress some local friends or need help asking for directions in a local language? Check out the unique Voice Translator to translate English words or phrases into 10 commonly spoken languages around India and break the language barrier. 
Languages currently supported - Bengali Gujarati Hindi Kannada Malayalam Marathi Nepali Tamil Telugu Urdu ✅ Try "Alexa, Translate 'How are you' in Hindi" or "Alexa, Translate 'When does the bus arrive'" After The Trip Features ✅ Learn more about the places you couldn't make it to this time! Build a new itinerary and a new packing list, and hop onto another adventure with your travel companion! ✅ If you found the Indian Traveler skill useful do leave us a review, or write to us with your thoughts and feedback at - contact@thealexa.dev Challenges I ran into Building a contextual conversational experience in itself was a big challenge. Forming the mental model to tackle voice as well as multimodal technologies was a hefty task. Accomplishments that I'm proud of I was able to embrace the nitty-gritty of APL and ended up building an Alexa skill that leverages the multimodal technologies to provide the best in class contextual conversational experience to the end-user. What I learned No matter how pro you are at something, tackling a new technology puts you into the same shoes as everyone and the challenges are far tougher than they initially seem. What's next for Indian Traveler I would love to add some Indian games and quizzes to the Indian Traveler Alexa Skill and also make it available globally. Built With alexa alexa-skills-kit amazon-alexa amazon-dynamodb amazon-ses amazon-web-services google-maps heroku lambda node.js Try it out www.amazon.com
Indian Traveler
Your perfect travel companion for India. Visa Requirements & Embassy Details, Trip-Planning, Packing Lists & Travel Recommendations, Indian Languages Translation & much more!
['Ashish Jha']
[]
['alexa', 'alexa-skills-kit', 'amazon-alexa', 'amazon-dynamodb', 'amazon-ses', 'amazon-web-services', 'google-maps', 'heroku', 'lambda', 'node.js']
61
10,443
https://devpost.com/software/read-my-feed-nctmas
Inspiration Reduce screen time spent reading social media feeds. What it does Alexa reads your social media feed. How I built it Using Alexa Conversations. Built With alexa
Read My Feed
Ask Alexa to read my social media feed
['Tri Labs']
[]
['alexa']
62
10,443
https://devpost.com/software/dummy-cinema
Inspiration I love movies and I used to go to the cinema a lot before this situation, so it inspired me to do this project. Apart from that, I thought it was quite a realistic example of a conversation between a customer and the ticket seller at a box office. What it does It allows you to buy tickets in advance and check your next booking. How I built it I developed this skill using Alexa Conversations and Node. Challenges I ran into The big challenge here has been learning how to use and implement the solution using Alexa Conversations, a really new and interesting feature, with all the pros and cons that it entails. Accomplishments that I'm proud of I've learned how to implement a voice solution using Alexa Conversations and create something from scratch. What I learned I've learned how to implement a voice solution using Alexa Conversations and create something from scratch. What's next for Dummy Cinema There are a lot of improvements on the way; one of them is the integration of APL, which I couldn't include because of the lack of time. Built With airtable alexa-conversations amazon-alexa node.js
Dummy Cinema
Buy your tickets and check your bookings in this fictitious theatre like in a real one and only with your voice.
['Jesus Maria Chamizo Carmona']
[]
['airtable', 'alexa-conversations', 'amazon-alexa', 'node.js']
63
10,443
https://devpost.com/software/audio-sense
What is audio-sense? Implementation and flow of the skill Using the APL for audio. Skill testing Alexa Conversations Dialogs Alexa Conversations Utterance Sets Alexa Conversations API Definitions Skill Logo Inspiration The motive behind building the Audio Sense Alexa skill is a condition known as Auditory Hypersensitivity. People who experience auditory hypersensitivity: may be sensitive to certain sounds and frequencies, and can experience discomfort when subjected to them. can find filtering out background noises more difficult than others do. may experience auditory sensory overload. A short video description to illustrate what we're talking about: https://youtu.be/ipI8hOGjVUs (skip ahead to "1:35" to listen to the audio simulation) The condition can cause them to feel overwhelmed when too many competing noises occur at once. It can also lead to irritation, distraction, or general discomfort. Children and adults with autism or Asperger’s frequently report auditory overload and hypersensitivity. We've also done our research into this condition and what may aid in its therapy; here's a link to the paper we used: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4707379/ What it does Audio Sense is an Alexa game skill that makes use of Alexa Conversations. It aims to help and train players to cope with Auditory Hypersensitivity and distracting environmental sounds. The skill sets up a game environment that can span over multiple settings and levels. The goal of the game is for the player to zone out auditory disturbances in a given environment, while trying to discern only the necessary/valid information from the same. Context-related information is used to train players to deduce information that could not be grasped. Starting the game requires the player to select the environment they wish to play in.
Based on their input, each environment has a set of unique sound files that the player has to listen to, along with a set of instructions to follow. The game is based on the player's success rate. If all the tasks in the given environment were completed successfully, the player proceeds to the subsequent levels coupled with increasing difficulty. DISCLAIMER This tool does not provide medical advice, and is for informational and educational purposes only, and is not a substitute for professional medical advice, treatment or diagnosis. How we built it The skill makes use of the Alexa Conversations feature as its default dialog manager. The entire flow of the game was planned and developed using features provided with the Alexa skills kit, such as dialogs, utterance sets, slots, API definitions, etc. The cornerstone of our project is how we made use of the APL for Audio and all its functionalities. The backend technology used was Node.js, allowing us to write suitable API handlers and code for the seamless execution of the skill. APL for Audio Implementation All the sound files that you heard in the demo were mixed and sequenced using the APL for Audio and Alexa Sound kit Library . Carefully picked out the right sounds for each environment and mixed them seamlessly, incorporating volume filters and silences. Made use of a random selector to bring a sense of spontaneity to the Alexa responses. Challenges we ran into Since the Conversations feature is still in its Beta stage, we could not find documented solutions for many of the problems faced during the skill development. Since learning resources were also scarce, the trial and error method was our best friend during development. Accomplishments that we're proud of We're extremely proud that we were able to develop a skill that covered the core logic of our idea. This being the first Amazon skill either of us has ever developed, we were elated at the end with the results we achieved. 
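A minimal sketch of the kind of APLA document described above (the sound-library source and spoken wording here are illustrative assumptions, not the skill's actual assets): a Mixer layers an Alexa Speech prompt over ambient Audio attenuated by a Volume filter.

```javascript
// Illustrative APLA document object: a Mixer plays speech and an ambient
// background sound together, with the background lowered via a Volume filter.
// The soundbank URL is a placeholder, not necessarily one the skill uses.
const aplaDoc = {
  type: "APLA",
  version: "0.91",
  mainTemplate: {
    item: {
      type: "Mixer",
      items: [
        { type: "Speech", content: "Listen closely for the announcement." },
        {
          type: "Audio",
          source: "soundbank://soundlibrary/ambience/example_crowd_chatter",
          filters: [{ type: "Volume", amount: 0.4 }]
        }
      ]
    }
  }
};

// In a response handler, this object would be attached to the response with
// the Alexa.Presentation.APLA.RenderDocument directive.
console.log(aplaDoc.mainTemplate.item.items.length); // 2
```

The Volume filter is what lets the game control how loud the competing background noise is relative to the information the player must pick out.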
What we learned We learned the process that's involved in creating a good Alexa skill. We also deeply understood the APL for Audio feature provided in the skills kit, making use of it intensely in our skill. What's next for Audio Sense Implementation of more game sound environments. Multiples levels and training rounds. Ability for users to create their own unique environmental sound files. User adaptive levels and progress updates. Analysis and diagnosis of weaknesses in the user, with subsequent training and games to help improve the same. Additional rounds to help users in getting used to the various auditory disturbances that may exist in the selected environment, and help them learn how to zone out the same. Additional material A detailed video explanation including an extended demo https://drive.google.com/file/d/1_U4AFrOUjamPV9g9yqByOQF2j-4hXQxh/view?usp=sharing The presentation for our submission https://docs.google.com/presentation/d/1g4AdETeUoO0jQDTrAt1t6DZfyvQ9Ghu9jWVaVuNfh2w/edit?usp=sharing Built With amazon-alexa apl javascript node.js Try it out www.amazon.com
Audio Sense
Audio Sense is an Alexa Conversations skill based game. It aims to help and train players to cope with Auditory Hypersensitivity and distracting environmental sounds.
['Rahul Suresh', 'Noel Alben']
[]
['amazon-alexa', 'apl', 'javascript', 'node.js']
64
10,443
https://devpost.com/software/kanyakumari-quiz-w8o02i
Kanyakumari Quiz - a quiz on Kanyakumari: temples, ancient sites, waterfalls, seas, ocean and geography. Inspiration Kanyakumari is a historical and tourist destination of South India. Highly inspired by its unique ocean sunrise, sunset and moonrise! What it does Children's education & reference - quiz questions. How I built it Alexa Skill Blueprints. Challenges I ran into Challenging your friends with open-answer Kanyakumari quiz questions. Accomplishments that I'm proud of A quiz on Kanyakumari: temples, ancient sites, waterfalls, seas, ocean and geography. Compete with like-minded people from India and check your knowledge/trivia. What I learned How to create an Alexa skill in minutes; I'm very interested in creating more. What's next for Kanyakumari Quiz To create more quizzes that will be beneficial for children's education and reference. Built With alexa blueprints skill Try it out www.amazon.in
Kanyakumari Quiz
A quiz on Kanyakumari: temples, ancient sites, waterfalls, seas, ocean and geography
['Anand Kumar G']
[]
['alexa', 'blueprints', 'skill']
65
10,443
https://devpost.com/software/heroes-quiz
Launch Page Michael Phelps Barack Obama Brad Pitt Bill Gates Beyonce Right answer Wrong answer Final page Inspiration Growing up, all of us had our real-life heroes: heroes we looked up to, heroes who made us believe in ourselves, heroes who defied all odds to be who they are. Heroes Quiz is a tribute to all these American Greats who inspire millions with the work they do. The Phelps and the Jordans of the world who show that nothing is impossible. What it does The idea is pretty simple: Alexa asks you three questions about your real-life hero. For example, to get to Michael Phelps: Alexa: What is your hero's profession? User : Athlete/Swimmer Alexa: What is your hero's age? User : Thirty Five Alexa: Which state was your hero born in? User : Maryland Then Alexa guesses the name of your hero. Alexa: I think your hero is Michael Phelps Alexa then plays a video on your hero's life. To test how much you know about your hero, Alexa asks 5 multiple choice questions (with four options) about your hero's life. Here is the catch: the video that the user sees is generated by an Artificial Intelligence, and the questions that the user answers are also generated using advanced Natural Language Processing techniques. Alternatively, every day Alexa picks a new hero as the "Hero of the day". In this segment, users may get to know about someone they haven't heard of before and get inspired by them.
Heroes Quiz currently supports these 5 professions: Athletes Actors Entrepreneurs Politicians Singers Here is a list of all the heroes (links to videos included) Michael Phelps Michael Jordan Stephen Curry Larry Bird Alex Morgan Billie Jean King Will Smith Leonardo DiCaprio Johnny Depp Tom Cruise Brad Pitt Jennifer Lawrence Meryl Streep Evan Spiegel Brian Acton Bill Gates Jack Dorsey Jeff Bezos Larry Page Larry Ellison Mike Bloomberg Jimmy Carter Barack Obama Bernie Sanders Joe Biden Bill Clinton Elizabeth Warren Donald Trump George Walker Bush Ben Carson Hillary Clinton Lady Gaga Beyoncé Taylor Swift Bruno Mars Miley Cyrus Jennifer Lopez How we built it We leveraged the power of Alexa Conversations to build the first part of the game. Conditional APLA response rendering has been used to handle API failures, i.e. in the event that Alexa is not able to guess your hero, the API returns a FAILURE status. Also, if the user gets the state or age wrong, they can change one of the slots (via the context carry over functionality). The visual part of the skill is built using APL 1.4. We have used Alexa layouts to get a uniform visual experience across all devices. The Video component uses AlexaTransportControls for the pause/play functionality. The TouchWrapper component is used along with the sequencer to work with touch-enabled devices. The data is stored inside JSON files categorised by profession in the hosted Lambda.
Challenges we ran into Debugging Alexa Conversations errors We had to develop the skill in two halves, as the AXC model takes a lot of time to train Designing APL docs Designing an architecture that enables communication between the Node.js backend for video generation, the Python backend, and the Lambda-hosted Alexa backend Video generation and machine learning are two very CPU-intensive tasks; we had to deploy separate DigitalOcean servers to make 37 videos Using version control on multiple repositories Accomplishments that we're proud of Going from having zero knowledge on how to create an Alexa skill to successfully creating an Alexa skill that uses Alexa Conversations to do something meaningful. Also, we were able to create an algorithm that takes any Wikipedia article, summarises it and makes a video out of it within minutes (5 minutes max). What we learned ASK SDK and using the Alexa Developer Console Using Alexa Conversations to develop the future of voice Using context carry over along with conditional responses The Alexa Presentation Language and various components like the Pager, Sequencer, TouchWrapper and Containers The "when" clause while rendering APL & APLA documents to work well on all kinds of Alexa devices Using NLP to generate questions from Wikipedia text Text summarisation using transformers pipelines Working with GraphicsMagick to create and resize images Working with ffmpeg to combine images to form a video Setting up a REST API using Flask and Python MVC pattern in Node.js to write clean code Premiere Pro for editing videos What's next for Heroes Quiz The quiz currently supports real-life heroes only; we plan to add more heroes and include reel-life heroes as well. We also plan to improve the quality of the AI-generated videos and add animations to make the skill more interactive.
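As a toy illustration of the summarisation step (the skill itself used transformers pipelines; this pure-JavaScript, frequency-based stand-in is only a sketch of the idea, not the team's implementation):

```javascript
// Hypothetical stand-in for the transformers summarisation pipeline:
// score each sentence by the average corpus frequency of its words,
// then keep the top-scoring sentences in their original order.
function summarize(text, maxSentences = 2) {
  const sentences = text.match(/[^.!?]+[.!?]/g) || [text];
  const freq = {};
  for (const w of text.toLowerCase().match(/[a-z]+/g) || []) {
    freq[w] = (freq[w] || 0) + 1;
  }
  return sentences
    .map((s, i) => {
      const words = s.toLowerCase().match(/[a-z]+/g) || [];
      const score =
        words.reduce((sum, w) => sum + freq[w], 0) / (words.length || 1);
      return { s: s.trim(), i, score };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, maxSentences)
    .sort((a, b) => a.i - b.i) // restore original sentence order
    .map(x => x.s)
    .join(" ");
}

const bio =
  "Michael Phelps is an American swimmer. Michael Phelps won many Olympic gold medals. He also enjoys golf.";
const summary = summarize(bio, 1);
```

A real pipeline would be abstractive rather than extractive, but the shape of the step is the same: long Wikipedia text in, short narration script out, ready to be turned into slides and stitched into a video with ffmpeg.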
Built With alexa alexa-conversations alexa-skills-kit amazon-alexa amazon-web-services apl comprehend digitalocean express.js ffmpeg flask javascript natural-language-processing node.js photoshop polly premier-pro python react redux s3 tensorflow typescript wikipedia
Heroes Quiz
An A.I. powered Alexa quiz game based on real life American Heroes
['Sarthak Arora', 'Akshit Suri', 'Varun Ramnani']
[]
['alexa', 'alexa-conversations', 'alexa-skills-kit', 'amazon-alexa', 'amazon-web-services', 'apl', 'comprehend', 'digitalocean', 'express.js', 'ffmpeg', 'flask', 'javascript', 'natural-language-processing', 'node.js', 'photoshop', 'polly', 'premier-pro', 'python', 'react', 'redux', 's3', 'tensorflow', 'typescript', 'wikipedia']
66
10,443
https://devpost.com/software/hey-buddy-your-not-alone
We can overcome this together Inspiration- In this fast-moving world, people all around just work tirelessly, never thinking of their mental health. We wanted to bring about a change in that trend. What it does- Finds specific speech patterns related to how a person is feeling when talking about a particular topic and relays them to the program, which allocates the appropriate feeling. How I built it- Using Amazon's Alexa Skills platform, created intents and responses after thorough research regarding the idea. Challenges I ran into- We had never in the past undertaken a project of this magnitude. At first, it seemed a small but brilliant idea, but slowly, with research, we understood how much more it was. And after compiling a lot of data and some more, we were finally able to build the platform. Accomplishments that I'm proud of- We built a skill that can successfully help people with their stress in these difficult times. What I learned- More about UI, UX and Amazon Alexa Skills What's next for Hey Buddy, Your Not Alone- We plan to further make this better by including a skill that could suggest the types of clothes and food people should buy that would complement their personality and also make them feel happy and loved. Built With amazon-alexa-skills ui ux Try it out drive.google.com
Hey Buddy, Your Not Alone
Times are changing, and so are the needs. Let's escape the stigma related to mental illness by using 21st-century algorithms.
['EKTA LAL 18BEC1141', 'Archit Dehloo', 'Vaibhav Saxena', 'Pearl Motwani', 'Viknesh Rajan']
[]
['amazon-alexa-skills', 'ui', 'ux']
67
10,443
https://devpost.com/software/moduulo-invoice-generator
API Success example API Failure example Inspiration We work a lot with companies invoicing time (consulting companies, lawyers, development teams, outplacement). For them, we developed SaaS software helping to automate invoice production (accounts receivable). Invoicing time means preparing timesheets using time tracker software, project management software, CRM tools, and classical accounting tools to produce the invoices. An example: While consultants work, they frequently must open their time tracker software, start and stop timers, register tasks into projects, and produce reports. Discussing with our customers and leads, we discovered that writing an invoice with the usual software tools takes an average of 10 minutes. We, and our leads and customers, consider this to be too long. The goal we set is 20 seconds for invoice production and sending it to the Customer. But how can we achieve this? Our idea was to use a voice-based tool to register everything while you work, in a simple but effective way: Amazon Alexa with an Amazon Echo device. What it does Alexa Skill: moduulo Time Tracker. The first part is a voice-based time tracker skill: moduulo Time Tracker (BETA). This skill registers all your tasks into projects, which you can manage in our SaaS software. When starting a new task, the skill checks the project's existence or creates a new one. Website: https://agency.moduulo.net . As a second part, we created a website where users can have an overview of running tasks, projects, invoices, and customers. TEMPORARY (This will be implemented later in the moduulo Time Tracker skill): After using the moduulo Time Tracker skill, the user has to specify details of the project’s customer and its company. Alexa Skill: moduulo Invoice Generator. As a final part and hackathon submission, we added moduulo Invoice Generator. Using this voice skill, you automatically generate your invoices in only 15-20 seconds. This new skill is based on Alexa Conversations (Beta).
It reads Company data, Customer data, Project data, Prices, and much more from AWS DynamoDB using Lambda functions and generates the invoice. Upon request (during the conversation), the skill can send the invoice directly to the Customer via e-mail. moduulo Invoice Generator is our contribution to #AmazonAlexaConversionsChallenge How we built it The backend tech stack is Alexa voice skill and Alexa Conversations (Beta), AWS Lambda, AWS DynamoDB, Claudia.js for API deployment, Node.js for API and skill development, and AWS Cognito. The frontend tech stack is React, with AWS Amplify for Auth and AWS API. Challenges we ran into From a CEO point of view: the only challenge we ran into is the current challenge #AmazonAlexaConversionsChallenge :-) From a developer point of view: We had to learn to build the conversation skill and connect it to various AWS Lambda functions to update our DynamoDB database. We also had to add the main screens of our AWS-hosted SaaS frontend for the handling of Customer data and Company data. Accomplishments that we are proud of We are proud to see the first MVP working. And we have the proof that it is possible to generate invoices in under 20 seconds. What we learned Voice-based tools are not only a Gartner-confirmed trend; they help to make business processes easy. Tedious administrative tasks turn into fun work, just because an intelligent helper, Alexa, is doing the repetitive work for you. We also gained first-time experience creating Alexa skills. What's next for moduulo Invoice Generator In the next couple of days, we continue developing our SaaS software, adding new views for managing the business processes via browser and mobile. Also, some bug fixes, adding a personalized welcome message, making hardcoded entries dynamic after database updates, and so on. We already signed with four trial customers who committed to paid trials.
Our primary focus is learning with these customers and enhancing our voice skills. Our Vision: We will implement more voice-based skills to allow our customers to manage their businesses without any software. Built With alexa amazon amazon-alexa amazon-cloudwatch amazon-cognito amazon-dynamodb lambda node.js react sendgrid Try it out agency.moduulo.net alexa.amazon.com skills-store.amazon.com
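To illustrate the core of the invoice-generation step (field names, data shapes, and the hourly rate here are assumptions for the sketch, not moduulo's actual schema), turning tracked task minutes into invoice line items might look like:

```javascript
// Illustrative sketch: aggregate tracked task minutes per project and
// turn them into invoice line items (hours and rounded amounts).
function buildInvoiceLines(tasks, hourlyRate) {
  const byProject = {};
  for (const t of tasks) {
    byProject[t.project] = (byProject[t.project] || 0) + t.minutes;
  }
  return Object.entries(byProject).map(([project, minutes]) => ({
    project,
    hours: minutes / 60,
    amount: Math.round((minutes / 60) * hourlyRate * 100) / 100,
  }));
}

const lines = buildInvoiceLines(
  [
    { project: "Website relaunch", minutes: 90 },
    { project: "Website relaunch", minutes: 30 },
    { project: "Consulting call", minutes: 60 },
  ],
  120
);
// lines[0] → { project: "Website relaunch", hours: 2, amount: 240 }
```

In the real skill, the task and price data would come from DynamoDB via a Lambda function rather than in-memory arrays, and the result would feed the invoice template and the e-mail step.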
moduulo Invoice Generator
Turn the tasks you tracked with the Alexa skill 'moduulo Time Tracker' into invoices and send them out without a hassle!
['Oliver Gasser', 'Karl Matti']
[]
['alexa', 'amazon', 'amazon-alexa', 'amazon-cloudwatch', 'amazon-cognito', 'amazon-dynamodb', 'lambda', 'node.js', 'react', 'sendgrid']
68
10,443
https://devpost.com/software/the-silicon-valley-quiz
Inspiration Thought of the idea while watching reruns of the show. What it does This skill tests your knowledge of the show by asking questions that get progressively harder with every question. It asks questions related to all 6 seasons. How I built it We used Node and built it on the Alexa-hosted platform. Challenges I ran into There aren't a lot of tutorials on Alexa, so we had to work our way through the documentation, which was time-consuming. Accomplishments that I'm proud of We got a working skill out. What I learned We learnt a lot about Alexa skills and how they are created - we had never worked with the technology before. What's next for The Silicon Valley Quiz We plan on expanding into other quizzes and other genres of content.
The Silicon Valley Quiz
Quiz game based on the popular TV show Silicon Valley
[]
[]
[]
69
10,443
https://devpost.com/software/pizza-bot-ahgruv
Order pizza online. Inspiration- Ordering a pizza online. What it does- It asks what size of pizza you want, what toppings you want, what flavor of smoothie you want, and when you would like to pick up your order. How I built it- I built it by myself, using the Amazon Alexa developer console. Challenges I ran into- There were many challenges while developing the pizza bot: I encountered problems with intents and slots, and also with the bot's endpoints. Accomplishments that I'm proud of- It works well without any problems. What I learned- It was my first time using the Alexa developer console to develop skills. I had previously done projects on Dialogflow, which I found easier compared to the Alexa developer console, but after using it I have learned something new. What's next for pizza bot- I applied for certification and it is under review; if it passes the tests, I am going to deploy it all over the world. Built With alexa
pizza bot
order pizza game
['Abhishek Bhardwaj']
[]
['alexa']
70
10,443
https://devpost.com/software/plan-your-travel
Inspiration Today, if we want to travel to any location, we need to go through a tedious process of comparing the different available flight options on different websites, then finalizing and booking them. All of this needs to be operated manually and requires constant user attention. What it does The skill 'Trip Mentor' suggests the best available flights between two cities on a particular date. This helps the user with their trip planning. It also shows the list of planned trips for the date input given by the user. The skill makes use of Alexa Conversations for a more natural dialog with the user and a better user experience. How we built it The initial step of our skill development was to get accustomed to development using the Alexa developer console. Then we started building an intent-based skill to get the list of available flights between two cities according to the user’s inputs. The skill compares the flight details and provides a list of affordable flights. The user can select their preferred option from the list. The second step of the process was to integrate Alexa Conversations (beta) into our existing intent-based ‘Trip Mentor’ skill. To start integrating Alexa Conversations into the skill, a blueprint was made with all needed dialogs, utterance sets, slots, API definitions, and responses, which helped to build an AI base to train the model to maintain better user conversations. The skill makes use of: Slot Types: AMAZON.CITY AMAZON.DATE AMAZON.NUMBER Two PCS slots: one for getting the inputs from the user to fetch flight details, and the other for saving the selected flight details. API Definitions: GetFlightDetails: Fetches all the flight details according to the user input using an external API call and returns a list of available flights as a string to the VUI, which is done with Alexa Conversations. SaveMyTrip: This saves the flight details according to the option selected by the user and provides the user with a link to book the flight.
The skill makes use of Alexa's persistence adapter for saving the details. The dialogs for training the model cover the basic conversation paths for the user, including conversations with context carry-over. Challenges we ran into As newbies to Alexa development, getting started was itself a big challenge, but reference material on skill development made things faster for us. We also faced challenges: To find suitable external APIs for getting the correct flight information with the required time details. To integrate Alexa Conversations (beta) into our existing skills. Accomplishments that we're proud of As beginners in Alexa skill development, we successfully developed and certified our first Alexa skill. We were able to identify and integrate the right APIs with the skill Use the latest feature of Alexa Conversations (beta) for a better user experience with our skill. What we learned Fetching from external APIs (async/await) Session attributes Integrating Alexa Conversations into an existing skill. Keeping track of a conversation with context carry-over What's next for Plan your travel Next steps: Add a reminder for the user prior to their flight. Integrate payment APIs so that the user can make a hassle-free payment as soon as they get the best-suited option shared by the skill. Once the payment option is integrated, the skill will be extended to search and book hotels and cabs. Add multiple language support for the skill Built With alexa travelpayout
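The flight-comparison step can be sketched as follows (the field names are illustrative assumptions, not the travelpayouts API schema): sort the fetched flights by price and keep the top options to read back to the user.

```javascript
// Illustrative sketch: pick the cheapest flights from a fetched list.
// In the skill this would run inside the GetFlightDetails API handler.
function cheapestFlights(flights, limit = 3) {
  return [...flights] // copy so the original fetch result is untouched
    .sort((a, b) => a.price - b.price)
    .slice(0, limit);
}

const options = cheapestFlights([
  { airline: "AirA", departure: "09:10", price: 120 },
  { airline: "AirB", departure: "13:45", price: 95 },
  { airline: "AirC", departure: "18:30", price: 150 },
  { airline: "AirD", departure: "06:00", price: 110 },
]);
// options[0].airline → "AirB"
```

The resulting short list is what Alexa Conversations would then read out, with the user's chosen option saved via the second PCS slot and the persistence adapter.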
Trip Mentor
Travel using Alexa
['Dhiraj Chordiya', 'Alwina Oommen', 'Sai Krishnan S', 'Priyanka Ganjude', 'PRAVEEN KUMAR']
[]
['alexa', 'travelpayout']
71
10,443
https://devpost.com/software/league-alexa
League Alexa Picture Inspiration Recently, due to the coronavirus outbreak, people began to stay in their homes to be safe. It is natural for them to find something fun to do during these boring days. So my partner and I started playing this online game called "League of Legends" and discovered how many other people were into this game too. However, before we play the game, when choosing our position, we hesitated a lot since it was hard to choose which character to play. So we thought: what if Alexa told us which character to play, so that we could choose quickly and play the game efficiently? This might seem a little off topic, since the topic we chose might not seem related to natural conversations. However, we thought that features such as our team's feature would be helpful for enhancing the various qualities that Alexa needs to become an AI bot that can not only discuss basic topics with human beings, but also specific ones that create empathy between the user and the bot. What it does So, as stated above, Alexa chooses a character depending on what position we are planning to play. If the user states the position he/she wants to play, Alexa randomly chooses a champion (character) to play. Not only that, but Alexa also recommends a "rune" for the user after choosing the champion. A rune is a special feature a character has in League of Legends that improves their abilities in attack, defense, agility, etc. We have to choose our runes before we start the game, but many people have a hard time finding rune information online, despite the fact that they only have about 40 seconds to choose. So, Alexa would save the gamers' day. How I built it My teammate and I used Python 3 to program the bot. Specifically, we used AWS (Amazon Web Services) to create a Lambda function for operating Alexa.
Then we just connected that to the Alexa developer console and tested it each time we added a new feature. Challenges I ran into It was my first time utilizing my Python knowledge to work on such a big project. I thought that I needed a lot of this kind of experience to be actually good in this field, so I signed up for the project to gain more knowledge of this programming language. At first I literally had no idea what was going on, except for the fact that I needed to build a bot that would enable natural conversations with users. With my basic knowledge of Python, I thought that there wasn't enough information, so I tried asking people who actually had this experience and watched YouTube videos about programming conversation bots using Python. Finally, I gained some knowledge in the field and used my own creativity to create the feature I wanted. Accomplishments that I'm proud of I got to learn more about the Python language itself, and I am proud that I have actually made at least one feature for Alexa. What I learned Programming cannot be learned only from a textbook; experience plays a big part. What's next for League Alexa Features such as: Alexa telling us our ideal woman/man type, maybe? That's what I want to add next if I get the opportunity to add features to Alexa. Built With amazon-alexa amazon-web-services python
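The selection logic behind the skill can be sketched like this (the team's actual backend was Python on AWS Lambda; this JavaScript sketch is illustrative only, and the champion-to-rune mapping is an assumption, not game advice):

```javascript
// Illustrative pick tables: a pool of champion/rune pairs per position.
// Champion and rune names are real LoL examples; the pairings are assumed.
const picks = {
  top:     [{ champion: "Garen",    rune: "Conqueror" },
            { champion: "Malphite", rune: "Grasp of the Undying" }],
  jungle:  [{ champion: "Lee Sin",  rune: "Electrocute" }],
  mid:     [{ champion: "Ahri",     rune: "Electrocute" }],
  adc:     [{ champion: "Jinx",     rune: "Lethal Tempo" }],
  support: [{ champion: "Thresh",   rune: "Aftershock" }],
};

// Given the position the user spoke, return a random champion with a rune,
// or null if the position is not recognized.
function recommend(position) {
  const pool = picks[position.toLowerCase()];
  if (!pool) return null;
  return pool[Math.floor(Math.random() * pool.length)];
}
```

The Lambda handler would call something like `recommend` with the slot value for the position, then have Alexa speak both the champion and the rune, so the user can lock in within the 40-second selection window.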
League Alexa
We realized how many people were interested in the game called League of Legends. We decided to develop an Alexa feature that would help users play the game more conveniently and quickly.
['Junseo Kwak', 'Yungi Jeong']
[]
['amazon-alexa', 'amazon-web-services', 'python']
72
10,443
https://devpost.com/software/aiml-alexa
Inspiration AIML connected to an Alexa custom skill and a video player API build What it does How we built it AIML framework and Node.js Challenges we ran into Accomplishments that we're proud of What we learned node.js, c#, api skill What's next for AIML alexa With new aimlHigh(botAttributes) one can create a new interpreter object. botAttributes is a JSON object that can contain attributes of the bot one wants to use in AIML files, e.g. {name: "Bot", age:"42"}. While continued messaging will store the previous answer, you can also pass a previous answer like so: new aimlHigh({}, 'last answer'). This object has a function called loadFiles(fileArray) which receives an array of AIML files and loads them into memory. There is also a loadFromString(stringContent) that can be used if an AIML file has been saved into a string. Furthermore, the object has a function called findAnswer(clientInput, cb) which receives a message and a callback. The callback is called when an answer was found. The callback of findAnswer should look like this: callback(result, wildCardArray, input). Result is the answer from the AIML file and wildCardArray stores the values of all wildcardInputs passed previously from the client. The original input which triggered the answer is given back via input. Example
aimlHigh = require('./aiml-high');
var interpreter = new aimlHigh({name:'Bot', age:'42'}, 'Goodbye');
interpreter.loadFiles(['./test.aiml.xml']);
var callback = function(answer, wildCardArray, input){
    console.log(answer + ' | ' + wildCardArray + ' | ' + input);
};
interpreter.findAnswer('What is your name?', callback);
interpreter.findAnswer('My name is Ben.', callback);
interpreter.findAnswer('What is my name?', callback);
Built With aiml aiml-framework alexa-custom-skill Try it out github.com github.com
AIML alexa
AIML
['Hemakumar M', 'Nirmal Kumar', 'Abubakkar72 Abubakkar']
[]
['aiml', 'aiml-framework', 'alexa-custom-skill']
73
10,443
https://devpost.com/software/talking-baby
Inspiration A pleasant conversation with babies What it does How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for talking baby Built With javascript
talking baby
Talk to the lovely baby.It's okay to ask questions.Sometimes when a baby feels good, he hums and laughs.
['인지하다 Injihada']
[]
['javascript']
74
10,443
https://devpost.com/software/three-kingdoms
Inspiration The Three Kingdoms era of China had always inspired me, and I really wanted to let other people know more about this period and be able to take away valuable lessons depicted in the Romance of the Three Kingdoms novel. What it does This skill informs users about warriors of the Three Kingdoms era and history tidbits about this era. How I built it I used node.js as the backend, hosted on AWS Lambda. Challenges I ran into Alexa Conversations had a lot of quirks. Reading the documentation and tinkering/testing different strategies to make some of my use cases work was the most challenging part. Another challenge was training Alexa to understand certain words such as shu (shoe, shoo) and wei (weigh, whey). Accomplishments that I'm proud of I'm glad I got the context carry-over functionality to work. For example, when the skill asks if you want to hear about another warrior, saying, "how about {faction}" will trigger more information about a warrior from the specified faction, being shu, wei, or wu. What I learned I learned that delegating the dialog back and forth between Alexa Conversations and the regular Alexa Skill interaction model was possible, and it gives me a lot more control over how to handle different parts of the skill. What's next for Three Kingdoms I'd like to add some APL features to actually have visuals about some of the warriors. Built With amazon-web-services lambda node.js
Three Kingdoms
This skill provides useful tidbits and information about the Romance of the Three Kingdoms era in China.
['John Chung']
[]
['amazon-web-services', 'lambda', 'node.js']
75
10,443
https://devpost.com/software/wiki-walk
Inspiration The skills created by university teams for the Alexa Prize challenge work in conjunction with knowledge-graph databases in order to sustain a conversation over different topic domains. I wanted to explore the use of a knowledge graph in conjunction with Alexa Conversations, and wanted to see if I could support conversations outside of any one subject domain using general data relationships. What it does In Wiki Walk, the user asks for information about something - anything in the Wikidata database. Wiki Walk offers the user a description that matches the query, and asks for confirmation that that is the topic of interest. If the user wants more options, Wiki Walk asks for something to narrow down the search. Once a topic and its description have been accepted, Wiki Walk traverses the Wikidata database to connect the original something with its hierarchy of inclusion: e.g. "Earth, third planet from the Sun in the Solar System, is an instance of inner planet, a subclass of planet of the Solar System, a subclass of planet, a subclass of planemo, a subclass of substellar object." How I built it Wiki Walk works entirely with its Alexa Conversations model, using four dialogs and three APIs, without depending on an intent-based interaction model. The API handlers are included in an Alexa-hosted Lambda function. On asking about a topic, the GetAFact API handler uses the Wikidata API for a search on the topic, returning 50 possible Wikidata entities, offering the user a description of the first entity. If the user rejects the first description, and offers up a desired description, the MatchDescription API handler scores all the remaining descriptions for a best match, and continues offering until the user accepts one.
Once an acceptable description is selected, the TakeAWalk API handler makes successive calls to the WikiData API to search for properties that indicate a hierarchy of inclusion and fetches the next parent entity in the hierarchy (up to six levels). The results are read by Alexa using APL-A, and displayed (with an image of the original entity), using APL. Challenges I ran into My first challenge was to choose a concept for my skill, and scope it to the time of the hackathon. There was a new-tech learning curve built-in, and both the platform and its documentation were in beta, so these needed to be scoped realistically as well. The primary technical challenges were: deciphering the error messages relative to the state of the model and the build, the inability to save a snapshot of a working model before changing or extending it, and the longer Alexa Conversations build times that affected my development cadence. Because I was learning at the same time the platform itself was being shaken out, I was challenged in distinguishing between conceptual error, programming error, and platform or documentation error. Accomplishments that I'm proud of In addition to completing the skill within its defined scope, I contributed to the community during the development period, helping other developers in Twitch streams and on Slack, and providing feedback in terms of experience, questions, and documentation suggestions to the Alexa Conversations team. I was not alone in this, however; we were all lifted by the engaged community of developers and the commitment of time and resources by the Amazon team. What I learned I learned that there are general semantic relationships embedded in a knowledge graph that can support conversations around a variety of topics, independent of domain knowledge within the skill. I learned to scope an exploration to fit within the time constraints of a learning curve and a hackathon deadline. 
I learned to better differentiate between gaps in my conceptual understanding, coding errors, and platform and documentation maturity issues. What's next for Wiki Walk Find other domain independent property relationship patterns to offer in a conversational context without domain-specific hacks and heuristics Use Alexa Entities, when integration with Alexa Conversations is supported Use real sentence embedding vs simplistic scoring of potential descriptions in the MatchDescription API Maintain context across sessions (get to know the user’s interests for likely disambiguation) Integrate more imagery (e.g. sync the spoken output with an image slide show rather than a single image) Understand questions relative to entity type (e.g. “Who” vs “What” will weight people entities) Create more dialogs for "unhappy paths" Built With alexa-conversations ask node.js wiki-data Try it out www.amazon.com
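The MatchDescription step described above lends itself to a small sketch. The skill's actual code isn't shown here, and the author notes it currently uses "simplistic scoring" rather than sentence embeddings, so this is only a hypothetical illustration of that idea: rank candidate Wikidata descriptions by word overlap with the user's hint (the function name and data are made up).

```javascript
// Hypothetical sketch of a MatchDescription-style scorer: rank candidate
// Wikidata descriptions by how many words they share with the user's hint.
function scoreDescriptions(userHint, descriptions) {
  const hintWords = new Set(userHint.toLowerCase().split(/\W+/).filter(Boolean));
  return descriptions
    .map((desc, index) => {
      const words = desc.toLowerCase().split(/\W+/).filter(Boolean);
      const overlap = words.filter((w) => hintWords.has(w)).length;
      return { desc, index, score: overlap };
    })
    .sort((a, b) => b.score - a.score); // best match offered first
}
```

The skill would keep offering descriptions in this ranked order until the user accepts one.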
Wiki Walk
Combine Alexa Conversations with a knowledge-graph database (Wikidata) to explore the relations among items based on user input.
['Steve Nelson']
[]
['alexa-conversations', 'ask', 'node.js', 'wiki-data']
76
10,443
https://devpost.com/software/debate-pro
Inspiration I was inspired to write this skill as a challenge to use the brand-new Alexa Conversations. What it does Debate Pro helps you learn debate arguments for popular debate topics. How I built it With the Alexa Console and my own ASP.NET Core API site. Challenges I ran into Conversations requires tracking many arguments; programming for this was a challenge. Accomplishments that I'm proud of The skill will recognize many different topics and give you arguments for pros and cons. What I learned Alexa Conversations is very powerful. I expect to continue using it. What's next for Debate Pro I will keep adding topics and arguments. Built With aspcore c#
Debate Pro
Debate Pro is an Alexa Skill that teaches you arguments for Debate Topics. Pro or Con, In Favor or Against, Debate Pro will help you learn about arguments for most popular debate topics.
['Russell Kirkwood']
[]
['aspcore', 'c#']
77
10,443
https://devpost.com/software/binge-buddy
Inspiration I was inspired by my own search for a tool that I could use to find out all the important details about a movie or a TV show, so that I could decide whether to watch it or not. There was no single place to find all such details together, especially in the voice assistant world. What it does This skill allows users to find details for their favorite movie or TV show. In future releases, it will also help users decide whether to watch it or not based on their interests. How I built it I used the Alexa kit and SDK to build this skill. I used Alexa Conversations models for interaction and some other advanced Alexa tools such as multi-value slot types. Success of this skill depends on how well the Alexa Conversations models predict and understand the customer's input. In terms of technology, I host a Lambda function in my AWS account that is responsible for executing the code that finds all the required details. It also takes care of consolidating the different details into a speakable response, since the response will vary based on what details the customer asked for. Challenges I ran into I ran into multiple challenges throughout; listing some of them here: My skill needs to use a multi-value slot with a custom slot type. I wasn't aware of how to support this use case, and did a lot of research and learning before I found out about multi-value slots. Supporting a natural conversation and making sure I collect all the required information from the customer is really hard with an intent-based approach, but Alexa Conversations took care of this problem. I was developing in Java and realized there were not many Alexa Conversations examples available in Java; at times I found it difficult to figure out how to do certain things and had to map examples from other languages to Java.
Alexa Conversations itself is pretty new, so it was very difficult to deal with any issue related to it because there is not much help available on the internet. Accomplishments that I'm proud of Successfully built an Alexa Conversations model that works as expected and covers almost all of the expected input paths. Quickly learned and built in the Alexa space; I was new to this space and was building for the first time. I am proud that I was able to develop the skill exactly how I envisioned it, and to pull this all off by myself. I was able to integrate different technologies together, such as the Alexa framework, AWS, etc. What I learned Here are some of my learnings: Do not give up and keep trying, as you will find a way if you keep at it. Alexa Conversations is the future and has the potential to disrupt the market the same way the smartphone once did. Apart from that, here are some of my technical learnings: Learned about the Alexa development environment. Learned some key new features of Alexa such as Alexa Conversations, multi-value slots, multi-turn conversations, and intent-based skills; learned about different ways of hosting backend systems for Alexa and the built-in hosting support. Learned in more detail about AWS Lambda and some other AWS technologies such as DynamoDB, S3, etc. Learned how to ask users' permission to collect personal details and how to collect them at run time. Learned about end-to-end skill development in Alexa. Learned about developing software in a more generic way (without the support of your employer's development environment). What's next for binge buddy Next is to support generic search, i.e. not just providing the result for the top search but searching all potential results and returning the one that looks closest to the customer's query. Make the experience more personalized by learning from the customer's interactions and applying ML solutions behind the scenes to personalize the results.
Make the Alexa Conversations model more robust by retraining it with new data from real customer interactions. Expand to as many marketplaces as possible in the next few months after the US launch. Add card support to my skill as soon as it is available for Alexa Conversations. Launch a next version that not only provides details but also provides recommendations to the customer. Built With amazon-web-services java lambda sagemaker unofficial-imdb Try it out tinyurl.com
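The consolidation step described above — building one speakable response from only the detail types the customer asked for via the multi-value slot — can be sketched as follows. The skill's actual backend is Java and its code isn't shown, so this Node.js sketch, with made-up data and a hypothetical function name, only illustrates the idea.

```javascript
// Hypothetical sketch of the consolidation step: build one speakable
// sentence from only the detail types the customer asked for
// (multi-value slot values such as "cast" and "rating").
const MOVIE = { // stand-in for data fetched from a movie details API
  title: 'Inception',
  cast: 'Leonardo DiCaprio and Elliot Page',
  rating: '8.8 out of 10',
  genre: 'science fiction',
};

function buildSpeakableResponse(movie, requestedDetails) {
  const parts = requestedDetails
    .filter((d) => d in movie)
    .map((d) => `the ${d} is ${movie[d]}`);
  if (parts.length === 0) {
    return `I could not find those details for ${movie.title}.`;
  }
  return `For ${movie.title}, ${parts.join(', and ')}.`;
}
```

Because the requested details drive the sentence, the same handler covers "what's the rating" as well as "tell me the cast and genre".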
binge buddy
This skill will allow users to find out details(such as cast, plot, genre, ratings etc.) about a movie or a tv show and help them decide their next binge watch.
['shubham saini']
[]
['amazon-web-services', 'java', 'lambda', 'sagemaker', 'unofficial-imdb']
78
10,443
https://devpost.com/software/movie-suggestions-3z84qj
Inspiration I have always wanted an easy way to find a great movie to watch, instead of searching all over the internet for new movies without a clue whether, in the end, it's a good movie or not. What it does That's why I built the Movie Suggestions skill. With Movie Suggestions you can always find a good movie to watch. You can get movie recommendations based on: the release years you want, the overall rating score of the movie, and the movie category you like, including over 20 categories - backed by a wide database of around 400 movies from the 90s and up. You can also get a brief summary about the movie suggested by Alexa, including the plot, director, and runtime of the movie. How I built it Using the new Alexa Conversations feature; the skill is very flexible and makes asking for recommendations easy. Challenges I ran into Alexa Conversations was new to me, so it took some time to get used to it and to fix the bugs and errors I found as I went along and added more features to the skill. Accomplishments that I'm proud of I was able to complete the skill with all the features I wanted, and it worked as expected. Sometimes I thought it wasn't going to work and wanted to stop, but I am proud I didn't quit. What I learned I have learned a new way of programming an Alexa skill with the new Alexa Conversations approach, which makes natural Alexa interactions so much easier and more realistic. What's next for Movie Suggestions I am planning to expand the database by 1,500 more movies and keep it up to date, and also to add new features for different Alexa-supported devices, such as devices with screens. Built With amazon-alexa
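The recommendation lookup described above — filtering a movie database by release years, rating score, and category — can be sketched like this. The skill's real code and 400-movie database aren't shown, so the data and function name here are made up for illustration.

```javascript
// Hypothetical sketch of the recommendation lookup: filter a small movie
// database by release year range, minimum rating, and category.
const MOVIES = [ // stand-in for the skill's ~400-movie database
  { title: 'The Matrix', year: 1999, rating: 8.7, category: 'sci-fi' },
  { title: 'Se7en', year: 1995, rating: 8.6, category: 'thriller' },
  { title: 'Gladiator', year: 2000, rating: 8.5, category: 'action' },
];

function suggestMovie({ fromYear, toYear, minRating, category }) {
  const matches = MOVIES.filter((m) =>
    m.year >= fromYear && m.year <= toYear &&
    m.rating >= minRating &&
    m.category === category
  );
  return matches.length ? matches[0] : null; // null when nothing fits
}
```

A real skill would likely pick a random match rather than the first, so repeat requests get variety.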
Movie Suggestions
Have you ever searched for hours to find a good movie to watch? Movie Suggestions is the skill you are looking for.
['Fady Tarek']
[]
['amazon-alexa']
79
10,443
https://devpost.com/software/say-and-bake
Inspiration When I saw the example Alexa Conversations skill on Devpost, I was really inspired by it. The pizza ordering seemed like talking to a real person and asking for a pizza. At that time, I decided that I'd also make something where the user gets the experience of talking with a real person. What it does When we go to a bakery, there are a few things we commonly do, like looking at what's available, looking at the prices, shortlisting what we want, then further deciding which type we want of the item we shortlisted. We take things one by one, and we may also decide to remove something from the bucket we took. In the end we check out. At "Say It Bakery", you get all that experience without going to a bakery. What's more? You have the freedom of asking whatever you want multiple times, without the hesitation of asking the same question again and again. Currently, in the skill, cash on delivery is the only way to pay for the order. How I built it I built it mostly using Alexa Conversations (plus the default dialog manager). It's an Alexa-hosted skill with a Node.js back-end using S3 persistent storage as a database. Different APIs are called for operations like adding a specific item to the cart, removing items from the cart, listening to the menu, etc. Alexa gathers the required attributes (if needed) and calls the appropriate API, which then returns a relevant response that Alexa speaks out. Challenges I ran into There were many. Understanding how an Alexa skill works (as I didn't have Alexa skill development experience before) was the first challenge. It's worth mentioning that the tutorials, and especially the pizza reference skill, helped me a lot to understand how skills work. I was also confused about what dialog managers are and what we mean by delegating to another dialog manager, but all that got cleared up with time. One more challenge was deciding how to keep data even after the session.
I then found S3 persistent storage and used it as a DB, where I had to manually write code to search through the document and find/add/update/remove the relevant entity. Finding a way to make sure that only valid slot values are passed was also a challenge; by making different slots, APIs, etc., I was able to achieve that. There were many more :) Accomplishments that I'm proud of I am proud of making what I initially thought of. Though it might not have all the features I thought of initially (partly because of Alexa Conversations limitations), I have pretty much achieved the main ones, which add fun plus productivity. Ordering just by saying will not only save people time but also make the experience enjoyable. What I learned Learning was one of the reasons I participated in this hackathon. Having no experience with Alexa skill development, I thought it was a good time to jump in. I learned about the differences between intent-based and conversation-based skills, learned about the S3 persistence technique to use it as a database, and also learned some Node.js. Overall, there were many things to learn. What's next for Say and Bake More features can be added, for example giving discounts to customers if the order total exceeds a specific value, asking users for feedback, adding more items, recommending things to the user, providing details about different items, etc.
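The cart operations described above — adding, removing, and totaling items kept in persistent storage — can be sketched as plain functions over a persistent-attributes object. The skill's real code isn't shown; here a plain object stands in for the attributes the skill would load from and save to S3 persistence, and the function names are made up.

```javascript
// Hypothetical sketch of the cart logic layered on persistent attributes
// (the skill stores these via S3 persistence; a plain object stands in here).
function addToCart(attributes, item, price) {
  const cart = attributes.cart || [];
  cart.push({ item, price });
  return { ...attributes, cart };
}

function removeFromCart(attributes, item) {
  const cart = (attributes.cart || []).filter((entry) => entry.item !== item);
  return { ...attributes, cart };
}

function cartTotal(attributes) {
  return (attributes.cart || []).reduce((sum, entry) => sum + entry.price, 0);
}
```

Each API handler would load the attributes, apply one of these operations, save the result back, and let Alexa speak a confirmation.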
Say It Bakery
Craving for some delicious and tasty items? "Say" whatever you want. Yes, that's right! At "Say It Bakery" you can ask for our menu, place order and much more, all by just "SAYING"!
['Muhammad Ahsan Faheem']
[]
[]
80
10,443
https://devpost.com/software/create-your-own-virtual-phone
Inspiration The inspiration for the skill came from a tweet by MKBHD, a famous tech YouTuber, about building a budget smartphone for a limited amount of money. What it does The skill asks the user to specify the quality of each of the 6 features that contribute to a phone's cost: "display", "battery", "specs", "software", "camera" and "marketing". The quality chosen can be "low", "medium" or "high", and is directly proportional to the cost it adds, so people have to be careful not to choose high for every feature, else the cost will be too high and unaffordable to the public. How I built it I researched online about the various factors that go into developing a phone and settled on these 6 parameters as the broadest ones. Then I built a response generator through which I generated all of the 729 possible outcomes. All these outcomes are stored in a JSON file under a generated response key, which is reconstructed from the customer's data. Challenges I ran into Generating 729 possible outcomes was too difficult a task to perform manually, so I wrote code to generate them based on the quality of each of the items. The coding part took time, but it was certainly more convenient than writing all the possibilities manually. Accomplishments that I'm proud of I am proud to have understood and built an Alexa Conversations skill in a short duration of time. Before this, I had built another skill, but during submission I realised that it might not adhere to the publishing rules. So I had to create this skill within 5 days, and I did it. What I learned I learnt a lot about the functioning of Alexa Conversations and how to train it to become smarter and more intelligent at getting the required data from the user. Further, the "Pet Match" tutorial helped a lot in the process too.
What's next for Phone Designer For the future, I would like to fine-tune the cost and the public's verdict of the phone based on research, as it is currently based on basic values and the cash required for development of the phone. I would also add modulation to the voice so that it feels more natural in some places. Built With alexa node.js
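The 729 outcomes mentioned above follow from the combinatorics: 3 quality levels across 6 features gives 3^6 = 729 combinations. The author's generator code isn't shown, so this is only a sketch of how such a generator might look (names are illustrative).

```javascript
// Sketch of generating all 729 (3^6) outcomes rather than writing them
// by hand: one entry per combination of quality levels across the 6 features.
const FEATURES = ['display', 'battery', 'specs', 'software', 'camera', 'marketing'];
const QUALITIES = ['low', 'medium', 'high'];

function generateOutcomes() {
  let combos = [{}];
  for (const feature of FEATURES) {
    // For each partial combination, branch into one copy per quality level.
    combos = combos.flatMap((combo) =>
      QUALITIES.map((quality) => ({ ...combo, [feature]: quality }))
    );
  }
  return combos; // 3^6 = 729 combinations
}
```

Each combination could then be mapped to a cost and verdict and written out as the JSON response file the skill reads back.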
Phone Designer
Build a virtual budget smartphone based on certain features such as display and battery. Based on the selected options, Alexa gives a review of the phone and whether it would be a success.
['Akash Agrawal']
[]
['alexa', 'node.js']
81
10,443
https://devpost.com/software/word-match
Inspiration Before, I had only created Alexa skills with very simple user interactions. When I came across Alexa Conversations (AC), the idea was interesting as a way to explore what is possible with more natural and complex dialogs. I started with the concept of constructing search queries using AC and, combined with my time playing word games, Word Match developed. What it does The Word Match skill is a helper for word games with missing letters, like crosswords and hangman. Specify how long the word is Fill in the letters you know and any blanks Then Word Match will generate the matching list of standard English words How I built it Having discovered the Alexa Skills Challenge in August and once the idea of Word Match had formed, I planned out the tasks required to create a minimum viable product within the time available. One of the early choices was whether to follow the examples and use Node.js or to use Python which I'm more familiar with. After trying both, I chose Python in the end and because of this, writing the backend code was relatively easy. The majority of time and testing was used for determining the best structure for the interaction model. This basic prototype was then fleshed out by importing a list of real words and extending to more word lengths. After submitting the skill for certification, there were some bits and pieces to round off the entry but overall the frontend development process took the longest. Challenges I ran into This was my first time developing with a beta feature so this had its associated challenges. While the documentation and tutorials were useful, they didn't cover all the use cases I might need for Word Match. First, I needed to understand what was possible with AC by testing many variants of the given examples. This then led to designing workarounds to fit the current limitations. Finally, the process of getting a functioning skill involved going through many trial-and-error iterations of my skill. 
Accomplishments that I'm proud of In the end, I managed to create a new skill within a short timeframe: the whole process from inception to development to marketing materials. What I learned Participating in this competition has been a rewarding learning experience for me. In my opinion, the best way to learn something new is to actually put it into practice. In this way, creating Word Match was an opportunity to study both AC and features of the ASK that I hadn't used before. The competition helped to provide extra motivation to fit this learning regime into my spare time. What's next for Word Match Since the competition period is limited, I still have further ideas to improve Word Match that I didn't have time to implement. The first priority is to improve the ease and speed for users to enter their search terms. For example, a "rest are blank" functionality so that the user only needs to fill in the letters they know. Also, with the developments in multiple-value slots, Word Match could later allow multiple letter input in a single utterance. Another area for future work would be additional features for Word Match. It could extend support to longer words (current support is for 2-8 letters). To get users relevant results quicker, the matched words could be ranked by how commonly used they are. Lastly, as well as offering the spelling of matched words, the skill could also offer dictionary definitions. This would allow Word Match to increase its educational value. Built With alexa amazon-alexa conversations pandas python Try it out alexa-skills.amazon.com
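The core matching step described above — the user gives the word length and the letters they know, and the skill returns the standard English words that fit — maps naturally onto a regular expression over a word list. The skill's backend is Python and its code isn't shown; this JavaScript sketch, with a made-up function name and '_' marking blanks, only illustrates the idea.

```javascript
// Hypothetical sketch of the matching step: 'c_t' becomes the regular
// expression /^c.t$/, where each blank matches exactly one letter,
// and the word list is filtered against it.
function findMatches(pattern, wordList) {
  const regex = new RegExp('^' + pattern.toLowerCase().replace(/_/g, '.') + '$');
  return wordList.filter((word) => regex.test(word.toLowerCase()));
}
```

Anchoring with ^ and $ enforces the word length, so 'c_t' matches "cat" but not "coat".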
Word Match
A helper for word games with missing letters, like crosswords and hangman.
['F Law']
[]
['alexa', 'amazon-alexa', 'conversations', 'pandas', 'python']
82
10,443
https://devpost.com/software/vedic-knowledge-kz1jou
Inspiration An attempt to let everyone learn and use ancient Vedic knowledge What it does Vedic Knowledge finds references to a common word in Vedic words. How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Vedic Knowledge Built With alexa conversation conversations
Vedic Knowledge
Vedic Knowledge finds references to a common word in Vedic words.
['Sheenu Sarvesh']
[]
['alexa', 'conversation', 'conversations']
83
10,443
https://devpost.com/software/my-genes
Inspiration There are so many genetic traits inherited by children from their parents. However, there is no app or skill that helps parents predict them. This skill attempts to enable parents, or anyone, to predict the gene-based traits of their babies. What it does This skill currently deals with four traits - blood group, hemophilia, sickle cell anemia and baldness. It asks for the parents' status and derives their baby's status from it. How I built it I used genetics-based algorithms to derive and predict traits from the father's/mother's status. The backend is written in .NET Core using C# and hosted on AWS Lambda. The Alexa skill is based on the newly launched Alexa Conversations. Challenges I ran into Understanding the flow of Alexa Conversations and making it work with C# code. As I was dealing with all the JSON API references from Amazon, it was really challenging to create the correct model and return proper JSON in response. Accomplishments that I'm proud of Enabling parents to access the gene-based traits of their kids. A C# SDK which works with Alexa Conversations. Interactive APL with integrated graphs. What I learned The logic flow of Alexa Conversations and its interaction with APL and APL-A What's next for My Genes Adding more genetics-based traits and more medical-related things which can/should be accessible to people. Built With .netcore amazon-alexa aws-lambda c#
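The blood-group derivation described above can be sketched with standard ABO inheritance rules: cross the parents' genotypes Punnett-square style and report the child's possible phenotypes (A and B are codominant, O is recessive). The skill's actual C# code isn't shown, so this JavaScript sketch, with made-up names and genotype strings like 'AO', only illustrates the genetics logic.

```javascript
// Hypothetical sketch of blood-group prediction: cross the parents' ABO
// genotypes and collect the child's possible phenotypes.
function childBloodGroups(parent1, parent2) {
  const phenotype = (pair) => {
    const alleles = pair.split('').sort().join(''); // e.g. 'AO', 'AB', 'OO'
    if (alleles === 'AB') return 'AB';       // A and B are codominant
    if (alleles.includes('A')) return 'A';   // A dominates O
    if (alleles.includes('B')) return 'B';   // B dominates O
    return 'O';                              // only OO gives type O
  };
  const groups = new Set();
  for (const a of parent1) {
    for (const b of parent2) {
      groups.add(phenotype(a + b)); // one allele from each parent
    }
  }
  return [...groups].sort();
}
```

The X-linked traits the skill covers (e.g. hemophilia) would follow a similar pattern with sex chromosomes instead of ABO alleles.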
My Genes
My Genes tries to predict the gene-based traits of your baby. You can predict the presence of the following traits: blood group, hemophilia, sickle cell anemia, and baldness.
['Gaurav Mathur']
[]
['.netcore', 'amazon-alexa', 'aws-lambda', 'c#']
84
10,443
https://devpost.com/software/n-s-train
Inspiration Need an easy way to get train information without relying on phones all the time. What it does It helps users get information about upcoming train schedules in the Netherlands How I built it Used the N.S. public API to retrieve train schedule information, combined with Alexa Conversations to make the dialogue smoother. Challenges I ran into Trying to figure out how to build a smooth dialogue or conversation. Trained the skill to understand train stations in the Netherlands. What's next for N.S. Train Add more features like reminders, disruption information, etc. Built With alexa javascript
n. s. train
This skill helps people get information about upcoming train schedules in the Netherlands.
['ariesgun -']
[]
['alexa', 'javascript']
85
10,443
https://devpost.com/software/labretort
Inspiration What it does LabRetort is an Alexa skill that generates laboratory reports in the form of Jupyter notebooks via the Alexa Conversations dialog manager. How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for LabRetort Try it out bitbucket.org
LabRetort
LabRetort is an Alexa skill that generates laboratory reports in the form of Jupyter notebooks via the Alexa Conversations dialog manager.
['Warp Smith']
[]
[]
86
10,443
https://devpost.com/software/book-mentor
Inspiration My sister loves reading, and one day she asked Alexa to "suggest me a book". Alexa didn't answer correctly because there wasn't a function that does that, so I decided to create Book Mentor. What it does Book Mentor can suggest the next book you should read based on genre; you only have to say "Suggest me a book" to get one. Once you have your suggestion you can also ask Alexa "What is it about?" and Alexa will tell you the plot of the book; afterwards you can decide if you want another suggestion from the same genre or another one. You can also ask Book Mentor for information about a specific book by saying, for example, "Search Little Prince"; Alexa will answer with the author of the book, the rating, the year of publication and the plot. How I built it I built it using Alexa Conversations and node.js Challenges I ran into I have experience with programming but not with node.js, so I had some difficulties with asynchronous calls to the API I used. I also had some difficulties with some bugs in Alexa Conversations, but the Alexa teams on Slack helped me and I managed to submit my skill. Accomplishments that I'm proud of I'm proud of my first challenge here on Devpost and of how I solved problems with Alexa Conversations with the help of the community. What I learned I learned how Alexa Conversations works, how to develop Alexa skills with node.js, and how to solve problems with the developers and the community. What's next for Book Mentor The next step for Book Mentor will be suggestions without a genre. I want to create a profile for each user and, based on their interests, suggest the book they should read using a machine learning algorithm. Built With alexa google-books node.js Try it out www.amazon.com
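The asynchronous API calls mentioned above are typically handled with async/await in Node.js. The skill's actual code isn't shown, so this sketch assumes a hypothetical `searchBook` helper against the Google Books volumes endpoint; the fetch function is injected so the lookup can be exercised without a live network call.

```javascript
// Hypothetical sketch of the async lookup: await the books API, then pull
// out the fields the skill speaks back (author, rating fields omitted here).
// `fetchJson` is injected so the network call can be swapped out in tests.
async function searchBook(title, fetchJson) {
  const data = await fetchJson(
    'https://www.googleapis.com/books/v1/volumes?q=' + encodeURIComponent(title)
  );
  const info = data.items[0].volumeInfo; // take the top search result
  return {
    title: info.title,
    author: (info.authors || []).join(', '),
    year: (info.publishedDate || '').slice(0, 4),
    plot: info.description || 'No plot available.',
  };
}
```

In a real handler, `fetchJson` would wrap an HTTP client, and the handler would `await searchBook(...)` before building Alexa's spoken response — forgetting that `await` is a common source of the async difficulties described above.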
Book Mentor
Book Mentor can give you suggestions about the next book you should read and can also give you information about specific books.
['Nunzio Logallo']
[]
['alexa', 'google-books', 'node.js']
87