hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,492 | https://devpost.com/software/sarah-the-assistant | Sarah introduction
UI for interact with Sarah and access of previous record
Conversation between Sarah and User
A higher-resolution, continuous, unedited conversation with Sarah. This video shows the whole process; you can watch it here:
https://www.youtube.com/watch?v=Spa0Zt1Rrt0&t=24s
Inspiration
In our daily life, we have a lot of aspects to manage. We need to manage our day-to-day financial expenses, to-do list, shopping list or grocery list. That's a lot to record.
Sometimes something comes to mind while we're in a situation where it's not suitable to look at the phone and type, like cooking, driving, showering, you name it; for whatever reason, we just can't look at the phone and text.
What if we had an app that could:
1) Help us record our expenses, to-do list and shopping list without looking at our phone.
2) When something comes to mind, let us just talk to our phone; the app will record and understand what we said, and organize the data for us.
3) Present the data anytime we need it.
Wouldn't that be great?
Introducing Sarah
Sarah is a private virtual assistant on your mobile phone that can do all the things mentioned above using only your voice as input, like Jarvis for Iron Man.
Sarah will talk to you and ask you questions; when you answer back, Sarah transforms what you say into text, then records, organizes and presents it to you when you need it.
It can understand whether you want to record the sentence you said as a Shopping List item, an Expenses record or a To-do list task.
When you want to access what you have recorded, you can:
Say this: "Show my total" - this presents your total expense amount for the day.
Say this: "Show my to do list" - this presents all the to-do tasks you recorded before.
Say this: "Show my shopping list" - this presents all the items in your shopping list that you recorded before.
Therefore you don't need to look at your phone and type in order to record the things you want; you just talk to Sarah as if you were having a normal conversation with a human being, and Sarah takes care of the rest and presents it back to you when you need it.
Challenges I ran into and How I build it
In order to achieve the functionality above, I needed to transform the audio of the user's speech into text, and understand what the user is trying to do, the information inside the sentence, and the context in which the user speaks.
With the help of Wit.ai:
We are able to turn the user's voice into text.
We can predict the intent of the user's speech by using keywords and sentence patterns.
We are able to extract the information inside the sentences.
Here are some examples:
Say: "Chicken rice 11 dollar" - Sarah knows the user spent 11 dollars on chicken rice, so she records this as an expense.
Say: "Pick up my daughter 7am tomorrow" - Sarah knows the to-do task is "Pick up my daughter" and the time is 7am tomorrow, so she records this in the to-do list.
Say: "Shopping list iPhone" - Sarah knows the user wants to add "iPhone" to their shopping list, by detecting the keyword "Shopping list".
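For readers curious what this looks like in code, here is a minimal sketch of querying Wit.ai's message endpoint, assuming a Wit.ai app trained with a hypothetical add_expense intent; the token and intent name are placeholders, not Sarah's actual configuration.

import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder, not a real token

def parse_utterance(text):
    # Ask Wit.ai to extract the intent and entities from one sentence
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    data = resp.json()
    intent = data["intents"][0]["name"] if data.get("intents") else None
    return intent, data.get("entities", {})

intent, entities = parse_utterance("Chicken rice 11 dollar")
print(intent, entities)  # e.g. add_expense with an amount_of_money entity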
What if we can't predict the intent from the sentence the user speaks?
We can't guarantee the user always speaks with the patterns we set above. For instance, the user may only give the to-do task without the due time, an expense without an amount, or the "Shopping list" keyword without any item following it.
Thanks to Wit.ai again: even when Sarah doesn't know the intent of the sentence, Wit.ai still returns its Built-in Entities to Sarah, so Sarah can respond according to the entities it receives.
Here are some examples:
User says: "Chicken rice."
Sarah responds: "How much did you spend on chicken rice?" If the user then answers with the amount, Sarah records this as an expense.
User says: "Pick up my daughter."
Sarah responds: "What time?" If the user then answers with the time, Sarah records this as a to-do list entry.
User says: "Jonathan."
Sarah responds: "What do you want to do with Jonathan?" If the user then answers with a task and time, Sarah records this in the to-do list.
User says: "Nothing else, that's all."
This cancels the whole operation.
And there are many more possibilities; I suggest you try Sarah to experience it.
All records are stored in the Android phone's local database.
If you want to see the live conversation video without any editing, you can watch it here.
The next challenge is that sometimes the user just can't speak perfect English, so wit.ai transcribes the voice into different words, causing incorrect data to be recorded. Therefore I save the audio file of the user's voice alongside the text returned by Wit.ai, as in the attached screenshot, so the user can listen back to their recording and correct the record themselves later.
Another challenge is keeping Sarah in the loop of: Talk to user -> Listen to user -> Get response from Wit.ai -> Process the response according to the context of the moment. Then the process just repeats over and over, which requires a lot of logic behind the scenes.
The last challenge is that we need some kind of visual feedback when Sarah is processing input, listening to the user's voice, speaking, or waiting for a response, so the user knows Sarah is actually working and doesn't get frustrated.
Therefore I made it like a chat room that represents the conversation between the user and Sarah. This way, the user can look back at the whole history of interaction with Sarah and knows what to expect while Sarah is listening, processing, waiting for a response and so on.
By the way, the user can interact using text as well: if for whatever reason the user can't interact by voice at the moment (disability, noise, privacy and so on), they can choose to interact by text. Besides, wit.ai works a lot better with text.
Additional features:
In situations where the user can't use their hands at all at that particular moment,
the user can say: "Hey Google, open Sarah" to Google Assistant.
Once opened, Sarah will continue the conversation.
Accomplishments that we're proud of
I was able to build an MVP that addresses all the functionality and challenges above in such a short period of time. Sarah now works well on Android; next I will make it support iOS and more major messaging platforms.
What's next for Sarah- The Assistant
Make Sarah understand more complex conversations and do other different things.
Make Sarah available on iOS, Windows and all other major platforms.
Sync the recorded data to the cloud so users can check it on the web and on other devices.
Make Sarah available on all major messaging platforms like Slack, Messenger and Telegram.
Make Sarah available in Android Auto and Apple Car, so users can access her while driving.
Make Sarah available on Android wearable devices and the Apple Watch, so users can access her while working out.
Make Sarah integrate with Amazon Alexa and Google Home (I still need to figure out how this works).
When it comes to VR, I will make Sarah a cartoon character that can talk to the user inside the Oculus Rift.
This is cool. The possibilities are endless; I am super excited about this. Stay tuned.
Fun Fact
I am a big fan of Iron Man; if you talk about AI, my first thought is Jarvis. Therefore I always wanted to build a Jarvis for myself. I never expected to build a V1 this soon.
Special Thanks to
Wit.ai
Avatar Icon by Coquet Adrien
Awesome audio library OmRecorder by Kailash Dabhi
Built With
android
wit.ai
Try it out
github.com | Sarah -The Assistant | Sarah is your virtual private assistant. Sarah helps you organize your daily life by recording your expenses, to-do list and shopping list without using your hands. Of course, if you want to use your hands, that's ok too. | ['Ken Choong'] | [] | ['android', 'wit.ai'] | 62 |
10,492 | https://devpost.com/software/covid-cases-tracker | For you daily COVID needs!
Inspiration
With the rise of coronavirus cases all around the world, checking the number of cases in the country has become a frequent daily activity. This bot, built with wit.ai and Python, helps you do just that! It saves me tons of time browsing the internet. You can also inquire about the countries where your loved ones live, or see the global trend...
What it does
It tells you the COVID cases detected today in your country, along with total statistics such as deaths, confirmed, recovered, critical, etc.
How I built it
I built it by integrating wit.ai's location intent, converting the location into a country code, and querying a COVID API to fetch the latest information!
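As an illustration of that pipeline (the COVID endpoint below is a hypothetical stand-in, since the writeup doesn't name the actual API, and the token is a placeholder):

import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder

def country_from_speech(text):
    # wit.ai returns a wit/location entity for phrases like "cases in India"
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    ).json()
    locations = resp.get("entities", {}).get("wit$location:location", [])
    return locations[0]["body"] if locations else None

def covid_stats(country_code):
    # Hypothetical REST endpoint standing in for the COVID API the bot queries
    return requests.get(f"https://example-covid-api.test/v1/{country_code}").json()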
Challenges I ran into
Converting countries to their code
Accomplishments that I'm proud of
Learnt how wit.ai functions
What I learned
How wit.ai works and how simple it is!
What's next for Covid Cases Tracker
I plan to integrate it with a messenger app and a smart IoT app, and to have regular push notifications for a preferential set of countries.
Built With
covid
python
Try it out
github.com | Covid Cases Tracker | Tackles frequent checks on Corona cases | ['Kushal Majmundar'] | [] | ['covid', 'python'] | 63 |
10,492 | https://devpost.com/software/nlpy-a1b9rt | NLPy Interface
Sending a Sentence via Text
Output of Above Text
NLPy Prime Calculator
NLPy Input for Prime Calculator
Inspiration
I had always been fascinated by the potential of machine learning; going into this hackathon, I knew I wanted to explore the boundary of these possibilities. While brainstorming with respect to NLP, I noticed many applications followed a very cookie-cutter process: some input channel would be replaced by speech, that speech would then be parsed, and the app would continue working as usual. This often doesn't feel impactful - for example, as a user of a fitness app, I don't really care whether I enter my daily info by spreadsheet or by talking to my phone.
When I asked myself what NLP and Wit.ai are really all about, I thought of scanning and tokenization; not only does it detect the intent of a sentence, it also detects and classifies keywords. Oddly enough, compilers do the same thing, except with strict syntax rules on how 'sentences' (blocks of code) can be formed. Following the connection, I realized that if I could replace this process with NLP, I could leverage the ability to scan and parse input without requiring strict syntax rules; in other words, I could create a language with no syntax.
What it does
NLPy is in and of itself a coding language. Users write 'code' either by talking to the application or by writing the sentence they wish to send; these queries can be as simple as "set variable to five" or "add fifteen to temp". What makes NLPy special is that there's no syntax; just give it instructions in plain old English and it should work.
Currently, NLPy supports:
IDs (i.e variables) and integers
Arithmetic (+, -, *, /, %)
Assignment (=)
Basic Comparisons (<, =, >)
If statements
For loops
Std. Output
'function' calls
How I built it
There are really two integral parts to NLPy: the code generator, and the real brains of the language, the NLP model.
To give a brief overview, the Wit.ai model receives a sentence and tries to identify the type of instruction it represents; it also tries to identify the specific tokens inside the sentence - for example, "set variable to five" would be recognized as an assignment with variable as the left value and 5 as the right value. Currently, it recognizes all the available commands listed under What it does, and then some. I've trained it to associate certain keywords with certain intents as well as entities; but the bulk of the training is invested into free-text lookup strategies, as keywords are risky and can seriously mess things up if one is used out of context (e.g. as a variable name).
These results are returned to the code generator, which just takes these intents and tokens and creates the actual code for them - for the previous example, "variable = 5" would be generated. Beyond that, it's just a matter of stringing together APIs and cobbling an interface together.
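To make that concrete, here is a minimal sketch of what one such generator step might look like; the intent and entity names are illustrative guesses, not NLPy's actual schema:

def generate_line(intent, entities):
    # Turn a recognized intent plus its tokens into one line of Python
    if intent == "assignment":
        return f"{entities['lhs']} = {entities['rhs']}"
    if intent == "addition":
        return f"{entities['target']} += {entities['amount']}"
    if intent == "output":
        return f"print({entities['value']})"
    raise ValueError(f"unrecognized instruction: {intent}")

# "set variable to five" -> assignment with lhs=variable, rhs=5
print(generate_line("assignment", {"lhs": "variable", "rhs": "5"}))  # variable = 5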
I cheated a little bit; NLPy actually transpiles to Python instead of generating actual machine code. This is exposed to the user for debugging purposes, but could be abstracted away as well.
Challenges I ran into
The design of the Wit.ai model was extremely challenging; what intents should I recognize, and what kinds of entities do I want? My initial models struggled to classify many of my entities because they were either too generic or too specific; this made training a nightmare, as I'd find myself training the model using previous data with extremely minor differences.
Furthermore, nested expressions would be a nightmare to handle without being extremely inefficient. For example, recognizing the query "if a isn't equal to b..." as an if statement wouldn't be difficult, but training the model to parse the actual content to get a, not-equal, and b would be very inefficient, as I'd essentially have to retrain everything if I wanted to implement a while statement, and then the model might confuse ifs for whiles because it may associate the conditional with one over the other. I overcame this by identifying "a isn't equal to b" as an entirely separate entity, recursively sending this substring back into my model, and training my model to recognize specific comparison patterns.
Similarly, calculating nested arithmetic expressions was near impossible; training aside, the real deal-breaker was the ambiguity of expressions. A query such as "let x equal three plus two times two" is completely ambiguous, as "x = 3 + 2 * 2" and "x = (3 + 2) * 2" are both valid interpretations. Consequently, I only allowed one arithmetic operation per instruction to avoid this headache; funnily enough, this meant NLPy felt a lot like assembly at times, where I'd have to use temporary variables (scratch registers) to evaluate expressions without modifying the original variables.
Accomplishments that I'm proud of
I can't begin to describe the sense of achievement when I successfully compiled my first program. Using NLPy, I cobbled together a quick script to calculate and return all primes below 100; a speculation that had originally started as "What if I could do this..." had turned into "Oh shit, it actually works".
I'm amazed at how cleanly everything comes together; the logic behind the compiler itself is essentially one glorified switch statement. Despite its simplicity, NLPy is already able to handle everything listed in What it does, and then some.
What I learned
I learned a lot about training models effectively and efficiently; going in, I thought 'training is training, what does it really matter what I use'. After my first couple of models crashed and burned, I realized I had to adapt my set of intents and entities to better incorporate the possible training data I could generate, so as to train my model efficiently.
What's next for NLPy
As it stands, NLPy currently operates like a syntax-free assembly language with some Python cheats baked in (lists and function calls). This is nice and all, but assembly is very low-level and detail-oriented; I want to extend NLPy to handle instructions at a much higher level. After all, the entire point is to offer a more human and intuitive approach to programming. Ultimately, I want NLPy to feel just like pseudo-code, to the point where someone describes an algorithm as naturally as they can and NLPy will just compile it. This would be an incredibly powerful tool; not only would it be very practical, but it would be extremely beginner-friendly, as new programmers wouldn't have to bog themselves down in syntactical details.
Built With
python
scipy
wit.ai
Try it out
github.com | NLPy | Natural language - the programming language | ['Eddy Yao'] | [] | ['python', 'scipy', 'wit.ai'] | 64 |
10,492 | https://devpost.com/software/a-voice-powered-controller-to-level-the-playing-field | Suave Keys
Suave Keys with wit.ai
Adding Macros
Keyboard profile
GIF
Android app with Call of Duty
Inspiration
People with certain physical disabilities often find themselves at an immediate disadvantage in gaming. There are some amazing people and organizations in the gaming accessibility world that have set out to make that statement less true. People like Bryce Johnson, who created the Xbox Adaptive Controller, or everyone from the Special Effect and Able Gamers charities. They use their time and money to create custom controllers that are fit to a specific user with their own unique situation.
Here's an example of those setups:
You can see the custom buttons on the pad and the headrest as well as the custom joysticks. These types of customized controllers using the XAC let the user make the controller work for them. These are absolutely amazing developments in the accessible gaming world, but we can do more.
Games that are fast paced or just challenging in general still leave an uneven playing field for people with disabilities. For example, I can tap a key or click my mouse drastically faster than the person in the example above can reach off the joystick to hit a button on a pad. I have a required range of motion of 2mm where he has a required range of over 12 inches.
I built SuaveKeys to level the playing field, now made even better with more input options via an Android app powered by wit.ai
What it does
SuaveKeys lets you play games and use software with your voice alongside the usual input of keyboard and mouse.
It acts as a distributed system, letting users make use of whatever resources they have to connect. For example, if the user only has an Alexa speaker and their computer, they can play using Alexa; but now they can also use their Android phone or iPhone with the SuaveKeys mobile app.
Here's what it looks like:
The process is essentially:
User signs into their smart speaker and client app
User speaks to the smart speaker
The request goes to Voicify to add context and routing
Voicify sends the updated request to the SuaveKeys API
The SuaveKeys API sends the processed input to the connected Client apps over websockets
The Client app checks the input phrase against a selected keyboard profile
The profile matches the phrase to a key or a macro
The client app then sends the request over a serial data writer to an Arduino Leonardo
The Arduino then sends USB keyboard commands back to the host computer
The computer executes the action in game
Now with the mobile app and wit.ai, we can use our phone as a new input device which creates a much faster turnaround on the request and enables more users to play games with their voice:
The app also allows the user to customize their profiles from their phone as well as their desktop client. So if you want to quickly create a new command or macro, you can register it right within the app.
Here's a quick gif of it in action in Call of Duty: Modern Warfare where I use my voice to get a headshot. I was able to use my hands to process movement, but all attacking was done with my voice:
If you watch the bottom left, you can see my phone screen where I say "attack" which then triggers the right intent in wit.ai, and then sends it to Voicify, to the SuaveKeys API, to my desktop, to Arduino, and actually fires the gun in the game to get a headshot.
Here's an example of a Fall Guys profile of commands - select a key, give a list of commands, and when you speak them, it works!
You can also add macros to a profile:
How I built it
The SuaveKeys Mobile app is built using C#, .NET, and Xamarin with the help of wit.ai, SignalR, and a whole lot of abstraction and dependency injection.
While the SuaveKeys API and Authentication layers already existed, we were able to build the client apps to act as both ends of the equation.
Each page in the app is built using XAML, C#, and MVVM. To handle differences in platforms such as:
Speech to text providers
UI differences
Changes in business logic
I built a dependency abstraction that lets us create an interface in the shared code, an implementation of that interface separately in each platform project, then inject it back into shared code.
For example, our ViewModel that handles the speech-to-text flow (letting us actually talk to our app and have it work) looks like this:
public class MicrophonePageViewModel : BaseViewModel
{
    private readonly ISpeechToTextService _speechToTextService;
    private readonly IKeyboardService _keyboardService;

    public ICommand StartCommand { get; set; }
    public ICommand StopCommand { get; set; }
    public bool IsListening { get; set; }

    public MicrophonePageViewModel()
    {
        // Resolve the platform-specific implementations from the DI container
        _speechToTextService = App.Current.Container.Resolve<ISpeechToTextService>();
        _keyboardService = App.Current.Container.Resolve<IKeyboardService>();
        _speechToTextService.OnSpeechRecognized += SpeechToTextService_OnSpeechRecognized;
        StartCommand = new Command(async () =>
        {
            await _speechToTextService?.InitializeAsync();
            await _speechToTextService?.StartAsync();
            IsListening = true;
        });
        StopCommand = new Command(() =>
        {
            IsListening = false;
        });
    }

    private async void SpeechToTextService_OnSpeechRecognized(object sender, Models.SpeechRecognizedEventArgs e)
    {
        // Forward the recognized phrase to the key mapping, then keep listening
        _keyboardService?.Press(e.Speech);
        if (IsListening)
            await _speechToTextService?.StartAsync();
    }
}
This means we need to implement and inject our IKeyboardService and our ISpeechToTextService. So to let Android actually use the built-in SpeechRecognizer activity and pass the result to wit.ai and then Voicify, we implement it like this:
public class AndroidSpeechToTextService : ISpeechToTextService
{
    private readonly MainActivity _context;
    private readonly ILanguageService _languageService;
    private readonly ICustomAssistantApi _customAssistantApi;
    private readonly IAuthService _authService;
    private string sessionId;

    public event EventHandler<SpeechRecognizedEventArgs> OnSpeechRecognized;

    public AndroidSpeechToTextService(MainActivity context,
        ILanguageService languageService,
        ICustomAssistantApi customAssistantApi,
        IAuthService authService)
    {
        _context = context;
        _languageService = languageService;
        _customAssistantApi = customAssistantApi;
        _authService = authService;
        _context.OnSpeechRecognized += Context_OnSpeechRecognized;
    }

    private async void Context_OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        // Run the raw transcript through wit.ai, then forward the aligned intent to Voicify
        var languageResult = await _languageService.ProcessLanguage(e.Speech).ConfigureAwait(false);
        var updatedSlots = languageResult.Data.Slots.ToDictionary(s => GetVoicifySlotName(languageResult.Data.Name, s.Name), s => s.Value);
        var tokenResult = await _authService.GetCurrentAccessToken();
        updatedSlots.Add("AccessToken", tokenResult?.Data);
        var voicifyResponse = await _customAssistantApi.HandleRequestAsync(VoicifyKeys.ApplicationId, VoicifyKeys.ApplicationSecret, new CustomAssistantRequestBody(
            requestId: Guid.NewGuid().ToString(),
            context: new CustomAssistantRequestContext(sessionId,
                noTracking: false,
                requestType: "IntentRequest",
                requestName: languageResult.Data.Name,
                slots: updatedSlots,
                originalInput: e.Speech,
                channel: "Android App",
                requiresLanguageUnderstanding: false,
                locale: "en-us"),
            new CustomAssistantDevice(Guid.NewGuid().ToString(), "Android Device"),
            new CustomAssistantUser(sessionId, "Android User")
        ));
        OnSpeechRecognized?.Invoke(this, e);
    }

    private string GetVoicifySlotName(string intentName, string nativeSlotName)
    {
        // Map wit.ai's built-in slot names onto the slot names the Voicify app expects
        if (intentName == "PressKeyIntent" && nativeSlotName == "wit$search_query")
            return "key";
        if (intentName == "TypeIntent" && nativeSlotName == "wit$search_query")
            return "phrase";
        if (intentName == "VoicifyLatestMessageIntent" && nativeSlotName == "wit$search_query")
            return "category";
        return "query";
    }

    public Task InitializeAsync()
    {
        sessionId = Guid.NewGuid().ToString();
        // we don't need to init.
        return Task.CompletedTask;
    }

    public Task StartAsync()
    {
        // Kick off Android's built-in speech recognizer activity
        var voiceIntent = new Android.Content.Intent(RecognizerIntent.ActionRecognizeSpeech);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
        voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);
        _context.StartActivityForResult(voiceIntent, MainActivity.VOICE_RESULT);
        return Task.CompletedTask;
    }
}
The gist of it is kicking off the speech recognition; then, when we process the speech, we send it to our ILanguageService (this is where we implement the wit.ai call), then fire that processed data off to the Voicify ICustomAssistantApi.
Here's the gist of the WitLanguageService that is then injected into our Android service:
public class WitLanguageUnderstandingService : ILanguageService
{
    private readonly HttpClient _client;

    public WitLanguageUnderstandingService(HttpClient client)
    {
        _client = client;
    }

    public async Task<Result<Intent>> ProcessLanguage(string input)
    {
        try
        {
            // Refresh the auth header, then hit wit.ai's message endpoint with the utterance
            if (_client.DefaultRequestHeaders.Contains("Authorization"))
                _client.DefaultRequestHeaders.Remove("Authorization");
            _client.DefaultRequestHeaders.Add("Authorization", $"Bearer {WitKeys.WitAccessKey}");
            var result = await _client.GetAsync($"https://api.wit.ai/message?v=1&q={HttpUtility.UrlEncode(input)}");
            if (!result.IsSuccessStatusCode)
                return new InvalidResult<Intent>("Unable to handle request/response from wit.ai");
            var json = await result.Content.ReadAsStringAsync();
            var witResponse = JsonConvert.DeserializeObject<WitLanguageResponse>(json);
            // map the top wit.ai intent and entities to our simplified Intent model
            var model = new Intent
            {
                Name = witResponse.Intents.FirstOrDefault()?.Name,
                Slots = witResponse.Entities.Select(kvp => new Slot
                {
                    Name = kvp.Value.FirstOrDefault()?.Name,
                    SlotType = kvp.Value.FirstOrDefault()?.Type,
                    Value = kvp.Value.FirstOrDefault()?.Value
                }).ToArray()
            };
            return new SuccessResult<Intent>(model);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
            return new UnexpectedResult<Intent>();
        }
    }
}
Here, we send the request off to our wit app, then take the output and map it to a simplified model that we can send to the Voicify app.
So all-in-all the flow of data/logic is:
User signs in
User goes to microphone page
User taps "start"
User speaks
Android STT service listens and processes text
Android STT service takes output text and sends to wit for alignment
Android STT takes aligned NL and sends it to Voicify
Voicify processes the aligned NL against the built app
Voicify sends request to SuaveKeys API webhook
SuaveKeys API sends websocket request to any connected client (UWP app)
UWP app takes request and sends it to Arduino
Arduino sends USB data for keyboard input
BOOM HEADSHOT (or whatever other action should happen in the game)
Challenges I ran into
The biggest challenges were performance from request to game, and testing while also talking to my chat on stream! Since the whole thing was built live on my Twitch channel, I'm always talking to chat about my thought process, what I'm doing, and answering questions. So trying to keep that interactivity while also testing something that requires me to speak to it can be messy. This led to a TON of weird utterances added to my wit app to either ignore or align, which just made more work; but outside of that, everything was pretty straightforward.
With regards to performance, I'm exploring a couple things including:
Balancing the process timing while speaking
Running intermittent spoken word against wit to see if it is valid ahead of time
Accomplishments that I'm proud of
The biggest accomplishment was being able to see it in use! I was able to play games like Call of Duty, Sekiro, and Fall Guys using my voice! It feels like it's closer to a real option for people with disabilities to play competitive, fast paced, and difficult games with as much ease as able-bodied people do and really level the playing field.
What I learned
The biggest learning moment was honestly how easy it was to integrate wit.ai - I had already created a basic language model via Voicify to use on the other supported platforms like Alexa and Dialogflow. Getting that working in wit was unbelievably easy. So many NLU tools overcomplicate the tooling and creation, but I was shocked that I was able to just spin up a wit app, add and align utterances, and just get going.
What's next for Suave Keys
Tons of stuff! I'm working on Suave Keys at least twice a week from now on stream and have tons of new stuff lined up including:
Making the UI a WHOLE lot cleaner and easier to use
Working on performance (as mentioned in my challenges faced)
Enabling more platforms to help more people use it
Distributing hardware creation to let people actually use it
Adding more device support for the XAC
Building shareable game profiles
Conclusion
I think Suave Keys has the chance to enable so many more people to play games that they never could before, and with a mobile app it is more accessible than ever.
Built With
alexa
amazon-alexa
android
arduino
azure
c#
dotnet
google-assistant
ios
signalr
uwp
voicify
websockets
wit
wit.ai
xamarin
Try it out
github.com | A Voice Powered Controller to Level the Playing Field | Project Suave Keys is a distributed voice controller to enable people with physical impairments to use their voice and motor skills to play games and use software across any device. | ['Alex Dunn'] | [] | ['alexa', 'amazon-alexa', 'android', 'arduino', 'azure', 'c#', 'dotnet', 'google-assistant', 'ios', 'signalr', 'uwp', 'voicify', 'websockets', 'wit', 'wit.ai', 'xamarin'] | 65 |
10,492 | https://devpost.com/software/business-enterprise-to-establish-causation-of-marketings |
A1
Inspiration
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for Business Enterprise to establish causation of marketings. .
Built With
europeana
pascal
Try it out
www.emmanuelkcheruiyot.com | Business Enterprise to establish causation of marketings. . | The aim of the project is to provide the best possible solution to the environment thus mitigation of societal trade in supply of goods and services. It's imperative and worth noting as it's techsavy. | ['eakch Cheruiyot'] | [] | ['europeana', 'pascal'] | 66 |
10,492 | https://devpost.com/software/information-covid-19 | Chatbot Covid-19
Inspiration
The current COVID-19 situation is very complicated and has a big impact on the whole world. So our team planned to build a chatbot to help everyone stay updated on information, symptoms, and prevention.
What it does
greets you
provides updated COVID-19 news for Vietnam and the world
describes symptoms
explains prevention measures
supports health declaration
How we built it
First, we read documentation about wit.ai, webhooks, and REST APIs. Next, we coded a simple project connecting wit.ai to a Facebook fan page. Then we deployed the code on Heroku and finished building the complete Chatbot project.
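As a rough illustration of that setup, here is a minimal Flask sketch of the Messenger-to-wit.ai hop, with placeholder tokens (the team's actual handler lives in their repo):

import requests
from flask import Flask, request

app = Flask(__name__)
WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"    # placeholder
VERIFY_TOKEN = "YOUR_VERIFY_TOKEN"     # placeholder

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook calls this once to verify the webhook URL
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge"), 200
    return "wrong token", 403

@app.route("/webhook", methods=["POST"])
def handle_message():
    event = request.json["entry"][0]["messaging"][0]
    text = event.get("message", {}).get("text", "")
    # Forward the user's text to wit.ai for intent detection
    wit = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    ).json()
    # ...route on wit["intents"] and reply through the Messenger Send API...
    return "ok", 200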
Challenges we ran into
Errors connecting with wit.ai and deploying on Heroku.
Accomplishments that we're proud of
The project helps many communities during the epidemic.
What we learned
After finishing the project, we learned a lot about REST APIs, deploying local code to Heroku, NLP, and wit.ai.
What's next for Chatbot Covid-19
continue improving the project: support information at the province level
voice chat
Built With
heroku
python
python-package-index
wit.ai
Try it out
github.com | Chatbot Covid-19 | A virtual COVID-19 assistant that answers your questions about symptoms, case numbers, prevention, and more regarding the coronavirus (COVID-19) | ['TRUC TRAN TRUNG', 'VO VAN PHUC'] | [] | ['heroku', 'python', 'python-package-index', 'wit.ai'] | 67 |
10,492 | https://devpost.com/software/glowbom-chat-n1fhqv | In-chat editor
Preset preview
Running a project
In-chat editor - adding a new question
Choosing a preset
Glowbom is the first no-code platform that lets you create software via chat, using just your voice.
Built With
flutter
javascript
Try it out
glowbom.com
www.producthunt.com | Glowbom Chat | Glowbom is the first no-code platform that lets you create software via chat, using just your voice. | ['Jacob I', 'Maria Ilina'] | [] | ['flutter', 'javascript'] | 68 |
10,492 | https://devpost.com/software/vocai | VOCAI web application
Customization Menu
Online Code Editor
Inspiration
We wanted to build an application to increase developers' productivity, letting them focus on the logic of building the application and not on repetitive syntax.
What it does
Using the programmer's voice input (prompt), it extracts an intent and entities from it and returns suitable Python code syntax for the given task.
How I built it
We built it mainly using Python. We used the Streamlit library for building the web page and libraries like SpeechRecognition for recording the user's voice. We also used wit.ai for processing the user's audio input, extracting the intent and entities from it, and returning meaningful outputs to the end user.
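As a minimal sketch of that capture step (with a placeholder Wit.ai key), the SpeechRecognition library's recognize_wit helper does the audio round trip to wit.ai:

import speech_recognition as sr

WIT_KEY = "YOUR_WIT_SERVER_TOKEN"  # placeholder

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Speak your prompt...")
    audio = r.listen(source)

# wit.ai transcribes the audio; intent/entity extraction then runs on this text
text = r.recognize_wit(audio, key=WIT_KEY)
print("You said:", text)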
Challenges I ran into
Integrating the user's voice input into the web app was initially a challenge. We also had to spend some time crafting our prompts so that our entity recognition was generalized enough to help the end user (the developer).
Accomplishments that I'm proud of
Building a fully customizable code editor that runs on the web browser without any installation but has all the functionalities an editor would otherwise have.
What I learned
We learned a lot about web development, as this was the first time building a web application for a few of our team members. We also learned how to deal with voice-based applications and the fundamentals of intent recognition in text through wit.ai.
What's next for VOCAI
Scaling the application and adding support for many more programming languages.
Built With
python
speechrecognition
streamlit
Try it out
github.com | VOCAI | A voice-based smart code editor assistant made using intent and entity extraction(wit.ai). | ['Srikar Samudrala', 'Sharan Babu', 'Rajesh Silvoj'] | [] | ['python', 'speechrecognition', 'streamlit'] | 69 |
10,492 | https://devpost.com/software/covid-19-detector | Inspiration
In light of the ever-evolving Covid-19 situation.
One day I was reading the news that the coronavirus was spreading very quickly and that even highly developed countries like the USA and Italy were not capable of handling the virus. The most striking news was that, early on, it took up to 2 days just to test whether a patient was COVID positive or not. Even after some advancements, testing still takes hours. So I thought the testing time should be minimized so that action can be taken as soon as possible, given the virus's highly contagious nature.
What it does
This project is my contribution to helping analyze the probability of a person having the infection. Technologies that were made to convert information so that it is accessible to computers are used to aid people, and I can contribute to a social cause.
COVID-19 Detector is a web application prototype developed by me and built for doctors, to find out whom to test for the infection first under a limited testing capacity by estimating the probability of a person having the infection. It takes the patient's symptoms and, within seconds, tries to predict whether the patient may be positive for coronavirus or not.
How I built it
This web app is a dashboard developed in Flask (Python), HTML, and Bootstrap/CSS, using machine learning. The world map is created using Mapbox GL. Currently, I'm using pythonanywhere.com's server for deployment of this web app.
Challenges I ran into
Accuracy was the greatest challenge, because the symptoms of a pneumonia patient and a person with COVID can be similar; after all, pneumonia can be a symptom of COVID-19. So separating COVID cases from pneumonia cases was a big challenge in itself.
Accomplishments that I'm proud of
What makes my model different from other detectors out there is that I didn't use transfer learning to train it.
This model uses a technique called logistic regression. I trained it on the database and imported the machine-learning model into an HTML page via Flask (a web framework). The data was randomly generated for this prototype. I tested my model on the data and achieved 81% accuracy on the training set and 80% accuracy on the test set.
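As a rough sketch of that approach (with stand-in data, not the author's actual features), a scikit-learn logistic regression over binary symptom flags might look like this:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Randomly generated stand-in data: rows are patients, columns are symptom flags
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6))   # fever, cough, fatigue, ...
y = rng.integers(0, 2, size=500)        # 1 = COVID positive

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression().fit(X_train, y_train)

# predict_proba gives the infection probability used to prioritize testing
print(model.predict_proba(X_test[:1]))
print("test accuracy:", model.score(X_test, y_test))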
What I learned
While developing this system, I learned how new technologies like AI and ML can be helpful in any field, such as healthcare, and came to understand the important effect technology has on medical facilities today.
What's next for COVID-19-Detector
For future work
This model allows better prioritization of people who may be affected by the virus.
Right now the project is designed for one disease, but it can be extended to more diseases.
It acts as a life-saving tool.
All the tools and technologies used to develop the COVID-19 detector are free, so the cost of production and development is close to zero.
All the COVID-19 detector requires is an internet connection, which makes it affordable for everyone.
Also, I am doing more research on how to make this system even more accurate, so that governments and hospitals can use it at a larger scale and people can benefit from it.
GitHub Repository
MY_Repo
Built With
bootstrap
flask
html5
mapbox
numpy
pandas
python
sklearn
Try it out
ayushman17.pythonanywhere.com | COVID-19-Detector | A Web Application Prototype built for Doctors to find out whom to test for the infection first under a limited testing capacity by finding out the probability of a person having the infection. | ['Ayushman Singh Chauhan'] | [] | ['bootstrap', 'flask', 'html5', 'mapbox', 'numpy', 'pandas', 'python', 'sklearn'] | 70 |
10,492 | https://devpost.com/software/web-assistant | Inspiration
Voice assistants on mobile and PC.
What it does
It controls the browser using voice commands. It can control web pages for opening links and scrolling, and supports custom commands for sites such as Google, Amazon, and YouTube.
How I built it
It was built using WebExtensions API. Tested on firefox and chrome.
Challenges I ran into
Recording audio and sending it to wit.ai. Wit.ai doesn't support the audio format recorded by Firefox and Chrome by default, so I had to find a different solution.
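For reference, wit.ai's speech endpoint accepts raw WAV bytes. A minimal sketch of the upload, shown in Python for brevity (the extension itself is JavaScript), with a placeholder token:

import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder

def transcribe_wav(path):
    # POST the recorded WAV to wit.ai's speech endpoint for transcription
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.wit.ai/speech",
            headers={
                "Authorization": f"Bearer {WIT_TOKEN}",
                "Content-Type": "audio/wav",
            },
            data=f,
        )
    return resp.text  # JSON with the transcribed text and detected intents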
Accomplishments that I'm proud of
Recorded audio in wav format.
What I learned
Got some more experience with wit.ai and the WebExtensions API. Learned how to deal with a microphone in the browser.
What's next for Web Assistant
Right now it only accepts a command and acts accordingly. Next, I'll update it to communicate interactively with the user. I also have to add some more popular sites for custom commands.
Built With
javascript
webextensionsapi
Try it out
addons.mozilla.org
chrome.google.com | Web Assistant | Browser Voice Assistant. Controls the browser with commands. Can control web pages for opening links and scrolling. Additional commands for sites such as google, amazon, youtube | ['Samar Yalini', 'Libin JS', 'serlin benisha'] | [] | ['javascript', 'webextensionsapi'] | 71 |
10,492 | https://devpost.com/software/xchange-bot | FB Page with bot.
Messenger
Inspiration
Get instant stock market and cryptocurrency prices under one umbrella.
What it does
Helps you get live, real-time stock prices, cryptocurrency market caps, and the real-time price of a digital currency in your desired fiat currency.
How we built it
Built it using wit.ai, Facebook Messenger, the Heroku cloud platform, and Node.js.
Challenges we ran into
The challenge was getting started with wit.ai, as it was only sparsely documented.
Accomplishments that we're proud of
Live price of crypto currency and stock market.
What we learned
Natural language processing.
What's next for XChange Bot
That is a surprise.
Built With
facebook-messenger
heroku
node.js
wit.ai
Try it out
github.com | XChange Bot | Live price of cryptocurrency and stock market. | ['ayush2304shah', 'aarti shah', 'Ankit Shah'] | [] | ['facebook-messenger', 'heroku', 'node.js', 'wit.ai'] | 72 |
10,492 | https://devpost.com/software/workflow-dictation | Webapp Homepage
Sample flowchart
Downloaded file
Inspiration
Having worked as analysts at several companies across industries ranging from management consulting to technology and banking, we realized that a commonly used tool is the flowchart. However, flowcharts are usually made by a single analyst after discussions. This creates many inefficiencies, as the flowchart goes through several rounds of iterations due to differing opinions from stakeholders. We believe there should be a better way to optimize this essential workplace activity and make our lives simpler.
What it does
Workflow dictation is an in-meeting web application that records process flows via the speaker's voice. Using Wit.ai voice technology, it listens for common connectors such as "Firstly" and "Secondly". This allows the application to identify the key processes that follow the connectors. Using Python, a PowerPoint presentation is then created reflecting the order and contents of the process flow. This PowerPoint is then downloaded for users to edit, allowing key stakeholders to reflect their opinions in real time and removing any need for subsequent iterations.
How I built it
Workflow dictation was primarily built using the Flask framework. Python packages such as python-pptx and PyAudio were used heavily to create the presentation and record the audio. Wit.AI was trained on utterances beforehand and then called to evaluate the voice recordings.
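As an illustration, a minimal python-pptx sketch that turns an ordered list of dictated steps into flowchart boxes (the step texts are assumed to have already been extracted by Wit.ai):

from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE

def steps_to_pptx(steps, path="workflow.pptx"):
    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout
    for i, step in enumerate(steps):
        # One process box per dictated step, stacked top to bottom
        box = slide.shapes.add_shape(
            MSO_SHAPE.FLOWCHART_PROCESS,
            Inches(3), Inches(0.5 + i * 1.2), Inches(4), Inches(0.8),
        )
        box.text_frame.text = step
    prs.save(path)

steps_to_pptx(["Collect requirements", "Draft process", "Review with team"])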
Challenges I ran into
Recognizing Singapore-accented English was difficult, resulting in many edits in Wit.AI; we had to slow down our training speech for clarity.
Accomplishments that I'm proud of
Creating our first voice enabled app as a couple
What I learned
The boundless possibilities of creating voice-enabled applications
Implementing the Flask framework
What's next for Workflow dictation
As this is a proof of concept, further work could be done to include more varied use cases and the complex shapes that reflect commercial situations. After that, partnerships with existing flowchart sites such as Lucidchart could be in the pipeline to get more data samples and train the model accordingly.
Built With
flask
python
python-pptx
wit.ai
Try it out
github.com | Workflow dictation | Charting flowcharts, one voice at a time | ['Shi Tong Tan', 'Andrew Chin'] | [] | ['flask', 'python', 'python-pptx', 'wit.ai'] | 73 |
10,492 | https://devpost.com/software/tony-create-me-a | Inspiration
What it does
Tony helps you improve productivity by quickly creating or removing documents and presentations. It can also change the template of a presentation, or search for documents on your cloud document server.
How I built it
The first step was to define a set of use cases, which became our test cases to solve. Next, we developed an architecture, and from there we started on a first prototype.
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Tony, Create me A ...
Built With
node.js
python
wit.ai | Tony, Create me A ... | Artificial Intelligence that improves the productivity by creating or removing quickly documents or presentations. It can also change the template of presentations. | ['Cesarin G', 'Maulik Markan'] | [] | ['node.js', 'python', 'wit.ai'] | 74 |
10,492 | https://devpost.com/software/sigma-re1gaf | SIGMA
Inspiration
A solution for online customer service.
What it does
Omnichannel online customer service.
How I built it
Development:
Java server hosting server-client = company CONTROL
Node
Online server-client = web sales (web store, Amazon, MercadoLibre, Alibaba, Facebook)
Challenges I ran into
Connecting to platforms such as Facebook and Amazon.
Accomplishments that I'm proud of
A bot customizable from the desktop-server app (company side).
What I learned
I keep learning.
What's next for SIGMA
AI control with an autonomous client.
Built With
google-cloud
google-web-speech-api
java
node.js | SIGMA | Intelligent Modular Administrative Management System. * Multichannel * Learning | ['Pablinux PXF'] | [] | ['google-cloud', 'google-web-speech-api', 'java', 'node.js'] | 75 |
10,492 | https://devpost.com/software/kiko-v9c20g | Icon
Inspiration: Alexa
What it does: Amazon Product
How I built it: wit.ai
Challenges I ran into: Devpost, Facebook AI
Accomplishments that I'm proud of: I made this application as one of my efforts to increase my knowledge, and if it is successful I hope it will be useful to the community.
What I learned: AI
What's next for Kiko: New skill
Built With
android
authorization:,bearer,m7lxxs5muo77jggitedg2bvjl3c3x65i
curl
english
h
oxford-english-dictionary
smartphone
wit.ai
Try it out
www.kiko.com | Kiko | #Kiko | [] | [] | ['android', 'authorization:,bearer,m7lxxs5muo77jggitedg2bvjl3c3x65i', 'curl', 'english', 'h', 'oxford-english-dictionary', 'smartphone', 'wit.ai'] | 76 |
10,492 | https://devpost.com/software/prelimversion4 | processing
Inspiration- learning and using wit.ai
What it does- trains NLP-enabled apps to address complex syntax
How I built it
wit.ai plus other kits and resources
Challenges I ran into
working alone
Accomplishments that I'm proud of- working alone
What I learned- platform efficiency / inputs
What's next for prelimversion4-
open to all wit.ai users
Built With
wit.ai
Try it out
github.com | prelimversion4 | using wit.ai platform to train voice / NLP model for usage in different scenarios / variables | ['karam thapar'] | [] | ['wit.ai'] | 77 |
10,492 | https://devpost.com/software/virtual-assistant-h1gupa | Inspiration
I am a huge fan of Tony Stark from Iron Man. I know he is not a realistic character, but I like him a lot. Also, when I was in school I used to watch the cartoon "Mickey Mouse Clubhouse", in which there are automated hands that work on Mickey Mouse's voice commands. I used to watch those hands and wonder: what if I created this?
What it does
It works on voice commands and makes lots of things easy for users.
How I built it
I built it using Python and Python libraries like Selenium, etc.
Challenges I ran into
The challenge was to build functionality that is not present in any other virtual assistant for Windows, like Cortana.
Accomplishments that I'm proud of
I am proud that I wrote a research paper about this and it got published easily.
What I learned
I learned a lot from this, such as how to use multithreading in real-world projects, and much more.
What's next for Virtual Assistant
I don't have the money to fund my project; if I win, I would like to use this virtual assistant in my home automation.
10,492 | https://devpost.com/software/know-the-cats | More than text response! We can synthesize talking-head video response.
System Flow
Technology Stack
Inspiration
How many hours have you spent learning? It is estimated that 18 million hours are spent per school day in the U.S. Due to the COVID-19 epidemic, students have been taking part in lessons at home, and many teachers have struggled to adapt to online learning. How can AI help?
Scope of project
We decided to leverage the power of Wit.ai to build an engaging conversational AI response engine. This engine is used to build a virtual classroom assistant that brings a better learning experience and efficiency.
Problems to solve
We identified three problems in traditional education:
Students' lack of self-initiation to ask questions;
High cost in video filming for online learning; and
Limited teaching efficiency given teachers' limited preparation time.
Objectives
Based on the studies [1,2,3] related to the aforementioned problems, our project aims at:
Improving students’ questioning skills which are important to achieve deeper learning;
Providing engaging responses to motivate students to learn; and
Increasing the accessibility of learning from best teachers.
Solutions
With the help of Wit.ai, we develop Witeach.ai: a low code solution to achieve the above objectives. Our system supports a direct and natural way to interact with technology through language. Witeach.ai provides the followings:
Personal classroom experience for students to ask questions;
More personal feel realtime video response via teacher’s talking head with lip sync technology;
Low code tool to help teachers to answer students' questions supported by Wit.ai.
Implementation
In the backend, natural language understanding is achieved by leveraging the technology powered by Wit.ai (http://wit.ai) to analyze user messages. Another component in the backend is the response compilation engine, built by integrating Google's text-to-speech service with a recent lip-sync technology published at ACM Multimedia 2020 (https://github.com/Rudrabha/Wav2Lip). Wav2Lip is written in PyTorch from Facebook. This enables us to synthesize talking-head videos. A Flask server provides the entry point to access the Witeach.ai engine. Please refer to the GitHub repo for details.
A picture showing the technology stack can be found in the image gallery.
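As a rough sketch of that entry point (the endpoint name, token, and knowledgebase lookup are illustrative placeholders, not the actual Witeach.ai code):

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder
ANSWERS = {"big_cat_diet": "Big cats are carnivores..."}  # toy knowledgebase

@app.route("/ask", methods=["POST"])
def ask():
    question = request.json["question"]
    # wit.ai classifies the student's question into an intent
    wit = requests.get(
        "https://api.wit.ai/message",
        params={"q": question},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    ).json()
    intent = wit["intents"][0]["name"] if wit.get("intents") else None
    answer = ANSWERS.get(intent, "Let me get back to you on that!")
    # The real engine then feeds the answer to TTS + Wav2Lip to render video
    return jsonify({"answer": answer})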
To Do
Provide better analytics and insights for educators (or people in the customer service department).
Perform in-depth sentiment analysis from student (or customer) activity logs.
Add state machine support for storyboard response generation.
Automate the generation of the knowledgebase files associated with the Wit.ai app
Build an ecosystem for teachers to exchange teaching materials built on top of Witeach.ai
References
Bugg, Julie M., and Mark A. McDaniel. "Selective benefits of question self-generation and answering for remembering expository text." Journal of educational psychology 104.4 (2012): 922.
Guo, Philip J., Juho Kim, and Rob Rubin. "How video production affects student engagement: An empirical study of MOOC videos." Proceedings of the first ACM conference on Learning@scale conference. 2014.
https://ncte.org/statement/why-class-size-matters/
In the demo application, the knowledgebase mainly comes from https://en.wikibooks.org/wiki/Wikijunior:Big_Cats/Complete_Edition.
The video clip for the first talking-head video comes from Liz Bellward, Australian zoologist (https://www.youtube.com/watch?v=HeQwAggzkNc), who loves big cats!
Built With
bootstrap
flask
html5
pandas
python
pytorch
sqlite
tensorflow
wit.ai
Try it out
witeachai.wongpakm.com
github.com | Witeach.ai | Low code solution to build engaging talking head responses on Wit.ai | ['Pak Kan Wong', 'Pak Wai Wong'] | [] | ['bootstrap', 'flask', 'html5', 'pandas', 'python', 'pytorch', 'sqlite', 'tensorflow', 'wit.ai'] | 79 |
10,492 | https://devpost.com/software/mystanceacademy-org | we faced soo many problems but we use social media platforms to know it like- errors, fronted and backend development.
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for MYSTANCEACADEMY.ORG | MYSTANCEACADEMY.ORG | We worked on a website, MYSTANCEACADEMY.ORG. It is an educational institution website where we can store all data about the institution, and customers can easily access the information. | ['Ashutosh Kumar'] | [] | [] | 80 |
10,492 | https://devpost.com/software/social-network-pz9ceq | Inspiration - Facebook
What it does - It helps people to communicate
How I built it - with php/mysql , html/css/js(jquery)
Accomplishments that I'm proud of - I have social network with custom and powerful design and features.
What's next for Secondall - Future of World :))
Built With
css3
html5
javascript
jquery
mysql
php | Secondall | I have my own social networking platform based on PHP, and it has all the features of Facebook. I created a new, minimalistic design. | ['George Makhauri'] | [] | ['css3', 'html5', 'javascript', 'jquery', 'mysql', 'php'] | 81 |
10,492 | https://devpost.com/software/corvid-ion-windows-10 | Inspi
What it does:
https://devpost.com/software/corvid-ion-windows-10/joins/WA-UVsHalj2akaPuO0zhuQ
How I built it: usim
Challenges I ran into
Accomplishments that I'm proud of:usb connection event
What I learned
What's next for corvid ion windows 10
Try it out
jima2617.wixsite.com | smart powershell api | develop on movebox | ['joram owuor', 'faten990 Tafesj'] | [] | [] | 82 |
10,492 | https://devpost.com/software/logos-one-project-days | Inspiration
Public service in the fields of employment and security.
What it does
Working on the construction of the LOGOS CILEUNGSI ONE distribution centre building.
How I built it
Hiring the contractor PT Multibrata Anugerah Utama for the building's construction.
Challenges I ran into
Maintaining the security of the project area, the safety and security of the workers, and keeping the project's heavy equipment safe and complete.
Accomplishments that I'm proud of
The situation has always been safe and orderly; there has been no crime at our project.
What I learned
The solidarity of everyone in the project, from the laborers up to the contractor's boss: all people need one another.
What's next for Logos cileungsi one project
Success for Logos...
Built With
appy-pie
construction
hardware
steel
true-key-video-hosting
Try it out
properti.com
youtu.be | Logos cileungsi one | Racing against time, grinding out the work nonstop... stay spirited, fighters | ['Firman ganteng'] | [] | ['appy-pie', 'construction', 'hardware', 'steel', 'true-key-video-hosting'] | 83 |
10,492 | https://devpost.com/software/monograph |
Inspiration
Assistance
What it does
Discretionary
How I built it
International
Challenges I ran into
None
Accomplishments that I'm proud of
Executive Director
What I learned
Everything
What's next for Monograph
Bureau of justice assistance
Built With
apis | Monograph | Understanding Police Community Relations | ['Abdurrasheed Saeed Ahmad'] | [] | ['apis'] | 84 |
10,493 | https://devpost.com/software/ma-2yohxn | Inspiration
A huge proportion of Reels focus on dancing. We wondered: how can we make the expression of dancing in Reels more interesting?
We were inspired by motion amplification, which reveals micro-movements in machines. The tool finds the moving areas and the direction they're moving in, and enhances those movements to make them visible.
What it does
While dance movements are already visible, we enhance them by extracting their directions and creating different effects that dancers can use to make their dancing look more exciting.
Arrows, the first effect we created for this Hackathon
The filter understands which part of the dancer is moving up, down, left or right and translates those movements into the separated edges of the silhouette.
After the SegmentDirection Tool was created we came up with four different filters that use that new function in fun ways.
Our ideas came from manga, kirakira and... Rihanna.
0. Line Ripples
The dancer's movements are translated into graphical ripples around them. They emphasize the current dance moves and also, by slowly fading away, show the past moves.
1. Energetic Steam
This effect is created to give more energy to the dancer's performance in Reels. The divided edge area of the segmentation balances the amount of steam and its direction.
2. Kirakira Trail
The fascination of kirakira effects is undeniable. We wanted to bring kirakira to the next level by making them particles that react to your movements.
3. When the Sun Shines We Shine Together
The dance moves create water that travels as if it had momentum. So rather than water just "falling" upwards, it flies away in the direction of the movement.
How we built it
The core system of all effects, the SegmentDirection Tool, is based on extracting the travel direction of the segmentation alpha channel. This is achieved by comparing adjacent pixels against a delayed frame. We generated segmentation alphas for each direction of movement and created a patch asset that can retrieve them.
A Patch Asset to split the segmentation we created
The SegmentDirection Tool allows you to create more complex segmentation effects that are based on the movement's direction. We created the four different effects to show the potential of this idea and its advantage for Reels filters.
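A rough plain-JavaScript sketch of the core idea (illustrative only: the real tool runs on the GPU via patches and render passes, and alphaNow/alphaPrev stand in for the current and delayed segmentation alpha):

function segmentDirection(alphaNow, alphaPrev, width, height) {
  const dirs = { up: [], down: [], left: [], right: [] };
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      // A pixel that just became part of the silhouette is a leading edge.
      if (alphaNow[i] > 0.5 && alphaPrev[i] <= 0.5) {
        // Classify by which neighbor was already inside in the previous frame.
        if (alphaPrev[i + width] > 0.5) dirs.up.push(i);        // body below: moving up
        else if (alphaPrev[i - width] > 0.5) dirs.down.push(i); // body above: moving down
        else if (alphaPrev[i + 1] > 0.5) dirs.left.push(i);     // body right: moving left
        else if (alphaPrev[i - 1] > 0.5) dirs.right.push(i);    // body left: moving right
      }
    }
  }
  return dirs; // four edge masks, one per movement direction
}

The four masks correspond to the per-direction segmentation alphas that the patch asset retrieves.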
The filters will be published separately, but for demonstration purposes we combined them into one. Any occasional fps lag does not occur in the separated filters.
Accomplishments that we're proud of
We started our team for the first Hackathon this year. Unfortunately, our
black hole filter
didn't place at that time, but the teamwork was perfect.
We appreciate each other's talents and we are proud that we were able to participate in this Hackathon again with the same team.
This time we took the hacking part more literally and "hacked" a new function into Spark AR, while also focusing on making useful and fun filters. We hope other creators benefit from the SegmentDirection Tool and create beautiful filters.
Adrian Steckeweh
@omega.c
+ Hayato Kuno
@kuno.fell.asleep
What we learned
We've gained a better understanding of new features like delay frames and render passes.
What's next for „Motion Effect Kit“
With the SegmentDirection Tool it is now possible to track motions and their direction. It's an idea that's only possible because of the latest Spark AR Studio, and we think it's going to be helpful for other creators. We'd love to see what kind of filters other people create once the patch is distributed. We also hope to create more effects that take advantage of this feature ourselves in the future.
Built With
shader
sparkar | Motion Effect Kit | A set of effects for Instagram Reels with custom patches to extract movement directions and areas. | ['Hayato Kuno', 'Adrian Steckeweh'] | ['First Place'] | ['shader', 'sparkar'] | 0 |
10,493 | https://devpost.com/software/geometry-dance-vfx-mixer | Inspiration
Since this hackathon's topic is Instagram Reels, the first idea was to create a tool for dancers: something that can help content creators do real-time visual effects.
But since this project is also a piece of my own creation, I wanted it to have a style. After doing some research into dancing VFX, I was inspired by
Kosuke Iwasaki's Kyoto Girl
. Then I decided to make this project geometric.
What it does
It's a tool that lets you create geometric shapes around you, making dance visual effects in real-time.
How I built it
Screen to world position with Plane Tracking
The most important functionality is calculating the depth between the camera and the user and converting the gesture's position to world space. This lets the user create shapes wherever they want. And with the help of the plane tracker, the user can pan the camera and feel the distance between the dancer and the geometries, which makes the space feel real.
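A minimal pinhole-camera sketch of that conversion (all names hypothetical; in the filter the depth comes from the plane tracker rather than being passed in):

function screenToWorld(touchX, touchY, screenW, screenH, fovY, depth) {
  // Normalize the touch to [-1, 1] with (0, 0) at the screen center.
  const ndcX = (touchX / screenW) * 2 - 1;
  const ndcY = 1 - (touchY / screenH) * 2;
  // Half-extents of the view frustum at the given depth.
  const halfH = Math.tan(fovY / 2) * depth;
  const halfW = halfH * (screenW / screenH);
  // Camera looks down -Z; the point lands on the plane at that depth.
  return { x: ndcX * halfW, y: ndcY * halfH, z: -depth };
}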
Nested Native UI Picker
There are a lot of different settings and options for the filter, so I set up a nested UI picker through JavaScript. The user can set up their own custom shape queue with it!
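A hedged sketch of how such a picker is typically wired up with Spark AR's NativeUI module (the texture names and the idea of reconfiguring for a sub-menu are assumptions, not this project's actual code):

const NativeUI = require('NativeUI');
const Textures = require('Textures');

(async function () {
  const icons = await Promise.all([
    Textures.findFirst('iconCircle'),
    Textures.findFirst('iconSquare'),
  ]);
  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: icons.map(tex => ({ image_texture: tex })),
  });
  picker.visible = true;
  picker.selectedIndex.monitor().subscribe(event => {
    // Calling picker.configure(...) again here with a different item set
    // is one way to get the nested, two-level menu behavior.
  });
})();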
Animated SDF Shapes
The basic geometry shapes are simple, so it takes effort to make them look nice. There are 8 control channels for each shape: main size, main thickness, outer distance, outer thickness, inner distance, inner thickness, shape edges, and alpha. I spent some time animating these values.
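For illustration, a rough sketch of how two of those channels (main size and main thickness) could drive an n-gon ring; the real filter evaluates this per pixel in a shader, so the JS form below is conceptual only:

function ringAlpha(px, py, mainSize, mainThickness, edges) {
  const angle = Math.atan2(py, px);
  const radius = Math.hypot(px, py);
  // Approximate signed distance to a regular polygon with the given edge
  // count: project the point onto the normal of the nearest face.
  const sector = Math.PI / edges;
  const wrapped = ((angle % (2 * sector)) + 2 * sector) % (2 * sector) - sector;
  const faceDist = Math.cos(wrapped) * radius;
  // Alpha is 1 inside the ring band around the polygon outline.
  return Math.abs(faceDist - mainSize) < mainThickness * 0.5 ? 1 : 0;
}

The outer and inner distance/thickness channels would add two more bands in the same way.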
Bloom
With some help from the Spark AR Community, I added blur and bloom effects to this project. This makes the project look a lot better. Thanks to the great community!
Segmentation and Render Pass
To put the shapes in front of and behind the dancer, it needs segmentation. And with the help of render passes, I added some lighting details. For example, if you create a geometry in front (with the pinch-in gesture), the shape will make the segmentation layer brighter, which makes it feel like the shape is really in the space.
Persistence Module
I use the Persistence module to store the user's custom shape sequence data; otherwise it would be too annoying if the sequence reset after every video recording.
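A minimal sketch of that idea with the Persistence module (the key name 'shapeQueue' is an assumption, and the key must also be whitelisted in the project's capabilities):

const Persistence = require('Persistence');

// userScope keeps data across sessions of the effect for this user.
async function saveQueue(queue) {
  await Persistence.userScope.set('shapeQueue', { shapes: queue });
}

async function loadQueue() {
  try {
    const data = await Persistence.userScope.get('shapeQueue');
    return data ? data.shapes : [];
  } catch (e) {
    return []; // nothing stored yet
  }
}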
Challenges I ran into
The main challenge of this project was meeting the needs of dancers. (Though I didn't actually ask any dancers about it.)
Before I started this project, I did some research on the topic of VFX dance videos. I found that it is very important to make the visuals feel like they surround the body. For example, there are lots of dancing videos that have drawn strokes passing around the dancer's arms and legs. But since Spark AR does not have body tracking yet, this seemed impossible to achieve.
But after some tests and digging, I found a way to mimic the surrounding effect using two planes, one behind the user and one in front, using animation to make them feel connected.
The space is also important, so I spent some time working on the world coordinate system. Since the transform channels are point and scalar signals, the default way to modify their values is through binding instead of setting a fixed value. This kind of API is a bit awkward for this project, so I spent some time testing out the APIs... but it all got solved anyway!
Accomplishments that I'm proud of
All of the challenges listed above are part of what I'm proud of. But the proudest thing is the visual result. I put a lot of effort into the lighting.
For the shapes in the back, there's an additional ground light, which makes it feel like the shape is really positioned in the space. And for the shapes in the front, there is an overlay glow on the dancer (the segmented person layer), which makes the shape feel like it is really placed in front of a person.
And also the shape animation and the bloom: since the shapes are simple, they need some additional effects to achieve the visual goal.
And at last, I'm pretty satisfied with the result!
What I learned
This is a scripting-heavy project, and it was my first time scripting with ES6 and ES8. So I think my JavaScript skills improved a lot...
What's next for Geometry -- Dance Vfx Mixer
The working process of this project pushed the limits of my imagination of how a filter could be used. I used to treat filters like a piece of work, like a painting, something that feels more "fixed". This project is like a tool, a brush, which helps users express themselves and create their own performance.
This project only supports two shapes, but I think of it as a demonstration; it could be more. For example, letting the user customize the color, or even the shape's animation, or other kinds of visuals.
This kind of "creation tool" might be a brand new category of filters. It might be a little difficult to operate, since filters are usually simpler to use, but with some practice it becomes handy. And when users get used to this kind of filter, there will be more interesting videos to be made!
Built With
javascript
sparkarstudio
Try it out
www.instagram.com | Geometry -- real-time dance vfx | A tool that allows you to add vfx in your performance in real-time. | ['Hsing Huang'] | ['Second Place'] | ['javascript', 'sparkarstudio'] | 1 |
10,493 | https://devpost.com/software/mirror-world | Inspiration
Mirrors are a great creation for producing different looks. I was watching a dance video and noticed the creator was using mirrors as a tool to create different perspectives, so I decided to make a filter that has mirrors in it.
What it does
It has 4 mirrors in the plane tracker, and they reflect the elements. It creates an illusion of having multiple mirrors in the real world, giving the video a different look.
How I built it
I made the model in Blender, created the mirror look with the Patch Editor, and placed the mirrors in the plane tracker to make them appear in the world.
Challenges I ran into
I tried to make a reflection using a render target, but it did not come out as expected, as I was trying to feed an input to an element before the scene had been rendered.
What I learned
Making a simple reflective mirror look
What's next for Mirror World
Making the 3D models reflected as well, and improving the look
Built With
sparkar
Try it out
www.instagram.com | Mirror World | World filled with mirrors and it reflects our actions and emotion . Mirrors are the best way to see a persons soul here its the best way to have fun as the users action is been mirrored. | ['Kavin Kumar'] | ['Third Place'] | ['sparkar'] | 2 |
10,493 | https://devpost.com/software/save-our-stages | GIF
Live performance events have been postponed indefinitely due to the pandemic. This is our team’s quirky tribute to the performing arts, crew, and community, bringing the stage to you and your home. We were inspired by the Save Our Stages movement, and the social and performance-based nature of Instagram Reels. Our team learned how to collaborate remotely, the new features of SparkAR, and faced challenges such as simplifying a complex concept into an elegant idea.
Built With
unreal-engine
Try it out
github.com
www.facebook.com | Save Our Stages | Has social distancing got you down? Do you miss the days of being able to attend concerts? Use our filter to find your voice and your stage! But be careful, your voice might shatter the glass! | ['Olivia Seow', 'Aaron Santiago'] | [] | ['unreal-engine'] | 3 |
10,493 | https://devpost.com/software/face-tracking | A simple face tracking effect with Spark AR
Try it out
github.com | Face Tracking | Face Tracking Effect | ['Tai Vu'] | [] | [] | 4 |
10,493 | https://devpost.com/software/djdj | InstaGJ I'm the Dj Instagram Effect cover
InstaGJ I'm the Dj Instagram Effect Placement
InstaGJ I'm the Dj Instagram Effect Used on Instagram reels
InstaGJ I'm the Dj Instagram Effect close up of the Dj Deck
InstaGJ I'm the Dj Instagram Effect preview/Adjustment/Scaling in Instagram reels
InstaGJ I'm the Dj Instagram Effect Disco Light
InstaGJ I'm the Dj Instagram Effect Demo video snapshot
InstaGJ I'm the Dj Instagram Effect Mimicking the Dj
InstaGJ I'm the Dj Instagram Effect close up of the Dj Deck
Inspiration
The inspiration for the
InstaGJ I'm the DJ
Instagram effect came from the idea that we usually see DJs as the kings of music mixing, with their prowess in making people enjoy music in a special way. Out of this, we thought that almost everyone in the world has at one time or another wondered what it would be like to be a DJ, or how they would hype up the masses if they happened to be the lead MC and the DJ at the same time at a party
What it does
The
InstaGJ I'm the DJ
Effect is an Augmented Reality World Effect that employs the power of Spark AR to augment your world with a Dj Booth ready with a spinning deck and disco lights to help you be creative with your Instagram stories and reels.
The 3D model is scalable by pinching the screen in or out and can be rotated to face a different direction with two fingers. To place the model into the real world, one only needs to find a plane surface and tap on it.
For the best result, and for the model to anchor itself to the ground appropriately while placing, being in a well-lit area is advised
The model is viewable from the front, the back and all angles, including bird's-eye view, worm's-eye view, close-up, mid-shot and long-shot, so with good video editing skills on Instagram Reels one can create some awesome content
How we built it
We created and customized the model in a 3D application to suit our needs and edited the texture images in Photoshop
Challenges we ran into
-Customizing the model to be as simple as possible without losing the high-quality look of the effect
-Some parameters that we wanted to use aren't supported in Instagram effects, i.e., editable text to add time and location
-We wanted to add color leaks to our effect with the ability to turn them on and off, but we noticed we were losing the crispness we wanted, so we had to do away with it. We tried several ways to achieve it, but it was all in vain
Customizing the models to meet the recommended polycount and total file sizes
Accomplishments that we're proud of
Having completed the major part of the project
Managing to take part in this 2nd SparkAR Hackathon.
Managing to work out an idea to a concept and to a finished effect.
What we learned
Patience pays: doing things again and again, trying out different ideas, will finally lead you to an answer. (At first most of our materials weren't being mapped onto the model once we imported them into Spark AR, but after many tries we managed to figure out what the issue was, i.e., Spark AR doesn't support V-Ray materials or some material blend modes from external software.)
What's next for
InstaGJ I'm the DJ
Waiting to see what people will create with it, and also learning about areas we can improve over time to help us make updates to the effect.
Built With
3ds-max
photoshop
sparkar
Try it out
www.instagram.com
github.com | InstaGJ I'm the DJ | Ever wanted to be behind a Dj Deck with your best Song Playing in the background? Spin the Deck on Instagram Reels while syncing your best Music to be the Dj of the Day. | ['James Mobutu'] | [] | ['3ds-max', 'photoshop', 'sparkar'] | 5 |
10,493 | https://devpost.com/software/brendan-luu-gradient-body | GIF
A few weeks ago, I was dancing in front of the mirror in my bedroom and thought to myself:
"I feel like I'm the star in the classic iPod commercials."
Moments later, it hit me, "I could use Spark AR to actually make that happen!"
I made a quick prototype, propped up my phone with the effect in selfie mode, and my body just started moving to the groove. I'm horrible at dancing, but the effect's silhouette masked how silly I thought I looked. I felt free to express my body's movement.
I paired the silhouette effect with contrasting gradient colors to match people's moods in an aesthetically pleasing manner.
Gradient body
is all about the dance moves, people's body language, and helps Reels users discover their inner pop star.
Built With
ar
particle
sparkar
Try it out
www.instagram.com | Brendan Luu - Gradient body | Remember the classic silhouette iPod commercials? Gradient body enables Reels users to feel like the star of the shoot! | ['Brendan Luu'] | [] | ['ar', 'particle', 'sparkar'] | 6 |
10,493 | https://devpost.com/software/heart-toss | Inspiration
We wanted to pelt our cat with hearts
What it does
Tapping on a plane creates a heart, which you can scale or reposition by pinching or dragging. Drag to the destination and hold record to start the animation. Send a heart from yourself, or from one friend to another.
How I built it
Used SparkAR plane tracking to set an origin, and gesture control to adjust scale/position and set the destination. Animated the heart using Autodesk Maya as well as the SparkAR animation controller.
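A rough sketch of that gesture wiring, following the commonly documented Spark AR patterns (the scene object names are assumptions, and the Touch Gestures capability has to be enabled):

const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const [planeTracker, heart] = await Promise.all([
    Scene.root.findFirst('planeTracker0'),
    Scene.root.findFirst('heart'),
  ]);
  const t = heart.transform;

  // Drag: move the tracked point on the plane under the finger.
  TouchGestures.onPan().subscribe(gesture => {
    planeTracker.trackPoint(gesture.location, gesture.state);
  });

  // Pinch: scale relative to the scale at the start of the gesture.
  TouchGestures.onPinch().subscribeWithSnapshot(
    { lastX: t.scaleX, lastY: t.scaleY, lastZ: t.scaleZ },
    (gesture, snapshot) => {
      t.scaleX = gesture.scale.mul(snapshot.lastX);
      t.scaleY = gesture.scale.mul(snapshot.lastY);
      t.scaleZ = gesture.scale.mul(snapshot.lastZ);
    }
  );
})();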
Challenges I ran into
The Instruction manager appeared to only accept predefined tokens for instructions; inputting strings caused the app to break. We were not able to create loops with the Patch Editor, which forced us to remodel the implementation. Documentation was out of date or recommended deprecated functionality. Interaction between script and Patch Editor was not as parallelized as we hoped. Debugging could also have been easier if debug statements weren't the only option.
Accomplishments that I'm proud of
It works! And also translating a 2D rotational gesture into 3D height adjustments.
What I learned
Reactive programming is streamlined and good to have, but also quite difficult to implement. Plane tracking is solely based on image processing and can be used on 2D images.
What's next for Heart Toss
Adding trail particle effects and adding an option for a 'rejection heart' that withers away once it lands.
Built With
maya
sparkar
Try it out
github.com
www.facebook.com
www.instagram.com | Heart Toss | Have you ever wanted to visualize your love for someone else? Well now you can! Show your affection by tossing your heart at your loved ones, your friends and even your pets <3 | ['Winnie Feng', 'Connor Tidd'] | [] | ['maya', 'sparkar'] | 7 |
10,493 | https://devpost.com/software/jarvisar-filter | Inspiration
-Tony: JARVIS, you up?
-JARVIS: For you, sir, always.
This filter is inspired by the one and only Iron Man's AI assistant J.A.R.V.I.S. This filter is a small tribute from our side to Iron Man.
Keep Calm and Call JARVIS!
What it does
Creates an interface of JARVIS using the front camera
How I built it
Spark AR, using the concept of multiple layers
Challenges I ran into
Handling multiple layers and segmentation errors.
As a beginner, it took time to get adjusted to the Spark AR environment
Accomplishments that I'm proud of
Creating the prototype of my favorite assistant
What I learned
Creating different types of effects using Spark AR and managing multiple layers
What's next for JarvisAR filter
To add more elements, music effects, voice replies and many more features
Built With
adobe-illustrator
photoshop
sparkar
Try it out
www.instagram.com | JarvisAR filter | Missing our beloved Iron Man then you will love this filter 3000 | ['Shanu Mishra', 'Ayush Choudhary'] | [] | ['adobe-illustrator', 'photoshop', 'sparkar'] | 8 |
10,493 | https://devpost.com/software/ar-world-sbu94o | Inspiration
Everybody has ideas about what they want in a filter, or wishes to see how they would look in certain situations; for example, my friends always wondered how they would look if they had dimples :) Using Augmented Reality for this is just perfect! Also, 3D objects fascinate me a lot, so I have created my own effects and worked with 3D objects!
What it does
I have created a bunch of effects, some of which have animated face distortions, while others have animated cartoons and backgrounds, and last but not least, I have created animated 3D objects and a face cutting effect that will surprise you!
How I built it
I built all the effects using Spark AR. As for the face distortion effects, I used a tool named SculptGL to create the desired distortions in a face mesh and animated them using the loop animation feature in the Patch Editor.
Next, the animated cartoon and background effects were created using a tool for converting GIFs to sprite sheets, then importing the JPG file as an animation sequence in Spark AR.
For 3D models, I am extremely thankful to the websites from which I could download the models and animate them. After animating on Mixamo, the objects were imported into Spark AR, where I linked each object to its animation, and there we go!
And lastly, the face cutting effect was created by importing a 3D object and then using a plane occluder and face tracker to shape the desired effect.
Challenges I ran into
My first experience with AR and Spark AR was during this hackathon itself, so at first I found it difficult to figure out why an effect wasn't working properly, or how to animate 3D objects.
Accomplishments that I'm proud of
Since my first experience with AR was during this hackathon itself, it is really encouraging to have created some effects of my own! It was an amazing experience to work on something we use so often in our lives, and to create my own version of it!
What I learned
I learned to animate face distortions and face tracking, to change the background and even animate it. I also learnt how to use GIFs for animation. And I learned to animate 3D objects and to use the plane occluder for the face cutting effect.
What's next for AR World
I will try to add music and sounds to my effects; I also wish to add multiple animations in a single project.
Built With
ezgif
mixamo
sculptgl
sparkar
Try it out
github.com | AR World | Welcome to the world of augmented reality. Find animations and filters which will fascinate you. | ['Anushka Gupta'] | [] | ['ezgif', 'mixamo', 'sculptgl', 'sparkar'] | 9 |
10,493 | https://devpost.com/software/see-19 | Inspiration
With the proliferation of Covid-19 testing sites, we wanted to provide a way for people to spread awareness of where these sites and resources can be found. On a personal level, I get tested once a month and view it as my civic duty to assist in the fight against Covid-19. What we want to do is provide a way using Spark AR where users can share the results of their test and connect their audience with resources to get tested as well.
What it does
Using See 19, users are able to place an AR filter on their image. They can then select what the status is of their latest test from "Covid-19 Positive", "Covid-19 Negative", and "Covid-19 Tested".
If the user chooses the Positive filter, then their viewers can swipe up to be connected to resources on what to do if they've been in contact with this person
If the user chooses the Negative or Tested filter, then their viewers can swipe up to be connected to testing sites near them.
How We Built It
We used Spark AR to create this. The graphics were created in Sketch.
Challenges We Ran Into
We wanted to try to add a face mask to the user, but we didn't have the time to create the model and render its movement onto the user.
Accomplishments that we're proud of
We're proud of the fact that we were able to create something that leverages social media to connect people with locations where they can get tested.
What I learned
We learned how to use Spark AR for the first time, and made new friends during this hackathon.
What's next for See 19
Next, we'd like to engineer a way to verify the test result. This would require access to the back-end of the user's medical records, which requires HIPAA-level security, but it would also prevent the spread of misinformation by verifying the result of a user's test.
Built With
ar
particle
Try it out
github.com | See 19 | See 19 is an AR filter that enables users to share the results of their latest Covid test. | ['Shubhendra Shrimal', 'Sonny Tosco', 'Stephen Bartkowski', 'Hans Gutt'] | [] | ['ar', 'particle'] | 10 |
10,493 | https://devpost.com/software/360-portal | Inspiration
I like to work with all kinds of XR, but AR is my favorite, so I thought about what could be a great AR effect that people would love to use in Instagram Reels, and this idea popped up.
What it does
It takes the user to an immersive virtual environment of their choice
How I built it
I used SparkAR Studio to build this project.
Challenges I ran into
The main challenge I faced while making this project was to find ways to integrate various stuff into SparkAR Studio.
Accomplishments that I'm proud of
I am proud of completing the project on time and also learning a lot on the way which I will also be able to implement in future projects.
What I learned
I learned how to operate SparkAR Studio pretty well. I also learned blender quite enough to make my own models for small projects. I also learned some VR techniques.
What's next for 360 Portal
I hope to incorporate better quality 360 images, a better UI, and some extra features.
Built With
blender
javascript
sparkar
Try it out
www.instagram.com
github.com | 360 Portal | An AR effect that can be used in Instagram reels to immerse yourself in a completely different environment. It has various 360 environments that the user can choose from and take enthralling pictures. | [] | [] | ['blender', 'javascript', 'sparkar'] | 11 |
10,493 | https://devpost.com/software/no-name-3dewkx | GIF
Choose your style.
GIF
Be yourselves!
Inspiration
We were inspired mostly by the music and commercial making industry, the marvel of editing that can make any activity look captivating and visually stunning when combining the right shot with the right sequence, timing and sound. We were aiming to create a dynamic bridge between different effects that can inspire users to create and share their own short content with others, without needing the editing skills and hours of trying to find the right combination of filters that work best together. Our ambition was to create an effect that users would find engaging and would feel inspired to use, inviting them to try out more dynamic and bold content to express themselves.
What it does
The Lucid Dynamics effect is a series of rotating montages: when the user presses the record button, it activates a rotation of Instagram effects that results in an edited video. By pressing one button the user can put together their favorite music and the editing effect, making it both easy and fun to try out the new Instagram Reels feature.
How We built it
The effect was built using Spark AR, 3D objects, face tracking and segmentation. We also used the picker UI, UI sliders, light staging, 3D object animation, material setting and logics building.
Challenges We ran into
The biggest challenge we faced was the limitation of body tracking, which shifted our original direction of building the montage effect upon the dynamics of body motion and music towards the predefined effect rotation solution. The second challenge was to find the right combination of bold and edgy effects that would be dynamic, fun and well combined in a variety of rotating sequences.
Accomplishments that We are proud of
We are proud to share an effect that can reduce the team size and time required to make short-form creative performances, allowing users to create content that looks complex, dynamic and fun but does not require heavy editing skills.
What We learned
We learned more about IG features and the capabilities of Reels, and discovered a lot of daring trends.
What's next for Lucid Dynamics
The Lucid Dynamics effect is the first step towards a series of ultimate filters based on body tracking or its simulation, which can bring real-time editing and content creation to the next level for influencers, brands and users who want to share their own form of dynamic short-form creative performances.
Built With
3dscene
colorscheme
dynamic
facelightning
facetracking
nodesystem
segmentation
Try it out
www.instagram.com
gitlab.com | Lucid Dynamics | Imagine a one click real-time creative performance studio in your Instagram Reels | ['Alex Dzyuba', 'Alexandr Basiuk', 'Kris Masera', 'Anfisa Dzyuba', 'Anna Rohi', 'Eugene Ovcharenko'] | [] | ['3dscene', 'colorscheme', 'dynamic', 'facelightning', 'facetracking', 'nodesystem', 'segmentation'] | 12 |
10,493 | https://devpost.com/software/scripto-sjzfo6 | before clicking
when clicked
Inspiration
Make an effect for when a memorable moment comes 🔄
What it does
It works with both the rear camera and the front camera. The first thing you see is a custom instruction saying "smile". When clicked, it shows the image of the moment as a GIF in the camera to make your imagination more natural and effective. Users can choose to go back to the place where they created a post and create a new story where the old content sits on top of the real-world layer. The whole idea of this filter is to encourage users to create new content by leveraging existing posts, supporting their reminiscing behavior.
How I built it
I collected a large set of images in PNG format and used them in a particle emitter. I added a background to create contrast, then added functionality to the face tracker and the background effect with the help of Spark AR.
Challenges I ran into
The patching seemed difficult; lots of things conflicted while building. Detection was not so easy either: we needed to take care of eyebrow movement, eyes opening and closing, etc. (Large images were deleted because of the large export size.)
Accomplishments that I'm proud of
I worked on this project from the start of this month (September), but unfortunately it got delayed due to a health issue and was completed in the last hour of the last day. I want to build something as an open source contributor towards organizations to further strengthen my skills.
What I learned
It was quite interesting to learn the Spark AR tutorials from scratch.
What's next for Scripto
Well, we plan to make this filter more varied; we will add support for more facial expressions, so more people can express themselves as emojis too.
Built With
assets
clicking
gif
Try it out
github.com | Scripto-Lava | I always thought about how AR changes its to make GIF image, so, I decide to make AR tech to build Animated effect into GIF when clicked/ | ['Anupam Haldkar'] | [] | ['assets', 'clicking', 'gif'] | 13 |
10,493 | https://devpost.com/software/multiclip | Inspiration
The trend of story reels
What it does
It allows the user to edit and stitch together two video clips
How I built it
It's built around the galleryTexture and a whole heap of Spark patches, as well as neat techniques discovered over time.
Challenges I ran into
Applying edge detection on both the background and foreground textures. Also, getting object tap to work properly seems impossible; it works in the player but not so much live. A lot of the UI was actually a bit hard to implement because of current constraints. (Using the NativeUI was also out, since it pushed the "Add Media" button out of sight on launch.)
Accomplishments that I'm proud of
Having made the deadline, just barely; I really wanted to at least submit one thing despite the plethora of other projects we currently have.
What I learned
That making even such a short video can be quite time-consuming
What's next for MultiClip
I'd fine-tune it first; lots of the things that work very well in the player don't on the phone. I'd possibly rethink the approach to the UI and maybe experiment with render passes to potentially allow for 3 camera feeds.
Built With
sparkar
Try it out
www.instagram.com | MultiClip | Multiclip is aimed to allow users more creative freedom in their content generation by allowing them to have multiple clips at once whilst editing them individually. | ['Boris Josz'] | [] | ['sparkar'] | 14 |
10,493 | https://devpost.com/software/glass-twin | Fight yourself!
Inspiration
Neon Twin, for sure. This is my spin on it.
What it does
The effect creates a glassy ethereal twin that will have a good time doing whatever you want to do. Just do what you want!
How I built it
P A T C H E S. NO OTHER ASSETS. 27kb, somehow.
Challenges I ran into
Editing a 2-minute video. Inshot is a pretty cool app and I got it done pretty quickly. Mostly I had a hard time finding two minutes of content to record; I'm so used to the 15-second clips on IG!
Accomplishments that I'm proud of
I'm happy that I have an optical flow patch now!
What I learned
I learned a lot about normal maps! Also found a number of bugs in Spark :)
What's next for Glass Twin
I'll keep refining it! I already have several ideas on how I can improve it.
Built With
sparkar
Try it out
www.instagram.com | Glass Twin | Glass Twin is your ethereal buddy who shows up to back you up, or just keep you company. Glass Twin is great at dancing and fighting and just being a good friend. | ['Josh Beckwith'] | [] | ['sparkar'] | 15 |
10,493 | https://devpost.com/software/hallucinate | GIF
Reels_demo
GIF
Reels_demo_2
GIF
Reels_demo_3
Front camera
Front camera
Front camera
Inspiration
The inspiration was creating a personalised experience for the person who uses the filter.
Whoever uses the filter, the resulting visual will be unique for each individual.
What it does
The filter creates random shapes with colors, based on the camera texture, as long as there is movement.
You can capture any moment with the random shapes you created on the canvas, which is your screen, and you will have your own unique filter painting.
How I built it
I used the audio analyzer so colors and shapes differ by the sound input coming from the microphone, while a render pass works in the background for the richness of the shapes.
Challenges I ran into
The most challenging part was making the filter produce random shapes and different colors perpetually.
Accomplishments that I'm proud of
The whole filter :)
What I learned
I gained a better understanding of creating something with the audio analyzer and the render pass, and how to combine them to improve the randomness.
What's next for hallucinate
I want to improve the transitions between the scenes by smoothing them.
Try it out
www.instagram.com | hallucinate | Hallucinate allows you to turn your environment into a magical | ['Harun Köktürk'] | [] | [] | 16 |
10,493 | https://devpost.com/software/a-qyc621 | Inspiration
A performing tunnel for aspiring performers on Instagram Reels.
What it does
Using the back camera, a camera operator can take videos of a performer inside a "Tunnel" augmented into world space. The camera operator can move inside the tunnel in three dimensions and rotate the camera in three directions. The walls of the tunnel store shots of the performer during the performance.
How I built it
I used the SparkAR segmentation, plane tracker and render passes.
I used the plane tracker to enable the camera operator to move inside the tunnel in three dimensions and to rotate the camera in three directions. Intentionally, I did not place the "Tunnel" 3D object under the plane tracker. This is because 1) plane tracking isn't yet stable enough, and 2) when a plane tracker is placed into the scene, even without placing any object under it, a horizontal plane at the device's starting position is assumed, and SparkAR's environmental understanding tracks this point.
I used segmentation to enable the performer "inside" the tunnel to be visible to the camera.
I did not intend this effect to be used when the performer is “outside” the tunnel.
I used delay frames to keep snapshots of the performer on the walls. When there is no person inside the tunnel, the tunnel initially shows pictures of the outside world. Once there is a person inside the tunnel and the segmentation percentage is high enough, the performer is visible on the walls. If the segmentation percentage becomes low, the last snapshot with an acceptable segmentation percentage is shown on the walls.
The back panel of the tunnel shows a concatenated snapshot of all the people inside the tunnel.
Challenges I ran into
Loading SparkAR effects into Instagram Reels on Android phones. Instability of the plane tracker. Glitches in segmentation.
Accomplishments that I'm proud of
The performance tunnel is a useful immersive platform for aspiring performers.
What I learned
SparkAR render passes.
What's next for Performance Tunnel
Using render passes and delay frames, the walls of the tunnel can display a wider variety of snapshots of the performer and other people, e.g., including shots from the front camera.
Built With
sparkar
unity
Try it out
www.instagram.com | Performance Tunnel | A virtual tunnel embedded in world space for aspiring performers | ['Banu Ozden'] | [] | ['sparkar', 'unity'] | 17 |
10,493 | https://devpost.com/software/magic-mirror-bg | Inspiration
The magic mirrors you can find at carnivals that distort the image.
What it does
The effect distorts the background with some fun effects that go well with full body segmentation. Looks cool for dances.
Built With
sparkar
Try it out
www.instagram.com | Magic Mirror BG | Create fun videos by bringing magic mirror distortion effects to your background! | ['David Pripas'] | [] | ['sparkar'] | 18 |
10,493 | https://devpost.com/software/something-bad-is-going-to-happen-moment | GIF
indoor space: using it for a game of tic-tac-toe
outdoor space: on grass
"what's next"
Inspiration
Inspired by Stuart Williams' work “
Luminous Earth Grid
” (1993) where he explores "the potential harmony between technology and nature", the grid is a simple and consistent statement on what my interpretation of "AR" is - a possible middle ground between the overwhelming virtual space/tech and reality.
I wanted to make something simple and consistent, which is the opposite of what the digital experience of tech/social media has become, and also what AR filters (for example, on TikTok) have become.
What it does
It detects horizontal surfaces in the real world and places a grid on top.
People can play noughts and crosses with it, or potentially use it to line up and organize things in real life.
How I built it
Using Cinema4d for the asset and SparkAR for the filter.
Steps: following the worldAR tutorials and understanding what it can and can't do, thinking of the concept, making the asset, editing the manipulation ring, adding the plane tracker, putting the asset in, testing materials, then testing and iterating with small adjustments here and there.
Challenges I ran into
Trying to get a better (more textured) glow for the grid, which I did not figure out.
Plane detection is pretty unstable, and I tried figuring out ways to optimize it (did not figure it out).
Getting rid of things. Initially I had a picker that switched between grids of different dimensions (3x3, 4x4, 5x5). I then realized I was once again falling into the trap of "adding features will make things better": not only would the user already be busy placing and moving the grid and making sure it is on the right surface, but the difference in dimensions does not add anything to the experience.
I was hoping to make it more engaging and well-polished while conveying the same target experience, but because of time I wasn't able to polish it more and experiment with more things that could make it better.
Accomplishments that I'm proud of
This is the second filter I made with SparkAR, thinking more about the content itself instead of just the tech and features.
What I learned
AR is more than just filters on people's faces.
World AR effect and plane tracking features.
What's next for "The Grid. A Digital Frontier"
More landscape filters that interact with landscapes in interesting ways.
Plane tracking that can detect surfaces very far away (mountains, rivers, etc.) so this filter can be used on top of them.
Trying to make a better glow material.
If Spark AR adds more features, like detecting actual depth in space so different things can be put in front of the filter, the user could interact with the grid and maybe use it to frame and organize plants in the garden (like
this
).
Built With
ae
cinema4d
photoshop
planetracking
sparkar
worldar
Try it out
www.instagram.com | The Grid. A Digital Frontier | A grid for organizing and playing noughts and crosses. | ['Joanna Liu'] | [] | ['ae', 'cinema4d', 'photoshop', 'planetracking', 'sparkar', 'worldar'] | 19 |
10,493 | https://devpost.com/software/hologram-2ya6q8 | Inspiration
Holograms
What it does
Projects yourself as a hologram
How I built it
Gallery picker and manipulations of the segmented image
Challenges I ran into
Making the recorded video's background transparent
Accomplishments that I'm proud of
Being able to deliver this while learning shaders.
What I learned
Playing with materials and shader knowledge
What's next for Hologram
Sci-fi videos that can be used in Reels.
Built With
ar
particle
sparkar
Try it out
www.instagram.com | Hologram | Record yourself and watch yourself as a hologram | ['Jivesh Piplani'] | [] | ['ar', 'particle', 'sparkar'] | 20 |
10,493 | https://devpost.com/software/the-fairylights | Inspiration
There is nothing better than some fairylights to add fun and beauty to any frame, so here's a frame of fairylights to help add that little something extra to your reels!
What it does
It puts a frame of fairylights on the screen when you tap it.
How we built it
We built and optimised the assets using Blender and the SparkAR toolkit for Blender.
What we learned
We learned a lot of Blender in the process, and about the different add-ons available for it.
What's next for The Fairylights
We would like to add more colour and animation to make it more fun.
Built With
sparkar
Try it out
drive.google.com | The Fairylights | Illuminate your stories! | ['Ritvi Mishra', 'Keshvi Mishra'] | [] | ['sparkar'] | 21 |
10,493 | https://devpost.com/software/atlantisian-and-the-mermaid | Preview
Atlantisian and the Mermaid is an Augmented Reality based mini-game powered by SparkAR. The game makes use of advanced features like plane tracking, particle systems, worldAR, etc.
In this game, the user helps the Atlantisian collect the waste dumped into the water and gets rewarded with a beautiful mermaid for every five items collected, as promised by the King of Atlantis. So check out this filter and give him a helping hand.
Built With
autodesk
blender
javascript
particle
sparkar
Try it out
www.instagram.com | Atlantisian And The Mermaid | Atlantisian and the Mermaid is an Augmented Reality based mini-game powered by SparkAR. The game makes use of advanced features like plane tracking, particle systems, worldAR, etc. | ['Kartik Gupta'] | [] | ['autodesk', 'blender', 'javascript', 'particle', 'sparkar'] | 22 |
10,493 | https://devpost.com/software/corgi-play-date | Corgi Play Date
Outdoor shot
Outdoor shot near Marina Bay Sands, Singapore
Reset Mode
Play Mode
Follow Mode
Powered by Spark AR Studio
Plane Tracking
Touch Gesture Module
Patch Editor
Animation Module
World & Local Transform
Scripts
Inspiration
When I was alone and wanted to capture a video of a nice place with beautiful surroundings, I realized that the scene was a little 'still' and there was no subject to be filmed. At the same time, I also wished that I could own a dog and bring it to explore the world. With these intentions, I thought: why not create a virtual dog that I can film anywhere I want?
What it does
Corgi Play Date is an Instagram effect where the user can interact and have fun with a Virtual Corgi. It has 4 modes:
RESET Mode. You can use this mode to reset the Plane Tracker. In addition, you can also reposition and change the scale of the Corgi.
FREE Mode. In this mode, the Corgi decides on its own what to do next.
Follow Mode. You can walk the Corgi in this mode. The Corgi will try to stay within your camera's view.
Play Mode. Have fun with the Corgi by playing fetch using a frisbee. You can throw it in any direction and it will fetch it back to you.
How I built it
3D Models
The 3D mesh was modeled and animated by
nitacawo, who sells the model on Sketchfab with a Standard License. The model was built natively with Autodesk 3ds Max, but I edited the model in Blender using its fbx file.
I wanted to give the Corgi a smoother look, so I added a Subdivision Surface Modifier. Basically, the modifier splits the faces of a mesh into smaller faces, giving it a smooth appearance. I also added the 'deal-with-it' sunglasses to make it cute and funny.
Next, I used Photoshop to smooth its texture.
Then, I split its all-in-one action sequence into 35 actions, which can be used with Spark AR's playback controller to animate the Corgi. In addition, I cleaned up some rotational and offset errors due to the fbx import.
With that, I imported the modified 3D model into Spark AR.
Logical Flow Diagram
The following summarises how the effect works:
In a nutshell, I used 7 scripts to handle different parts of the program. The animation is always continuous while the effect is running. When an animation has completed, the Animation Generator requests the next animation.
Challenges I ran into
1. Reactive Programming
I was new to reactive programming and I faced a lot of issues when I needed to use math formulas. For example, to determine whether the Corgi's position is within the Line-of-Sight (LOS), I used the barycentric coordinate system to determine its status. To do so, I need to determine the signs generated by comparing the Corgi's position and all 3 LOS vertices' positions with the following formula:
(p1.x - p3.x) * (p2.z - p3.z) - (p2.x - p3.x) * (p1.z - p3.z);
However, this formula doesn't work because I'm using Scalar Signals. As a result, the following is the correct code:
Reactive.mul(p1.x.sub(p3.x), p2.z.sub(p3.z)).sub(Reactive.mul(p2.x.sub(p3.x), p1.z.sub(p3.z)));
In my opinion, it is less readable than the imperative programming style. Having said that, with reactive programming the value updates automatically without having to re-execute the statement, which is a very powerful feature.
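For completeness, a hedged sketch of how the full same-sign point-in-triangle test could look with these signals (the helper names are illustrative, not the project's actual code):

const Reactive = require('Reactive');

function sign(p1, p2, p3) {
  return Reactive.mul(p1.x.sub(p3.x), p2.z.sub(p3.z))
    .sub(Reactive.mul(p2.x.sub(p3.x), p1.z.sub(p3.z)));
}

function insideLOS(p, p1, p2, p3) {
  const d1 = sign(p, p1, p2);
  const d2 = sign(p, p2, p3);
  const d3 = sign(p, p3, p1);
  const hasNeg = d1.lt(0).or(d2.lt(0)).or(d3.lt(0));
  const hasPos = d1.gt(0).or(d2.gt(0)).or(d3.gt(0));
  // Inside when the three signs agree; this BoolSignal updates reactively.
  return hasNeg.and(hasPos).not();
}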
2. Limitation of WorldTransform
This effect relies on WorldTransform so that the Corgi 'looks' at the camera during Play Mode. However, the Y rotation signal value is in the range [-PI, PI] instead of a full 360 degrees. In other words, there is no way to know if an object has rotated more than 180 degrees. Due to this limitation, when the Corgi runs 'behind' the camera, it will not look at the camera.
3. Fixed Speed for Animation Playback Controllers
I was looking for a way to accurately generate running speed based on running distance, but the speed of the Playback Controller is constant. As a result, I can only work with 4 levels of speed using 4 different types of moving animation.
4. No Access to Recorded Audio from Script
I wanted to create some basic voice-controlled commands so that the user can interact directly with the Corgi by using voice recognition. Hopefully, this feature will be added to Spark AR soon.
5. Unable to 'Interrupt' Ongoing Animation
I used the subscription to the TimeDriver's onCompleted method to generate the next action, but I couldn't introduce a new animation before onCompleted was called. Well, technically I can interrupt and generate a new animation, but the previous onCompleted callback will end the current animation earlier than it should. If I keep interrupting, the whole animation system becomes unstable. Maybe I'm missing something here. Due to this limitation, there is a slight delay when the user taps to throw the frisbee in Play Mode because of the Corgi's ongoing animation.
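A minimal sketch of the chaining pattern described above (function names are illustrative):

const Animation = require('Animation');

// Each action queues the next one only from onCompleted, which is why
// interrupting mid-animation destabilizes the chain.
function playAction(durationMs, pickNextAction) {
  const driver = Animation.timeDriver({ durationMilliseconds: durationMs });
  driver.onCompleted().subscribe(pickNextAction);
  driver.start();
  return driver; // keep the handle: stop() is the only way to end it early
}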
Accomplishments that I'm proud of
1. Reactive Programming
It was a challenge, but I'm glad that I can now code with reactive programming!
2. Continuous Animation System
It was a challenge to design a system that can change modes without breaking the animation flow. I'm also glad that the Corgi exhibits a certain type of character based on the non-uniform random generator that was defined. To achieve this, I re-designed and re-built the system 3 times, from scratch. The first version was built fully with patches only, and the biggest challenge there was the inability to track current and previous state; I used the Delay patch to mimic previous and current state, but the overall design was visually complicated. The second version used simplified patches with some basic scripts. Then, for the third version, I decided to create most of the algorithm using scripts.
3. A Fun Effect
Personally, I enjoyed testing and using this effect. I had a lot of fun placing the Corgi in various locations, and sometimes its unintended interaction with the surroundings was hilarious.
What I learned
1. Spark AR
It was my first time using Spark AR. I also experimented with various mask effects on my face. Overall, it is fun software to work with.
2. Scripting
I have never coded with reactive programming before, and I admit that it was very hard to understand initially. Thankfully, most of the scripting object references are very well documented and easily understandable. I say "most" because this platform is evolving rapidly and new updates are released very frequently. Plus, the official tutorial videos on the Spark AR website were very helpful too.
What's next for Corgi Play Date
I will continue to improve this effect's features and performance, and hopefully more users around the world can have fun with it as well.
Built With
blender
javascript
spark-ar
Try it out
www.instagram.com
github.com | Corgi Play Date | Let's have some fun with a virtual Corgi! | [] | [] | ['blender', 'javascript', 'spark-ar'] | 23 |
10,493 | https://devpost.com/software/winter-coming | 360 panorama image
Inspiration
We are a group of friends who were planning a trip after our examinations, but due to the pandemic we couldn't make it. While discussing the trip as well as filter making, the idea popped into our minds of a filter featuring our dream destinations.
What it does
This filter replaces your background with a peaceful night scene.
How I built it
We built this with Spark AR Studio, in which we used image segmentation to change the background of the camera image.
Challenges I ran into
As we have just started learning AR, everything is a challenge for us, and we are going to learn a lot of things from it.
Accomplishments that I'm proud of
We learned to make an AR effect in one week and ended up making our first AR filter.
What I learned
We learned to make filters using Spark AR and explored some creative thinking.
What's next for Winter coming.
The Winter Coming AR filter can be extended with more winter-related content, which we are thinking of exploring in the near future.
Built With
sparkar
Try it out
github.com | StayHome | In this filter, we added image segmentation | ['sandeep vishwakarma', 'Neha Singh', 'Gaurav Konde'] | [] | ['sparkar'] | 24 |
10,493 | https://devpost.com/software/spacex-astronaut | VR inside view of space filter
AR space portal tracking
spark sudio project view
Inspiration
What it does
This effect gives the user a VR astronaut experience in space through a visual AR portal.
How I built it
I designed 3D models of the Moon and Earth using Blender, imported a spaceship from the Spark library, added animation to the objects, imported a high-quality 360 photo of space, and built the effect using Spark AR Studio.
Challenges I ran into
I struggled with unstable plane tracking in Spark AR Studio, which didn't always detect the elements of the objects correctly, so I tried to fix that with some tricks.
Accomplishments that I'm proud of
I always try to learn more and to make everything possible and real, and that's why I am always proud of myself.
What I learned
How to use Spark AR Studio and track planes and the human body.
What's next for SpaceX astronaut
Updating the effect with more awesome spaces and planets, using awesome portals and effects to amaze users.
Built With
aftereffects
blender
javascript
liberty-data
photoshop
sparkar
Try it out
www.instagram.com
drive.google.com | SpaceX astronaut | this is the first AR / VR SpaceX astronaut experience in Instagram filters which designed by spark studio | ['Amin Mehraein'] | [] | ['aftereffects', 'blender', 'javascript', 'liberty-data', 'photoshop', 'sparkar'] | 25 |
10,493 | https://devpost.com/software/green-monsters | Inspiration
I have been fascinated by Augmented Reality for a long time now. I really appreciate the work people put into creating something similar to the real world for the welfare of people. We personally like the monster in the Cut The Rope game.
What it does
The effect has 3D models of three green monsters, positioned using a plane tracker in worldAR. Their positions are randomly generated. We introduced a timer into the effect. A person has to use the back camera and tap on a monster to kill it or make it invisible. Tap on all the monsters before the time runs out.
How I built it
1) Thinking of an idea related to Augmented Reality to introduce in our effect was difficult as well as exciting. We took the time to think it through and finally decided to work on something related to a game. "Who doesn't like games?"
2) We started working on the timer part first. The countdown is an animation sequence of images.
3) We proceeded to choose a 3D model and then worked in the Patch Editor to randomize the positions of the monsters (see the sketch after this list).
4) We then worked on object tapping and connecting it to the effect.
5) Lastly, we worked on giving the final touch to the effect.
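A script-based sketch of the randomization in step 3 (we did this in the Patch Editor; the object names below are assumptions):

const Scene = require('Scene');

(async function () {
  const monsters = await Promise.all([
    Scene.root.findFirst('monster1'),
    Scene.root.findFirst('monster2'),
    Scene.root.findFirst('monster3'),
  ]);
  monsters.forEach(monster => {
    // Scatter each monster within a 1m x 1m area on the tracked plane.
    monster.transform.x = Math.random() - 0.5;
    monster.transform.z = Math.random() - 0.5;
  });
})();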
Challenges I ran into
1) Considering the fact that we are a family and live in the same house, it was easy to work together, but there were still some conflicts of ideas, which we solved together.
2) We found working with values in Patch Editor a little difficult. It took a lot of time.
3) Going through the documentation and understanding each and every concept was a little time-consuming.
Accomplishments that I'm proud of
1) I am proud that I went through each and every video in the sparkAR documentation, tried them out with my team, and realized that with a host of features and learning guides, it's a good tool to showcase your creativity.
2) We successfully learnt to develop an effect showcasing our creativity and gaming interest, and our enthusiasm to work on a project together as a team.
What I learned
1) We learnt teamwork and time management.
2) We learnt how to efficiently use sparkAR to develop creative effects.
What's next for Green Monsters
1) It still does not show a winning message after tapping on all the monsters. We would like to work on that.
2) Sometimes the monsters are not shown against a plain background. We will work on improving that.
3) We want to improve the effect by introducing background music and a theme.
Built With
3dmodel
planetracking
sparkar
Try it out
github.com
www.instagram.com | Green Monsters | Developed a simple world AR effect using PlaneTracker in sparkAR in a span of few days, about tapping of monsters before the time runs out. | ['Ayushi Panth', 'Rohan Panth'] | [] | ['3dmodel', 'planetracking', 'sparkar'] | 26 |
10,493 | https://devpost.com/software/aesthetic-reality-projector | Inside of flower projection
Beautiful projection artwork
Projection for the light effect
Light deformed on curved surfaces
Inspiration
Indie music video using projector
On YouTube we can see a lot of music videos. Usually, famous artists with a lot of money make high-quality music videos with various special effects, while many indie artists use projectors in their music videos to save money. But we love those raw, vintage vibes. Because of the pandemic it became more difficult to make such videos, so we decided to make an effect that helps people create their own special projector-style videos.
What it does
It helps you make projector-style videos using images/videos from your gallery. With this effect, depending on the video you have, you can create a variety of projection video styles. Some people can make their own creative projection-mapping music video in their room, and others can create beautiful projection artwork, as if standing in front of a projector.
How I built it
Focused on looking like a real projector effect
Scaled the vertex coordinates depending on the resolution ratio of the image loaded from the gallery, for cropping to the screen (see the sketch after this list).
Translated the vertex coordinates depending on the strength of the red and green color channels, to deform the light on curved surfaces.
Made a blur effect and adjusted color levels, saturation and lightness using render passes to create the light reflection and the blurry, rough texture of the projection.
Made a glitch effect for the slight vibration of the projector video.
To make a realistic shadow, we translated vertex coordinates depending on the face position.
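A minimal sketch of the crop-to-screen scaling from the first step (assumed names; the same "cover" math applies whether it is done on vertices or UVs):

function coverScale(imageW, imageH, screenW, screenH) {
  const imageRatio = imageW / imageH;
  const screenRatio = screenW / screenH;
  // Wider image than screen: fit height and crop the sides;
  // otherwise fit width and crop top/bottom.
  return imageRatio > screenRatio
    ? { scaleX: screenRatio / imageRatio, scaleY: 1 }
    : { scaleX: 1, scaleY: imageRatio / screenRatio };
}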
Challenges I ran into
We thought the most important thing was to make it look real. In our experience, the video loaded from the gallery affects the result more than how well the effect itself is made. So we did a lot of experiments loading various kinds of videos from the gallery and updated the effect every time, to make it look real no matter what video is loaded. And now we think we've made it.
Accomplishments that I'm proud of
Usually, in Spark AR, we often skip the work of polishing for perfection, because it's really just for fun, like a hobby. But this time, we put a lot of effort into improving the polish, so when we completed it, we could never forget this accomplishment.
What I learned
Developing shaders using render passes.
How to collaborate with creators.
What's next for Aesthetic Reality Projector
To make it user friendly, we chose a 2D effect using 2D coordinates. But after the hackathon we plan to make a world AR version using the plane tracker.
Built With
javascript
patcheditor
sparkar
Try it out
www.instagram.com | Aesthetic Reality Projector | It can help you make your own video with projector style. | ['seulji Lee', 'Sway Ho'] | [] | ['javascript', 'patcheditor', 'sparkar'] | 27 |
10,493 | https://devpost.com/software/filtar | logo
live demo
workspace
re-sizable AR filter
Inspiration
The opportunity provided by this hackathon was the biggest inspiration to make this world AR effect using SparkAR.
What it does
It provides a personal filter augmented in the real world, where FiltAR works as a mirror with filters on it while the user performs in Reels.
How we built it
We used sparkAR, its plane detection, segmentation and convolve patch to achieve the result, and also the Patch Editor, which helped in implementing the logic of the filter.
Challenges we ran into
It was quite difficult to detect the ground plane and to resize the augmented filter in real time.
Accomplishments that we're proud of
We successfully implemented FiltAR so that it captures the user's motion and renders it on another canvas in real time.
What we learned
We learned a lot about Spark AR, body segmentation and visual programming.
What's next for FiltAR
We aim to add a UI to FiltAR that gives the user a choice of several AR filters instead of just one.
Built With
javascript
sparkar
Try it out
www.instagram.com
github.com
www.instagram.com | FiltAR | With the help of plane tracking, segmentation and convolve patch I've made an AR filter that can be used in Instagram reels while one performs. | ['saurabh chaudhary'] | [] | ['javascript', 'sparkar'] | 28 |
10,493 | https://devpost.com/software/vote | - | - | - | ['Beth Wickerson'] | [] | [] | 29 |
10,493 | https://devpost.com/software/cosmic-swag | And starts dancing herself :)
My 'Cosmic girl' loves it...
Is impressed...
Inspiration
Music videos like Get Lucky by Daft Punk
What it does
Makes you shine like a diamond. Giving your world a funky cosmic flavour.
How I built it
Photoshop, Spark AR, a good concept, creativity, sweat and tweaking tweaking tweaking.
Challenges I ran into
I always wanted to create my own version inspired by Chris Price's Kira Kira effect. I don't like copying, so the challenge was to give it something to make it my own. I like to think I have achieved that by creating big solar flares and positioning them to my taste.
Accomplishments that I'm proud of
I like the vibe this effect gives. From my point of view, it's perfect for Reels.
What I learned
A lot of new patchwork.
What's next for Cosmic Swag
Publishing so people can enjoy it.
Try it out
www.instagram.com | Cosmic Swag | I created big solar flares that react to the light parts (back/front). Place and scale in worldview a big solar vortex. The user is placed segmented in front. Giving this effect a nice depth. | ['Chris Pelk'] | [] | [] | 30 |
10,493 | https://devpost.com/software/sketch-ar | Inspiration
The idea came up while talking to a geeky friend about sci-fi and such: what if we could draw whenever we want, without any need for pen, paper, tablets, laptops and so on?
What it does
It does everything it says: it draws the artistic creature hiding inside the human mind.
How I built it
I built it using Spark AR.
Challenges I ran into
I was a beginner when I started, and found myself constantly looking for answers, but I finally started grasping the depths of Spark AR.
Accomplishments that I'm proud of
I created a particle effect and an immersive AR experience for the user, and I am happy when my users leave happy comments. That's all.
What I learned
I learned quite a bit about the basics of AR, which I was already studying via Unity and the Vuforia engine. It boosted my knowledge of the different trackers and their uses.
What's next for SKETCH-AR
I am thinking of taking this project to the next level using Unity and other tools. I really like this idea and think I can do much more with it than just typing answers here.
Built With
ar
javascript
sparkar
Try it out
github.com | SKETCH-AR | It's an idea about making a canvas with immersive AR. | ['Sankalp Jaitly'] | [] | ['ar', 'javascript', 'sparkar'] | 31 |
10,493 | https://devpost.com/software/tom-let-s-dance-ar-reels-filter | tom lets dance
Inspiration
Tom & Jerry is my favourite cartoon, and I have cast Tom as a dancer to entertain this generation in Reels.
What it does
This filter contains a 3D animated Tom model that lets people dance with Tom and entertain in Reels.
How I built it
I created this filter using the very friendly Spark AR engine and Sketchfab.
Challenges I ran into
It was my first time making a 3D animated filter in Spark AR, and I got confused between the placer, the plane tracker and the animation, but it was an amazing experience creating this filter.
Accomplishments that I'm proud of
Being part of this Facebook AR hackathon and contributing to the Reels community.
What I learned
Placing the 3D object on the right surface by adding patches.
What's next for Tom Let's dance Ar reels filter
Dance with both Tom & Jerry.
Built With
planetracker
sketchfab
sparkar | Tom Let's dance Ar reels filter | this is an tom lets dance 3d animated filter for instagram reels augmented Hackathon | ['thisisdivakars Gamer'] | [] | ['planetracker', 'sketchfab', 'sparkar'] | 32 |
10,493 | https://devpost.com/software/dance-for-awareness | Inspiration
Breast Cancer Awareness
is an important concept year round, but especially in the fall. Families typically get "Pinked Out" for football games, and wear Pink Awareness gear proudly to school and work. This year is different. We need technology to help spread the word! A World AR experience focusing on dance and user stories is a great way to spread the Breast Cancer Awareness message and have fun doing it!
What it does
Dance! for Awareness
is a world AR effect in support of breast cancer awareness. Users place an inspirational message and a large pink ribbon into their scene using plane tracking. When recording begins, the Pink Ribbon comes to life with moves of its own! Users dance along with the ribbon in a viral "dance off". The ribbon also responds to microphone input, and can act as a "puppet".
For an extra treat, a slider controls a color filter that makes PINK gear POP!
How I built it
Plane tracking
- The "World Object" template provided a quick and easy start to plane tracking.
Is that ribbon Dancing?
- The ribbon uses both the power meter patch and the audio analyzer patch to synchronize movement to the music. The audio patches drive rotation and translation transforms on the ribbon. The drop shadow is also synchronized to the ribbon's movement using scale transforms.
Selective Color Filter
- A Native UI slider controls the amount of desaturation to apply to colors outside of PINK. The processing is done using a custom render pipeline using multiple scene render passes.
Select a phrase
- A Native UI picker allows users to select from eight different impactful quotes.
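For illustration, the picker setup can also be expressed in script form; this is a minimal sketch where the quote texture names and the 'quoteIndex' patch input are hypothetical (three quotes shown instead of eight):
const NativeUI = require('NativeUI');
const Textures = require('Textures');
const Patches = require('Patches');
(async function () {
  // Hypothetical texture names for the quote cards.
  const quotes = await Promise.all(
    ['quote0', 'quote1', 'quote2'].map((name) => Textures.findFirst(name))
  );
  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: quotes.map((q) => ({ image_texture: q })),
  });
  picker.visible = true;
  // Forward the selection to a hypothetical patch input that swaps the visible quote.
  await Patches.inputs.setScalar('quoteIndex', picker.selectedIndex);
})();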
Challenges I ran into
All of the examples I have seen for custom render pipelines are for camera effects, not world effects. It took a bit of playing around with multiple render passes to get the camera feed and 3D objects to display correctly after adding the custom render path.
Accomplishments that I'm proud of
The motion of the ribbon is unexpectedly fun and engaging. You never quite know how it will react to different songs, but the motions synchronize well. This took a lot of tweaking while listening to different songs and genres.
What I learned
This was my first Spark AR effect, so I learned A LOT!!! I am impressed with the capabilities of the patch editor, and the fact that little to no code is required. I am excited to continue building effects.
What's next for Dance! for Awareness
I hope to launch the Effect in time for Breast Cancer Awareness month in October.
Built With
animations
audioanalyzer
colorextractionlut
patches
renderpasses
sparkar
worldar
Try it out
github.com | Dance! for Awareness | Dance! for Awareness challenges users to an augmented reality "Dance Off". Users are paired with a personified pink ribbon with pretty amazing dance moves! | ['Sam Harrell'] | [] | ['animations', 'audioanalyzer', 'colorextractionlut', 'patches', 'renderpasses', 'sparkar', 'worldar'] | 33 |
10,493 | https://devpost.com/software/talk-switch | Inspiration
As kids, we were really fascinated by Talking Tom, and as adults we find AR to be something really cool. Hence, we decided to combine the two and create our project. We were really driven by the fact that we could bring simple characters to life using AR, and we really wanted to delve into those possibilities. Our video demo will walk you through the process and what the filter does.
How we built it
Our first task was getting the 3D figures. We found some inspiration on Mixamo and used figures from there. We then edited them and removed the matte textures to make the models lighter and more portable. The next task was to add audio patches and make the models come to life. We added pitch shifters to each character and used if-else logic to change the characters with screen taps, and similarly to change their voices with screen taps. The overall application used was Spark AR, which really helped us control the program flow.
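As a minimal sketch (not our exact setup), the tap-driven switching could be scripted like this; 'characterIndex' and 'pitchShift' are hypothetical patch inputs, and the character count is assumed:
const TouchGestures = require('TouchGestures');
const Patches = require('Patches');
const CHARACTER_COUNT = 3; // assumed number of characters
let current = 0;
// Cycle to the next character on every screen tap.
TouchGestures.onTap().subscribe(() => {
  current = (current + 1) % CHARACTER_COUNT;
  Patches.inputs.setScalar('characterIndex', current); // swaps the visible model
  Patches.inputs.setScalar('pitchShift', current - 1); // retunes that character's voice
});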
Challenges we ran into
There were many challenges we had to overcome in order to create the perfect submission. To begin with, as junior and senior undergraduate students, we had a lot of college work to manage. All three of us are from different departments and had different schedules, so we had to work remotely and efficiently to complete the project on time. Secondly, since most of us were fairly new to Spark AR, we first had to learn the basics of program flow in Spark AR and then implement the project idea. There were times when we could not get a feature to work, and it took some brainstorming and discussion to arrive at a solution.
Accomplishments that we're proud of
We are really proud that we could learn and implement new technologies while balancing college classes which are very hectic as well. We were able to work efficiently in a team and figure out tapping, audio and other patches. We came up with many sequential and selection logics before finalizing the current model and executing it. We were dedicated, efficient and collaborative. The creativity just flowed after that.
What we learned
We began by learning 3D modelling and Spark AR workflow. We figured out the concepts behind AR and VR as well as the key differences. We learnt how to effectively work in a team and remotely manage project and ideas. We indulged in healthy discussions and really understood what teamwork is all about. By the end, we were able to identify what distinguishes a good project from a bad one and how we can execute an amazing idea!
Built With
javascript
sparkar
Try it out
www.instagram.com | Talk switch | Let them do the talking for you | ['Muskan Aggarwal', 'Prakhar Rathi', 'abhishek nair'] | [] | ['javascript', 'sparkar'] | 34 |
10,493 | https://devpost.com/software/quizmania | Inspiration
Tell me and I forget, teach me and I may remember, involve me and I learn. Learning never exhausts the mind; it only gets boring. So, to make learning more interesting and engaging, involving the user and keeping them entertained, I created Quiz Mania. It helps users enhance their learning by involving them in solving quizzes in a much more interesting way, engaging them in the learning process, and showing them their score for further improvement.
What it does
Quiz Mania provides quizzes on various subjects, and you can select the subject you are interested in. Questions appear on your screen while a whole lot of answers appear in the real world, where you have to search for and hit the correct target answer. If you hit the correct target you are rewarded with a point; if you hit a wrong one, a point is deducted. At the end of the quiz you see your score and can share it on social media for comparison and further improvement. This method makes learning fun and keeps you engaged.
What I learned
Learning can never be fully expressed in words, but it was a great experience working with Spark AR. Working in the patch editor to create logic and calculations improved my logic-building skills. Implementing the complete idea in Spark AR made me more familiar with the software, and creating animations, models and UI improved my creativity and thinking skills. Every project we build helps with future ideas and projects, and I am sure everything I learned during this time will help me further.
What's next for Quiz Mania
Further versions will include more levels, and I am trying to make it more interesting and user-friendly. I will include some more topics and personalize it according to age and background.
Built With
blender
photoshop
sparkar
Try it out
github.com | QUIZMANIA | Make learning more engaging by introducing fun | ['Nidhi Suryavanshi', 'Abhinav Bajpai'] | [] | ['blender', 'photoshop', 'sparkar'] | 35 |
10,493 | https://devpost.com/software/deepdive | Inspiration
As we miss going out because of COVID, I wanted to create an effect that gives users some of the fun and feeling our wanderer souls need. So I created this DeepDive effect, to have fun as well as to get some feel of the deep-diving sport.
What it does
I created this effect for Reels, so that people can have fun by bringing this AR into our real world.
How I built it
I built this filter using Spark AR. As this is my first filter as well as my first interaction with the Spark AR application, I learnt its overall functioning and many aspects of it.
Challenges I ran into
As I didn't know about Spark AR earlier, I faced many challenges, like connecting audio to my effect, but I managed by referring to the Spark AR documentation.
Accomplishments that I'm proud of
That I finally got to grips with Spark AR, created my first filter and was able to publish it, thanks to this hackathon. From learning the patch editor to importing assets from Sketchfab, I learnt many things while building this effect. I will make many more effects because this really interests me. I had created some 2D filters before, but I got really excited to learn world AR 3D filters.
What's next for DeepDive
We can add a gradient of blue like the ocean, add some more assets, and fill the space for a more realistic experience.
Built With
sparkar | DeepDive | As we miss going out for our adventurous soul, here is my filter to have some glimpse of deep diving sport. | ['Srishty Chaudhary'] | [] | ['sparkar'] | 36 |
10,493 | https://devpost.com/software/blue-yow39p | blue
Inspiration
I was inspired by the colour blue. I have a fascination for that colour and want to use it more in my projects. Blue is very caring, but it also symbolises creativity.
What it does
The AR filter immerses you and your surroundings in blue light.
How I built it
I used Photoshop to create a blue gradient, then implemented it in Spark AR and played with animation.
Challenges I ran into
It was challenging to make the colours animate. At first I was not sure where to go with the filter, but after I started it became easier to come up with ideas.
Accomplishments that I'm proud of
I am happy with the result; it looks very visually pleasing and is satisfying to look at.
What I learned
I learned that you can start with a simple colour and turn it into an immersive filter. I am learning from tutorials how to accomplish my ideas.
What's next for Blue
I want to create different gradient options that change over time and that the user can choose from.
Try it out
www.instagram.com | Blue | Ar filter that emerge you in blue light | ['Veronika Radenkova'] | [] | [] | 37 |
10,493 | https://devpost.com/software/greenar-world | Inspiration
How often do we realize that we have so much empty space around us that belongs to nature, and that we can help revive our mother earth with a small step: planting more trees and making the earth a little greener? Growing a tree is a long process, though, so visualizing its final result excites us! GreenAR is all about this!
What it does
GreenAR lets you visualize virtual plants and trees in your garden, backyard or elsewhere. You can plant up to 25 trees and 25 plants (5 of each type) at different positions, so you can visualize the final result of planting a tree, which certainly excites any nature lover. If you do not have enough space for a large tree, you can visualize smaller pots and plants as well.
How I built it
It started with collecting as well as creating some 3D models. All the 3D models are extremely low-poly and use materials instead of textures, so that more options could be provided. The UI starts with basic instructions for the user, followed by a canvas of options for plants or trees, all driven by object taps. The first issue was making a plant grow wherever the user drags the arrow, and stop following the drag after that. We achieved this in the patch editor by initially keeping the visibility of all the 3D plant and tree models off while they follow the drag. Once a plant or tree button is pressed, its visibility is turned on, its scale animates from (0,0,0) to (1,1,1) to give a feeling of growing, and, most importantly, its screen-drag patch is disabled so it stays wherever it was planted.
A similar patch is created for every plant and tree, which lets the user visualize multiple trees at different positions. The effect supports up to 25 trees and 25 plants (5 of each type) and is still not heavy on the device thanks to the small asset sizes. Every tree has some facts and figures written about it on a separate canvas. Credit for the 3D models goes to their original creators. A script-style sketch of the plant-and-freeze logic follows below.
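For illustration only (the actual effect is built entirely in the patch editor), the plant-and-freeze step could look roughly like this; 'tree0', the 'plantTree0' pulse and the 'tree0Dragging' input are hypothetical names:
const Scene = require('Scene');
const Animation = require('Animation');
const Patches = require('Patches');
(async function () {
  const tree = await Scene.root.findFirst('tree0'); // hypothetical object name
  // Hypothetical pulse sent from the tree's UI button patch.
  const plantTap = await Patches.outputs.getPulse('plantTree0');
  plantTap.subscribe(() => {
    tree.hidden = false; // reveal the model
    // Grow from zero to full size for the "planting" feel.
    const driver = Animation.timeDriver({ durationMilliseconds: 800 });
    const grow = Animation.animate(driver, Animation.samplers.easeOutQuad(0, 1));
    tree.transform.scaleX = grow;
    tree.transform.scaleY = grow;
    tree.transform.scaleZ = grow;
    driver.start();
    // Hypothetical input telling the patch graph to stop routing screen drag here.
    Patches.inputs.setBoolean('tree0Dragging', false);
  });
})();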
Challenges I ran into
1. Couldn't find a way to dynamically instantiate copies of plants.
2. Planting a tree and then stopping its drag.
3. Decreasing the size of the lens within limits to improve lens engagement.
Accomplishments that I'm proud of
My first experience with the patch editor, which I had feared for a few months, is over, and it is awesome! I am pretty sure I have a ton of ideas to implement this summer vacation.
What I learned
It was a great experience learning to implement logic and calculations using patches, and it improved my 3D modelling skills. While building this lens over the last two weeks I came across a lot of small and big issues, which will definitely help me in my future projects in Spark AR.
What's next for GreenAR World
Built With
ar
particle | GreenAR World | GreenAR lets you visualize Virtual plants and trees in your Garden/Backyard or elsewhere, you can plant up to 25 trees and 25 plants (5 of each type) at different positions | ['Anshika Singh', 'Shubhanshi Singh', 'Atishay Srivastava'] | [] | ['ar', 'particle'] | 38 |
10,493 | https://devpost.com/software/fly-9wuhi1 | Inspiration
We wanted to make an effect where the user appears to fly.
What it does
On tapping the screen, the user moves up the screen for six seconds and stays there.
How we built it
Used Spark AR and resources from sketchfab and Positlabs
Challenges we ran into
Body segmentation only fully works down to the waist. For the full body it becomes blurry, despite increasing the mask size and reducing the edge detection. Body tracking isn't available yet in Spark AR, and the positlabs segmentation patch is not suitable while the object is moving.
Accomplishments that we're proud of
The theory works!
What we learned
Using the patch editor
What's next for Fly
Definitely working on full-body segmentation. The theory does work, and in the future the full concept should too.
Creating a 3D model for the legs below or an environment. Add some wings and movement.
Use a green screen for half the screen.
Built With
sparkar
Try it out
www.instagram.com | Fly | Fly in AR | ['Ian Kinyanjui'] | [] | ['sparkar'] | 39 |
10,493 | https://devpost.com/software/dance-with-me-pikachu | Inspiration
What it does
We can enjoy dancing with pikachu.
How I built it
I animated the Pikachu model in Blender, then added some particle effects and sound in Spark AR.
Challenges I ran into
It was challenging to get the sound and the animation to play at the same time.
Accomplishments that I'm proud of
I'm proud that I managed to play the sound and the animation at the same time, and that I got screen panning and pinching working.
What I learned
I learned how to pan and pinch objects on screen.
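For illustration, pinch-to-scale follows the same pattern as Spark AR's world-object template; here is a minimal sketch where 'pikachu' is a hypothetical object name:
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
const Reactive = require('Reactive');
(async function () {
  const pikachu = await Scene.root.findFirst('pikachu'); // hypothetical name
  // Scale relative to the size the model had when the pinch started.
  TouchGestures.onPinch().subscribeWithSnapshot(
    { lastScaleX: pikachu.transform.scaleX },
    (gesture, snapshot) => {
      const scale = Reactive.mul(gesture.scale, snapshot.lastScaleX);
      pikachu.transform.scaleX = scale;
      pikachu.transform.scaleY = scale;
      pikachu.transform.scaleZ = scale;
    }
  );
})();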
What's next for Dance with me pikachu
I'll add more Pikachu animations so viewers can enjoy it more, and also some screen effects to make it more attractive. ^~^
Try it out
github.com
www.instagram.com | Dance with me pikachu | Dance like pikachu | ['Akshay Alone'] | [] | [] | 40 |
10,493 | https://devpost.com/software/corona-away-with-flight | Inspiration
Countries around the world are facing the corona pandemic, and many have been devastated by it. This project aims to boost people's enthusiasm in the fight against corona.
What it does
It salutes the corona warriors and lifts their spirits.
How we built it
We used Spark AR and Blender to complete the project.
Challenges we ran into
The object size was too large, so we did a lot of work to reduce it. Every time we got stuck, we improved. We implemented the sea, clouds and airplane using Blender.
Accomplishments that we're proud of
Nowadays, corona is at its peak. Spreading happiness around the world is our main mission.
What we learned
We learnt a lot in this project, especially about Spark AR and Blender.
What's next for Corona Away with flight
Some ideas are small but have a great impact. We will make more AR effects to spread happiness.
Built With
blender
reactive
Try it out
www.instagram.com
drive.google.com | Corona Away with flight | Many countries have faced the corona pandemic. Finally, some countries have control over it and some don't . With this project, we want to boost the countries enthusiasm with a go away flight. | ['Kishan Baranwal', 'Deepak Chaurasia', 'Siddharth Samanta', 'Amit Chaudhary'] | [] | ['blender', 'reactive'] | 41 |
10,493 | https://devpost.com/software/vertigo-plane-exercise | Inspiration
I have a friend who works as a rehab doctor for vertigo patients, and they don't have a simple way to guide patients through vertigo rehab training exercises. So I created a simple plane-tracking effect in Spark AR that lets patients do their vertigo training exercises at home.
What it does
It opens the back camera to detect the floor, allowing the 3D model for the training exercise to be placed there.
How I built it
A simple world object, to which I added a 3D model with a looping animation as a distraction during the training exercise.
Challenges I ran into
Deciding which type of module could be created for the training exercise.
Accomplishments that I'm proud of
Completing this project at the last minute.
What's next for Vertigo Plane Exercise
Adding a timer so that doctor and patient can track progress.
Built With
ar
instagram
particle
Try it out
www.instagram.com | Vertigo Plane Exercise | A simple balancing training for those who have vertigo illness | ['Amir Hamzah'] | [] | ['ar', 'instagram', 'particle'] | 42 |
10,493 | https://devpost.com/software/dance-challenge | GIF
Demo
Inspiration
We were thinking about the kinds of videos most people make on Instagram Reels and realized that most of them involve dancing and lip-syncing. We then thought about how we could make that experience more fun and enjoyable for creators, and decided on a dance challenge. It was hard to figure out how to implement such a challenge in AR; then one of our team members recalled a fun arcade dance game, which we decided to recreate as an AR effect.
What it does
Dance Challenge is an AR effect made specially for Instagram Reels creators to complete a fun dancing challenge. When the user starts recording, the effect first shows a countdown to let the user get ready; then it randomly shows several simple dance steps for a few seconds each, which the user needs to remember and then repeat correctly in their respective order.
How we built it
We started developing the filter by creating the disco-like effect using segmentation and artificial lighting. We wanted to create a virtual dance environment, and disco lighting was the best idea; it was also something people were missing because of the pandemic.
Once that part of the effect was done, we moved on to the actual challenge interactions using the patch editor. This includes showing the countdown when the user starts recording, showing a random dance step for a few seconds, and then hiding everything so that the user can record themselves repeating the moves.
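A minimal sketch of the random-step logic; the 'stepIndex' patch input, the step count and the interval are hypothetical:
const Time = require('Time');
const Patches = require('Patches');
const STEP_COUNT = 6;       // assumed number of dance-step cards
const STEP_INTERVAL = 2000; // ms each step stays on screen
// Periodically pick a random step and tell the patch graph which card to show.
Time.setInterval(() => {
  const step = Math.floor(Math.random() * STEP_COUNT);
  Patches.inputs.setScalar('stepIndex', step);
}, STEP_INTERVAL);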
Challenges we ran into
All three of us in the team are from slightly different technical backgrounds. It was a challenge to combine all of our skills to create an AR effect that brings about a fun experience.
Accomplishments that we're proud of
This hackathon brought the three of us together to work on a project for the first time. We are proud of the way it turned out, and it helped us improve our skills.
What we learned
We had limited experience in creating World AR effects using Spark. This hackathon motivated us to learn how good experiences can be created using the back camera and helped us learn a new skill.
What's next for Dance Step Challenge
We are still polishing up our project. After publishing if we see it getting traction then we would like to add more steps and slowly make the experience better and challenging by increasing speed and beyond.
If we win this competition, we would like to invest the prize money into making this project more complex by adding pose estimation using machine learning, so we could validate the user's steps and reward them accordingly.
Built With
ar
augmented-reality
sparkar
Try it out
www.instagram.com | Dance Step Challenge | Observe, Repeat, Dance! Can you get them all right? Dance Step Challenge. | ['Harsh Chandra', 'rajas nabar', 'Sayali Karangutkar'] | [] | ['ar', 'augmented-reality', 'sparkar'] | 43 |
10,493 | https://devpost.com/software/gamer-0sb1mf | GamePlay
Inspiration
Thanks to render passes you can create more personalized filters. A few months ago a creator made a filter that animated a person taking various screenshots using an alpha layer and render passes, so I thought: why not do the same, but add game features?
Thanks to render passes and the plane tracker, you can place the game anywhere, creating the perfect atmosphere for creativity.
What it does
It creates the animations of the main character (idle, punch), lets you select the enemies, place the game into the real world and play.
It could be fun with Reels because you can choose your enemies: it could be a really cool way to pick one thing out of three, to complain about something that is annoying you, or to decide between options.
How I built it
Spark AR and a lot of patience; my CPU was about to explode…
Challenges I ran into
First, the animation was hard to develop because segmentation does not work with render passes. Segmentation changes over time, and I could not find a way to freeze the frame and the body segmentation.
Second, my mobile phone died, and debugging on an old device was really hard.
Third, sliders: I wanted people to be able to choose the shape of the enemy, but one slider controls everything! I know there is a plugin for that, but I did not want to use other people's code.
Accomplishments that I'm proud of
I have finished the project, at least a first version. A lot of things can be updated.
What I learned
About sliders: this was the first time I used them, even though they are not available in the final filter version.
As a regular software developer, working with patches is always a challenge.
What's next for Gamer
Since I am not a designer: replace all the assets, environment, etc.
Fix bugs
Add multiple shapes
Improve FPS rate
Built With
ar
javascript
particle
Try it out
www.instagram.com | Against the world | Get yourself or whatever you want into the game with custom characters, enemies and animations | ['Laura Álvarez'] | [] | ['ar', 'javascript', 'particle'] | 44 |
10,493 | https://devpost.com/software/pokemonworld | Before this Hackathon, none of us has any experience with AR technologies or visual design of any kind. The process to explore Spark AR and all of its amazing features, therefore, was absolutely challenging. Yet despite the endless confusion, we really enjoyed every tiny achievement on the way - successfully adding audio effects, getting the "Who are you" feature to work smoothly, being able to animate our first-ever 3d figure, etc. As PokemonWorld marks the end of this wonderful journey, it also becomes the motivation for all of us to start building new, more creative, and challenging AR effects in the future.
Built With
javascript
spark-ar
Try it out
github.com | PokemonWorld | An Instagram effect that allows users to summon their own Pikachu and explore different pokemons around. | ['Nhan Nguyen', 'Ha Truong', 'Do Thien An Duong'] | [] | ['javascript', 'spark-ar'] | 45 |
10,493 | https://devpost.com/software/inframundo |
Inspiration
The effect is inspired by the fire filter versions where you can tap to change to a white fire version.
What it does
It uses person segmentation to simulate fire around people; you can tap to change to the white version. There is a small watermark when you use the effect: you can see it in the video preview while recording, but once the video is recorded you can't see it.
How I built it
I used Spark AR Studio with different nodes, render passes and person segmentation.
Challenges I ran into
It was difficult to combine segmentation and render passes to achieve this.
Accomplishments that I'm proud of
I combined my Magma effect with person segmentation to create this effect.
What I learned
I learned about render passes and how to combine them with other features.
What's next for Inframundo
Next, I could add different colours and maybe tweak the person segmentation to be more realistic.
Built With
sparkarstudio
Try it out
www.instagram.com | Inframundo | The effect uses person segmentation with the new feature: render passes. | ['Sergio Usón Reguera'] | [] | ['sparkarstudio'] | 46 |
10,493 | https://devpost.com/software/lut-filters-with-hipdict-quotes | Inspiration
Sometimes I want to share funny quotes, so I built this effect.
What it does
Users can tap the screen to change the LUT filter. They can also pick a favorite quote from the picker and then move, scale and rotate it as they want.
How I built it
I used two patches: FastColorLUT and Drag 2D. FastColorLUT renders the different filters, and Drag 2D handles the interaction with the object: scaling, moving and rotating.
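For illustration, the tap-to-cycle behaviour could be scripted roughly like this; 'lutIndex' is a hypothetical patch input that would select the FastColorLUT texture:
const TouchGestures = require('TouchGestures');
const Patches = require('Patches');
const LUT_COUNT = 8; // the effect ships eight LUT filters
let current = 0;
// Advance to the next LUT on every screen tap, wrapping around at the end.
TouchGestures.onTap().subscribe(() => {
  current = (current + 1) % LUT_COUNT;
  Patches.inputs.setScalar('lutIndex', current);
});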
Challenges I ran into
Not every LUT can be applied as-is; some have extra exposure issues, etc., so I spent time making adjustments.
I also spent time researching how to add dust to the filters and how to interact with objects.
I didn't know how to change the icons of the picker items, but I finally learnt how and did it.
Accomplishments that I'm proud of
I finally built an AR effect that I wanted, and learnt some implementation techniques that can be reused in future AR effects.
What I learned
I learned some approaches to handle LUT filters, interact with objects, etc.
What's next for LUT filters with HipDict quotes
I will add more funny quotes and nice LUT filters.
Try it out
www.instagram.com | LUT filters with HipDict quotes | 8 LUT filters + pick a favorite Hipdict's quote | ['fm chan'] | [] | [] | 47 |
10,493 | https://devpost.com/software/beatghost | I love music, colour and light and am always looking for ways to combine these in my creations.
The project analyzes the audio input and creates a colourful 'ghost' effect that changes colour and visibility, along with a motion-blur effect, all driven by the audio.
Using Spark AR Studio, I used the audio analyzer patch to separate the audio input into three bands: low, medium and high. By converting these values into a Vector4 and using it as a colour value, I could mix the result into the texture that creates the motion-blurred image via a render pass. The same values also control the intensity of the motion-blur effect.
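As a minimal illustration of bridging one analyzer band through script and back into the render-pass graph (the 'lowBand' output and 'blurStrength' input are hypothetical names; the actual effect mixes the bands in patches):
const Patches = require('Patches');
(async function () {
  // A band energy exposed from the Audio Analyzer patch graph as a "To Script" value.
  const low = await Patches.outputs.getScalar('lowBand');
  // Smooth the energy so the ghost pulses instead of flickering, then feed it
  // back as the motion-blur strength.
  await Patches.inputs.setScalar('blurStrength', low.expSmooth(200));
})();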
I am new to render passes and to understanding shaders, so I had quite a challenge trying to bring this effect to life, as I already had a clear image in my head. Fortunately, I was able to do a lot of research and gain an understanding of this new territory.
I am proud that I was able to get a grasp on an entirely new concept to me and create a great looking effect.
I learned that I still have a great deal to learn, but that it is very rewarding and fulfilling to explore new ideas and concepts.
I hope that users will love using BeatGhost on Instagram Reels, as it will fit perfectly on this platform, especially in dancing videos that are so popular!
Built With
javascript
Try it out
www.instagram.com | BeatGhost | An amazing effect that utilizes render pass and audio analyzing to create a 'Ghost' effect that pulses and changes colour depending on input from the audio. The effect is designed for Reels. | ['Peter Ruppert'] | [] | ['javascript'] | 48 |
10,493 | https://devpost.com/software/beauty-butterfly | A user trying out the Beauty&Butterfly effect
A user trying out the Beauty&Butterfly effect
Inspiration
Everyone is at home in quarantine! Everyone wants to hang out with friends, go hiking or on safari. No worries: here comes the Beauty&Butterfly effect, where you are taken to a forest with butterflies.
What it does
In this effect, the user is virtually transported to a forest, where you can record videos and capture pictures. Only your face or body is displayed; the background is replaced with the forest animation sequence on the background layer. And then there are butterflies all around you!
How I built it
I built it using Spark AR Studio v96. I used multiple textures to create an animation sequence for the animated images, and I also added sounds. The patch editor helped very much, and I also used an animation sequence to make the particle system.
Challenges I ran into
For the first time, I made something more than just placing a 3D model. I wanted to use 3D models in my project, but due to the file-size constraints I used 2D textures and images instead. I have been working on this for roughly 120-150 hours, mostly late at night, as I am a full-time student. The challenges were many, and I surely hope to win; if not, I will try again to be the best. I also didn't know how to upload my project to GitHub, so I took some time researching how to use it.
Accomplishments that I'm proud of
I'm proud of myself, as I completed this project on my own, without the help of any friends or teachers. I learned new things and how to use Spark AR Studio more efficiently. Time management was also something I learned: giving time to my daily studies, lectures and courses while also working on my dream hackathon project.
What I learned
I have learned to use time wisely and work efficiently, thanks to Spark AR Studio providing the patch editor and pre-defined templates, unlike the apps I usually make in Unity. I learned how to build a particle system out of other shapes by using an animation sequence as its material. I also learned how to use GitHub.
What's next for Beauty&Butterfly
I will try making something related to face mesh next time.
Built With
spark-ar-studio
Try it out
www.instagram.com
github.com | Beauty&Butterfly | Everyone is at home in quarantine! everyone want to hang-out with friends, go for hiking or safari. No worry, here comes the Beauty&Butterfly effect, where you are taken to a forest with butterflies | ['Prathmesh Rane'] | [] | ['spark-ar-studio'] | 49 |
10,493 | https://devpost.com/software/invito | Inspiration
I wanted to try something creative.
What it does
It makes you invisible, or gives you a vanishing effect.
How I built it
Using the render pass facility in Spark AR.
Challenges I ran into
Learning render passes while being stuck with academics, so I had hardly any time for the hackathon.
Accomplishments that I'm proud of
Finally being able to submit to the hackathon.
What I learned
Time management and creativity
What's next for Invito
A lot… Because of the lack of time I couldn't do much more. I will look at the background, and at how I can make the invisibility better.
Try it out
www.instagram.com | Invito | Want to become Invisible or have a Vanishing feel then Invito will do it for you. | ['Saumya Pai'] | [] | [] | 50 |
10,493 | https://devpost.com/software/disco-moods | A user trying out the Disco-Moods effect
A user trying out the Disco-Moods effect
Inspiration
Everyone is at home in quarantine! Everyone wants to hang out with friends and go dancing at the disco. No worries: here comes the
Disco Moods
Instagram effect.
What it does
In this effect, the user is virtually transported to a disco, where you can sing and dance. Only your face or body is displayed; the background is replaced with the disco animation sequence on the background layer.
How I built it
I built it using Spark AR Studio v96. I used multiple textures to create an animation sequence for the animated images, and I also added sounds. The patch editor helped very much, and I also used an animation sequence to make the particle system. Since my effect has not been approved yet, I'm providing the test link.
Challenges I ran into
For the first time, I made something more than just placing a 3D model. I wanted to use 3D models in my project, but due to the file-size constraints I used 2D textures and images instead. I have been working on this for roughly 150-170 hours, mostly late at night, as I am a full-time student. The challenges were many, and I surely hope to win; if not, I will try again to be the best. I also didn't know how to upload my project to GitHub, so I took some time researching how to use it. I built this effect, but unfortunately, due to some unsupported hardware in my phone, I could not use it on my own device. One of my friends helped by recording the effect on her phone so that I could make the necessary changes.
Accomplishments that I'm proud of
I'm proud of myself, as I completed this project on my own. I learned new things and how to use Spark AR Studio more efficiently. Time management was also something I learned: giving time to my daily studies, lectures and courses while also working on my dream hackathon project.
What I learned
I have learned to use time wisely and work efficiently, thanks to Spark AR Studio providing the patch editor and pre-defined templates, unlike the apps I usually make in Unity. I learned how to build a particle system out of other shapes by using an animation sequence as its material. I also learned how to use GitHub.
What's next for Disco Moods
After creating Disco Moods, I had also created
Beauty&Butterfly
Instagram effect, which is based on background segmentation and takes the user to a forest scene with a variety of butterflies all around you! Thanks for giving me this opportunity.
Built With
spark-studio
Try it out
www.instagram.com
github.com | Disco Moods | In quarantine and missing hanging out with friends in disco and disco-lights?? No worry! here comes a Disco Moods effect | ['Prathmesh Rane'] | [] | ['spark-studio'] | 51 |
10,493 | https://devpost.com/software/alien-ufo-abduction | GIF
Abduction!
Out of this world!
What's that in the night sky??
GIF
Alien fun in World SPACE!
They got you!
Enjoying my final form :)
Inspiration
It’s no secret that space and aliens have been trending for years - Space is an immeasurable expanse of the unknown, a place where everything you can imagine can exist. Aliens are an entity that anyone can relate to, regardless of their race, gender or body type. So for this effect I decided to combine my two favorite things into a captivating effect that entices curiosity.
What it does
This effect takes place in space, where the user is segmented in front of a spacey world AR background featuring multiple point systems of animated stars. The user is prompted to tap the screen, which initiates a sequence of events occurring in under 15 seconds. A 3D UFO zooms in above the user and releases its rainbow beam, which slowly abducts the user up into the UFO, fading their opacity until they disappear. The UFO then deposits the user back into the scene, but they do not look the same! They have been altered to look like a freaky green alien!
How I built it
I approach every effect the same way in Spark AR: outline it as simply as possible to make sure the process is as straightforward as I initially imagined, then begin building it out.
I first purchased a 3D model online of a UFO that was in line with my artistic style - cartoony and fun! In SPARK AR I customized the UFO by adding animation sequences to the lights to make them flash, using a matcap for the glass dome and altering the materials to appear metallic.
A screen tap begins the under-15-second sequence of events, using an offset patch to start the timer and greater-than and less-than patches to trigger and reset the different events.
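For illustration, the same timed sequence can be driven from script; this minimal sketch assumes hypothetical 'beamOn' and 'alienOn' boolean patch inputs and a guessed stage length, whereas the actual effect uses offset and comparison patches:
const TouchGestures = require('TouchGestures');
const Time = require('Time');
const Patches = require('Patches');
// One tap starts the abduction sequence.
TouchGestures.onTap().subscribe(() => {
  Patches.inputs.setBoolean('beamOn', true);    // UFO flies in, beam starts
  Time.setTimeout(() => {
    Patches.inputs.setBoolean('beamOn', false); // user fully abducted
    Patches.inputs.setBoolean('alienOn', true); // deposited back as a green alien
  }, 8000);                                     // assumed stage length in ms
});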
I used animation and transition patches to control the UFO’s flight, the beams rotation and the users position and scale.
I created the beam in Procreate and added a moving rainbow texture onto it with patches in Spark.
To create the final alien look I used alpha to hide the face texture except for an eye. Then I moved the eyes down to create the final 6 eyed alien I had envisioned. I just added the color green to the person texture to make them green. This look combines lots of my favorite techniques. The spots use a matcap base with glitter (normal) highlight. The eyes use black matcap overlay with cartoonish 'light' animations on them.
Challenges I ran into
The first challenge I ran into was making the beam actually look like it was coming out of the hole in the bottom of the 3D UFO. I used a layering technique where the beam is actually two of the same beams, one behind the user and one in front of the UFO. The front beam needed to be alpha’d precisely to make it look like it was fading out of the ufo and also be more transparent so the user behind it was visible.
Another challenge was the final alien look. I know deformation has been kind of a taboo topic for SPARK AR so I decided to go a less obvious route to make the person look like an alien without relying on a simple deformation of the head. I thought it would be really cool to give the user multiple eyes that blinked with the main eyes. I didn’t know how to achieve this effect at first glance but once I broke it down I realized that I could simply alpha away the rest of the face to segment an eye then move the eye.
The last challenge I ran into was in the patch editor, making the logic behind the sequence. The amount of moving parts and components to this effect quickly made a lot of complicated patch sequences that were disorganized and made changing the time stamps on the whole effect to make sure it was under 15 seconds difficult. I used the ‘comment around’ feature for the first time to organize my sequences. It helped and I would use it again!
Accomplishments that I'm proud of
I’m really proud of this effect. It is probably the most intense and in depth effect I have made so far, combining all of the skills I have learned making different effects starting in December 2019 into one effect. Overall, the accomplishment I am most proud of is the progress I have made in the AR space since beginning to learn the software in December. I relish the opportunity to solve problems, and love nothing more than when my brain is obsessively thinking about possible solutions even when I am not actively working at the problem. This is why learning SPARK AR has been one of my favorite intellectual journeys and I am very happy that my skills in illustration and my technical background in math and engineering have been able to compliment each other so well and allow me to create happy effects like this!
What I learned
I learned how to make space in world AR. Normally I would have just made stars a local background but since this effect is for reels I decided to challenge myself to create space in world AR. I did this by surrounding the user in a box of plane point systems and setting their warmups to 30 seconds with minimal particle movement. I was able to create the illusion of space from all angles.
I learned how to control the opacity of a material, which was something I was looking to do for months before this. I’m sure this feature is something that is coming to the software but for now I have a patch that can do it!
I learned how to effectively layer around a 3D object to create a desired look and not have the materials interfere with each other and compete for visibility.
I learned how to alter the pitch of the user's voice, which happens when they become an alien and are then given a squeaky voice.
What's next for Alien UFO Abduction
I love this effect because it tells a story. I can see this effect being used by lots of people to express their weirdness! One of my favorite things about being a digital artist on Instagram is my ability to express myself through my art and to encourage others to highlight their strangeness that makes them unique. I think social media can often create a toxic environment for users in that it places societal pressures on people to be just like the influencers they see online. I think Reels is a great addition to Instagram because it encourages people to express themselves, to be funny and quirky and not worry about what others think.
Built With
procreate
sparkar
Try it out
www.instagram.com | Alien UFO Abduction | Ever wanted to be abducted by aliens? Now’s your chance with this effect which abducts you into the unknown and leaves you looking out-of-this-world! | ['Karina Bland'] | [] | ['procreate', 'sparkar'] | 52 |
10,493 | https://devpost.com/software/hopscotch-vc19ht | Inspiration
When was the last time you played hopscotch? We all remember it being fun in childhood! And that is exactly why we wanted to bring a game of hopscotch to the AR world: to have some fun :)
What it does
You can place a virtual court on the ground and play hopscotch wherever you want. You can record yourself doing incredible jumps and share it in your Reels. Gather some friends and play together. You can choose from several difficulty levels: from the classical 10x layout to harder, randomly generated ones.
How we built it
We used a built-in plane tracker to place the court on the ground. We also use segmentation to separate the legs from the background and make them appear on top of the projected court, which helps the effect look true to real life.
Challenges we ran into
The biggest challenge was to make the court appear under the legs when jumping; when it doesn't, the illusion breaks and the whole filter becomes unusable.
To make it work, we decided to create a makeshift "shadow" that makes the area right under the camera translucent. That proved to be harder than it sounded. Spark AR's simulator has information about the device's position in space, and that seemed to be exactly what was needed. But on real devices that information turned out to be limited to just the device's rotation, not its position. Well, that didn't stop us. We moved the "shadow" object into camera space, and with some magic rotations we were able to position it so it always appears under the camera.
The next issue was with translucency. We were planning to use occluders to make the court semi-transparent, but as it turns out, with occluders it's either completely visible or completely hidden, with no state in between. So this is what we ended up with: a small, semi-transparent circle right under the camera that completely hides the court. Surprisingly, it works quite well and keeps the illusion immersive.
An unexpected better solution came up when looking through the filters gallery on Instagram. One of the filters, that was replacing the background, worked with the back camera while pointed at the feet. That is exactly what we needed. As per documentation, segmentation works only for the upper half of the body, but in practice, it works with legs as well, at least when wearing shorts.
Since segmentation works very differently on different phones, we decided to keep shadow mode as an option and add a UI button to switch between modes.
Accomplishments that we're proud of
We are very happy with the result. Looking at all the challenges on the way, we are even a bit surprised how good the filter turned out. Hopscotch is actually playable with this filter, and that is amazing.
What we learned
We learned how to go around the limitations of the software, be creative with the solutions, and keep trying new approaches until one of them works.
What's next for Hopscotch
We have some ideas on how we can use shaders to help make shadow semi-transparent. With the further development of the AR capabilities of Spark AR, we are hoping to make the experience even more immersive, better leg separation, better plane tracking, obstacle detection, and maybe even a shared world with friends around you.
As for the current functionality, we are planning on adding more challenging levels and more customization for the game: more visual and play styles.
Attribution
Background photo created by
jannoon028 from freepik
Fretless by Kevin MacLeod
License
Built With
javascript
sparkar
Try it out
www.instagram.com
github.com | Hopscotch | Hopscotch game with AR | ['Alexander Golubev', 'Regina Siminderova'] | [] | ['javascript', 'sparkar'] | 53 |
10,493 | https://devpost.com/software/tiger-is-here | Inspiration
I love seeing animals, especially tigers. Since we cannot go out to see the animals, I thought of bringing a tiger to our place, and hence I made this effect.
What it does
It places a walking tiger in our surroundings using plane detection.
How I built it
I built it using Spark AR Studio's plane detection, JavaScript and the patches in Spark AR Studio.
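A minimal placement sketch following Spark AR's world-object template pattern; 'planeTracker0' is the template's default name and is assumed here:
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');
(async function () {
  const planeTracker = await Scene.root.findFirst('planeTracker0');
  // Re-anchor the tracked plane wherever the user drags on the screen,
  // so the tiger can be repositioned on the detected floor.
  TouchGestures.onPan().subscribe((gesture) => {
    planeTracker.trackPoint(gesture.location, gesture.state);
  });
})();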
Challenges I ran into
Applying the correct animation.
Accomplishments that I'm proud of
The walking animation was applied correctly.
What I learned
I learnt to use the patches and the plane detection in Spark AR Studio.
What's next for Tiger is here
I will add multiple Animations like eating, attacking etc. and will also add different animals.
Built With
javascript
patch
patches
sparkar
Try it out
github.com | Tiger is here | Get 3D Tiger at your place in one click. | ['Saumya Bathla'] | [] | ['javascript', 'patch', 'patches', 'sparkar'] | 54 |
10,493 | https://devpost.com/software/floating-tesseract |
a snapshot of object tracker
Inspiration
I was watching the Avengers movie and I was fascinated by the Tesseract cube. I wanted to create a 3D world filter with a plane tracker to bring my idea to life, and thus created this project.
What it does
This Instagram filter uses the device's main camera to project a hologram of a floating Tesseract cube, with plane and object tracking capability.
How I built it
I used Spark AR to make this filter, along with a free-to-use 3D Tesseract cube model from Sketchfab under a Creative Commons license.
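For illustration, the floating-and-spinning motion could be scripted roughly like this; 'tesseract' is a hypothetical object name and the durations are guesses:
const Scene = require('Scene');
const Animation = require('Animation');
(async function () {
  const cube = await Scene.root.findFirst('tesseract');
  // Continuous slow spin around the vertical axis.
  const spin = Animation.timeDriver({ durationMilliseconds: 6000, loopCount: Infinity });
  cube.transform.rotationY = Animation.animate(spin, Animation.samplers.linear(0, 2 * Math.PI));
  // Mirrored up-and-down motion for the floating feel.
  const bob = Animation.timeDriver({ durationMilliseconds: 1500, loopCount: Infinity, mirror: true });
  cube.transform.y = Animation.animate(bob, Animation.samplers.easeInOutQuad(0, 0.03));
  spin.start();
  bob.start();
})();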
Challenges I ran into
The main challenge was writing a script that runs everything I wanted this filter to cover; I used YouTube videos to guide me through the challenges, and the project was completed successfully. This was my first hackathon and I participated solo, so I had to figure out my mistakes and find my own way around the bugs and challenges.
Accomplishments that I'm proud of
I completed this whole project on my own.
What I learned
How scripts work in Spark AR filters.
What's next for Floating Tesseract
I might add some color filter effect to it like a canvas or something.
Built With
sparkar
Try it out
www.instagram.com | Floating Tesseract | I was watching the Avengers movie and I was fascinated by the Tesseract cube. I wanted to create a 3d world filter with plain tracker to bring my idea to life and thus created this project. | ['Rajdeep Majumder'] | [] | ['sparkar'] | 55 |
10,493 | https://devpost.com/software/wdkna | plane tracker
plane tracker
plane tracker
plane tracker
plane tracker
plane tracker
plane tracker
Inspiration
To make the Reels feature more interesting for the audience while they engage with Reels and make their moves.
What it does
It places a virtual 3D person in the frame, giving users a way to match the 3D person's moves; moreover, it gives them a partner while dancing.
How I built it
I built this on top of Spark AR Studio, made the models in Blender, and imported the animations for the models from Mixamo.
Challenges I ran into
The biggest challenge was working on a machine with 4 GB of RAM and an i3 processor while having multiple tabs open, and managing the assets to meet the size requirements.
Accomplishments that I'm proud of
I'm proud that in the end I successfully completed three filters: two plane-tracker filters and one segmentation filter.
What I learned
I learnt a whole bunch of great Spark AR stuff. Kudos to the writers who made the documentation crystal clear; it took me from a noob in Spark AR to an intermediate level. I spent days reading the Spark AR articles and documentation.
What's next for dance-with-me;
There are a lot of great things to do next, like implementing the object tracker, using two plane trackers in the scene if that is supported in the future, and finally adding more cool stuff to facial parts and segmentation.
Built With
ar
blender
mixamo
particle
plane
segmentation
tracker
Try it out
github.com | Dance-with-me | As instagram announced the reels feature for it's users,where people can dance for fun.so my idea is about there would be even more fun to people while dancing with a 3d person. | ['Johnny basha'] | [] | ['ar', 'blender', 'mixamo', 'particle', 'plane', 'segmentation', 'tracker'] | 56 |
10,493 | https://devpost.com/software/cyber-city-ig29ce | Inspiration
We were experimenting with render passes and wanted to make something where people can see themselves with different face filters at the same time. We got this idea from a VFX video where the user sees himself in the monitors.
What it does
It puts the user in a cyberpunk theme: the user is surrounded by shipping containers, scrolling holographic data and floating monitors showing the user's face with different filters, like a gas mask and an Iron Man HUD.
How I built it
We drew many sketches for the layout and positioning of assets. We made assets in Photoshop and Blender; some of the assets were downloaded and modified according to our needs.
Challenges I ran into
The main challenge was to make it look more futuristic and realistic, for which we added a preset to the whole rendered scene.
While using render passes, we wanted to show the rendered scene on the floating monitors.
On a screen tap the user can capture a shot, which is then shown on the screens with some random glitch image sequences. But we can't make changes to a texture at runtime, so we doubled the monitor setup and toggle visibility on/off in a loop, which shows the captured frame for a second and then the glitch texture for a second.
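A minimal sketch of that visibility loop in script form; 'showShot' is a hypothetical boolean patch input driving the two duplicated monitor setups:
const Time = require('Time');
const Patches = require('Patches');
let showShot = true;
// Alternate once per second between the captured frame and the glitch sequence.
Time.setInterval(() => {
  Patches.inputs.setBoolean('showShot', showShot);
  showShot = !showShot;
}, 1000);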
Accomplishments that I'm proud of
We are happy with our AR effect, and only wished we had more time to add more fascinating features we had in mind. We are really proud of ourselves working as a team despite not being in the same country. We are very happy to have managed to develop such AR theme effect.
What I learned
We learned so much with Spark AR Studio.
How to add a preset to the whole render pass to make it more realistic.
How animations work and how they can be created in Spark AR with 2D texture transforms.
How the delay frame works.
What's next for Cyber City
We have more AR theme plans, like portals where the user can travel from the real world to an AR world, with more features and objects.
Built With
blender
photoshop
sparkarstudio
Try it out
www.instagram.com | Cyber City | Cyber City with floating monitors and holographic data | ['Aditya Agarwal'] | [] | ['blender', 'photoshop', 'sparkarstudio'] | 57 |
10,493 | https://devpost.com/software/being-kind | EXAMPLE
Inspiration
The inspiration was to use a tagline to promote positivity and kindness.
What it does
It is a very simple filter: a fixed text rotates above the user's head like a banner.
How I built it
I built it using the tools provided in Spark AR, incorporating a head occluder and using a PNG image for the text.
Challenges I ran into
The biggest challenge was setting up the occluder, the timings, and the entire rotation.
Accomplishments that I'm proud of
I am very proud of what I've made; I feel it stands for something really nice and hopeful that many people would like using.
What I learned
Setting up rotation timings, inserting a PNG for the text, and so much more!
What's next for BEING KIND
I plan to release many more filters with the same base as this one but different quotes.
Built With
ar
javascript
particle | BEING KIND | It is a very basic filter that has text rotation over the head, text reads "BEING KIND IS NOT A WEAKNESS" in order to promote kindness, hopefully spread some positivity in these tough times of COVID19 | ['MANISHA GUPTA'] | [] | ['ar', 'javascript', 'particle'] | 58 |
10,493 | https://devpost.com/software/let-s-explore-the-space-0tpbwi | The space effect
Working in Spark AR
Inspiration
I am inspired by space, though I have admired its beauty only in pictures. So I wanted to recreate it via augmented reality, where we can experience my imagination in the real world.
What it does
It gives the feeling that you are glittering in space, with moons and asteroids revolving around you. It gives a unique touch.
How we built it
I built it using Spark AR Studio, which gave me many features to work with. We tried segmentation, face tracking, and null objects with 3D objects under them.
Challenges we ran into
The biggest challenge was reducing the scale of the 3D objects to place them properly. Other than that, it went smoothly.
Accomplishments that we're proud of
We are really proud that we at least tried, to an extent, to give a real-world feel. Though we are new to augmented reality, we did our level best. We are proud that we participated in this hackathon rather than being spectators.
What we learned
We learned some basics and common knowledge of AR, and we learned its backend process. It was a new experience.
What's next for Let's explore the Space
Hoping to explore a lot and learn new in Augmented Reality.
Built With
sparkar | Let's explore the Space | The place where you can feel space. The effect which I used gives a virtual space feeling. | ['SRI RAKSHITHA A K', 'PRAGADHEESAN K'] | [] | ['sparkar'] | 59 |
10,493 | https://devpost.com/software/manga-portal-ar | This is the effect that shows when one interacts with the portal!
This is the effect that shows before one interacts with the portal!
Inspiration
We were inspired to make a filter that can help people make interesting choreographed dance videos using an on-screen portal that dramatically alters the user's appearance when interacted with. Many users love to make videos that use dramatic transitions and showcase their creativity through contrast, so we built an effect that can help them accomplish that on Reels. We took inspiration from the anime art style popular in online culture today, and hoped that users could make use of our segmentation and appearance modification to build videos that resonate with their interests.
What it does
The filter starts with an on-screen portal, which disappears when the user interacts with it by moving into it. Then the world is replaced, the user is transported into a manga, and their facial features are replaced with the large eyes, mouth, and cheek features typical in anime media.
How we built it
Development Platform
We built the project in SparkAR.
The Portal
The portal model was imported from Sketchfab. Bingyu Li wrote JavaScript that keeps track of how close the user is to the portal. This is accomplished by measuring the distance between two null objects: one in the portal itself, and the other on the user's head. When the distance reaches zero, the portal disappears and the scene effects become visible.
In the patch editor, this was accomplished by linking the script with a less-than-or-equal patch and creating visibility patches for each of the objects in the effect itself. When the measured value reaches zero, visibility is toggled "on".
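For readers who prefer script form, the same distance check could be sketched roughly as follows, assuming Spark AR's Reactive API; the object names and the small positive threshold are illustrative assumptions, not the project's actual code:

```javascript
// Sketch: hide the portal and reveal the scene when the user's head
// reaches the portal. Object names are hypothetical.
const Scene = require('Scene');
const Reactive = require('Reactive');

(async function () {
  const [portal, head, effects] = await Promise.all([
    Scene.root.findFirst('portalNull'),
    Scene.root.findFirst('headNull'),
    Scene.root.findFirst('sceneEffects'),
  ]);

  // Distance between the two null objects, recomputed every frame.
  const d = Reactive.distance(portal.transform.position,
                              head.transform.position);

  const entered = d.le(0.05); // small threshold in scene units
  portal.hidden = entered;
  effects.hidden = entered.not();
})();
```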
The particle system utilizes a cherry blossom texture to make it appear as if cherry blossoms are emerging out of the portal.
Scene Effects
The face objects were designed in Maya, utilizing reference images from the internet. These were then painted in Substance Painter, while the object itself was modified using the SparkAR toolkit in Blender. These objects were then assigned a material in SparkAR, and textured using the Substance Painter export.
The effect employs a face tracker patch, which is linked to eyeball, nose, cheek, and chin patches. The respective objects are linked by position and rotation to the tracker patches.
Challenges we ran into
We initially wanted to employ plane tracking to place the portal wherever the user desires, but found that it made the effect confusing to use. Perhaps future iterations will allow the user a more intuitive version of this idea.
Another serious challenge was the difficulty of being unable to correct object origins in SparkAR. We had to employ the Blender SparkAR toolkit to correct this, and then export from Blender. The user really should be able to modify object origins in SparkAR.
Accomplishments that we're proud of
For the most part we were able to stay on schedule and work together to produce an interesting and fun SparkAR effect for Instagram Reels. We were happy to learn about the SparkAR development platform and develop a piece of entertainment technology. We were also able to utilize javascript in our effect, which greatly increased our ability to produce a highly functional filter. And it looks cool, too! : D
What we learned
The most important thing we learned was how to utilize SparkAR to develop AR filter effects for Instagram. This development platform is powerful and offers a highly functional environment capable of producing complicated AR effects.
What's next for Anime Portal
The next direction for this effect is to improve and expand its functionality. The filter should be able to represent a range of emotions, eye colors, and react to the user closing and opening their mouth and eyes. We would also like to implement custom placement of the portal, perhaps even utilizing multiple portals to provide a range of effects. While we did not have enough time to implement all these features in this iteration of the filter, we hope to be able to expand its functionality in the future.
Built With
3dpaint
blender
instagram
javascript
maya
sparkar
substancepainter
Try it out
github.com
www.instagram.com | Anime Portal | Jump through a portal into the world of Anime! | ['Austin Stanbury', 'Bingyu Li', 'Ines Said'] | [] | ['3dpaint', 'blender', 'instagram', 'javascript', 'maya', 'sparkar', 'substancepainter'] | 60 |
10,493 | https://devpost.com/software/smokeyeyes-for-reels |
Smokey Eye Effect created using Spark AR Studio
Inspiration
Navigating across on-screen filters, I found that influencers are posting their curated filters; being a developer, I wanted to create a small makeup filter named Smokey Eyes, and so I did.
What it does
Smokey Eyes is quick makeup for your face: placing lustrous eyelashes, applying eyeshadow on the eyelids, and defining the lips, making you selfie ready.
How I built it
I built it using Spark AR Studio with the following assets:
=> Materials like face retouch, textures, and a face tracker
=> Environment map, makeup ambient occlusion
=> Makeup color mask. Created two blocks: 1) Makeup, 2) Eyelash. Created the blocks by selecting the inputs and outputs.
=> Testing on devices and apps for better placement of the makeup.
=> Then created the patch.
=> Sharing the test links for curating better results.
=> Joined Spark AR Hub, published the effect, and submitted it to Spark AR Hub and Instagram for approval.
Challenges I ran into
Learning about Spark AR Studio, face trackers, creating blocks, testing across devices, and publishing the filter on Instagram.
Accomplishments
I'm proud that, after a wait, Instagram approved my submission and published the filters. Now I have a filter on Instagram.
What I learned
Learned about augmented reality and Spark AR Studio: face trackers, occlusions for makeup,
Material selection
Creating patches
Selecting textures
How to color mask
Color encoding
Creating blocks - selecting the inputs and outputs.
Testing on devices and apps for better placement of the makeup
Sharing the test links for better results
And finally submitting the effect to Spark AR Hub
What's next for CTF For Reels
I want to design something around object modeling and makeup filters.
Built With
spararstudio
Try it out
www.instagram.com | Smokey Eye Filter | I have created an SmokeyEye filter using SparkAR on instagram | ['Anshu Toppo'] | [] | ['spararstudio'] | 61 |
10,493 | https://devpost.com/software/raining-comida | Inspiration
I wanted to make something that was food related.
What it does
The effect makes it look like food is raining from the sky on screen whenever you open your mouth, and it changes the color filter whenever you raise your eyebrows.
How I built it
I built it using Spark AR
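As a sketch of how such a mouth-open trigger is typically wired up in Spark AR's scripting API (the emitter name and threshold are assumptions, and the face tracking capability must be enabled in the project):

```javascript
// Sketch: rain food only while the mouth is open.
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');

(async function () {
  const emitter = await Scene.root.findFirst('foodEmitter'); // hypothetical name
  const mouthOpen = FaceTracking.face(0).mouth.openness.gt(0.2);
  emitter.hidden = mouthOpen.not();
})();
```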
Challenges I ran into
None
Accomplishments that I'm proud of
Using the particle effect for the food.
What I learned
How to use the particle effect, how to change the on-screen color by raising your eyebrows, and how to show a background whenever you smile.
What's next for Raining Comida
This will eventually be turned into a game.
Built With
ar
particle
Try it out
www.instagram.com | Raining Comida | An effect to have food raining from the sky! | ['Antonio Mendieta'] | [] | ['ar', 'particle'] | 62 |
10,493 | https://devpost.com/software/jurassic-world-ykp9hj | Inspiration
The Jurassic world, and the T. rex following me anyplace I go.
What it does
A portal with dinos, one of which will follow you (it looks at face 1) and trigger a roar when you open your mouth.
It also uses segmentation.
How I built it
Patches; Blender to model and animate; Sharp 3D to model the portal; Substance Painter for the textures.
The audio was edited with Adobe Audition.
Challenges I ran into
Accomplishments that I'm proud of
this challenge
What I learned
I need to get to know the SDK better, because the FPS drops from 2.6 to 0.6.
What's next for Jurassic World
Roar
Test Link
https://www.instagram.com/ar/309813823641684/?ch=MTczMDdhYzYyY2ViZTQ3MWQ5NjhlYThiMWY5ODg1NTc%3D
Built With
sparkar
Try it out
github.com
www.facebook.com
www.facebook.com
www.facebook.com
www.instagram.com | Jurassic World | This filter you will enter in the little portal of jurassic world is really interactive, use the plane tracker and tap screen to trigger the portal then step forward and open your mouth to roar. Enjoy | ['Sacha Lujan'] | [] | ['sparkar'] | 63 |
10,493 | https://devpost.com/software/singers-filter | Karaoke Mic
Modern
Retro
THE FILTER
Ideation and execution
As an individual who loves live music, I find that I have many videos with singers and musicians on my recommended pages. I love watching people perform with beautiful filters; they feel like mini-shows. It wasn't until later that we noticed there weren't many filters for musicians. This brought up the problem: there are many singers and musicians on platforms such as Reels, but not enough catered content. As a result, we began to ideate ways for Reels to offer an AR filter that can promote these small artists and give them a platform to create freely.
This filter is available to anyone who wants it. When you open the effects feature on Reels, you will be able to find this filter. When selecting, you will find three categories: first "Karaoke," then "Modern," then "Retro." Each filter has a unique mic, so depending on your stylistic choices you can use a specific one. These filters are made to be used over and over again, not just once or twice; they are highly interactive and versatile. Next, each filter provides a reverb effect that allows your voice to carry its full effect. It is subtle in the Modern and Retro filters, but prominent in the Karaoke filter. Lastly, each filter features a color filter over it, the last one being the most overt, with a grey filter. This provides the "old school / jazz" effect that many singers love.
We began building the filter with Spark Studio. We took assets from sites such as Polly and Storyblocks, then imported them into the application. From here, we built the rest of the filter in After Effects, which allowed us to create video and audio/visual effects.
Neither of us had used Spark Studio before; as a result, it was a sharp learning curve. However, since we both have experience in code and design software, we could adapt quickly. I look forward to using this software and improving in this field, but I would have loved to see what I could have done with another week of using it. Secondly, I apologize for my singing (lol); I am not a singer by any means, but I hope that people can embrace it no matter their skill level.
Personally, I am proud of taking steps to learn something completely outside of my comfort zone! It was a fun and new experience, and I look forward to trying something similar again!
The next step for this filter is to make the microphones myself in 3D software. I hope to use this in the future, but I would like to customize it further and clean up its design/audio aspects. I would also love to find a great singer to do a demo of all the filters; when I do, I will recreate the video for a fuller experience. Lastly, I regret not discovering this hackathon earlier.
Built With
particle
polly
story-blocks | Singers Filter | There is a significant opportunity for platforms to help musicians show off their skills. The goal is to have singers use this filter as a tool that will give inspiration. | ['Chloe Remley', 'Audrey Easley'] | [] | ['particle', 'polly', 'story-blocks'] | 64 |
10,493 | https://devpost.com/software/fizzy | Patch editor
Fizzy hand
Inspiration
I created this effect to visualize the air we breathe. Every second we breathe the air, but we don't care about its quality. I like water-style filters, so I decided to make it this way.
What it does
This effect simulates an underwater space with a lot of different bubbles on and around the user's body.
How I built it
In the beginning I used Kevin Kripper's dripping technique, which is based on render pass delaying and blurring: in effect, we blur the texture and make it sharp again, which makes the texture drippy. In my version two kinds of texture are used: a body segmentation texture and a rendered particles texture. Both are passed through a frame delay, blurred, and run through smoothstep patches to make sharp bubbles. As an additional touch, there is light blue colorizing to achieve the underwater style. This effect uses the plane tracker to place bubbles around the user, but it works on the user's body as well (by using segmentation).
Challenges I ran into
The most difficult part was finding the balance between the bubbles' size, lifetime, and speed.
Accomplishments that I'm proud of
Actually, I'm proud of this whole filter, because every step of making it was really difficult.
What I learned
Now I can work with shaders much better, especially with the smoothstep patch.
What's next for Fizzy
I would like to add more water-style reflections, light leaks, etc.
Built With
photoshop
sparkar
Try it out
www.instagram.com | Fizzy | Thinking more about the quality of the air we breathe can help save the planet for our children. | ['Denis Korobov'] | [] | ['photoshop', 'sparkar'] | 65 |
10,493 | https://devpost.com/software/invisible-alien | Inspiration
I was inspired by my personal story of being ignored throughout my years in America, feeling invisible and like an alien, since I'm not from America. I have the option to go back to my country any time and leave everything I'm working for, but I'm not quitting. That's why I added the dancing: I might not be noticed, but I'm still working and doing my art.
What it does
The filter creates a scenery of an alien dancing under its spaceship. This makes us think of many scenarios and stories we can create with the project.
How I built it
To create the world-object experience for Reels I used Blender and SketchFab. I worked on the rest of the project in Spark AR to bring it into reality.
Challenges I ran into
Keeping the project under 4 MB was challenging. Another hard part was deciding whether I should add more specifics to the project or just leave it as it is. Creating a project meant for Reels is challenging because you need to think: what would people use, how are they going to play with it, and why?
Accomplishments that I'm proud of
I am proud of the rainbow gloss on the alien, which makes it invisible while reflecting some light to make it look more mysterious. The whole setup of the alien and the spaceship is, I think, a great combination for creating stories and probably dances with the alien.
What I learned
I learned that creating a filter for a competition is basically competing with your own best ideas. I've created many projects and learned that to create something you need to test it multiple times, with many people, and to ask questions. I got reviews from my friends, which made me much more confident in my choices as a creator. I want people to use it INDEFINITELY; that's why I had to ask my friends who use Reels whether this is a filter they would use. After many reviews I was left with a strong sense of WHO I AM, why I'm creating INVISIBLE ALIEN, and what the project should finally look like.
What's next for INVISIBLE ALIEN
I want to get more reviews from people and decide whether I should add more. So far people have loved it, and everyone felt confident that this is an invention worthy of being used in Reels. In the future I will definitely add particles falling from the sky on a long screen tap.
Built With
3d
ar
blender
design
dimension
facebook
filter
instagram
photoshop
reels
sketchfab
spark-ar
worldobject
Try it out
www.instagram.com | INVISIBLE ALIEN | As an immigrant from Bulgaria on a visa the current immigration situation has left me feeling invisible. However, I remain optimistic. The dancing invisible alien represents the despair and the hope. | ['Hari Tahov'] | [] | ['3d', 'ar', 'blender', 'design', 'dimension', 'facebook', 'filter', 'instagram', 'photoshop', 'reels', 'sketchfab', 'spark-ar', 'worldobject'] | 66 |
10,493 | https://devpost.com/software/uf-2020-filter |
Logo
First screen
Open mouth to enable popup
Let the confetti rain
Inspiration
I graduated from the University of Florida one year ago, and I was super grateful to have friends and family come and support me during my time of achievement. Unfortunately, the same cannot be said for this year's graduates due to COVID-19, so I decided to make a filter to make it just a little more special.
I have been making a hobby out of making random Instagram filters during this quarantine and thought it would be fun and helpful to make a UF Graduating Filter for the class of 2020 to celebrate this year’s graduates a little more. To all those that are graduating, CONGRATS and GO GATORS!
What it does
Tap the screen to change the color of your tassel and open your mouth to be cheered on and confetti-ed on.
How I built it
Spark AR, Blender, and a lot of Youtube and Google-ing. Thank you Maru Studio and the Spark AR community on Facebook.
Challenges I ran into
I had no experience in Blender, so that was quite a learning curve even with a simple design.
Accomplishments that I'm proud of
I am proud to have a filter that celebrates the graduating class of 2020, made within 5 days at that.
What I learned
Basic modeling on Blender and some new tricks on Spark AR
What's next for UF 2020 Filter
Maybe make a filter that celebrates all graduates, no matter the school.
Built With
ar
particle
sparkar
Try it out
www.instagram.com
www.instagram.com | UF 2020 Filter | UF Graduation Filter 2020 | ['Yu Liu'] | [] | ['ar', 'particle', 'sparkar'] | 67 |
10,493 | https://devpost.com/software/yondu | yondu filter testing
Inspiration
Yondu Udonta, or simply "Yondu", is a fictional character from Guardians of the Galaxy. Yondu is the leader of outlaw mercenaries called the Ravagers. He is skillful at controlling his drone-arrow, actually called the Yaka Arrow, which is linked to his cybernetics and is able to levitate and fly when he whistles. We were inspired by his badass attitude and our interest in Marvel (sci-fi) characters.
What it does
Users are able to fly the "Yaka" Arrow when they whistle or produce sound.
How we built it
We used the energy meter and audio analyzer to build an audio-reactive filter.
Challenges we ran into
Building the 3D model and doing the audio analysis, as we were not familiar with audio-reactive filters and positioning. Also, Instagram Reels is not available in our country.
Accomplishments that we're proud of
One thing that really makes us proud is that we were able to transform our filter into the "Yondu" character.
What we learned
We learned important concepts and features of Spark AR, like the audio analyzer, the new Spark AR add-ons for Blender, the energy meter, and the audio player.
What's next for Yondu
We want to add native UI and make it Reels-friendly.
Built With
analyzer
audio
javascript
Try it out
www.instagram.com | Yondu | Audio analyzer based filter | ['Dhiran karki', 'Pratik Gautam'] | [] | ['analyzer', 'audio', 'javascript'] | 68 |
10,493 | https://devpost.com/software/wonders | Inspiration
Due to COVID, everyone is staying at home. Plans were cancelled, trips were postponed, and everything came to a halt, so I got the idea: why not travel the world while staying at home during this pandemic?
What it does
It has two spheres which take you to the Colosseum and Machu Picchu.
How I built it
I built it using 3D spheres and the plane tracker, with pictures of the Colosseum and Machu Picchu.
Challenges I ran into
Fixing the sphere was tough, but I solved it.
Accomplishments that I'm proud of
I am proud to make this effect for people for whom these destinations might be beyond reach.
What I learned
I learned many things, like how to move around in the 3D environment, and much more.
What's next for Wonders
Next, I'll try to put all 7 wonders into the effect.
Try it out
github.com | Wonders | A sphere that takes you to the Colosseum and Machu Picchu. | ['Midhat fatima'] | [] | [] | 69 |
10,493 | https://devpost.com/software/ironmanar | Transform yourself into Ironman
See yourself with Jarvis Interface
Lock on the target
Fire the laser
Inspiration
Have you ever thought about wearing the Iron man's suit?
Will we be able to become the Ironman if we’re wearing the Ironman suit?
If so, how could the suit augment your abilities in the actual world?
Those sorts of questions led us to imagine wearing the Ironman suit through AR, and we wanted to give opportunities that users gain the Ironman’s superpower in their daily lives and have fun moments with their friends.
What it does
IronmanAR is a face effect that lets users shoot a magical laser at another person's face. Using face recognition on the front-facing camera, you are able to wear Ironman's suit, which includes the shooting ability.
Afterward, you switch to the back camera in order to shoot the magic laser at another person's face.
When you tap and hold on the screen, the laser is launched at the detected face of your cute daily enemy.
The embedded face landmark detection recognizes the detected face so magical effects can be applied through the AR laser.
How we built it
As for the suit-wearing scene, we imported a green-screen-processed PNG sequence and replaced its face part with the user's face, which is animated along the sequence. Next, we used body segmentation and made the background very dark in order to show that the user's face is inside the suit. Lastly, we added a face-tracked HUD (head-up display) and synchronized its rotation with the user's face.
The laser on the back-facing camera is triggered by long-pressing anywhere on the screen and is made with the built-in particle system. For the face-dividing effect when the laser hits the target, we divided a typical face mesh into top and bottom parts in Blender and exported them, so that we could animate each part of the head differently, and we added supporting PNG sequences for the explosion effect.
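A minimal sketch of the long-press trigger, assuming Spark AR's TouchGestures API (the emitter name is hypothetical, and the actual effect may route this through patches instead):

```javascript
// Sketch: fire the laser while the user holds a long press on screen.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

(async function () {
  const laser = await Scene.root.findFirst('laserEmitter'); // hypothetical name
  laser.hidden = true;

  TouchGestures.onLongPress().subscribe((gesture) => {
    laser.hidden = false;
    // Hide the laser again once the finger lifts.
    gesture.state.eq('ENDED').onOn().subscribe(() => {
      laser.hidden = true;
    });
  });
})();
```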
Challenges we ran into
Since we were using hundreds of images and audio files for wearing the suit, the HUD, and the explosion, fitting everything into a limited space was a big deal. We had to try different PNG image optimizers several times to convert the PNG images to the smallest size, and finally succeeded in reducing the color bit depth precisely.
Accomplishments that we are proud of
Our team focused on the storyline in a certain context, unlike other AR effects. Beyond the facial effect, our idea has a context that applies broadly to situations that commonly happen in our daily lives. We believe those attempts make this a more user-empathetic AR service with familiar hero characters. This sort of storytelling-driven AR effect will help users enjoy the service more richly with others in various situations.
What we learned
With this being our first Spark AR project, we learned most of the basic features of the software, such as managing patches, grouping and layering, animating assets and sequences, handling custom instructions, and scripting in JavaScript and bridging it with patches. 3D editing skill is a plus.
What's next for IronmanAR
For the next step, we are aiming to not only add various hero characters, weapons, and effects but also deal with the problems of human beings or our planet in a humorous way.
Next characters: Captain America and Batman
Built With
javascript
sparkar
Try it out
www.facebook.com
github.com | IronmanAR | IronmanAR is a humorous Instagram filter that enables users to wear the Ironman’s suit and shoot the laser. | ['www.yunochoi.com', 'Yun Ho Choi', 'www.vincentnskim.com', 'https://www.linkedin.com/in/vinskim/', 'Vince Kim'] | [] | ['javascript', 'sparkar'] | 70 |
10,493 | https://devpost.com/software/mysterious-wings | Inspiration
When I received the invitation from Tan Duong to join the Facebook hackathon, we agreed to team up and build a project with Spark AR. That day we were inspired by a clip of a fairy flying around, and we thought: why not make a camera effect that lets people turn into winged creatures?
We started Mysterious Wings with the intention of creating an AR world effect that’s optimized for an exciting new product — Instagram Reels. The idea is to let the user choose their favourite wings sets and act, playing with it.
What it does
Mysterious Wings is an augmented reality effect that runs on Instagram cameras. Players choose wing sets by changing their facial expression, pick the background from their gallery, and finish with an act and poses! The main goal of the effect is to focus on the users' experience, helping them have fun acting and posing, and creating viral Instagram/Facebook stories.
We provide 3 types of wings with a corresponding environment for each type. Users can:
Take video of each wing
Use a mode that automatically switches the wing type on face detection (great for performing character transitions)
Use a touch-screen mode to take pictures: the image is retained, and the user can record video against the background of that image (which helps make surreal videos)
Add background photo mode from gallery
How we built it
We discussed and divided the work from the beginning: I did the asset work and Tan Duong did the patch work. Sometimes we switched and helped each other out.
We started with the flow of the user experience, references for the wings, and then some concept ideas.
From the 2D concepts and the references, we built the models, then rigged and animated them in Maya.
Textures were painted in 3Dcoat and edited in Photoshop.
The whole modeling and texturing process was done back and forth to get the right look, unified in style. Models were set at the same scale and rigged with just enough bones, in the same positions, so our teammates wouldn't get frustrated.
The animation for each wing set was split into 2 clips (start and idle) and exported in 1 FBX file.
Later on, they were all assembled and tested in Spark AR. Bugs and missing pieces were adjusted in this step.
Next, we tested and adjusted the materials to fit the environment.
After that we added a few particles to fit the style of each wing set (sparks with the demon, feathers with the angel, sparkle with the fairy).
The last step was working on the patch system to make the wings change when people change their expression, add the ability to browse the gallery and pick a background, and add the new "Reels" function that gives users the ability to turn into winged creatures and fly away.
We defined the steps the user will follow:
1. Steps
Stand in front of the camera, pose, and act.
Hit the play button to freeze and change the act.
Use facial expressions to change wing sets.
Browse the gallery and choose a background.
Fly.
2. Build the effect
The work schedule is updated every day.
For Design:
We spent a lot of time calculating the number of models, bones, materials, and textures so the file size wouldn't be too large, and working out how to compress the textures to ensure the best image quality at the smallest file size.
For Dev:
We started by adding and aligning the object layers to work well in all circumstances.
Added textures and tuned them to look exactly like what was built in the 3D software.
Made changes to the respective wings, head, and environment, using the native UI picker to do the transformation.
Added interactions for users.
Adjusted lighting and colors one final time before submitting.
3. QA
When the patches were completed, we assembled them once again and tested the accuracy of the logic.
We tested many times. The process took a lot of time, but fortunately there were not too many issues and we stayed in good control.
Challenges we ran into
Building the visuals and user experience, centered on creativity, acting, and posing.
Working time: we are full-time employees, and we feared the deadline because both Tan Duong and I are very busy with our own work.
For Design:
We had to adjust a lot so the assets were unified in style and fit the real world.
For Dev:
Assembling the wings on the human body was a bit difficult, because the camera cannot capture the face at a long distance.
Combining the freeze mode with a background imported from the gallery.
Accomplishments that we're proud of
Our first project was completed despite many difficulties.
Learned how to fit the objects to the whole body.
We were very busy, but still on schedule.
We exceeded the production plan: a filter not only for Facebook but also for Instagram.
Making all the functionality work properly without sacrificing any of it.
What we learned
In the process of making filters, we learned many useful lessons. Fortunately, we agreed on how to solve the root problem, including the ideas, the scripts, and how to proceed.
We learned how to plan and work remotely on a filter project.
You need the most detailed brief and script possible (including logic and time spent on animation) to understand the project clearly.
Not surrendering: we had very little time to work on this (we learned of the deadline late, and I still didn't have any good results) and almost quit the project, but then we chose sleepless nights and finished it.
What's next for Mysterious Wings
We will continue to update the filter with new content, such as:
More wing sets to choose from.
Ways to customize them.
Built With
adobe-illustrator
filters
maya
photoshop
sparkar
Try it out
www.instagram.com | Mysterious Wings | Mysterious Wings | ['Tan Duong Ngoc', 'Đinh Tuấn Nghĩa'] | [] | ['adobe-illustrator', 'filters', 'maya', 'photoshop', 'sparkar'] | 71 |
10,493 | https://devpost.com/software/dream-state | Dream State Spark AR
PIFuHD project and Blender
Tilt Brush - Bird
Inspiration
I've always wanted to represent my most recurring dream in an AR filter. In this dream I see people and birds floating around, and only my voice is heard.
What it does
Connect with other users who have similar dreams to achieve a common interpretation.
How I built it
I used the PIFuHD project (Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization) to create a 3D clone of myself. All the clones have a color effect. I also used Tilt Brush to paint a bird and optimized it in Blender to improve the shapes. Once I exported all the 3D elements into Spark AR, I multiplied them and added a voice effect.
Challenges I ran into
The most challenging thing was conceptualizing the dream: the filter should show that we are in a dream world and that there are humans floating.
Accomplishments that I'm proud of
I am proud of combining the various tools and knowledge I have gained from creating filters. I think the great potential of augmented reality lies in mixing ideas and platforms to achieve something powerful.
What I learned
I learned how to combine many skills, such as VR painting (Tilt Brush) and optimizing 3D objects, to create filters in Spark AR under 3 MB.
What's next for Dream State
I want to make more dream-related filters and show them not only with the plane tracker but also with the face tracker.
Built With
blender
pifuhd
sparkar
tiltbrush
Try it out
www.instagram.com | Dream State | This filter is an artistic expression of one of my weirdest and most recurring dreams. I think that filters are a window to spread essential human themes such as fears and the subconscious. | ['Emilio Vegas'] | [] | ['blender', 'pifuhd', 'sparkar', 'tiltbrush'] | 72 |
10,493 | https://devpost.com/software/butterfly-swarm-vlksqh | Inspiration
The chaotic nature of butterflies has always amazed me; seeing how many creatures interact with each other without any central coordination is mesmerizing to me.
We are now living through times when going outside isn't possible for many of us, so I decided to recreate this as an augmented reality experience.
What it does
It allows users to experience being surrounded by butterflies, or to see many of them flying around the scene using the plane tracker, depending on which camera is used. This duality is important for Reels, as the effect functions correctly with either camera, giving the user more flexibility.
How I built it
I started by researching about emergent behavior - complex behavior arising from the relationships between the individual parts of a system - and watching butterfly footage.
Eventually, I decided to use Craig Reynolds' boids algorithm to simulate emergent behavior.
The algorithm is mostly controlled by three simple steering behaviors:
Alignment: individuals steer toward the average heading of their local flockmates.
Separation: individuals steer away from nearby local flockmates.
Cohesion: individuals steer toward the average position of their local flockmates.
The keyword is local: as in nature, creatures don't have an accurate and complete picture of their environment, but rather a limited perception of it.
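For readers unfamiliar with the algorithm, a minimal framework-agnostic sketch of the three rules might look like the following; the weights and neighborhood radius are illustrative tuning parameters, not the values used in the filter:

```javascript
// Per-boid steering sketch. Each boid is { pos, vel } with {x, y, z} vectors.
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const scale = (a, s) => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

function steer(boid, boids, radius) {
  let heading = { x: 0, y: 0, z: 0 };  // sum of neighbors' velocities
  let center = { x: 0, y: 0, z: 0 };   // sum of neighbors' positions
  let away = { x: 0, y: 0, z: 0 };     // accumulated separation push
  let n = 0;

  for (const other of boids) {
    if (other === boid) continue;
    const d = dist(boid.pos, other.pos);
    if (d === 0 || d > radius) continue; // only *local* flockmates count
    heading = add(heading, other.vel);
    center = add(center, other.pos);
    away = add(away, scale(sub(boid.pos, other.pos), 1 / (d * d)));
    n++;
  }
  if (n === 0) return { x: 0, y: 0, z: 0 };

  const alignment = sub(scale(heading, 1 / n), boid.vel); // match avg heading
  const cohesion = sub(scale(center, 1 / n), boid.pos);   // move toward center
  // The weights below are where the balancing work described later lives.
  return add(add(scale(alignment, 0.05), scale(cohesion, 0.01)),
             scale(away, 0.1));
}
```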
After implementing the algorithm I focused on the 3D model that acts as the "body" of the entities. I built, rigged, and animated the model in Blender. To reduce the size of the project, there's only a single color texture that gets slightly randomized for every butterfly.
Challenges I ran into
Efficiency
The algorithm is O(n²), which means the number of operations grows quadratically with the number of boids. Not only that, but these operations run every frame, every 33 milliseconds!
To improve efficiency and add some useful functions, I created my own Vector class that handles all the vector operations and was easy to modify and adjust to the project's needs. All the functions were thoroughly tested, and most can be run up to 25,000 times per frame or more.
Keeping the butterflies from flying away
Most implementations and simulations that use Craig Reynolds' algorithm take place in a 2D or 3D environment with solid bounds from which the boids can bounce or wrap around. In an AR experience, though, walls would be a huge distraction (and look horrible), so instead I created an "invisible force field" bound to the user's face that constrains the butterflies' movement. On the front camera it also has a lower distance limit that prevents the 3D objects from crossing right through the middle of the person segmentation mask, which would also look bad and take the user out of the experience.
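Reusing the vector helpers from the sketch above, such a force field can be approximated as a soft radial constraint; the radii and gain below are illustrative, not the filter's actual values:

```javascript
// Sketch: keep a boid between an inner and outer radius around the face.
function confine(boid, faceCenter, innerRadius, outerRadius) {
  const offset = sub(boid.pos, faceCenter);
  const d = Math.hypot(offset.x, offset.y, offset.z) || 1e-6;

  if (d > outerRadius) {
    // Too far out: nudge the velocity back toward the allowed shell.
    boid.vel = add(boid.vel, scale(offset, -0.05 / d));
  } else if (d < innerRadius) {
    // Too close (front camera): push outward so the butterfly never
    // crosses the person segmentation mask.
    boid.vel = add(boid.vel, scale(offset, 0.05 / d));
  }
}
```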
Patterns and cycles
Even though randomness is involved in many parts of the system, over time some patterns start to emerge, distracting from the experience. To solve this, the parameters had to be tweaked until cyclic behavior no longer appeared in the following scenarios:
A static attractor (face).
An attractor (face) that moves back and forth in an oscillating manner.
An attractor (face) that quickly changes position, for example, when the face of another person starts being tracked.
Difference in computing power across devices
Phones' CPU power varies significantly across brands and generations, so coming up with a fixed number of boids that every device on the market can handle would be impossible. Instead, a "lag monitor" was implemented that watches the user's experience: if there is on average more than 40 ms between frames (25 fps), the number of boids is reduced, and the ones no longer part of the calculations are hidden.
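A rough sketch of such a lag monitor, assuming Spark AR's Time module; the smoothing, the starting population, and the hideBoid helper are illustrative assumptions, not the project's actual code:

```javascript
// Sketch: shrink the simulation when the average frame time exceeds 40 ms.
const Time = require('Time');

let activeBoids = 50;       // hypothetical starting population
const MIN_BOIDS = 10;       // never drop below this
let lastMs = 0;
let avgFrameMs = 33;

function hideBoid(i) { /* hide the i-th butterfly's scene object */ }

Time.ms.monitor().subscribe((event) => {
  const frameMs = event.newValue - lastMs;
  lastMs = event.newValue;
  // Exponential moving average smooths out one-off spikes.
  avgFrameMs = avgFrameMs * 0.95 + frameMs * 0.05;

  if (avgFrameMs > 40 && activeBoids > MIN_BOIDS) {
    activeBoids--;          // remove one boid from the calculations
    hideBoid(activeBoids);  // and hide its 3D object
  }
});
```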
Accomplishments that I'm proud of
How light the filter is: it's less than 200 KB, so it can be downloaded quickly even on a slow Wi-Fi network.
How much I've learned in the process: I gained a deeper understanding of JavaScript and learned a lot about 3D objects, which I previously had zero experience with.
All the challenges I overcame.
What's next for Butterfly Swarm
As I gather feedback from the community and people who try the filter, I may make small revisions to perfect the algorithm's parameters, the model, or the textures. Also, a lot of the knowledge gained while making this effect will make its way into effects I have in the making.
Resources:
Boids Algorithm by Craig Reynolds
Built With
blender
javascript
photoshop
spark-ar
Try it out
www.instagram.com
www.github.com | Butterfly Swarm | Experience a flock of 3D butterflies that behave just like the real ones | ['Antonio Pietravallo'] | [] | ['blender', 'javascript', 'photoshop', 'spark-ar'] | 73 |
10,493 | https://devpost.com/software/holou | Example
Inspiration
Anyone can do it! Get virtual people to anyone!
What it does
Select your template, change the textures to the ones you like, and EXPORT to Instagram!
How I built it
Recompile Instagram Filters to customize the content.
Challenges I ran into
The texture files used by Instagram are complex. Multi-device compilation. Testing!
Accomplishments that I'm proud of
A few clicks and you've got a virtual person!
What I learned
Instagram filters, Spark AR, the Graph API for uploading resources, and OpenGL textures.
What's next for HoloU
Customizable textures, taking a picture of your body and scanning it, and a mobile version too!
TRY IT OUT in my AWS:
link
Built With
amazon-web-services
blender
javascript
particle
react
Try it out
www.instagram.com | HoloU | You don't need to be an expert to add virtual people to your videos! | ['Luciano Culacciatti', 'lukitaz_21', 'Gustavo Negri'] | [] | ['amazon-web-services', 'blender', 'javascript', 'particle', 'react'] | 74 |
10,493 | https://devpost.com/software/world-hop-2121 | Table mountain floating object found within Wanderhop 2049
Nzuzo Patch Sender. Custom Patch Group I Created To Maintain State Within My World Effect.
Back Camera State Management Solution I Created Which Allows Me To Control The Visible State Of 3D Objects Within The Scene
WanderHop 2049 Front Camera Effect. (1)
WanderHop 2049 Front Camera Effect. (2)
Inspiration
My inspiration for this project is threefold:
The first time I saw a visual depiction of teleportation was in the video game series Portal; the game was filled with puzzles and made players think in a unique way.
I had played many games before, but it amazed me how many unique experiences being able to teleport created in a game.
After registering for the hackathon I had to learn everything from scratch, and did a few days of research to get my bearings with the Spark AR development software. I already intended to do something portal related and found the project Portal Sphere on GitHub, which inspired me to implement floating sphere objects into my design. The project itself seemed limited in its experience, but I was determined to learn from it and make something more fun from what I had learned.
I was also inspired by binary machine language, and thought it would be fun to give the user the experience of being part of the data flowing through their devices in some of my world effects.
What it does
Back Camera
WanderHop 2049 is a world AR effect that uses the back camera to put the user into a scene surrounded by floating 3D objects.
Within this scene there are 14 floating spheres, 6 billboards, and a floating 3D model of Cape Town's beautiful Table Mountain.
The spheres are 3D objects encapsulated by double-sided images to create the effect of a floating world. When tapped, they expand to envelop the camera's whole view space so users can pan their phone around to explore the new world they have been teleported to. Each sphere offers a completely unique experience, and some have audio that complements it. Three of the spheres relate to binary code and form a puzzle; this was my attempt to gamify the effect. One of the three contains an ASCII table that allows the user to decode another sphere full of binary values; when decoded, the sphere spells out Spark AR.
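As a worked example of the decoding step: the 8-bit value 01010011 is 83 in decimal, which the ASCII table maps to "S"; repeating this per byte reveals the hidden phrase. In plain JavaScript:

```javascript
// Decode space-separated 8-bit binary into text.
const decode = (bits) =>
  bits.split(' ').map((b) => String.fromCharCode(parseInt(b, 2))).join('');

console.log(decode('01010011 01110000 01100001 01110010 01101011')); // "Spark"
```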
The mountain object in the effect offers a different type of portal experience compared to the spheres. In its normal state it hovers below a world called Cape Town Night Sky, which shows off the beauty of my home town in its own right, but when the mountain is tapped it expands by a factor of 3 and makes all the other 3D objects invisible. This experience lets the user closely examine an object: walk around it, look over and under it, and appreciate its full splendour in detail without being completely enveloped by the expanded object.
The billboards stand tall and large to preview the world effects awaiting the user before they tap the spheres and teleport to another scene.
Front Camera
The front camera allows the user or content creator to take a highly stylized selfie. The filter creates a color-shifting glare that smooths out any rough edges left by the person segmentation of the body, and flashing binary numbers are used as a background to add liveliness to the image.
How I built it
Week 1
The first week was purely research driven in this time i focused on:
Installing Spark AR.
Reading through Spark AR documentation.
Brainstorming on how to implement my inspirations into an interesting filter that can be used by content creators.
Watching Spark AR tutorials on YouTube and practicing my skills by making a diverse array of small practice effects.
Learn what other skills I would need to learn or harness to achieve my goal.
Week 2
The second week was all about taking what I had learned and implementing my ideas in a tangible beta version of my effect. To reach that milestone I had to:
Import appropriate assets from the Spark AR library, namely 3D objects, textures, and audio files.
Import appropriate assets from royalty-free providers on the web, namely Unsplash for texture images and Freesound for audio files.
Set all 3D objects as children of a plane tracker; from there I set the plane tracker to be repositioned with a 2D coordinate given by a screen tap pulse.
Format the 3D layout of my world effect.
Create a front camera effect that uses person segmentation to take stylish selfies for content creators. This gives users the added option of just taking a stylish selfie if they don't have the time or creativity to explore the full world effect. I figured the more functionality the effect has, the more use cases content creators would have.
I used Photoshop CC5 to compress all my image files while retaining as much quality as possible.
I used Blender with the GIS plugin (https://github.com/domlysz/BlenderGIS) to create a 3D model of the famous landmark Table Mountain. The file size was quite large, so I converted the GLB file into a glTF with separated binary/image files, then compressed it using an open-source glTF-to-GLB converter I found during my research.
Repeated testing was done in the Spark AR development environment.
I managed to reduce my project size to under 3 MB, considering it uses a large number of assets.
Week 3
This was the final stage of development. I dedicated it to finishing the final edits of my demo video and to testing my effect on as many different phones as humanly possible within my reach. I phoned family and friends and got an overwhelming amount of data, which led me to make several changes based on the bugs people informed me about.
After tapping the floating spheres to trigger the portal effect, some people noticed holes in the spheres, as seen in a video I posted (Spark AR portal effect test). I recognized that this issue occurs more regularly when the world size is set larger than 1000 in the x, y, and z planes. It also happens when the world is placed too far away from the recording device. I significantly reduced the occurrence of this bug by adding a transition patch that animates the sphere closer to the device as it expands: a simple solution to a nasty problem.
I also found that the plane tracker worked more effectively on Apple phones than on older Android phones (like my personal phone). With this insight I created an instruction and a patch that let the user reposition the effect, helping older phones navigate the 3D space more easily.
Challenges I ran into
This hackathon came with a variety of unexpected challenges, and it was very fulfilling to overcome them and learn from the mistakes I made along the way.
Here are some of the areas where I faced challenges.
Learning From Scratch
I previously had no experience with Spark AR or Blender before registering for this hackathon. I spent 3 weeks working on my project, and almost all of the first week learning to use Spark AR and familiarizing myself with the environment. I made small, simple effects at first, then increased their complexity with the patch editor.
I continuously built up my effects while following countless YouTube tutorials and reading the Spark AR documentation, until I felt completely competent in my ability to create a unique effect.
Compression
One of the hardest things I grappled with was creating a mind-blowing effect and later realizing its file size was far too big to be published or even tested effectively on a phone. I had to teach myself to use Photoshop and other online resources to compress images, 3D object files, and audio files.
Device Testing
Testing on multiple devices was essential, as the simulator in Spark AR is not perfect: effects can perform in unexpected ways on different phones compared to the Spark AR simulator.
I got hold of family members and friends to help me test my effect.
Limited Time for Large Ambitions
As a full-time university student I found it challenging to balance my studies and this hackathon. I had to compromise on larger aspects that I would have liked to implement, but for the most part I am quite satisfied with the project I produced, with my personal constraints taken into consideration.
Accomplishments that I'm proud of
I found that difficulty grows steeply when more than 5 indexed 3D objects send data through the option sender to set their visible boolean property.
I had to get creative and use my mechatronics engineering knowledge from my studies to devise a multiplexer-like patch that uses the Add patch to filter through tapped selections. This approach solved the problem of managing the state of a large number of objects, can be applied to any number of 3D object instances, and is an approach I would like to keep working on.
I made it into a group (custom patch) and called it Nzuzo's Sender Patch.
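The patch itself isn't reproduced here, but the selection logic it implements, one active index driving the visibility of many indexed objects, can be sketched as a script-form analogue (a hypothetical illustration in Spark AR's scripting API, not the author's actual patch):

```javascript
// Sketch: one selected index controls the 'hidden' property of N objects.
const Scene = require('Scene');

(async function () {
  // Object names 'sphere0' ... 'sphere13' are hypothetical.
  const spheres = await Promise.all(
    [...Array(14).keys()].map((i) => Scene.root.findFirst('sphere' + i))
  );

  function select(index) {
    spheres.forEach((s, i) => { s.hidden = i !== index; });
  }

  select(0); // e.g. start with only the first world visible
})();
```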
What I learned
This is the first time I have made an effect, so every aspect of this project built over the 3-week development process was a first for me.
I had some basic knowledge of Photoshop but had never used it in a professional capacity.
This hackathon also taught me that time management is important above all else.
What's next for WanderHop 2049
I believe the applications are limitless, but at the moment these are the ventures where I believe WanderHop 2049 could be used most readily:
Real estate: virtual tours of houses for international clients.
Travel agencies: virtual tours of destinations in different countries for international clients.
Content creation: scenic videos displaying different worlds that complement the creator's environment.
Festival marketing: the architecture of this effect could be applied to market different stages at a large music festival.
Education: the world-view layout could be used to tell stories about history in a very creative way, making sure learners are captivated by the lessons by immersing them in different educational scenes.
Built With
ar/vr
blender
davinci-resolve
particle
photoshop
sparkar
Try it out
github.com
www.instagram.com | WanderHop 2049 | WanderHop 2049 is a teleportation device, this version consists of scenes based in the real world, animated spaces, and digital spaces which computers communicate through. | ['Nzuzo Malinga'] | [] | ['ar/vr', 'blender', 'davinci-resolve', 'particle', 'photoshop', 'sparkar'] | 75 |
10,493 | https://devpost.com/software/the-video-of-the-reels | Inspiration
The idea occurred to me while playing on a computer with the same image mirrored on a second screen: the user appears duplicated on another screen.
What it does
It repeats the user's actions on a console screen with a different background, where the time and date can be seen, simulating a console menu.
How I built it
First I edited the image of the console to cut out the parts I needed and empty the background. Then, in Spark AR, I used segmentation to separate the user from the background, aligned them with the image of the console, and added the texts.
Challenges I ran into
Manually cropping the image so that only what is necessary remains.
Accomplishments that I'm proud of
Gradually improving with Photoshop and learning new functions every day, even though they weren't needed for this project.
What I learned
What's next for The video of the reels
To offer a different way of viewing the user's Reels and to attract video game lovers, who may find this effect's view of Reels intriguing.
Built With
photoshop
spark-ar
Try it out
www.instagram.com | The game of the reels | Create the feeling that the real video and the effect are inside Reels. | [] | [] | ['photoshop', 'spark-ar'] | 76 |