Check Your Broken English
Over the past few years, text-to-speech technology has covered a lot of ground. The robotic, monotonous audio of the past — often ridiculed in cartoon sitcoms such as The Simpsons, Futurama, and Family Guy — is rapidly turning into nothing but a distant memory. Just have a look at Play, the Chrome extension/app/web platform that lets you listen to articles from Medium and other curated blogs. Or Narro, the tool that turns your reading into a podcast. Just to name a few examples. But what about the other way around? Is speech-to-text becoming as immersive and reliable as text-to-speech, and at the same pace? Well. We might have been just a little bit naive, or overconfident about our English-speaking abilities, but we figured that, hey — we should definitely generate automated transcripts of our podcast episodes! We decided to give Smab a try. It’s still in beta, but what the heck. What could possibly go wrong? How about: almost everything. We’re not saying that we regret it. Not at all. Quite the opposite. How could we, with such a hilarious result? I mean, the world would never have had the chance to laugh at, and create memes of, one of the most classic phrases of all time if it wasn’t for broken English: All your base are belong to us. So, consider this our tribute to Zero Wing and all of our fellow members of the Broken English Club out there. Check Your Facts proudly presents: Check your fax! Check Your Fax broadcast and the Digital Journal of Some Rocks George W. Bush: I hear rumors on the internet’s. Henrik: Greetings from Stockholm and the block this lava. My name is Hamitic. Dávid: And my name is David, and you’re listening to the Check Your Facts focus. H: Well tell me David, what exactly is the Check Your Fax broadcast — or should I say, what we would be? D: Great question. 
So it will be a fourth because the bulk media and journalism, and since we’re both from Europe it will be centered only Europe European countries and they also beyond other countries which I’m not usually being covered in the media and journalism podcasts, at least not the ones we know and listen to. H: Yeah, for those of you who don’t already know, Stockholm is the capital of Sweden in northern Europe, impetus lava is the capital of Slovakia. Right? D: Oh yes, you’re right. The two of us met in the US lack community Digital Journal of Some Rocks. H: And therefore it’s quite fitting that our very first guests, you know first episode this is just a teaser, but in the very first episode we will interview Lena Team, the founder of the Digital Journalist of Some Rocks community. D: Yes, it will be a standalone trampled say a first grade that is old and the we will be talking with her about the Slick community, about the journalism Europe, and about all the nice things that happened to her since she started the Catholic community. And just to add, the Slick community counts now hundreds of journalists from not only Europe but I would say the whole world. H: Yeah, and of course we will also ask some questions about Lean Left Him and who she is and what she does. Because the arm I don’t know about you Dovid but I’m not one hundred percent sure about what what Lina TM the US update today basis. I mean we both know that she’s the commander in chief of the Digital Journalism Rocks community but I’m, I don’t know what she does for a living, so to speak. D: Now she’s differently a mystery to me yeah. 
H: So you should definitely stay tuned for the very first episode all the Check Your Facts podcast, the first of hundreds of episodes we hope, with yes yeah with a bunch of grade and interesting guests. We cheese. D: Yeah we plan to do this book because two weekly now you can go into a culture of our website, we cheese Check Your Facts dot E. you. Esteem in Europe E. You are. You can follow us on the old the social networks and it would be so wonderful if you could just scrabble some iTunes or Google Play or where ever you found us. Just remember: Check Your Facts don’t you. H: And we should probably also mention the newsletter, because it would be awesome if you subscribe to our newsletter since we intend to blog about this little adventure all hours. Will blog about the episodes and some useful tips and tools. D: And don’t forget to tell your friends about this because it’s gonna be a hell of a journey. H: Yeah. That’s the perfect ending.
https://medium.com/check-your-facts/check-your-broken-english-1e3c2d4093b1
['Henrik Ståhl']
2017-02-11 11:45:25.588000+00:00
['Speech Recognition', 'Podcast', 'Tech', 'Journalism', 'Algorithms']
How Sevdaliza Captures the Art of Being a Woman in “Ison” and “The Calling”
Natasya Fila Rais for Le Citoyen Sevdaliza, Iranian-born, Netherlands-based singer-songwriter. (BBC.com) LE CITOYEN — Operating from another world, the Iranian-born, Netherlands-based singer, songwriter, and producer Sevda Alizadeh, known as Sevdaliza, has proven that she is not like other girls. The self-taught artist, who began her career as a basketball player for a team in the Netherlands, released her debut album, “Ison”, in April 2017, and then became a scene-stealer for her artistic perspective and otherworldly tunes. The whole album is like a trip into another dimension: heavy beat drops, eclectic instrumentation, poetic lyrics, and soothing vocals leave our jaws dropping as we listen from one song to the next. Aside from the magnificent music, there are also stories told in every single song on the album, and mostly they tell tales about the art of being a woman. “Human”, the tenth track on the album, captures the imperfection of being human. The lyrics speak of human existence as a whole: living with and accepting our flaws, because that is how we stay alive. The music video for this song is beyond anything imagination has depicted. The singer plays a dancer with a mythical-creature persona — a woman with horse legs — who dances to entertain the men watching around the amphitheater. The video represents the judgmental trait of society: they see and objectify women, they tell women what they can and cannot do, they command women to be perfect when in fact nothing in this world is. There are also tracks that depict women’s roles in relationships, for example “Angel”. The song consists of only one line of lyric, sung at different levels of pitch and pain. Sevdaliza states in the song that it should not be this hurtful to be someone’s angel. 
It represents a woman in a toxic relationship — not limited to relationships with lovers — whose good is somehow not good enough for them to value. Tiredness and pain can be felt throughout the 6-minute track, as the words are slowly pictured in your mind, leaving trails of scars and heartbreak. The track “Bluecid” represents deep longing for a lover whose presence exists only in a woman’s dream. The guitar strums and beat drops give you relaxing vibes, as though you are traveling in your own lucid dream. Aside from that, the lyrics are simple but hurtful. Those are the kind of lyrics that get stuck in your head, for everyone seems to have experienced such a thing in real life. The pain is very relatable on this track. Another side of women that Sevdaliza tries to capture on this album is motherhood. On the track “Hero”, she describes the struggle of being a mother and how the things a mother does will somehow affect the children. She also describes how strong a mother can be, with all of the pain and problems faced in the world. A mother’s love is unconditional, and that is what the singer tries to convey through this song. The music video for this song features her mother; both are covered in all-white clothing, in order to capture the purity of the track.
https://medium.com/le-citoyen/how-sevdaliza-captures-the-art-of-being-a-woman-in-ison-and-the-calling-e9ba2bb8cb4
['Le Citoyen P C']
2018-04-18 10:54:12.298000+00:00
['Ison', 'Sevdaliza', 'Women', 'Music', 'Feminism']
A Marauder Christmas
A Flash Fiction Story Titan Moon was a frozen ball of ice and snow. The atmosphere was breathable, but the planet itself was a harsh place to live year-round. Max was glad she had packed her thermal suit along with her normal winter wardrobe. She hung from a jury-rigged swing dangling behind the left thruster of a modified T-Freighter she had dubbed The Gypsy, trying to find the tiny pieces of an asteroid that had brought them down in an emergency landing on the frozen seas of the Titan Moon. Max volunteered to make an off-route stop in the Syntori System when she learned the Space Angels Relief Organization would not be making their annual Christmas drop on Titan, a central way-station for all incoming supplies to the system. NovaCore had instituted a blockade, sanctioned by the Federation of Galactic Governments, on the edge of Free Space. The purpose? To discourage pirating along the trade routes they believed originated from the Syntori System. The only ones diverted were the legit traders and spacers trying to enter, while supplies were limited to the planets that needed them. Max grew up in that system. She was one of the few pilots skilled enough to have successfully navigated an unsanctioned space route through the asteroid fields that populated the system. Unfortunately, even the experienced had their mishaps. “Peepers, bring me the hydro-extender and bot probe.” A few sputters and beeps followed, with one word in a high-octave trill, “Cooold,” sounding across her suit’s COMM unit. “Pretend you’re in deep space and set your thermo-shield on maximum. You know that jacket is not going to keep you warm enough.” A series of complex beeps, tweets, and off-key notes followed. Peepers could say a few words in basic, but his vocal ability was limited. Mostly his language consisted of binary notes and his own unique communication song. “I know Nina made it for you. 
I didn’t say not to wear it.” Max exhaled a white smoky breath that clung to the cold air. “BUT… you would have more agility without it. Clothing is not a necessity for MEC Droids.” This elicited several outraged grunts, squeaks, and sputters, with a high-trilled word, “Naked”, which made Max laugh. “Like there is anyone out here to see you. Come on, Peeps. I don’t trust this ice. We need to get moving before the sun drops.” A few seconds later Peepers maneuvered into view, careful not to bump her with his anti-gravs, wearing his Santa hat and Nina’s Christmas gift from last year. It started to snow. *** Max finished the last repair. She could not see the front half of the ship through the blinding white that blanketed everything. She paused, tilting her head. Engine or thunder? Max released the lock on the swing and dropped to the ground, pulling the swing free of the ship. “Peepers, in the ship. I don’t like that sound.” Not an engine. Local wildlife. Shoehorn deer the size of a small spaceship. The ground vibrated, and the ship rocked. There must be thousands. What made animals that big run? Hunters or a bigger animal? *** Max tried to start the engine. Nothing. She jerked her chair strap loose, sprinting to the ship’s thermal systems access hatch. The ship was stone cold. The ship jolted. The spin catapulted Max into the open hatch before she could climb down. Scrambling to her feet, she flipped open the thermal panel. No lights. Peepers hooted above her. “Ship’s frozen. Thermo unit didn’t kick on.” Maybe it was fortunate the tilting ship slammed her into the vapor filtration unit, which hiccupped. “I know what’s wrong, Peepers. Cockpit, now. Be ready to start the ship.” She pushed her unease aside as she crawled into the tiny, snug vent space, inching forward to the outside cluster spidering in multiple directions all through the ship. She popped the access door to the exterior system. It was packed with snow. The sensors had not activated. 
She melted a path for her arm along the wall to reach the manual expulsion lever. It resisted. The sudden tumble of the ship knocked her breath out, but the snug compartment held her in place. Good fortune smiled again, painfully. The extra inertia helped move the lever. Multiple successive loud pops sounded throughout the ship, followed by gurgling gushes. It worked. “Start the ship, Peepers.” A thunderous crack reverberated through the entire ship as she crawled out of the main hatch. Gravity fell away before the sensors kicked on, yanking her down hard. The ship dropped and then buoyed, slowing the downward descent. Peepers trilled frantically in the cockpit as he worked to control the ship. Max slipped into the pilot’s seat. They continued to sink below the ice surface into the planet’s oceans. *** Once upon a time, Titan Moon was volcanic. There was a whole system of underground caverns and tunnels that now lay empty and cold. Max activated the HoloMap. She pointed to the dropping pin-light. “We’re here.” She drew a line with her finger to a spot approximately 50 kilometers from that light. “That’s the planet’s underground cavern system. And that… That is the extinct volcano. We’re heading there.” Peepers chirped an agreement, which gained three octaves as Max made a sudden hard right, shaving the protruding ice wall extending from the surface to avoid a collision. They dove deeper. *** Max groaned. An underwater colony sat at the volcano’s entry point, hoisting the pirate insignia. Marauders. The scout ship spotted them. “Why do problems come in threes?” Peepers’ sing-song lecture might have been amusing in different circumstances. He failed to understand that rhetorical questions were not meant to be answered. Max ignored him. “You have the phase cannon. Don’t fire unless they fire at us. The only way out is through the volcano.” Once inside the belly of the volcano, she made a hard vertical climb with the scout ship on her six. 
She thought she would shake ’em by veering through a narrow slot in one of the walls. She didn’t. Diverting into a side cave, she activated the holo-map, calculating an alternate route out of the maze. Course corrected… Or not. Heart pounding, Max’s grip tightened on the control stick as her breath hitched. “Blast us an exit point, Peepers. Now!” They shot back into the volcano belly, heading up. *** The Gypsy burst into the open air of the Titan Moon. They carried toys, clothes, food, and medicine. She was making this delivery. The Hollow Mountains, where the settlers made their refuge, loomed before them. As the Titan Guardian Ship rose up from the ground, the small scout ship cut out. Max landed on the ship launch pad floating separate from the mountain, as directed. A tractor beam pulled them inside. *** The Christmas Eve festivities were well underway despite the lack of supplies. The huge main hall tree was covered in ice preserved by a force field and decorated with food ornaments, candles, and stick figures. Giant deer roasted over open fires housed in lava-rock venting systems that carried the smoke outside. A water distillery had been rigged to use the ice from outside to deliver a filtered and continuous water supply inside the mountain. Every level was filled with beautiful, surprising wonders, homes, and unusual wares. Families gathered in song and story. The main event was the Christmas tale of the Ghost Pirate. Max was riveted by the tale, different from yet like the Christmas stories of her childhood. The Pirate had replaced the Jolly Man. Later Peepers played the clown, acting out parts of the story with the children to make them squeal and then burst into giggles. At midnight everyone moved outside to watch the ribbons of the planet’s natural energy dance across the sky, lighting up the nightly meteor shower. A wonder unique to the moon. The snow started to fall. 
It provided the perfect frame, adding to the illusion of a giant yawning skull descending from the sky, complete with pirate hat. The black ship landed, eerily quiet for a space vessel, its ghostly motif covering the entire hull like a silent entity from beyond. Max finally understood. The Ghost Pirate did not come to this planet as a thief in the night; instead he was welcomed, delivering bountiful loot to a people in need. Originally appeared in Katharina Gerlach’s 2018 Advent Calendar. Sign up at this link for the current year’s 24-story countdown until Christmas, starting Dec. 1st. Bonus eBook of all stories on the 24th, plus daily bonuses with each story.
https://medium.com/collaborative-chronicles/a-marauder-christmas-f26bf9f2c6a2
['Juneta Key']
2020-07-21 18:52:13.564000+00:00
['Science Fiction', 'Speculative Fiction', 'Creativity', 'Fiction', 'Short Story']
What Even is Unity?: The Possibilities for Biden’s Vision to Overcome Trump’s Division
President-elect Joe Biden campaigned aspirationally on a vision of uniting a country many see as severely, if not hopelessly, divided. After all, while Biden amassed over 80 million votes, the most ever tallied by any candidate in a U.S. presidential election, Trump hauled in the second-most votes ever, finding the support of over 70 million American voters. I don’t think I need to spend a lot of time here elaborating on the many ways the soon-to-be former racist and sexist-in-chief fomented divisions and exacerbated the fault lines in U.S. society and culture. So how can we even speak of “unity” when the divisions seem to cut so deeply and venomously? And what does “unity” even mean? Let’s start here. Simply being on the same page as to what constitutes reality and truth would be a start. If we could agree, for example, that climate change is a real threat to life as we know it or that COVID-19 is not a hoax, that would be huge; it would be an important and by no means simple kind of unity. It wouldn’t mean that we would be united in agreement about the best public health agenda, on energy policy, on taxation to support public policy agendas, and so forth. But being on the same page in terms of basic reality would be an enormous advance for the nation. A common understanding of reality provides a foundational unity from which to even have conversations about policy approaches to addressing challenges that, if not shared by all, are shared by a majority of Americans. Trump’s political strategy, you might have noticed, was to steer clear of, if not completely obscure and distort, policy discussions. He did not even bring a policy platform to the Republican National Convention for party members to affirm or debate. So, one measure of Biden’s success in unifying the nation will be the extent to which he can shift Americans’ foci to matters of policy, not personality. 
Again, drawing Americans into this conversation would be no easy feat, but is it a possibility? Let’s take a couple of issues, like health care and public education, to assess the possibilities and pitfalls for unifying Americans in a policy debate rooted in a firm understanding of our shared reality. Recall that after Trump emerged victorious in 2016, many of his voters suddenly found themselves terrified that he would actually do what he promised, which was to repeal Obamacare. At the time, Sarah Kliff and Byrd Pinkerton, reporting for Vox, visited Whitley County in Kentucky, where the uninsured rate had declined by 60 percent because of the Affordable Care Act but where 82 percent of the voters supported Trump. One Trump voter they interviewed, Debbie Mills, a small business owner whose husband needed a liver transplant, represented many voters in the county living in fear and incredulity that Trump would follow through on his campaign pledge. She said at the time: “I don’t know what we’ll do if it does go away. I guess I thought that, you know, [Trump] would not do this. That they would not do this, would not take the insurance away. Knowing that it’s affecting so many people’s lives. I mean, what are you to do then if you cannot … purchase, cannot pay for the insurance?” Like many voters, for whatever reason, Mills did not take Trump seriously when it came to repealing Obamacare: “I guess we really didn’t think about that, that he was going to cancel that or change that or take it away,” she said. “I guess I always just thought that it would be there. I was thinking that once it was made into a law that it could not be changed.” Now fast-forward to the 2020 election. Many Trump voters seemed not to have learned the lesson. Maybe they didn’t pay attention to John McCain’s negative vote that saved Obamacare from a “skinny repeal” back in the summer of 2017. 
Early last October, The New York Times reported that many Trump supporters who cared deeply about affordable health care as a top voting issue believed Trump would protect coverage for those with pre-existing conditions, despite a policy record clearly demonstrating the opposite. One voter said: “I’ve heard from him that he would continue with pre-existing conditions so that people would not lose their health insurance. It’s made a big difference with me and my husband.” Here is a basis for unity, suggesting many Americans, whether Trump or Biden supporters, share an important policy position. Recent elections show as well that when it comes to public education, the possibility for political unity among voters across party lines is a real one. In Michigan, Democrats Darrin Camilleri in 2016 and Padma Kuppa and Matt Koleszar in 2018 flipped Republican-held state representative seats in their respective districts by foregrounding the erosion of public schools in those districts due to a gross underfunding caused in part by Betsy DeVos’ long-standing charter school movement in the state. Also in 2018, Kansas voters elected Democrats Laura Kelly as Governor and Sharice Davids to the House of Representatives; both ran on support for public education after Sam Brownback’s cuts to education were so egregious that they were deemed unconstitutional by the state’s supreme court. In November 2019, Democrat Andy Beshear defeated always-Trumper incumbent Governor Matt Bevin largely, by many accounts, because of his support for teachers and public education, while Bevin ran on a platform that refused to increase education funding. And these are just two issues. American families need and want health care; they want quality schools for their children; they want clean air and water, a safe environment, and a habitable world. Of course, there are gross and ugly divisions Trump has exacerbated. 
There are also broad and multiple points of unity Trump has obscured and the media has not focused on sharply and frequently enough. Health care, education, and a safe environment don’t grab attention the way Trump’s racism, sexual misconduct, and general hate do. But Americans may be more unified than we are led to believe when it comes to the challenges we face and the policies we need. Biden at least has a starting point and a path forward to achieve his pledge of unifying the nation.
https://medium.com/discourse/what-even-is-unity-the-possibilities-for-bidens-vision-to-overcome-trump-s-division-c0098cb4b21d
['Tim Libretti']
2020-11-30 18:36:42.412000+00:00
['Election 2020', 'Biden', 'Politics', 'Society', 'Culture']
We do NOT live in a virtual reality!
We love tricking ourselves with what we know. The theory that we all live in a virtual simulation is booming. The evidence seems to grow day by day, and YouTube offers a host of videos to spread the meme. But is it true? I think not, for one huge and, in my mind, compelling reason: we keep on projecting our (desired or feared) reality onto the universe. And in our descriptions, we use metaphors referencing our own understanding. We see what we expect to see. No wonder the flat-Earthers can believe so deeply in their ‘proof’ while ignoring counterarguments. No wonder racism is so hard to overcome. How does this work? I want to propose the “Self-referential Projection” principle. We tend to discover that the reality of the world fits the way we see it. Yes, there’s a mistake at the core of that logic. Say I’m a fearful person. Lo and behold, I discover everywhere in the news: “Life is scary and bad people are out to get us.” So we always tend to see the world as we personally experience it, mixing this with whatever helps us fit in and be accepted by our (sub)culture. Our personal system hungers for explanations of reality that fit our personal experience and mindset. We can even become deadly (mostly for others) serious about defending such concepts. “Become Christian or die!” God, the Clockmaker When you look through the lens of your time and culture, it’s easy to see how this works. Let’s take a grander and, I must admit, not very nuanced view over time. First, God made man the center of the universe. What a shock it was, then, to see that we revolved around the sun, not the sun around us. ‘We aren’t the center, shit, then what is true?’, people wondered. Then, when science had its lift-off around Newton’s period, God became the ‘clockmaker’ who created a perfectly set universe with precise rules. We can even wonder if Marx wrote his theory attacking this concept, by challenging the rules of the game. 
He made clear: the clock, or rather our man-made clock, is flawed. Survival of the Fittest Also, understanding reality doesn’t mean (seeking) control over it. Then, when Darwin’s findings became accepted, we got to live in a world of principles that seemed to favour the fittest. And whaddayeknow, exactly around that time the British colonization machine was showing the world who was the fittest and thus had the ‘highest civilization’; according to the British, that is. Model and culture fit each other smoothly. Chaos and Complexity Then came chaos and complexity theory. Born shortly before WWI, but going into overdrive after. Here we suddenly lived in an expanding universe where chaos ruled. Because we, still lingering in the ‘survival of the fittest’ model, wanted to understand the most essential laws of nature so we could use them to rule. Or, if shocked by the war, to find how order comes out of chaos. Nietzsche must have felt this time coming when he wrote ‘God is dead’, because this was the time the universe was all mathematical principles we just needed to understand. And it felt like a machine belief, with a lack of meaning. Evolving Consciousness This theory then evolved into an evolving-consciousness model. First, churches started to empty due to increased mobility. Many felt they didn’t need the Church as a safeguarding community anymore, and the age of individualism and consumerism started. Still, many needed some afterlife principle, and God wasn’t yet back in the picture. (Of course, many never left the God belief, but we are interested in those opening up the projections of their own time.) Then the discoveries of the spiritual explorers of the early 20th century, who studied Sufism, gurus, and such, slowly became more mainstream. Hence this calming theory of expanding consciousness. ‘See, we are growing all the time.’ Can you feel the era this fits best? Yes, baby-boom time. Right after WWII everything seemed to grow just fine and without limit. 
This also fits consumerism. ‘See, we are the center of the expanding universe.’ Law of Attraction Then we saw the hippie church, which went to India a lot, bring back reincarnation. What a relief for all those missing an afterlife when there’s no heaven. This mixed so nicely with the half-understood truths of before. We now found we live in an expanding universe that shapes itself to how we perceive it. Oh my, the Law of Attraction happened! This is a kind of hopeful ‘Self-referential Projection’. ‘I can order what I want, because I’m the evolving center (preferably in wealth, health and spiritual consumerism) and I understand the laws of nature.’ (Read: a watered-down version of these laws fitting my self-centric convictions.) Rise of Virtual-Reality Reality So recently the youth that was raised on video games grew up. They all saw The Matrix. They all had avatars they steered around virtual game universes. So, guess what they see? Exactly. We all live in a Matrix, a simulation. And from the old beliefs we keep: there’s an afterlife (‘I expect to return to my higher self who played me’). There are higher laws (‘See, quantum physics exists because we’re in a supercomputer’). There’s no need to be too responsible with nature (‘We’re in a game, so why should I stop playing to help the rest of the world?’). And in this last conviction we also see the self-centered consumer philosophy. Projecting our reality onto reality actually strengthens the Matrix, and doesn’t help us see the essence at all. We now live in a time of making theories bitesize pop culture. I discovered this idea when I read for a while about the latest theories in physics. Somehow they always end up in the speculative realm, which reads more like a novel than science. Here scientists project their ideas, based upon their convictions, into the world. 
And what happens? Like with the Law of Attraction, some ordinary people will pick up the speculative projection and make it bitesize and popular according to their convictions, hopes and cultural reality. Using this view you can see it everywhere. One day our dead hero, Jesus or Arthur, will return. Who believes that? What about endangered cultures threatened by others? These are the peoples longing for a supernatural answer, one that will help them overcome something they themselves have no power over. The Jews in their time dealt with the Romans and many neighbours. ‘Jesus, come back, please?’ The just-turned-Christian pre-English dreamt of Arthur during the Saxon and, not long after, the Viking invasions. Are we part of nature? It has been found that the myth of the hero returning from the dead is far more ancient than the Jesus story. It’s a trope that’s rediscovered over and over again when certain troubles happen. Our biggest trouble now is climate change. No wonder the Churches are talking about end times and, yes, there he is: ‘Jesus will be back soon.’ Gaining in popularity is also the indigenous belief that ‘we’re part of nature’. I think that’s a fact. And if not, it feels like the best realization to help us stop destroying our own environment. It’s probably the best paradigm: ‘we need to start fixing nature, by fixing ourselves’. So can we see beyond our biases? Can we see beyond our culture? It’s hard. But we must try. With a general decline in intellectual skill and critical-thinking education, we have more and more big decisions being taken by morons with a lack of true understanding. Sometimes they lie: ‘Climate change isn’t real! So, invest in oil.’ Sometimes they are too bloody ignorant: ‘Climate change isn’t real because my TV station, sponsored by big oil, says, “science sucks”.’ The Future of our Understanding of Reality But, hey, the Quantum Computer is around the corner, ready to change the playing field again. 
Once again it will seek to integrate all previous ideas, while mirroring cultures. I predict the ‘virtual reality’ belief will stay, but claim that we are both virtual reality and reality. It will focus more on maintaining both flux and balance in systems. Because we’ll see way more unrest and big shifts around the world, with no power able to control it. We’ll feel part of the machine rather than in control, as we lose control over our own next advances, when they start thinking for us, or for themselves. And finally people will predict the ‘end-game’ is on, because, man, it is shit complex and there’s little nature left to go silent in. Knowledge about nature isn’t being in nature, nor does it help to experience nature. It’s preparation at best. Sadly, when forests become abstractions, we actually train managers who don’t care about them. Hope and Warning So my guess is that the next iteration to mirror our society will show a lack of comforting higher meaning, and more surrender to higher principles ruling us. And this time, to get control back, more voices will demand we use these principles to shape our laws, in order to survive. I hope it will restrain the power of the corrupting corporate lobbies who, for profit, keep on destroying the environment and people, like the oil and arms lobbies. Remember that when those laws are written, they will not be the truth; they will be the truth of those who write them and their interests. And when those in power who still prefer their own interests over collective health get to write those rules, we know it will end up being a new way to suppress change, and it won’t give nature space to heal the planet. Better be awake when that happens. Where To Go From Here And for those who hope to find the absolute truth: sorry. I have no clue. But I do know the following: find ways to be aligned with your environment. Be a healthy part of it. Destroying more nature than can regrow because of your actions makes no sense, however slowly the damage is done. 
It’s not for nothing that lions kill to eat, and don’t seek to perfect their kill production and kill as many zebras as they can. Destroying other cultures makes no sense either. We need a healthy ecology of diversity in nature, in cultures and approaches. At heart we can know one thing: our ecosphere developed as a balanced culture of millions of interactions and creatures all interacting with each other. Can you respect this as a living community of diverse life forms, with an ongoing million-year-old dialogue? I think forests are communities where species intermingle and have centuries-old dialogues we can’t yet understand. Better to have respect for all we don’t know yet, rather than destroy or create watered-down versions out of arrogance. Yes we can, but we shouldn’t. Jesus said, “Forgive them, lord, for they don’t know what they are doing.” 2000 years later we still don’t know. Acting as if we do has been shown to end in robbery, plunder and destruction of our ecosphere. We should start listening to scientists, who know at least a little bit more, let alone to indigenous people living amidst nature, who know best. Not doing so is becoming a crime against life on this planet, even when the laws of the land seem to prove it is all legal. Our hearts know different. And that story, my friends, of the importance of keeping real nature high in esteem is indeed ancient, and for good reasons. We need to come back from the arrogant idea that we’re above nature. It is literally killing us. We need to reconnect to our softer, humbler, nourishing side. Can we learn the humility and care needed to be part of, and in service of, the living web we are part of? I dearly hope so.
https://medium.com/the-gentle-revolution/we-do-not-live-in-a-virtual-reality-55ec391854ba
['Floris Koot']
2020-02-13 22:08:39.834000+00:00
['Scientific Method', 'Ecology', 'Virtual Reality', 'Science', 'The Matrix']
Treating Most Common Data Uniformity Problems with Pandas
Date Uniformity One other very common problem is date uniformity. Different countries have different standard DateTime formats, and when you have data from multiple sources, you are going to have to deal with multiple date formats. In Python, there are 3 main DateTime formats you can work with: 25–12–2019 -> (%d-%m-%Y) -> day, month, year; October 31, 2020 -> (%c) -> literal dates; 12–25–2019 -> (%m-%d-%Y) -> month, day, year. When you load data into your environment with pandas , it always imports dates as the object data type. In the Setup section, we imported sample data containing people's full names and birthdays: people.sample(5) The birthday column represents the dates in YYYY-MM-DD format, but as strings. Converting date columns to datetime has a number of advantages for analysis. We will use the pd.to_datetime() function to convert the column into datetime : We got an error! If we look closer, the error says that month must be in 1, 2, …, 12. That means, somewhere in the data, there are inconsistent values that are preventing the function from running. pd.to_datetime has a good workaround for such cases: This time, we did not get any errors. If you set infer_datetime_format to True , pandas will automatically identify the date format based on the first non-NaN element and convert the rest to that format. If there are any values that do not fit the conversion, the errors parameter decides what to do with them. If set to coerce , pandas will put NaT s, which are missing values for dates. Based on our assumptions, the above line of code should have spotted date inconsistencies and put NaT s for the inconsistent dates: There are a few methods to handle such inconsistencies: convert to NA and treat accordingly; infer the format by looking at the data source and how it was collected; infer the format by looking at other dates. If you have many inconsistencies, converting them to NaN is not always an option. 
You should try to come up with custom solutions by looking at their patterns and inferring their formats. After you are done with error-handling and conversions, it is best practice to convert date columns to the global DateTime standard:
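To make the error-handling concrete, here is a minimal, self-contained sketch of the conversion described above. The frame and its values are hypothetical stand-ins for the people dataset, and the invalid entry is chosen so it fits no date format:

```python
import pandas as pd

# Hypothetical stand-in for the `people` dataset: two consistent
# YYYY-MM-DD dates and one entry that fits no date format at all.
people = pd.DataFrame({
    "full_name": ["Ana Ruiz", "Ben Okafor", "Cai Lin"],
    "birthday": ["1990-05-17", "1985-11-02", "2019-25-45"],
})

# errors="coerce" turns values that do not fit the inferred format into
# NaT (the missing-value marker for dates) instead of raising an error.
# (infer_datetime_format=True, used in the article, is the default
# behavior in pandas >= 2.0, where the parameter itself is deprecated.)
people["birthday"] = pd.to_datetime(people["birthday"], errors="coerce")

print(people["birthday"].isna().sum())  # 1 -> the inconsistent date became NaT
```

From here, the NaT rows can be inspected and treated with one of the strategies listed above.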
https://towardsdatascience.com/data-uniformity-in-data-science-9bec114fbfae
['Bex T.']
2020-11-14 17:09:15.792000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Programming', 'Software Development']
Techniques for Getting Your Website Ranked on Google by Submitting It to Web Directories the Easy Way
We are a full-scale digital agency based in Chiang Mai, Thailand, specializing in software development, design, and digital marketing consulting.
https://medium.com/artisan-digital-agency/%E0%B9%80%E0%B8%97%E0%B8%84%E0%B8%99%E0%B8%B4%E0%B8%84%E0%B8%81%E0%B8%B2%E0%B8%A3%E0%B8%97%E0%B8%B3%E0%B9%80%E0%B8%A7%E0%B9%87%E0%B8%9A%E0%B9%84%E0%B8%8B%E0%B8%95%E0%B9%8C%E0%B9%83%E0%B8%AB%E0%B9%89%E0%B8%95%E0%B8%B4%E0%B8%94%E0%B8%AD%E0%B8%B1%E0%B8%99%E0%B8%94%E0%B8%B1%E0%B8%9A-google-%E0%B8%94%E0%B9%89%E0%B8%A7%E0%B8%A2%E0%B8%81%E0%B8%B2%E0%B8%A3%E0%B8%AA%E0%B9%88%E0%B8%87%E0%B9%80%E0%B8%A7%E0%B9%87%E0%B8%9A%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B9%80%E0%B8%A3%E0%B8%B2%E0%B9%80%E0%B8%82%E0%B9%89%E0%B8%B2%E0%B8%AA%E0%B8%B9%E0%B9%88%E0%B8%AA%E0%B8%B2%E0%B8%A3%E0%B8%9A%E0%B8%B1%E0%B8%8D%E0%B9%80%E0%B8%A7%E0%B9%87%E0%B8%9A%E0%B9%84%E0%B8%8B%E0%B8%95%E0%B9%8C%E0%B8%94%E0%B9%89%E0%B8%A7%E0%B8%A2%E0%B8%A7%E0%B8%B4%E0%B8%98%E0%B8%B5%E0%B8%87%E0%B9%88%E0%B8%B2%E0%B8%A2-%E0%B9%86-90b8e041af73
[]
2018-03-19 04:51:40.128000+00:00
['SEO', 'Developer', 'Google', 'Artisan', 'How To']
The Biggest Lie In Open Source
It Can’t Be a Source of Income Open source software is free. Therefore, its maintainers and creators can’t make a living out of it. Wrong. At first glance, open source software is free for its users. But there is a lot more to understand before saying it can’t be a valid source of income. Like with any digital product, making money is all about your business model and the marketing strategy behind it. If you’re interested in making money from open source projects, here are some ideas for you to consider. Selling professional services This is the most common one. As I’ve said before, people tend to assume that because you’ve built a project and published it to the world, you need to support it 24 hours a day. That’s not only untrue, but it’s definitely a whole different area of work. So why not charge for it? In fact, why not charge for training as well and even provide support for companies trying to use your free product? Those are what we call professional services (services meant for companies using your product). There are big examples of open source projects doing this exact thing. For example, RedHat, IBM, Hortonworks (around Apache Hadoop), and Percona (for their open-source database). Selling related content How many books have you seen (or even read) about React or PHP? The books weren’t free, were they? If you managed to build an open source project that people like and use, then you can make money by giving those people products they can use to learn how to use it. This is very similar to the professional services model, yet that one implies you’re personally involved (thus allowing you to charge higher rates). However, with products, you can build cheap alternatives that are accessible to non-company users (i.e. developers trying to use your code). Even if you’re not the project’s author, you can benefit from their success by doing this exact same thing. You’re building products around an open source project (just not your own). 
We’re talking about writing books about it, creating video courses for platforms such as Udemy, or even writing sponsored blog posts about these OS projects. Why not? Sometimes, authors will be willing to pay you money to write about their projects. Donations You can make money from people donating to your cause. Don’t be afraid to ask for money. As long as it’s done tastefully, it’s definitely a valid income option. And if you’ve built a project that a big community is using, you’ll be surprised at the results. Looking at projects such as Git, you’ll see that they do receive donations from anyone interested in their cause. It’s all about the reach your project has and the community built behind it. If it’s big enough, then there is probably a way to make money out of it. There are many other ways you can go about building income from your open source work. It’s just a matter of getting creative.
https://medium.com/better-programming/the-biggest-lie-in-open-source-de38f71aa88c
['Fernando Doglio']
2020-12-22 17:15:13.480000+00:00
['Open Source', 'Software Development', 'Technology', 'Software Engineering', 'Programming']
US Pharmaceutical Companies on an Interactive Map: Categorized by Ranking and the use of Artificial Intelligence(AI)
Introduction: Last week, I had an opportunity to attend the conference hosted by the American Association of Bangladeshi Pharmaceutical Scientists (AABPS) in Bethesda, Maryland. At the conference, I talked to people from the pharmaceutical industry and the FDA. During those conversations, I found that most of the people I met work in pharma companies located on the east coast of the country, and surprisingly there are plenty of companies in the country I had never heard of. Reasonably, I asked myself: “Are most of the pharma companies located on the east coast? How are they distributed across the states? And why haven’t I heard those companies’ names?” A quick search for “us pharma companies” on Google gave me a few links. Among them, two websites ( drugs.com and rxlist.com) were helpful, and drugs.com contains more information. So, I started to save information about all of the companies, including their addresses, from drugs.com using Python. Data: Here is a snapshot of the code that I wrote, but I will not share all of the code here to keep the article short. This code told me that there are 404 companies listed on the website. So, I started to collect the companies’ addresses from the links. About 80 links were grayed out, and I found that those companies have either been acquired by or are part of big companies. That left me around 320 links. On the first attempt, I saved information on 262 companies, including name, full address, website, and career page link. Another attempt gave me information (name and address) for only a few more companies. Then, I added some data manually, ending up with information on a total of 308 companies with their physical addresses in the country. First Map: I took these 308 physical addresses and found their latitudes and longitudes using the Python library ‘googlegeocoder.’ Then, I created my first map using the Python ‘folium’ library. 
Now I wondered, how could one locate the top pharmaceutical companies on the map? More Data: Next, I searched on Google for top pharmaceutical companies, and it turned out pharmexec.com is a good source for the top 50 pharma companies. Since they were listed as images with the respective companies’ logos, I could not use my coding skills. But I still wanted to answer the question asked earlier. So, I had to collect the data manually while I was blocking my western blot membrane with non-fat milk in the lab. During my research on pharma companies, I found that a significant number of companies use artificial intelligence (AI) for drug discovery, which was reported in an article titled “33 Pharma Companies Using Artificial Intelligence in Drug Discovery”. That’s why I was curious to look at the map to find which companies have been using artificial intelligence. Here is what the data look like: Final Map: With this data, I was able to create my final map. Click the link below to see it. Pharma_MAP or copy and paste the following link in your browser https://bit.ly/2NWBT7u You can zoom in and out with the top-left buttons. You can also drag and move the page with your cursor. How the Color on Map Works? If you click on a color icon, it will show the company’s name and some other information, as in the following image. If a company is in the top 50 and uses AI, it is colored red; top 50 but no AI, blue; not top 50 but uses AI, purple; not top 50 and no AI, green. Now, I can imagine what you are thinking. You want the resources (code, data) so you can play with them! All code and data are available on my github_page Thanks for reading. Please don’t hesitate to comment below the article.
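The four-way color scheme described above is simple enough to capture in a small helper. This is a hypothetical sketch (the function name and flags are mine, not the author's actual code):

```python
def marker_color(is_top50: bool, uses_ai: bool) -> str:
    """Color scheme from the article: top 50 + AI -> red,
    top 50 only -> blue, AI only -> purple, neither -> green."""
    if is_top50 and uses_ai:
        return "red"
    if is_top50:
        return "blue"
    if uses_ai:
        return "purple"
    return "green"

# Each company's marker on the folium map would then use this color, e.g.:
# folium.Marker([lat, lon], icon=folium.Icon(color=marker_color(top, ai)))
print(marker_color(True, True))   # red
print(marker_color(False, True))  # purple
```

Keeping the mapping in one function makes the legend easy to change without touching the marker-placement loop.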
https://towardsdatascience.com/us-pharmaceutical-companies-on-an-interactive-map-categorized-by-ranking-and-the-use-of-b22a3bc98945
['Yousuf Ali']
2019-08-29 22:45:07.539000+00:00
['Artificial Intelligence', 'Pharmaceutical', 'Data Science', 'Drug Discovery', 'Data Visualization']
Index Your WordPress Website In Search Engines
Index Your WordPress Website In Search Engines Visualmodo · May 4 · 4 min read When you finally create a new WordPress website, you might be expecting visitors to start coming eagerly. In reality, before people start visiting your site, search engines need to find, index, and rank it. In this article, I share some of the best tips and recommendations to index your new WordPress website in search engines. Index Your New WordPress Website In Search Engines Faster Do you want more organic search traffic to your site? I’m willing to bet the answer is yes — we all do! Organic search traffic is critical for growing your website and business. Some research claims around 33% of your site’s traffic can be attributed to organic search. But the stats don’t matter much if your site doesn’t show up in the search results at all. How do you get your new site or blog indexed by Google, Bing, and other search engines? Well, you’ve got two choices. You can take the “tortoise” approach: just sit back and wait for it to happen naturally, but this can take weeks or months. Trust me, I’ve been there before, not fun. Or you can make it happen now, giving you more time and energy to put towards increasing your conversion rate, improving your social presence, and writing and promoting great, useful content. In a way, we are completely at Google’s mercy when it comes to being found on the web. No index, no organic traffic. However, the good news is that there are plenty of things we can do to nudge Google into giving us a spot in their SERPs, which we will talk about in the following. Content, content, and content The thing about being indexed by Google is, we don’t just want them to be aware of our site, but aware of it in a good way. If your site is empty or, worse, full of crappy content, it might get indexed but it won’t get anywhere near the front row of the SERPs. That’s almost as bad as not being indexed at all. 
It’s no secret that Google cares about the relevancy and quality of your content. For that reason, when you set up your site, focus on high-quality, useful, original content. Naturally, that also means staying away from duplicate and/or scraped content. Disable “Discourage Search Engines” In WordPress During the development phase, usually, the last thing we want is to be indexed by search engines. In fact, we want to keep Google and Co as far away from our site as possible. Otherwise, Google will form an opinion about our site based on incomplete and low-quality content. One of the most common mistakes is to leave “discourage search engines from indexing this site” active in the back end of WordPress. That’s basically a death sentence for organic traffic on your site. So, in order to make sure you get indexed (or if you are having problems appearing on Google), definitely have a look at this setting at the bottom of Settings > Reading to make sure it is unchecked. Finally, save changes. Good Hosting Provider To Index Your New WordPress Website In Search Engines One of the first potential barriers to getting indexed by Google is the hardware your site runs on. Slow server speed, downtime, and disconnects can cause search spiders to abandon their cause. While not very common, it is a possibility. So, since in hosting you get what you pay for, investing in a quality host with good hardware and excellent availability is always worth it. Ping Services Ping services offer another effective way to notify search engines of your existence. Pingomatic is one such tool that you can use to notify multiple search engines at once, and another you can try is Pingler. WordPress does have a default feature to ping services, but giving one of these other options a shot can’t hurt. 
Comment On Other Sites To Index Your New WordPress Website In Search Engines Since one of the sites in my example was a premium job with some complications within a given time-frame, I added almost 30+ comments on other popular WordPress websites. I did not check for dofollow or nofollow attributes, but I did comment on popular, high-traffic blogs. This way, search engine bots will follow the comment links and land on my blog.
https://medium.com/visualmodo/index-your-wordpress-website-in-search-engines-27850be9495b
[]
2020-05-04 20:25:42.244000+00:00
['SEO', 'Index', 'WordPress', 'Guid', 'Google']
The Latest: BBC Studios invests in Pocket Casts
The Latest: BBC Studios invests in Pocket Casts Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews. THE NEWS Last week, Australian podcasting platform Pocket Casts received a new investment from BBC Studios that will be used to cover operational costs. The podcast player also received new funding from three of its existing owners: NPR, WNYC Studios, and WBEZ Chicago. SO WHAT? The podcasting platform space is crowded and competitive. Whereas Apple once dominated the space with over 80% of the market share, its market share dropped to 60% in 2019 according to Libsyn. This drop was driven by 1) audio platforms moving into the podcast space and 2) an increase in podcast player apps like Pocket Casts. (The barriers to entry to building a podcast platform have always been low, as most shows are distributed openly via RSS.) Today, listeners are noting differences in podcast players, such as unique features, availability on their devices, exclusive content, and familiarity with their parent platform. That said, these differentiators are not mutually exclusive — a podcast player like Spotify can offer both exclusive content and rely on user familiarity with its platform to attract listeners. Feature-focused podcast players Pocket Casts is a feature-focused platform. Users can add playback effects, trim silence, and set filters to discover new shows — among many other features. CastBox, another feature-focused player, offers users the ability to conduct in-audio searches. Listeners can search for relevant content without having to rely on title descriptions or channel tags. While these kinds of features may be unnecessary for new podcast listeners, feature-heavy podcast players are likely more appealing to veteran podcast listeners who may have more specific needs. Not all features are limited to the listening experience itself and may center on revenue models or social sharing. 
Some podcast players such as RadioPublic are attracting listeners with in-app tipping functions, allowing users to support their favorite shows and hosts. Other platforms have community features: Castbox, for example, launched a social feed for listeners and creators on its app last spring. Meanwhile, apps like Breaker and Swoot allow their users to keep up with what their friends are listening to. Device-based podcast players For many first-time podcast listeners, device-based podcast players such as Apple Podcasts and Google Podcasts are a more convenient option. When Apple launched its podcast app in 2012, smartphones quickly replaced desktop computers as the most common way to consume podcasts. In 2018, Google launched its standalone podcast app, which now comes pre-installed on every Android phone. Content-specific podcast players Some podcasting platforms, such as Luminary and Stitcher, differentiate themselves with exclusive content offerings. In exchange for a subscription fee, users can access premium podcasts from the platform’s network. Other podcasting platforms only offer podcasts within specific verticals: Leela Kids and Pinna, for example, are podcast apps designed exclusively for kids. Podcast players on existing platforms Other podcast players have been built on existing platforms such as Pandora and Audible and thus are positioned to draw listeners from the platform’s existing user base. Spotify is perhaps the most prominent example of this, having increased its share of listening to nearly 12% according to Libsyn. Since launching podcasts in 2015, the company has invested heavily in podcasting as part of a strategy to become the “World’s №1 audio platform.” As is the case with device-based podcast players, podcast players that are born out of existing platforms are a convenient option for users who are new to podcasts but familiar with the platform. 
Broadcast podcast players such as BBC Sounds, CBC Listen, and the upcoming new NPR app are also leveraging their companies’ historically radio-focused platforms to bring listeners to their podcast offerings. Designed to offer both regular podcasts as well as live radio stations, these apps seek to draw podcast listeners from existing radio audiences. LOOK FOR Which platforms will capture the most podcast listening growth. As mainstream platforms such as Apple podcasts and Spotify popularize podcast listening, listeners that started on those platforms may find themselves venturing towards other platforms in pursuit of advanced features. While this places independent podcast players in a unique position, larger platforms like Spotify and Pandora — which are still relatively new to podcasting but perhaps best positioned to draw net new listeners to podcasting — may begin taking cues from smaller podcast players to retain listeners onto their platforms.
https://medium.com/the-idea/the-latest-bbc-studios-invests-in-pocket-casts-ec9bf2efc515
['Tesnim Zekeria']
2020-03-10 17:38:12.691000+00:00
['Journalism', 'Media', 'Podcast', 'Audio', 'The Latest']
Exploring Hulu Data in Python
We won’t need the ‘Unnamed: 0’ column, so we can delete it with the ‘del’ keyword: del df['Unnamed: 0'] Now, let’s print the first five rows of data using the ‘head()’ method: print(df.head()) If we take a look at the Netflix, Hulu, Prime Video, and Disney Plus columns, we see that they contain 1s and 0s. A one corresponds to that movie being available to stream on the respective platform, and a zero corresponds to it not being available on said platform. We want to explore Hulu data specifically, so let’s filter our data frame such that the value in the ‘Hulu’ column is equal to one: df_hulu = df[df['Hulu'] == 1] Let’s print the lengths of the original data frame and our filtered data frame: print("Total Length: ", len(df)) print("Hulu Length: ", len(df_hulu)) Let’s print the first five rows of our new data frame: print(df_hulu.head()) Our data contains several categorical columns. Let’s define a function that takes as input a data frame, column name, and limit. When called, it prints a dictionary of categorical values and how frequently they appear: def return_counter(data_frame, column_name, limit): from collections import Counter print(dict(Counter(data_frame[column_name].values).most_common(limit))) Let’s apply our function to the ‘Language’ column and limit our results to the five most common values: return_counter(df_hulu, 'Language', 5) As we can see, we have 607 English, 27 English/Spanish, 19 Japanese, 18 English/French and 15 missing language values in the Hulu data. Let’s apply this method to the ‘Genres’ column: return_counter(df_hulu, 'Genres', 5) Now, let’s look at the movies from the most common genre, ‘Documentary’:
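Put together, the filtering step and the return_counter helper look like this. The four-row frame is a hypothetical miniature of the streaming dataset, and the helper is adapted slightly to return the dictionary rather than print it:

```python
from collections import Counter

import pandas as pd

def return_counter(data_frame, column_name, limit):
    # The `limit` most common values in a categorical column, as a dict.
    return dict(Counter(data_frame[column_name].values).most_common(limit))

# Hypothetical miniature of the streaming dataset (1 = available on Hulu).
df = pd.DataFrame({
    "Title": ["A", "B", "C", "D"],
    "Hulu": [1, 0, 1, 1],
    "Language": ["English", "English", "Japanese", "English"],
})

df_hulu = df[df["Hulu"] == 1]  # keep only titles available on Hulu
print(return_counter(df_hulu, "Language", 2))  # {'English': 2, 'Japanese': 1}
```

The same call with the full dataset and the 'Genres' column reproduces the counts discussed above.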
https://towardsdatascience.com/exploring-hulu-data-in-python-1eb6fa90a886
['Sadrach Pierre']
2020-06-08 21:59:08.221000+00:00
['Python', 'Software Development', 'Technology', 'Data Science', 'Programming']
Build and Dockerize a Blogging API With Deno, Oak, and MySQL
Introduction to Oak Oak is a middleware framework for Deno inspired by the popular Koa middleware framework. Before we begin working on the project, an understanding of a few concepts like middleware and routing is crucial. Middleware Oak middleware are functions that execute during the lifecycle of a request to the server. All middleware in Oak has access to a context object. To see middleware in action, create a file app.ts somewhere on your computer and put the following code in it: The code can be run by executing the following command in your terminal: deno run --allow-net app.ts Deno uses URLs for importing modules, and third-party modules can be pulled in from https://deno.land/x , a hosting service for ES modules. Once the Application class is imported from Oak, an app instance can be created. The app.use() function is used for registering middleware. The middleware function is passed as a parameter to the app.use() call. The context object is usually denoted by ctx and contains things like the request and response objects. If you want to learn about the context in more detail, follow this link. A middleware can either end a request by returning a response or pass it on using the next method. Middleware is processed as a stack. A more complex example, with middleware for logging incoming requests along with response time, can be created with the following code: Two new middleware have been added for logging the incoming request and the time taken to respond in the console. The logger middleware just logs the value of the X-Response-Time header along with the request method and URL, and the timer middleware sets the value of the X-Response-Time header used in our previous middleware. All three middleware in this program will be stacked on top of one another and executed in the order we register them in the code. First the logger will run, then the timer middleware, and last the hello world middleware. 
You can test out the code again by restarting the server. Middleware can be exported and imported as ES modules. Create a directory called middleware in the root of your project and create two files named logger.ts and timer.ts in there. Now extract the code for the logger middleware and put it inside the logger.ts file: export default async (ctx: any, next: any) => { await next(); const rt = ctx.response.headers.get("X-Response-Time"); console.log(`${ctx.request.method} ${ctx.request.url} - ${rt}`); } Now for timer.ts : export default async (ctx: any, next: any) => { const start = Date.now(); await next(); const ms = Date.now() - start; ctx.response.headers.set("X-Response-Time", `${ms}ms`); } Now, import these two exported functions inside app.ts and register them using the app.use() function: That’s better. This directory will now be used as our project root. As we go forward, more complex middleware will be added. Routing In Oak, the Router class can be used for producing middleware to enable routing based on the path-name of the request. So far the application responds with “hello world” no matter what endpoint we hit — that’s not what we want. So, update the code for the hello world middleware inside app.ts to look like this: To use the Router class, an instance of it has to be created. Routes with GET , POST , PUT , PATCH , DELETE methods can be created by calling the corresponding function on the Router instance. You can learn more about this class by following this link. Each route registration takes the path-name as a string and a function as middleware. Just like the app.use() call, all route middleware has access to the context. The status code for the response can be set using the ctx.response.status property. A status is a simple number; the Status class provides properties containing status codes for various situations, so Status.OK is 200, Status.NotFound is 404 — you get the idea. 
Type and contents of the response can be set using ctx.response.type and ctx.response.body properties. Routes can be registered in the app instance using app.use() call passing router.routes() as a parameter, where router is the instance of Router class. I’m using JSend — a specification for a simple, no-frills, JSON based format for application-level communication but you’re free to use whatever you like.
https://medium.com/better-programming/build-and-dockerize-a-blogging-api-with-deno-oak-and-mysql-f2e4ecafaf6c
['Farhan Hasin Chowdhury']
2020-06-26 05:52:25.734000+00:00
['Deno', 'JavaScript', 'Docker', 'Programming', 'Containers']
Overview of Issuu
How to use Issuu To get started on Issuu, you will need to create an account and sign up for a free or paid membership. The free option allows you to do the most important thing — upload your publications — but there are some restrictions. The paid versions offer more features (obviously) that can help you get more out of the platform and more marketing mileage out of your publications. We will talk more about the free and paid plans later. Publishing content on Issuu is rather straightforward and simple. All you need to do is create a print-ready PDF version of your document and upload it onto the Issuu platform. The website automatically arranges the content into a readable, page-flipping format that can be viewed on both desktop and mobile devices. You also have the option of customizing the cover of your publication, and you can add audio as well. Having published your document on Issuu, you have the option of sharing it elsewhere. For instance, you can embed the Issuu publication on your website. Issuu also optimizes your content for social media, allowing you to use your publication for Instagram stories, Facebook posts, and even Pinterest-ready animated media. Now that you have an idea of what Issuu is and how to use it, it’s time to talk about the pros and cons of using the platform.
https://medium.com/digital-marketing-lab/overview-of-issuu-6a74c28a383a
['Casey Botticello']
2020-10-21 17:13:00.290000+00:00
['Technology', 'Freelancing', 'Social Media', 'Business', 'Entrepreneurship']
How To Provision Infrastructure on Azure With Terraform
How To Provision Infrastructure on Azure With Terraform A Beginner's Guide with an example project Terraform is an infrastructure-as-code tool that makes it easy to provision infrastructure on any cloud or on-premise. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. In this post, we will see how to provision infrastructure on Azure. Get Started With Terraform Prerequisites Example Project What is Backend Configuring Backend Provisioning Infrastructure Inputs and Outputs Destroying Infrastructure Summary Conclusion Get Started With Terraform The first thing we need to do is to get familiar with Terraform. If you are new to Terraform, check the article below on how to get started. It has all the details on how to install Terraform, the Terraform workflow, example projects, etc.
https://medium.com/bb-tutorials-and-thoughts/how-to-provision-infrastructure-on-azure-with-terraform-4065430a3d72
['Bhargav Bachina']
2020-11-06 01:59:40.617000+00:00
['Terraform', 'DevOps', 'Software Development', 'Azure', 'Cloud Computing']
Automate Exploratory Data Analysis with Pandas Profiling
According to Wikipedia, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. So, EDA is the process of understanding the underlying data, the distribution of variables, and their correlations. This makes EDA the very first step in any data science process, before building any statistical model. If you’re not aware of how EDA is performed, here are a few examples you can refer to. https://towardsdatascience.com/exploratory-data-analysis-eda-a-practical-guide-and-template-for-structured-data-abfbf3ee3bd9 https://www.kaggle.com/ekami66/detailed-exploratory-data-analysis-with-python But EDA is often a very time-consuming task which requires you to build multiple visuals to check distributions and interactions between variables. There are a few functions like info() and describe() which do help to an extent, but still, you’ll have to perform a lot of manual steps even after using these functions. This is where a really cool library called Pandas Profiling comes in handy. This library automatically generates detailed reports explaining the data in just one line of code! Here’s a quick look at what the reports look like. Variables overview:
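For contrast with the automated report, here is the manual baseline mentioned above: describe() on a toy frame (the data is made up for illustration). The commented lines show the one-liner the article refers to; they assume the pandas-profiling package is installed:

```python
import numpy as np
import pandas as pd

# Made-up toy dataset for illustration.
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 38],
    "income": [40_000, 55_000, 82_000, np.nan, 61_000],
})

# The manual route: per-column summary statistics.
summary = df.describe()
print(summary.loc["mean", "age"])      # 38.6
print(summary.loc["count", "income"])  # 4.0 -> one missing value detected

# The automated route the article describes (requires pandas-profiling):
# from pandas_profiling import ProfileReport
# ProfileReport(df).to_file("report.html")
```

describe() covers counts, means, and quantiles; the profiling report layers distributions, correlations, and warnings on top of that automatically.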
https://towardsdatascience.com/automate-exploratory-data-analysis-with-pandas-profiling-90c1842d838f
['Pranav Kaushik']
2020-06-14 19:52:50.134000+00:00
['Statistics', 'Data Analytics', 'Python', 'Data Science', 'Data Visualization']
How a Hockey Player Started Canada’s Favorite Franchise
How a Hockey Player Started Canada’s Favorite Franchise The story of Tim Hortons Tim Hortons Coffee and Timbits (Photo by Conor Samuel on Unsplash) If you’ve been to Canada, you might have seen a Tim Hortons on every street corner or in every neighborhood. As of 2020, there are 3,802 Tim Hortons locations in Canada, meaning there’s a store for roughly every 9,000 Canadians. Founded in 1964, Tim Hortons has gained national popularity and is now an essential part of many Canadians’ lives. As monumental and big as the franchise may seem, the beginning of this coffee business was small and humble. It all started in 1964 when Tim Horton, a starting right wing for the Toronto Maple Leafs of the National Hockey League, opened the first-ever location in Hamilton, Ontario. Horton did so after a donut store manager by the name of Jim Charade convinced him to partner up and open a store. Being as gullible as he was, Horton invested and co-founded the brand with Charade. The First Ever Tim Horton’s (The Canadian Encyclopedia) The two business partners often had disagreements over the menu, and Charade ended up leaving in 1966. The brand expanded rapidly after Ron Joyce joined the business in 1965. By 1967, Tim Horton’s had opened multiple locations across Hamilton, and by 1968, it was a multimillion-dollar franchise. The death of Tim Horton Other than founding Tim Horton’s, Horton was notable for playing in the National Hockey League for more than 20 years. From 1972 to 1974, Horton played for the Buffalo Sabres while in his forties, all while running one of the fastest-growing businesses in Canada. However, on February 21, 1974, Horton unfortunately died. Tim Horton (Wikipedia) After a game with his former team, the Toronto Maple Leafs, Horton insisted on driving back to Buffalo instead of joining his team’s bus. He met with his partner Ron Joyce in Oakville, where he had a few drinks.
Despite Joyce’s discouragement, Horton took his car and drove toward Buffalo under the influence of alcohol. He drove past the speed limit and was chased by the police, whom he miraculously managed to evade. However, he soon lost control of his car on a curve on the highway and was found dead 40 meters from the site of the accident, with a bottle of vodka. The details of his death were not revealed until 2005, when it was disclosed that he had been under the influence of alcohol and a stimulant called Dexedrine. Tim Horton’s after Tim Horton After Horton’s death, Ron Joyce fully took over the business from Horton’s family, buying their shares for one million dollars. Joyce was very close to his deceased business partner’s family, which made the settlement very easy for him. He took control of the company’s operation, and with it, 40 stores under the brand. Since then, the company has grown exponentially, establishing itself as one of the most prominent Canadian businesses. Tim Horton’s appealed to Canadians with its cheap prices for coffee and donuts and its identity as a “true Canadian business”. Its coffee became associated with hockey, as many advertisements focus on Canada’s love for the sport. In 1984, Tim Hortons opened its first store in the United States, and in 1991, opened its 500th store. As of 2019, Tim Hortons operates 4,900 stores worldwide. In 1993, the business changed its name from Tim Horton’s to Tim Hortons, as Quebec’s new law on English business names prohibited the use of the apostrophe, which does not exist in the French language. Tim Hortons Timbits (Daily Hive) A coffee and donut shop started by a famous hockey player became the most popular franchise in Canada in just six decades. It introduced Canadian favorites like the double-double (two creams and two sugars in coffee), Timbits (small bite-sized donuts), and the iced capp (iced cappuccino).
Since its founding, it overcame struggles like the death of its founder and grew to represent Canada with its brand. Today, Canadians will still pick Tim Hortons as their go-to store for a cup of double-double and donuts.
https://medium.com/history-of-yesterday/how-a-hockey-player-started-canadas-favorite-franchise-9137ec0defd0
['Daniel Choi']
2020-12-05 12:02:57.896000+00:00
['Canada', 'History', 'Tim Hortons', 'Marketing', 'Business']
Guide for Mastering Modern JavaScript skills
Guide for Mastering Modern JavaScript skills Become an expert in Modern JavaScript and better at React, Angular, Vue Photo by Artem Sapegin on Unsplash In today’s constantly changing world, a lot of new content and updates are coming to JavaScript which are very useful for improving your code quality. Knowing these things is really important, whether it’s for getting a high-paying job, keeping up to date with the latest trends, improving your code quality, or sustaining your current job. There are many recent additions to JavaScript, like the nullish coalescing operator, optional chaining, Promises, async/await, ES6 destructuring, and a lot of other features which are very useful. Let’s explore some of the Modern JavaScript skills you should know. Let and const Before ES6, JavaScript used the var keyword, so it only had function and global scope. There was no block-level scope. With the addition of let and const, JavaScript gained block scoping. using let: When we declare a variable using the let keyword, we can assign a new value to that variable later, but we cannot re-declare it with the same name. // ES5 Code var value = 10; console.log(value); // 10 var value = "hello"; console.log(value); // hello var value = 30; console.log(value); // 30 As can be seen above, we have re-declared the variable value using the var keyword multiple times. Before ES6, we were able to re-declare a variable that was already declared, which had no meaningful use; instead, it caused confusion. If we already have a variable declared with the same name somewhere else and we’re re-declaring it without knowing we already have that variable, then we might override the variable value, causing some difficult-to-debug issues. So when using the let keyword, you will get an error when you try to re-declare a variable with the same name, which is a good thing.
// ES6 Code let value = 10; console.log(value); // 10 let value = "hello"; // Uncaught SyntaxError: Identifier 'value' has already been declared But, the following code is valid // ES6 Code let value = 10; console.log(value); // 10 value = "hello"; console.log(value); // hello We don’t get an error in the above code because we’re re-assigning a new value to the value variable but we're not re-declaring value again. Now, take a look at the below code: // ES5 Code var isValid = true; if(isValid) { var number = 10; console.log('inside:', number); // inside: 10 } console.log('outside:', number); // outside: 10 As you can see in the above code when we declare a variable with var keyword, it's available outside the if block also. // ES6 Code let isValid = true; if(isValid) { let number = 10; console.log('inside:', number); // inside: 10 } console.log('outside:', number); // Uncaught ReferenceError: number is not defined As you can see in the above code, the number variable when declared using let keyword is only accessible inside the if block and outside the block it's not available so we got a reference error when we tried to access it outside the if block. But if there was a number variable outside the if block, then it will work as shown below: // ES6 Code let isValid = true; let number = 20; if(isValid) { let number = 10; console.log('inside:', number); // inside: 10 } console.log('outside:', number); // outside: 20 Here, we have two number variables in a separate scope. So outside the if block, the value of number will be 20. Take a look at the below code: // ES5 Code for(var i = 0; i < 10; i++){ console.log(i); } console.log('outside:', i); // 10 When using the var keyword, i was available even outside the for loop. // ES6 Code for(let i = 0; i < 10; i++){ console.log(i); } console.log('outside:', i); // Uncaught ReferenceError: i is not defined But when using let keyword, it's not available outside the loop. 
So as can be seen from the above code samples, using let keyword makes the variable available only inside that block and it's not accessible outside the block. We can also create a block by a pair of curly brackets like this: let i = 10; { let i = 20; console.log('inside:', i); // inside: 20 i = 30; console.log('i again:', i); // i again: 30 } console.log('outside:', i); // outside: 10 If you remember, I said we cannot re-declare a let based variable in the same block but we can re-declare it in another block. As can be seen in the above code, we have re-declared i and assigned a new value of 20 inside the block and once declared, that variable value will be available only in that block. Outside the block when we printed that variable, we got 10 instead of the previously assigned value of 30 because outside the block, the inside i variable does not exist. If we don’t have the variable i declared outside, then we'll get an error as can be seen in the below code: { let i = 20; console.log('inside:', i); // inside: 20 i = 30; console.log('i again:', i); // i again: 30 } console.log('outside:', i); // Uncaught ReferenceError: i is not defined using const: const keyword works exactly the same as the let keyword in block scoping functionality. So let's look at how they differ from each other. When we declare a variable as const , it's considered a constant variable whose value will never change. In the case of let we're able to assign a new value to that variable later like this: let number = 10; number = 20; console.log(number); // 20 But we can’t do that in case of const const number = 10; number = 20; // Uncaught TypeError: Assignment to constant variable. We even can’t re-declare a const variable. 
const number = 20; console.log(number); // 20 const number = 10; // Uncaught SyntaxError: Identifier 'number' has already been declared Now, take a look at the below code: const arr = [1, 2, 3, 4]; arr.push(5); console.log(arr); // [1, 2, 3, 4, 5] We said a const variable is a constant whose value will never change, but we have changed the constant array above. So isn’t that a contradiction? No. Arrays are reference types, not primitive types, in JavaScript. So what actually gets stored in arr is not the actual array but only the reference (address) of the memory location where the actual array is stored. So by doing arr.push(5); we’re not actually changing the reference that arr points to, but we’re changing the values stored at that reference. The same is the case with objects: const obj = { name: 'David', age: 30 }; obj.age = 40; console.log(obj); // { name: 'David', age: 40 } Here too, we’re not changing the reference that obj points to, but we’re changing the values stored at that reference. So the above code will work, but the below code will not work. const obj = { name: 'David', age: 30 }; const obj1 = { name: 'Mike', age: 40 }; obj = obj1; // Uncaught TypeError: Assignment to constant variable. The above code does not work because we’re trying to change the reference that the const variable points to. So the key point to remember when using const is that, when we declare a variable as a constant using const, we cannot re-declare and we cannot re-assign that variable, but we can change the values stored at that location if the variable is of a reference type. So the below code is invalid because we’re re-assigning a new value to it. const arr = [1, 2, 3, 4]; arr = [10, 20, 30]; // Uncaught TypeError: Assignment to constant variable. But note that, we can change the values inside the array, as seen previously. The following code of re-declaring a const variable is also invalid.
const name = "David"; const name = "Raj"; // Uncaught SyntaxError: Identifier 'name' has already been declared Conclusion
https://medium.com/javascript-in-plain-english/guide-for-mastering-modern-javascript-skills-7d4ee42bf009
['Yogesh Chavan']
2020-12-10 18:44:52.847000+00:00
['Angular', 'JavaScript', 'React', 'Vue', 'Programming']
Why I No Longer Use the Wrong BI Tool. Ever.
A Look at BI Tools Following these requirements, let us focus on tools that are worth a closer look. The following is a shortlist that I have developed from my experience, including tools that are both popular and also some secret gems. Tableau Desktop screenshot from https://www.tableau.com/products/desktop Tableau Tableau is the go-to solution across all industries, acquired by Salesforce in 2019. Download and install on Mac or Windows, feed in data from an Excel sheet. Select a visualization and configure it. Runs on your laptop, making it easy for you to try out. It is also easy to share inside your organization, at least if you buy the Tableau Server / Tableau Online product. Since it is a mass-market tool, there is a host of training and information material available for free. Elsewhere, like on Stack Overflow, only the “tableau-API” tag gained some traction. And here on Medium? Over 2,000 stories with the “Tableau” tag. In summary, I feel that Tableau is best for smaller, individual data analysts. Power BI screenshot from https://powerbi.microsoft.com/ Power BI Power BI is from Microsoft, the largest market player in this field. Microsoft sure knows how to develop, promote, and handle software like this. As a caveat, Power BI only runs on Windows. Did I mention Microsoft develops it? The workflow is identical to most BI tools; put your data in an Excel sheet, or connect with an external data source. Then click together your dashboard, selecting from a variety of charts. Looking at Stack Overflow again, there are several Power BI-related tags. You find a high number of questions posted there, though not a high number of answers. On Medium, you will find around 1,500 stories with the “Power BI” tag. Overall, Power BI is best integrated into a Microsoft-based corporate structure. Datorama Datorama is my first hidden gem. Also distributed by Salesforce, and essentially an in-house competitor to Tableau. 
Datorama best addresses the needs of larger marketing corporations. Almost a decade of development went into it. This can be a go-to choice for advertising agencies or marketing departments. But you cannot take it for a spin. There is no download and no demo portal access that I could find. I felt a bit patronized because of this. You either need to put your internal staff through a certification process or hire a certified external supplier to set up and manage this solution for you. This is also reflected in its popularity here on Medium. Compared to Tableau’s 2,000 stories, I found ten stories with the “Datorama” tag over the last three years. Datorama is best for larger- or mid-sized marketing corporations. Pulse screenshot from https://pulse.tdreply.de/ Pulse Pulse is the second hidden gem. Built by TD Reply, it is a web-based solution and not installed on your laptop. Users can manage all data uploads, data pipelines, and visual configuration. Sharing access across your organization is as easy as sharing a link. Fine-tuned rights and roles systems ensure that data does not go astray. Access to demos is possible and encouraged. It requires no extensive training or certification. Whether you want a managed dashboard or a solution you administer yourself, both options are workable. And it looks good, too! Pulse is most suitable for mid-sized companies — or larger — across different departments like corporate strategy, sales, and marketing.
https://medium.com/better-marketing/dont-use-the-wrong-bi-tool-ever-578af27ebf50
['Holger Nösekabel']
2020-09-24 23:19:38.789000+00:00
['Strategic Planning', 'Tools', 'Business Intelligence', 'Data Science', 'Data Visualization']
Get your Amazon EC2 instance up and running.
Get your Amazon EC2 instance up and running. In this part, we will create an Amazon account and an EC2 instance, and connect to that instance via SSH. We will be using Linux throughout this tutorial. Windows users, don’t get annoyed: we will be working only on the Amazon website with some terminal commands, so the OS won’t be a big issue here. Create an AWS account @ https://aws.amazon.com/ You will be asked to provide your card details. Don’t worry, it is just for verification purposes. You have the option to use your AWS account in the free tier for 1 year, and you will not be charged in this period. Once you have created your account successfully, click Services at the top and select EC2. Click the Launch Instance or Create Instance button. You will see the page below after clicking on it. Step 1 — Select Ubuntu Server 16.04 LTS 1. Select Ubuntu Server 16.04 LTS. Linux images are widely used to run a variety of applications, while Windows images are used specifically for .NET applications. So let’s stick to a Linux image, which has a wide user base and tons of forums and communities to address any kind of issue. 2. In the Choose an Instance Type step, select t2.micro, which is enough to run multiple applications, and let’s not exceed our free-tier limit. Click Next: Configure Instance Details Step 2 — choose t2.micro 3. It is not necessary to configure any of the instance details for now, so let’s skip this step and click Next: Add Storage 4. An 8 GB SSD is fine for us to run a normal application, so let’s not change anything here and move on to the next step. Click Next: Add Tags 5. Tags are used to label your instance, and are especially useful for filtering when you have multiple instances. No changes needed for now; click Next: Configure Security Group 6. A security group is a configuration for your server. It allows you to define on which ports your server should allow traffic, which protocols, port ranges, etc.
You can also add a description for each port you allow here. Also, give your security group a meaningful name. To run our app, we need SSH access, which by default is on port 22, and Amazon adds that rule for us by default. There is also a warning below advising you to allow access from known IP addresses alone. Keep this in mind: whenever you allow a port or IP, you are allowing access to your server from that address. The default port for any site is 80, but our browser hides it, so we don’t see it. To serve our app we need HTTP, so let’s open port 80 by creating a rule: click Add Rule and select HTTP as the type. Let the protocol be TCP. Add 80 in the port range. Select Anywhere as the source, so that we are opening it to all IP addresses. 7. Click the Review and Launch button to launch your instance successfully. Now you will be prompted to set up an SSH key pair, which is a .pem file that gives you access to connect to the instance (the one we just created) from your Linux terminal. Give it a proper name (I will name it AWS-EC2-INSTANCE-LIVE ), download it, and keep it safe, or else you will need to generate a new one if you lose it. Click Launch Instances and click View Instances . Hurray! You have created your remote server successfully. Check the image below to find your running instance details. EC2 Running Instance Optional — You can also move your .pem file to the default .ssh folder on macOS. Open Finder, press Cmd+Shift+G, type ~/.ssh in the search box, and click Go to get into the ssh key folder. It is recommended to put your .pem file in this folder, which is hidden by default. Note: Always remember to stop your instance when you are not using it, via RightClick on instance -> Instance State -> Stop Now let’s connect to our instance from our terminal. Use chmod to set permissions on your .pem file so that it can be used as a key to connect to our instance.
$ chmod 400 ~/.ssh/AWS-EC2-INSTANCE-LIVE.pem To SSH into our server, we need three parameters: Username Domain Address Pem file You don’t need to worry about searching for these params; AWS has already formed the query string with your instance details. Click the Connect option on the aforementioned page. You will see a popup like the one below. Copy the highlighted query string, provide your .pem file location, and press enter in your terminal. Type Yes when prompted, so that your instance is added as a known host (a one-time process). My final query string goes below. $ ssh -i ~/.ssh/AWS-EC2-INSTANCE-LIVE.pem ubuntu@ec2-xx-xxx-xx-xx.us-east-2.compute.amazonaws.com Note: I have the .pem file in the default .ssh folder here for security reasons. Anyone having this .pem file can access your instance. Keep it safe. Yes! You are connected to your remote server securely. In the next tutorial, let’s install and configure Nginx on this remote server and deploy the Node.js app in production. If you have already bought a domain name, let’s map it too ;) Thank you.
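As an optional convenience step (this is a sketch, not from the tutorial): an entry in ~/.ssh/config lets you replace the long command above with just `ssh my-ec2`. The hostname placeholder and key file name mirror the ones used in this tutorial; the alias `my-ec2` is made up.

```shell
# Create the .ssh folder if it doesn't exist, then append a Host alias.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host my-ec2
    HostName ec2-xx-xxx-xx-xx.us-east-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/AWS-EC2-INSTANCE-LIVE.pem
EOF
```

After this, `ssh my-ec2` expands to the full `ssh -i … ubuntu@…` command shown above.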
https://medium.com/hackernoon/make-your-amazon-ec2-instance-up-and-running-ab80120eb23
['Balasubramani M']
2018-06-02 12:17:45.856000+00:00
['Ssh', 'Ec2', 'AWS', 'Ubuntu']
Kuanyin
Kuanyin (Wu Tao-tzi, 8th Cen. AD) She becomes the rain, And a breeze, soft, over the trees. And the silence of snowy fields. And her gown is water, Silk, rice paper And the sunlight dances on it. (For Robert Bly) ******************************************** This poem is a ‘translation’ of ancient Chinese art; the title of the poem is the name of the painting. I wrote a series of these poems, in 1968. They were among my first published works. please see: The Unending Connectedness of the Books We Write and Love | by Terry Trueman | Write and Review | Dec, 2020 | Medium Wikipedia & Encyclopedia https://en.wikipedia.org/wiki/Terry_Trueman
https://medium.com/illumination-curated/kuanyin-3dc03803ffa2
['Terry Trueman']
2020-12-27 18:15:14.372000+00:00
['Poetry', 'Art', 'Chinese Culture', 'Antiquity', 'Writing']
The Top 20 Wu-Tang Albums
As a well-known Wu-Tang Clan disciple, I was nothing short of overjoyed recently with the number of articles and retrospectives of the group and its impact, published on the twentieth anniversary of its debut album, Enter the Wu-Tang (36 Chambers). Some, like Paul Cantor’s piece for Noisey and the staff of Grantland each choosing a member to praise, were outstanding and made me happy to be a longtime fan of hip-hop’s version of The Beatles. Others, however, missed the mark, specifically Slate. The piece regarding the attempt to listen to every track by RZA seems silly to me — mostly because I am absolutely positive I’ve heard everything by both the Clan and RZA himself many times over — but the one that really raised my eyebrows (and had multiple people tweeting me) was the one in which the author attempted to rank the top twenty Wu-Tang Clan group and solo albums of all time. As a person who appreciates the making of a list or two to put the history of hip-hop into context, it was an admirable effort and, again, I love how much love is being heaped upon the Wu. Unfortunately, that list is all sorts of wrong. The top twenty Wu albums list for Complex, also ranked and written by Cantor, is far better, but I had some disagreements, most notably over what was eligible for inclusion. So, naturally, I decided to make my own top 20 list of Wu-Tang albums. Before we start, a few ground rules: Cappadonna is not included. Though he is now considered a member of the group, he is not one of the original nine and, being a traditionalist, I will never consider him an actual Wu member. Sorry, Cappa. Wu-Affiliates are not included. In addition to Cappadonna, this includes Sunz of Man, Killarmy, Shyheim, Cilvaringz, Killah Priest, or anyone else listed here. Affiliate is different from member. Posthumous projects, specifically Ol’ Dirty Bastard‘s A Son Unique, are not eligible. Only 100% Wu-Tang projects qualify.
In hip-hop this can mean a variety of things, but for the purposes of this list, it means any albums with artists from outside the Clan are not eligible. This includes Gravediggaz, Redman & Method Man’s Blackout albums and Ghostface Killah’s & Sheek Louch’s Wu Block. However, it would include Wu-Massacre, which is a collaborative effort between Raekwon, Method Man, and Ghostface Killah, three charter members. The question then becomes, what about a Wu-Tang MC doing an entire project with a non-Wu producer, specifically GZA and DJ Muggs’s Grandmasters or Ghostface and Adrian Younge’s Twelve Reasons to Die? I went back and forth on this one, but ultimately decided that they should be included, if only because it is a new phenomenon. In the early-to-mid ‘90s, when Enter the Wu-Tang (36 Chambers) was released, producers were not given top-billing credit, even if there was only one beatmaker. Doggystyle wasn’t listed as a Snoop Doggy Dogg and Dr. Dre album and Tical wasn’t by Method Man and RZA. Using these parameters, the total number of albums considered is 47 [5 group albums, 10 by Ghostface, 4 by Method Man, 2 by ODB, 5 by GZA, 5 by Raekwon, 4 by U-God, 4 by RZA, 3 by Masta Killa, 4 by Inspectah Deck, and 1 by Meth/Ghost/Rae]. Without further ado, let’s get into it. 1. Enter the Wu-Tang (36 Chambers) — Wu-Tang Clan [1993] The one that started it all. While it is not as musically advanced as other albums, especially Wu-Tang Forever, it is not only the start of the Wu empire, but also the first step in the renaissance of New York hip-hop that would occur over the next few years. Like nothing that had ever been heard before, it is still amazing twenty years later. 2. Only Built 4 Cuban Linx… — Raekwon [1995] Rarely have beats, rhymes, concepts, and execution come together so perfectly. A loose-knit storyline weaves throughout, but the strength of OB4CL is RZA’s production and Rae and Ghostface’s ability to illustrate both the good and bad of street life. 
Of course, songs like “Criminology” and “Wu Gambinos” don’t hurt either. 3. Liquid Swords — GZA [1995] A lyrically dense album released in the winter, Liquid Swords feels coldly efficient. GZA’s rhymes penetrate to the core but his voice and cadence rarely rise as he observes both the dreariness of life and the vapidity of most other hip-hop artists, while RZA’s beats make you want to reach for the thermostat in your car regardless of the time of year. The double header of “4th Chamber” and “Shadowboxin’” is in the running for best back-to-back songs in the entire Wu catalog. 4. Wu-Tang Forever — Wu-Tang Clan [1997] Sprawling, ambitious, and sonically polished, the Wu’s second group album was one of the most anticipated hip-hop albums in history, one that bore little resemblance to its debut. Battle rhymes and brag raps had been replaced with Five Percent teachings and vivid imagery, but different is not always bad. 36 Chambers was like a rough-cut student film while Wu-Tang Forever was the completed masterpiece. Some (such as myself) prefer this album over the first, but both are classics. 5. Supreme Clientele — Ghostface Killah [2000] After the release of Wu-Tang Forever and the completion of RZA’s “five year plan,” the Wu solos were more abundant, but, sadly, less impressive. Throughout much of the ‘00s, Ghostface carried the Clan on his back, and that trend began with his sophomore disc. Featuring more input from RZA (who served as executive producer) than the other members’ albums, Supreme Clientele felt like a Wu-Tang album and disproved the theory that the group was dead. 6. Ironman — Ghostface Killah [1996] On the opposite end of the spectrum from Liquid Swords, Ghost’s first solo disc was filled with emotion and honesty, from complaining about getting “jerked at the Source Awards” to remembering picking roaches out of cereal boxes.
One of the lesser-known members of the group for the first few years, Ghostface took off the mask and let the rest of the world into his world. 7. The W — Wu-Tang Clan [2000] Though they were able to follow up 36 Chambers, trying to match Wu-Tang Forever would have been an exercise in futility. Instead, RZA and co. circled back towards the beginning, releasing a group album that was lean and gritty, different from most of the music at that time (including many of the Wu solos). Though not a perfect album — a few songs fall flat and the inclusion of Snoop Dogg and Busta Rhymes interrupts the disc’s vibe — much of it is vintage Wu, with complex lyricism over stripped-down beats that have only gotten better with time. 8. Tical — Method Man [1994] The first Clansman to release a solo album, Method Man was the undisputed star of the group. Charismatic in live shows and funny in interviews, Meth delivered a debut that is dark and murky, giving the entire album a sense of impending doom. With “Bring the Pain” and “All I Need,” Meth proved he could appeal to both sexes without losing credibility. 9. Return to the 36 Chambers: The Dirty Version — Ol’ Dirty Bastard [1995] ODB’s debut is all over the place. Recorded over several years, it has a couple of verses that are repeated, songs that pre-date 36 Chambers, and rhymes that meander on- and off-beat. Somehow, though, Dirty makes it all work with his infectious personality and his chemistry with RZA’s beats. 10. Fishscale — Ghostface Killah [2006] I would argue that Ghostface is the best storyteller in the Clan, and his tales on Fishscale are nothing short of epic. Injecting bleak imagery with humor and absurdity over plush, soulful beats while keeping the energy high throughout, Ghost crafted an album that, while a bit bloated, takes the best of the late ‘80s and early ‘90s and creates a mid-‘00s banger. 11. Only Built 4 Cuban Linx…Pt.
II — Raekwon [2009] Rumored for almost a decade with all sorts of conflicting reports — Busta Rhymes is executive producing, RZA isn’t contributing at all, RZA is producing the whole thing, Inspectah Deck will be the “co-star,” Dr. Dre is executive producing — and dismissed as impossible before anyone heard a song, OB4CL2 somehow lived up to the hype as Raekwon delivered an album that was just a small notch below its predecessor, recreating the original’s vibe while also updating it. New Wu indeed. 12. Grandmasters — GZA (and DJ Muggs) [2005] Along with Kung Fu, chess has always played a key role in the development of the Wu-Tang Clan and its members, so when the group’s most lyrical MC linked up with Cypress Hill’s producer, it was only natural that the album was presented as a chess match between the two. Grandmasters gets the nod over GZA’s other non-Liquid Swords projects because Muggs doesn’t let the beats become bland or repetitive, keeping the sounds original and preventing Genius from becoming complacent. 13. Uncontrolled Substance — Inspectah Deck [1999] Deck’s debut was originally slated to drop in the first round of Wu solos, but the entire album was lost in a flood that destroyed RZA’s basement studio, ruining over 100 beats. Deck had to wait until 1999 to shine, and by then a bit of the luster had worn off the Clan logo. Still, the album is strong, with Deck carrying much of the production, which ranges from banging to boring. 14. Tical 2000: Judgement Day — Method Man [1998] A project full of highs and lows, had Method Man’s sophomore album included only the highs, it would have been a classic. Plagued by far too many songs, too many skits, and a few too many guests, Tical 2000 still holds some of the best post-Forever music this side of Supreme Clientele, allowing Meth to showcase his range and display his full personality. If iTunes playlists had been around in 1998, most fans would have made a 15-track Tical 2 album that would be far higher on this list.
15. Iron Flag — Wu-Tang Clan [2001] By Iron Flag, it was obvious that the Wu members were no longer all on the same page. A schizophrenic effort, some songs sound out of place — the Wu-affiliate-featured “Chrome Wheels,” Flavor Flav’s chorus on “Soul Power (Black Jungle),” and the Trackmasters-produced, Ron Isley-featured “Back in the Game” — while others sound just as good as anything that’s ever been released under the Clan banner — “Y’all Been Warned,” “Radioactive (Four Assassins),” and “Iron Flag.” The good only barely outweighs the bad, but considering the lack of cohesion in recent years and compared to 8 Diagrams, Iron Flag is better than you remember. 16. No Said Date — Masta Killa [2004] By 2004, the idea of a Clan solo album featuring all nine members over a backdrop of atmospheric beats reminiscent of 36 Chambers seemed impossible. However, Masta Killa made it happen. While No Said Date doesn’t belong alongside GZA’s or Raekwon’s Wu solos, it does deserve its own place in the group’s pantheon. While Killa doesn’t have the same presence on the mic as his brethren, his album, which was largely self-produced, features very few missteps and puts his groupmates, including ODB in his final appearance, in a position to thrive. Probably the most slept-on Wu-Tang album. 17. The Pretty Toney Album — Ghostface Killah [2004] On his first album for Def Jam, Ghostface dropped the ‘Killah’ from his name. Fortunately, the music largely remained the same. Excluding the horrible reach for radio play, “Tush” featuring Missy, this is typical GFK, with outlandish story rhymes, memories of growing up impoverished, and non sequiturs piled on top of soul samples to create a sum greater than its parts. 18. Twelve Reasons to Die — Ghostface Killah (and Adrian Younge) [2013] Twenty years after his group’s debut, Ghostface put the mask back on for a concept album produced by Adrian Younge and executive produced by RZA.
Over-the-top in its violence and black humor, faithful throughout to its storyline, and coming in at a lean 40 minutes, it achieves its goal of paying homage to a comic book and never leaving the listener bored. 19. Shaolin vs. Wu-Tang — Raekwon [2011] Although they collaborated on Cuban Linx…Pt. II, Raekwon and RZA had not seen eye to eye for several years, dating back to the group’s 2007 album 8 Diagrams, an album from which Raekwon distanced himself immediately, announcing that the Clan minus RZA would create their own Wu-Tang album called Shaolin vs. Wu-Tang to illustrate the group’s internal strife. The group project became a solo album with beats from other producers that sounded like they came from the early ’90s, and while parts of the album knock, Rae could have benefited from the input of someone else (he executive produced the album himself). It’s a decent effort, but in trying to prove how little he needs RZA, Rae actually proved the opposite. 20. 4:21…The Day After — Method Man [2006] Method Man wanted RZA to produce the entirety of 4:21, but he only had time to contribute about half of the beats, so the album is only half good. That half is quite good, though, and is enough to counterbalance Meth’s lackluster performances on beats by Scott Storch and Kwamé. I went back and forth on the placement and order of many of these albums, particularly numbers eleven through twenty, and I realized in my second draft that much of my commentary was basically RZA = good; no RZA = bad. While this is simplistic, I stand by the general idea of it. Besides, history supports me. The Wu-Tang projects on which RZA has been either producer or executive producer are far superior to those in which he had zero input. (If you don’t believe me, compare Raekwon’s sophomore album Immobilarity to Supreme Clientele.)
Regardless of how it all ends, the Wu-Tang saga is like nothing else in American music, before or since, and the legacy and influence of the group and its members will continue to be felt for another twenty years.
https://medium.com/the-passion-of-christopher-pierznik-books-rhymes/the-wu-tang-top-20-8d886156ae24
['Christopher Pierznik']
2019-09-04 20:06:14.896000+00:00
['Hip Hop', 'Classic', 'Music', 'Rap', 'Wu Tang']
AWS Security Flaw That Could Grant Admin Access!
I recently discovered an AWS Managed Policy that potentially allowed granting admin access to self or any other IAM role. This blog post describes my findings and my interactions with the AWS Security team. For a particular project, we were using SSO for AWS account login through Okta, where permissions are granted to users through IAM roles. I was one of the users who had been given limited access to the AWS account through an IAM role. One day, I accidentally discovered that I was able to attach any inline policy to other IAM roles. Perplexed by this, I went to the IAM console to check whether I had admin access or any other attached policies related to AWS IAM. None of the policies assigned to my role were related to the IAM service, nor did I have admin access. Yet I could potentially grant admin privileges to any other IAM role. For instance, I could add a new inline policy with the JSON below (which grants the role admin privileges): { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "*", "Resource": "*" }] } After discovering this, I sent a report of my findings to the AWS security team on March 19th, 2018. They asked for more details on which IAM role I was referring to and for detailed steps to reproduce the behavior, so I followed up with the requested details. Later, a member of their security team acknowledged the issue, saying: “I’ve reviewed this, and I’ve found out why you’re able to add an inline policy — the role you are using has the managed policy “AmazonElasticTranscoderFullAccess” attached. This grants (among other things) the “iam:PutRolePolicy” permission, which is what allows you to attach an inline policy to the role.
I’ve reached out to the Elastic Transcoder team to review if this is necessary, and we’ll take appropriate action as required.” So, this AWS managed IAM policy (AmazonElasticTranscoderFullAccess) potentially allowed its grantee to in turn grant admin access (or any other access) to any other role. Though the policy wasn’t related to IAM, it allowed changing the access of other IAM roles. I hoped they would act quickly on this issue, as it was a serious security flaw (it allowed granting admin access), but I did not hear any updates from them for a long time. I wanted to make other AWS users aware of it, but that could have led to misuse of the information, so I posted only vague information about it on /r/aws on Reddit, where I learned about responsible disclosure. This post is my disclosure of my findings, having given the AWS team 60 days to address the issue. At the time of this posting, AWS has deprecated the AmazonElasticTranscoderFullAccess policy that I reported and created a replacement policy named AmazonElasticTranscoder_FullAccess (note the added underscore in the name), to which they want users to migrate. The new policy does not include iam:PutRolePolicy in its allowed actions, which resolves the issue I reported. However, users who are still using the old managed policy (AmazonElasticTranscoderFullAccess) in their AWS accounts should be aware of this vulnerability and take the necessary actions. Screenshots of the deprecated policy: Here, you can see that the AmazonElasticTranscoderFullAccess policy grants the iam:PutRolePolicy permission, and that the policy is being deprecated (notice the red exclamation icon).
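The escalation hinges on a managed policy including iam:PutRolePolicy among its allowed actions. As a rough illustration (my own sketch, not from the original post, and deliberately avoiding real AWS calls), here is a small Python checker that scans a policy document for a given action; the transcoder-like policy dict is hypothetical, modeled only on the behavior described above:

```python
import fnmatch

def grants_action(policy, action):
    """Return True if any Allow statement in the policy document covers
    the given action (wildcard patterns like '*' or 'iam:*' match)."""
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may be a bare object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        # IAM action matching is case-insensitive and supports '*'/'?'
        if any(fnmatch.fnmatch(action.lower(), pat.lower()) for pat in actions):
            return True
    return False

# Hypothetical document shaped like the flaw described in the post:
# a service full-access policy that also carries iam:PutRolePolicy.
transcoder_like = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["elastictranscoder:*", "iam:PutRolePolicy"],
                   "Resource": "*"}],
}

print(grants_action(transcoder_like, "iam:PutRolePolicy"))
```

Run against your own role's policy documents, a True here would mean the role can attach inline policies (including the admin JSON above) to any role it can reach.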
https://medium.com/ymedialabs-innovation/an-aws-managed-policy-that-allowed-granting-root-admin-access-to-any-role-51b409ea7ff0
['Sharath Av']
2018-06-21 06:45:14.148000+00:00
['Transcoding', 'Iam', 'AWS', 'Security', 'Backend']
Chaos and Creativity
Our ability to create through extreme experiences Image by Agsandrew Last week, I didn’t leave my apartment at all for four straight days. Not to go grocery shopping, not to the convenience store, not to go for a walk. Nothing. For someone like me who gets claustrophobic at the thought of being confined in any manner, four days is a long time. On the fifth day, my daughter went into my closet, picked out an outfit and told me to get dressed. We strolled around our neighbourhood taking pictures on her iPhone and a disposable camera she had recently bought. We were outside for less than an hour, but that’s all it took to help inspire some ideas I could write about. Our short time outside also made me reflect on how, as writers, our content is so connected to our experiences. The time we spend browsing our local bookstores, the few minutes we spend in line shopping, or the hour we spend sitting at our favourite cafe. COVID has eliminated or severely changed the nature of these micro-experiences. The everyday interactions that we would subconsciously catalogue in our minds, and that somehow seep into our work, have been taken away from us during this pandemic. In a study published in the Journal of Pedagogy and Psychology, author Daiga Kalēja-Gasparoviča says that “Contemporary conditions determine a relationship between the quality of individual’s life and individually developed creative abilities: the ability to adapt to extraordinary situations and circumstances of life.” Kalēja-Gasparoviča theorizes that our capacity to create is directly connected to the conditions we’re asked to create within. Right now, it would be fair to say that the current conditions are extreme. The pandemic has kept us indoors, socially distanced from our friends and families, with an as-yet immeasurable impact on our physiological and mental health.
In addition to the conditions caused by the pandemic, we are also experiencing a cultural shift, particularly in the western world, where we are actively trying to dismantle, or at the very least reimagine, many of the systems that have oppressed, marginalized, and hindered the growth, lives and livelihoods of several communities. Education, capitalism, our system of government and policing: these are just a few examples of systems that are under scrutiny and in desperate need of upgrading. These are also systems contributing to the way we creatives experience our daily lives. The tension that arises when trying to break communal cycles of thought, particularly when there is heavy pushback, weighs on our minds and impresses itself on our creations. This impact may vary from person to person, but its presence is undeniable. Creativity elevates our consciousness In speaking to other writers and artists, there seems to be some divergence in how they’ve processed the experiences of the past year. Some have kept going by necessity. Creating is how they earn their living, and even though they admit to feeling a consistent strain to keep producing, stopping would put a halt to their income, which they’re not willing to bear. Other creators, the ones who don’t depend on or expect their art to pay their bills, have withdrawn or reframed their messaging. They’ve either taken long pauses in between creations or reflected on what they were creating and decided to go in a different direction. The conditions spawned by the pandemic forced them to realize that they have an obligation to create closer to their truth, a truth that always was but could more readily be dismissed prior to the pandemic and recent social awakening. Those creatives can no longer push their purpose to the pit of their stomach. They’ve been emboldened to draw a line in the sand without regard for anyone who stands on the other side. The conditions have made them courageous.
In her conclusion, Kalēja-Gasparoviča says: “Studies and theories in psychology confirm that in order to exist productively in the changeability and dynamics of today’s world, the individual is helped by the ability to adapt together with the potential for a non-traditional approach and originality. Thereby, creativity becomes significant in individual’s development, self-awareness and quality of life.” Imagine that kind of power. The power to elevate our self-awareness simply by engaging in an action that we’re genuinely passionate about and whose calling we can’t resist. We are blessed to be creators. Our sensitivity to the pulse of the world makes us unique. Our ability to feel that energy and transform it into something tangible that others can feel makes us invaluable. Accept that power, but understand how it is impacted by the forces around you.
https://medium.com/cry-mag/our-ability-to-create-through-extreme-experiences-ec971a341f29
['Kern Carter']
2020-12-09 18:14:28.749000+00:00
['Social', 'Research', 'Creative Writing', 'Creativity', 'Anxiety']
The Analogy
How can you be pissed off that I’m taking a damn vacation? Don’t I deserve one? Don’t we all deserve one? No problem at all with you taking a vacation but can’t you keep your ass here? We live in Florida — go to the beach. You don’t have to fly hours in a plane across the country to find relaxation. Jesus, we’re in the middle of a pandemic! Aren’t you being selfish? Me? Really? Because I think your vacation isn’t worth the risk of someone catching the virus and getting sick or dying? Just so you can go on vacation? Who is the selfish one here? I’m going to a state that has some of the lowest virus numbers in the country! True enough, but you’re boarding a plane in Florida which has some of the highest virus numbers in the country. Who do you think will be on the flights with you? Floridians — going and coming. There’s not much risk. I’ll be careful. I’ll wear a mask. And, the middle seats are empty on my flights. You think that’s good enough? Empty middle seats don’t equal six feet. Standing in the aisle of a plane to get on or off is not social distancing. Breathing recycled air wasn’t healthy before the pandemic and certainly isn’t now. And, how about the terminals? Standing in line? Using escalators, people-movers, public transportation, bathrooms? You’re just making a big deal out of nothing. The risk of me catching the virus and bringing it to you or anyone in the office is low. Got some statistics on that? No, I didn’t think so. And, even if it is low — what does that mean? 10 percent, 20 percent? To me, a 5 percent chance is high. I am over sixty-five — thirty years older than you — and you think you have the right to tell me how much risk I should accept so you can go on a vacation across the country? I just can’t believe you think you have the right to tell me where I can go on vacation! I am amazed I have to! I am shocked that you and two others in this office made plans to travel by air just to go on vacation — during a pandemic! 
Did you even consider what you might bring back to the rest of us? You know, this is ridiculous. You’re being ridiculous. Oh, really? How about I give you an analogy — a little story? So, in this story, you aren’t feeling well and go to the doctor and have some tests. You return to the office and say: Hey, everyone, I just came from the doctor. He told me I am highly allergic to an ingredient in the perfume called Magnificent. It’s the only perfume on the market with that ingredient and I’m hyper-sensitive to it. Someone here must wear that perfume because I always start feeling bad here in the office. I have to ask whoever wears that perfume to please stop. The doctor said that my allergy to it might get worse and I may go into anaphylactic shock someday.
https://medium.com/crows-feet/the-analogy-fb9597669946
[]
2020-11-04 10:23:02.024000+00:00
['Health', 'Pandemic', 'Travel', 'This Happened To Me', 'Covid 19']
The Virtual Persona
How artificial intelligence, virtual reality, and the universe, validate, and operate using, a singular identity. Integrating personae. Virtual identity. Everyone has two personae. One private. The other public. Translated this means we have a personal identity, and, also, a professional identity. These are, as you know, not, necessarily, the same. We can translate, again, to say, we have a virtual persona. Which we translate into a persona that is appropriate for whatever we are doing. We ‘blend into the background,’ or, ‘stand out from the background,’ depending on our objectives and goals. We take on multiple identities, and incorporate them into a single identity. This means nature, also, has a virtual persona. In some cases, this is easy to see. Nature, as a whole, cannot be separated into pieces, but nature’s constituents, in order to survive, must separate nature into pieces. This ‘separating into pieces’ allows a human to interact with nature’s persona. This is how we get our food, clothing, and shelter. Although we are circling with nature, whenever we take these from nature, nature is smart enough to use us, to recirculate them, back into nature. Herein, an underlying circle, as the true identity, of the virtual persona, called ‘nature,’ becomes obvious. A virtual persona has several characteristics: these include, ambiguity, multiplicity, redundancy, and, fungibility. All of these are dependent on universal, and, also, relative, duplicity. All of these words, by the way, are code words for ‘virtual reality,’ and, also ‘artificial intelligence.’ Virtual identity. And, artificial persona. Our goal is to integrate these multiple identities into a single persona, or, multiple personas into a singular identity. Where singular persona, is, always, a duplicitous identity (in order to accommodate any other persona or identity). An ambiguous persona and identity. This is how we end up with a personal, and a professional, identity. 
A private, and a public, identity. A singular, and a duplicate, identity. Multiple identities. Giving us the ability, as humans, to identify, and personify, the virtual persona called nature. In technology, we call the virtual persona ‘code.’ Or, sensor. Or, integrated circuit. Deep learning. Machine intelligence. String of characters. All of these are virtual identities supported by a natural, virtual, persona, technically, and technologically, identified (and personified) as a circle. Conservation of the circle to be exactly correct. Circumference and Diameter (Zero and One) The ‘Virtual’ Persona Meaning, the virtual persona of a circle, in technological terms, is zero and one. In mathematical, or cosmological, terms, circumference and diameter. Meaning complementarity is the basis for identity because duplicity is the basis for a unit. There is a circle between individual and group. Any system. Discipline. Ecosystem. Meaning, the virtual persona, controlling everything, is a circle. Allowing for the identification of any entity (process, or system). Conservation of the circle is the core dynamic in nature. Explaining both persona and identity, and, the origination, and use, of persona and identity.
https://medium.com/the-circular-theory/virtual-identity-954bca3ca18d
['Ilexa Yardley']
2020-04-07 17:03:48.564000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Intelligence', 'Deep Learning', 'Virtual Reality']
little flower wiser
Photo by Andreas Dress little flower wiser a poem about how we be truthful to ourselves and a little wiser . this day birds sing and colors spring little flower bedded below leaning toward the dawn sun asks “ O the mighty weeping willow how can I make this swamp better?” . firstly question yourself not me the answers lie not in the old nor the self labelled worldly wise keep watch beside flowing river guard the feelings that wash you by . accumulation is heavy like too many ripe fruit of trees nectar of intoxication seeping deeper into the roots drunk extravagance left empty . the mind is not a thought scrapbook page after page of where one was a trail needed to prove you live instead stay true to sensations wrap in kindness the seeds floated . small pleasures spout from arid soils as they do a wasteland drowning I cannot be wiser than you as we occupy the same space the wisest reads deep their own face .
https://haikulovebites.medium.com/little-flower-wiser-719cf9e2e15b
['Haiku Love Bites']
2020-01-02 09:48:52.108000+00:00
['Health', 'Life Lessons', 'Love', 'Poetry', 'Life']
Behind the Pins: Building Analytics
Tongbo Huang | Pinterest engineer, Ads As the community of Pinners grows, so does the number of businesses on Pinterest. Earlier this week, we revamped our analytics tool to help businesses better understand their audiences and how their organic content is performing. This second version of our analytics product offers new features such as profile and audience analytics, user country and metro segmentation, app segmentation, more comprehensive event types (such as impressions, clicks, and repins), and Pin It button analysis for off-network and on-network usage. The analytics architecture There are four major components of the analytics architecture: A scalable and reliable Hadoop MapReduce batch data processing pipeline that creates a new dataset every day. Robust HBase data storage that serves data with minimal latency. A set of API endpoints that supports fetching data programmatically and enforces access control. A web application powered by an interactive UI for business users to consume data. The raw data comes from two sources: event-based data comes from Kafka and is logged onto AWS S3, while object-based data comes from the production database. The pipeline is managed by Pinball, an internal workflow management tool. Understanding the analytics data Analytics provides three different datasets to the user: domain analytics, profile analytics and audience analytics. Domain analytics contains usage data about Pins or boards with Pins that link to a business domain. Profile analytics contains usage data about Pins or boards that belong to a business user. Audience analytics contains data about users or followers that interact with domain/profile content. Pinterest analytics relies heavily on different types of user interactions, including Pin impressions, click-throughs, repins, likes and creates.
We provide aggregated counts for each type of event at the Pin and board level, as well as data for Pins that are the most engaging or highest ranked in search results by proprietary algorithms. Batch data processing pipeline To provide an accurate dataset for analytics, we built a Hadoop MapReduce data pipeline. We process tens of terabytes of data each day, so it’s important to ensure the pipeline is both scalable and reliable. The MapReduce pipeline starts to process data as soon as the data is available. It’s triggered by condition jobs that periodically check whether the data is available on S3. We split the pipeline into about 100 jobs. If some of the jobs fail unexpectedly, other independent jobs can continue processing without interruption. There are four different stages in the pipeline, and jobs within the same stage are generally independent and can run at the same time. Stage 1 is to extract object items, including Pins, boards and users. We schematize the data and keep only the necessary fields since the data is heavily reused. We then extract events that are associated with those Pins, boards and users. All later stages depend on those events, so we made this phase highly parallelized and process each event type separately. In stage 2, we aggregate all events and users at the domain and profile level, and these metrics then power the daily metrics graphs. Stage 3 is top item extraction, where we find the top items based on either event counts or proprietary ranking criteria for each profile and domain. The last stage is to persist all data into HBase. Tune up the processing pipeline Since our traffic is increasing and the pipeline has a 19-hour ETA, we put a lot of effort into making it fast and reliable. All of the data pipeline jobs are written in Hive and Cascading, and we used a few tricks to improve performance. 1. Optimize dependencies and increase parallelism. The general guideline for our jobs is to be as simple as possible.
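The staging behavior above (stages run in order; jobs inside a stage are independent, so one failure does not block its siblings) can be sketched with a toy runner. This is my own illustration, not Pinball, and the stage layout and job names are hypothetical:

```python
def run_pipeline(stages):
    """Run stages in order. Jobs inside a stage are independent of one
    another, so a failure in one job is recorded without stopping the
    other jobs in that stage (or later stages)."""
    results = {}
    for stage in stages:
        for name, job in stage:  # in a real pipeline these run in parallel
            try:
                results[name] = job()
            except Exception as exc:  # isolate failures per job
                results[name] = f"failed: {exc}"
    return results

# Hypothetical layout mirroring the stages described above.
stages = [
    [("extract_pins", lambda: "pins"),
     ("extract_events", lambda: "events")],           # stage 1: extraction
    [("aggregate_profile", lambda: "daily counts")],  # stage 2: aggregation
]
print(run_pipeline(stages))
```

The point of checkpointing each job's output (tip 1 below) is exactly that a rerun can resume from the last completed stage instead of reprocessing everything.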
We store a lot of temporary intermediate results to increase data reusability. This also helps us alleviate data processing failures, as we can resume the workflow at these checkpoints. 2. Minimize data size. Disk I/O is usually the bottleneck of processing jobs, so it’s very important to minimize the data size with shared data extraction jobs, and keep only the necessary data fields. 3. Avoid sorting the dataset. Query clauses such as ORDER BY and DISTRIBUTE BY are heavyweight. While processing extremely large datasets, chances are that you’ll only want a limited number of top results. A better approach is to use CLUSTER BY to cluster the keys together and use a priority queue to keep only the top results. 4. Data pruning. For profile- and domain-level aggregated events and users, the dataset is usually heavily skewed: some profiles and domains own far more data than others. This makes it difficult for MapReduce to evenly distribute data onto reducers and hurts query performance. A workaround is to study the distribution of the dataset and prune data entries with a small chance of appearing in the final results. 5. Avoid or optimize joins. Think carefully about the trade-off between logging more data and spending more processing time on data joins. When a join cannot be prevented, take advantage of map joins and bucket joins. 6. Take advantage of UDFs. UDFs offer great value in increasing productivity and are a scalable way of sharing query logic. Scalable data store and serving From the analytics data pipeline, we create half a terabyte of data each day and keep most of the data around for a month. Our storage backend is powered by HBase, which integrates well with our Hive and Cascading processing pipeline.
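The priority-queue idea in tip 3 can be sketched in plain Python rather than Hive (the Pin names and counts are made up): keep only the n best items seen so far in a min-heap instead of sorting the whole dataset.

```python
import heapq

def top_n(pairs, n):
    """Streaming top-N by count: holds at most n items in memory instead
    of sorting the full dataset, the same trick as clustering keys and
    feeding them through a bounded priority queue."""
    heap = []  # min-heap of (count, item); the smallest kept count is at the root
    for item, count in pairs:
        if len(heap) < n:
            heapq.heappush(heap, (count, item))
        elif count > heap[0][0]:
            # New item beats the weakest of the current top n: swap it in.
            heapq.heapreplace(heap, (count, item))
    return sorted(heap, reverse=True)  # largest counts first

pins = [("pin_a", 40), ("pin_b", 7), ("pin_c", 99), ("pin_d", 12)]
print(top_n(pins, 2))  # [(99, 'pin_c'), (40, 'pin_a')]
```

The full sort is O(N log N) over the whole dataset; the heap pass is O(N log n) with n items of state, which is why it scales to the skewed, very large groups described in tip 4.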
HBase is a key/value store, and we design our storage schema as follows: Row key: userparam_platform_aggregation_date Column family: metrics/topitems Column: metric names Our schema design has a few advantages. The date is the last element in the key, so all of the data shown in a metric graph is stored consecutively, and that locality enables us to quickly fetch the data with a scan command. We pre-aggregate data at a few levels: daily, weekly, biweekly and monthly, so we never need to aggregate any data in real time. We also pre-aggregate all available app types and store the aggregated results as separate entries. We never need to aggregate across multiple userparams, so we keep userparam as the first element in the key; that way the data is split evenly across all region servers and the load is well balanced. We only have one column family in the table because, for analytics data, it’s hard to ensure that the data has similar sizes across different column families. Surfacing data in the web application The analytics web application is powered by the API endpoints. Users with access rights can fetch the same data and do their own analysis. The web application has rich UI components to help users dig in and discover insights. You can check out the finished product now. If you’re interested in working on new monetization engineering challenges like this one, join us! Tongbo Huang is a software engineer at Pinterest. Acknowledgements: The revamped analytics was built in collaboration with Long Cheng, Tracy Chou, Huayang Guo, Mark Cho, Raymond Xiang, Tianying Chang, Michael Ortali, Chris Danford, and Jason Costa along with the rest of the monetization team. Additionally, a number of engineers across the company provided helpful feedback.
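To make the row-key design concrete, here is a small sketch (the userparam, platform, and date values are hypothetical) of how such a key composes, and why putting the date last keeps one metric's daily values adjacent under HBase's lexicographic key ordering:

```python
def row_key(userparam, platform, aggregation, date):
    """Compose a key like the one described above: userparam first
    spreads rows across region servers; date last keeps a metric's
    consecutive days adjacent, so one range scan fetches a graph."""
    return f"{userparam}_{platform}_{aggregation}_{date}"

days = ["2017-02-01", "2017-02-02", "2017-02-03"]
keys = [row_key("user42", "web", "daily", d) for d in days]

# With zero-padded ISO dates, lexicographic (HBase) order equals
# chronological order, so the daily points of a metric graph are
# physically consecutive under the shared prefix "user42_web_daily_".
assert keys == sorted(keys)
print(keys)
```

This is only an illustration of the layout, not Pinterest's actual key-encoding code; the same reasoning explains why a scan over the `userparam_platform_aggregation_` prefix returns a month of points in order.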
https://medium.com/pinterest-engineering/behind-the-pins-building-analytics-f7b508cdacab
['Pinterest Engineering']
2017-02-17 22:18:53.160000+00:00
['Advertising', 'Measurement', 'Analytics', 'Pinterest', 'Engineering']
NBA Predictions, Tiers, and 2020-21 Season Preview
2020-21 NBA CHAMPIONSHIP TIERS This is not necessarily how the regular season rankings will come out — we’ll get back to that. But at the end of the day, each team has different hopes and expectations. Here’s how I see each team’s playoff and title hopes heading into the new season, and the tier titles should fill in the blanks… TIER VIII — THE SHAMELESS TANKERS 30. Cleveland Cavaliers 29. Oklahoma City Thunder 28. New York Knicks 27. Charlotte Hornets Expect the tanking to get very bad in the second half of the year. This is a really strong draft class, and there are no fans in the stands to discourage the tanking. These teams are going to be really bad, and with no incentive to try otherwise, they could be REALLY bad down the stretch. Cade Cunningham awaits. TIER VII — NO PLAYOFFS THIS YEAR 26. Sacramento Kings 25. Minnesota Timberwolves 24. Atlanta Hawks 23. Detroit Pistons 22. Memphis Grizzlies There’s a lot of youth movement in the right direction on these squads, but the playoffs ain’t happening this year. Best case scenario might be a one-and-done play-in game loss, and I don’t even see that in these teams’ futures. Youth means bumps along the way, and growth is not linear. Sometimes it means a step forward is followed by a step or two back before another leap. The Kings learned that the hard way last year, and the Grizz are next in line. The Hawks are the team everyone is overrating. They can improve a long way and still be very mediocre, and the defense will be bad. These teams should be fun to watch, and there are brighter days ahead, but not in 2021. TIER VI — PLAY-IN CONTENDERS 21. Chicago Bulls 20. San Antonio Spurs 19. Washington Wizards 18. Orlando Magic 17. New Orleans Pelicans I know, I’m more excited about most of the teams in the previous tier too, but these are the ones I’m expecting in the play-in games. Pop will do his thing with the Spurs, and the Magic will be the definition of vanilla and mediocre again.
The Bulls should be vastly improved with competent coaching, and Russ and Brad are too good for Washington to not be decent. The Pelicans feel stuck somewhere in the middle of everything, and I fear a lot more downside than upside with this strange team construct. Bledsoe, Lonzo, Zion, and Adams is a whole lot of non-shooting, and I expect an Ingram shooting regression too. The NBA is giving us a whole lot of Zion this regular season, but if we get the Pels in the playoffs too, he will have to be the reason. TIER V — EXTREMELY DISAPPOINTING SEASONS 16. Houston Rockets 15. Utah Jazz 14. Miami Heat 13. Golden State Warriors I have no idea what to do with the Rockets right now, so let’s just stick them in the middle. The Jazz smell like disappointment. Every key player on that team is on the wrong side of the aging curve except Donovan Mitchell, and he hasn’t really shown significant improvement since a great rookie year. The Heat feel like a step back too, with a worn-down Jimmy Butler coming off a long run. Miami was the right team at the right time in the bubble, but unless they have a big trade coming, this is a lower-tier East playoff team. The Heat and Jazz are 7-seeds for me, and I think either one of them could get knocked out in the play-in games. The Warriors get the benefit of the doubt here at the top of this tier, but I really see them a tier down, and even that feels somewhat optimistic. This just isn’t the same team. Durant, Klay, Iggy, and Livingston are gone from the core, and Draymond and Steph are two years older and past their primes. Oubre is fine, but I don’t trust Wiseman, Wiggins, or anyone else. This team lacks shooting and severely lacks basketball IQ and feel. I see a one-man offense and a one-man defense. This just isn’t a top team anymore. TIER IV — SECOND ROUND EXPECTATIONS 12. Phoenix Suns 11. Indiana Pacers 10. Portland Trail Blazers 9.
Boston Celtics I love Portland and Indiana as surprise regular season teams but don’t like their playoff upside quite as much. A top-4 seed and a playoff series win would be a really nice outcome for both, and I think either or both of them could contend for as high as the 2-seed. The Pacers have a really strong five-man lineup and some obvious potential for improvement if Nate Bjorkgren unlocks a few things, and Portland vastly improved its forward and big man rotation. Those are two of my favorite overs. I like but don’t love Phoenix. I think they’re now something like Portland was the last few years: a solid, clear playoff team, but one that probably isn’t super threatening there unless they catch a few breaks. That’s still a huge step forward, but the future is still years away for this team. Chris Paul’s job is not to get the Suns to the Finals. It’s to establish a winning mentality. Boston is the Golden State of this group. I have all the respect in the world for Brad Stevens and expect a big step forward for Jaylen Brown and another smaller step for Jayson Tatum. But Kemba Walker is increasingly worrying, and I don’t trust the Celtics’ depth at all. They feel like this year’s 76ers to me: a team that we can never really count out but that never really comes together. TIER III — CONFERENCE FINALS UPSIDE 8. Toronto Raptors 7. Dallas Mavericks 6. Denver Nuggets 5. Philadelphia 76ers Now we’re talking. Any one of these teams could finish as the 1-seed and I wouldn’t be too surprised. Why is everyone overlooking the Raptors again? Gasol and Ibaka weren’t even healthy last year, Siakam and Anunoby will be better, and the Raptors have Nurse, Ujiri, depth, and culture. I’m not sure there’s deep playoff potential there, but you do not want to be the team that has to give them the knockout punch. Dallas and Denver got next, but I think they may have to wait out one or two more LA windows.
That Jamal Murray leap in the bubble looks real, and if he contends for All-NBA, Denver has to be taken very seriously. Dallas will go as far as Luka takes them — but going from no playoff series wins to the Conference Finals is a big ask. Philly gets the top ranking in the tier based on potential, though I’m the least confident in them. This roster makes sense now. Love Ben Simmons with more spacing, love Seth Curry as a movement shooter under Doc, love Tyrese Maxey as a rookie super sub off the bench. I’m intrigued, like every year. Will they finally pay it off? I’m skeptical. TIER II — THEY COULD WIN THIS THING…? 4. Milwaukee Bucks 3. Brooklyn Nets These are the clear top teams in the East come playoff time for me, but both have serious question marks. The Bucks have done little to answer my questions from the last two years. I love Jrue Holiday, and he’s better at a lot of the things Eric Bledsoe was already good at. Milwaukee’s defense will be even better. But this team still lacks a bit of creativity and playmaking and a lot of shooting, and I still only see three guys I want in my crunch time lineup. The Bucks will have all the metrics again like usual, but has Bud learned anything, and is Giannis enough? I might have to see it to believe it at this point. The Nets are my East favorite, and by a decent margin if they stay healthy. Kevin Durant is the first pick for players you want in the East playoffs if he’s 90% of what he was before, and Kyrie Irving is probably still third. And personally, I hope they don’t trade for Harden. I love the versatility and depth with LeVert, Dinwiddie, Harris, Allen, and the others there. The offense will be great with KD, Kyrie, and Nash’s coaching crew. The question is how much defense this team can play, and we may not really find out until the playoffs. This team reeks of the 2019–20 LA Clippers, where we have to sort of imagine them all year until the playoffs. Let’s hope we get an answer eventually. TIER I — THE FAVORITES 2.
Los Angeles Lakers 1. Los Angeles Clippers It’s just LA and LA at the top, and yes, I have them in that order. Somehow, everyone has decided that a relatively easy Lakers title run retroactively made them far better than the rest of the league last year. Those same pundits think Dennis Schroder and Montrezl Harrell are great improvements that won the offseason for the Lakers, even though neither may fit a crunch time lineup. Look, LeBron and the Brow are brilliant, and as long as LA has those two, they’re a favorite. But I never liked the rest of the roster last year and still don’t care for it. Marc Gasol is a great addition if they need to play Jokic or Embiid but otherwise shouldn’t play big minutes. Harrell shouldn’t be out there when things matter. Schroder is a downgrade from the version of Rondo we got in the playoffs, and by a wide margin. Wes Matthews is more washed than Danny Green. The Lakers are obvious title contenders, but there are exactly two reasons why. The rest of the roster remains full of question marks. The Clippers are the much more rounded team on paper, but paper doesn’t win games, as they learned the hard way. I’m not ready to count them out after a bubble where nearly everything possible went wrong. Kawhi had terrible games at the wrong time and didn’t look healthy. PG clearly wasn’t healthy, physically or otherwise. Patrick Beverley was hurt and limited. Harrell never found himself in the bubble. Lou Will was a distraction. The chemistry was all wrong. And still they came within one win of the WCF, despite all that. That’s a pretty strong worst case scenario, if you ask me. This year’s Clippers are better. PG should be far healthier and isn’t coming off offseason shoulder surgery. Ibaka over Harrell is a big improvement in crunch time. Kennard is a shooting weapon the team didn’t have.
Morris is there all season now and gives the team a set of versatile wings no one else can match and a deep set of defensive options to throw at LeBron, Giannis, and Durant in the biggest games. And there’s still another move to be made when they trade Lou Williams for a point guard and grab another center on the buyout market. Fool me once, shame on you. Fool me twice, and you’re the Clippers. Clippers-Nets Finals. Kawhi and KD. Let’s do this… ■
https://medium.com/sportsraid/nba-predictions-2021-season-preview-2020-basketball-tiers-la-lakers-clippers-brooklyn-nets-durant-giannis-aa5fe80da856
['Brandon Anderson']
2020-12-22 21:35:50.195000+00:00
['Basketball', 'Sports', 'Future', 'NBA', 'Culture']
#NaNoWriMo: Mr. Black Cat and Harry write a Magic Book together
A big black cat was walking through the wall. He wore a black suit, black shirt, black tie and black shoes. He also had a black ribbon tie on his head. The black cat walked over to the table in the kitchen. “Good morning Harry!” said the black cat. “Good morning Mr. Black Cat,” said Harry in a happy voice. “Did you sleep well?” said Mr. Black Cat. “Yes,” replied Harry, “I had a very good sleep again. Thank you very much.” “Your welcome Harry,” said Mr. Black Cat. “By the way, I have been thinking about something. Why don’t we write your book?” “Yes,” agreed Harry. The cat and Harry went over to the table. Mr. Black Cat sat on the table. Harry sat in the chair. “This is going to be so much fun. Especially when we fill it with magic spells,” said Harry. “Yes,” agreed Mr. Black Cat. “What about we name it, The Harry Potter Magic Spell Book.” “I like it,” said Harry. Hurry up and write it Mr. Black Cat. After a while they were done. “Wow! We did a very good job,” said Mr. Black Cat. “Thanks Mr. Black Cat,” said Harry. “Thank you Harry, I had the most fun in a long time. We will meet each other tomorrow again,” said Mr. Black Cat. “Good bye Mr. Black Cat,” said Harry in a happy voice. “Goodbye Harry,” said Mr. Black Cat. The black cat walked back through the wall. Harry went over to his bed and jumped on. Then he took a nap.
https://medium.com/merzazine/nanowrimo-mr-black-cat-and-harry-write-a-magic-book-together-b7ee99a80ee9
['Vlad Alex', 'Merzmensch']
2020-11-08 16:41:40.728000+00:00
['Merznlp', 'Artificial Intelligence', 'Gpt 3', 'NaNoWriMo', 'Art']
Architecture, Design and AR
Enjoy this excerpt from my new book! From Convergence, How the World Will Be Painted With Data By Sam Steinberger When multidisciplinary design and consulting firm Arup was contracted to work on a new hospital, the team quickly got to work. With over 14,000 professionals around the globe who specialize in a diverse array of roles, from architecture to security engineering, the firm knew it had to meet tight deadlines and work with clients who expected the best while demanding top efficiency. Because constructing a building is essentially a one-shot process, firms like Arup, which has been instrumental in the completion of famous landmarks like The Gherkin in London, the Sydney Opera House, and New York City subway system’s Fulton Center, undertake iterative steps during the design phase. Those iterations demand a diverse array of inputs, from sound recordings of the site’s ambient noises to 360° street-level videos, not to mention building plans from which to render 3D modeling of the proposed construction. This particular project saw many of the site elements being put together by specialists in the firm’s New York City office. Among those working on the project was Anthony Cortez, senior designer, and lead visualization specialist. After receiving the job’s specifications, Cortez, along with his colleagues, spent the next 48 hours getting the project ready for a client presentation. The presentation had to include video and sound for a VR experience that would show clients what the old site looked and sounded like, as well as how the proposal would change the site’s environment. The new project included changes to traffic patterns and it involved the function of mechanical and security systems, like ventilation and CCTV. The only hitch? The project was across the country, in Santa Monica, California. Using cloud-based collaboration and AR, including augmenting a site’s actual acoustics with modeled post-completion audio, Cortez and team met their deadline. 
The clients were ecstatic, he said, not only with the team’s efficiency but with the intuitive nature of the VR and AR showcasing of the final product. “Clients get excited about seeing and interacting with the design we’re working on and they want more,” he said. That’s just one example of the way AR naturally integrates itself into the architectural design process. Sufficiently powerful hardware has always been a challenge for both designers and clients. The computing has to handle “heavy” models without being tethered. “The field of view has to be right,” said Ignacio Rodriguez, CEO and principal at IR Architects, which designs luxury real estate in Southern California. Leading devices like the HoloLens and Magic Leap have a narrow field of view. “VR is ahead of AR,” he said, “in its ease of use and general adoption.” When it comes to moving inside a virtual building and altering its design, AR lags VR, big time. For its part, Arup has used a number of AR systems over the years, including Google’s Project Tango headset, iPads, and smartphones. It’s now building uses for its HoloLens system and is interested in exploring how a Magic Leap headset might fit with future projects. Today’s usability of VR, which has found enthusiastic adoption among clients, is about five years ahead of AR. “We’re just on the 20-yard line working our way down the field. We want to get to that end zone as quickly as possible. But, we really want a platform that allows us to seamlessly transition between AR and VR,” Rodriguez said. Systems aside, Arup has not wavered in its commitment to AR, noted Travis Rothbloom, a senior security engineer and design software programmer. From replacing mockups to modeling the flow of pedestrians, AR has already proven it can provide significant design benefits for the firm.
During the design and drafting process for a project for New York City’s commuter rail, Metro-North, the firm used the HoloLens to show the future model of new construction in conjunction with a homebuilt pedestrian simulation model. As simulated pedestrians with intelligent movements modeled after commuter behaviors flooded into the scene, the design team and client were able to see how people moved around in space and where they looked. The latter input can later be used to optimize signage. Even though contracts were made, in some cases, years before the technology was available, clients tend to appreciate the experience of convergently viewing the design and the present state of the site, said Rothbloom. “Getting past the ‘wow’ factor is key,” he added. “While it can be challenging to convince clients to pay extra for AR, once clients actually see how this helps with the iterative process, they’re more inclined to use it.” AR will profoundly influence the art and engineering of architecture, Rodriguez noted. In a more architectural-friendly analogy than his football comparison, Rodriguez described VR as AutoCAD, a tool with which many inside and outside of architecture are familiar. Another major advantage of AR is using Building Information Modeling (BIM) in conjunction with maintenance and building assets. Because the BIM is digital and in 3D, building owners and managers can essentially see through walls before and after building construction. Coupled with sensors, information from a BIM could enable the building manager of the future to know when to replace building assets, where to find them in building ceilings and walls, and even order necessary parts before beginning a job. Managers could troubleshoot energy use or look at how the light in the building changed as the sun moved through the sky. Rothbloom demoed an example.
During the renovation of a new Arup office, designers made a 3D scan of a room before it was finished, so the room’s ductwork was clearly visible. By adding and removing layers in the BIM, much like adding and removing layers in Adobe Photoshop, Rothbloom was able to give the user a view of the building before the renovation, while it was in construction, and after completion. Any user of the HoloLens could see where the ductwork was, and Rothbloom could even model the flow of a crowd of workers moving through the office hallways. The experience is a little like time travel. Using an AR system, a viewer can see how a building used to look, how it looked during construction, as the nerves and muscles of the building were added in the form of rebar, fiber optics and concrete, and how the building looks today. Engineers can even use AR to verify the building was built according to plans, and Rothbloom envisioned construction workers of the future with headsets, so they can physically see the building they’re making, better understanding how their piece of the puzzle fits into the overall construction. Just as a building’s appearance is a one-shot experience, the acoustics within and outside of a building are carefully crafted and reworked. Designers can hear what a site might sound like after it is completed according to the modeling they’ve done. They can explore the addition of certain types of soundproofing or materials. They can also hear what an individual in a crowd might hear, if the space is meant to be shared, like a subway stop or a performing arts building. Using an advanced sound lab, Dr. Terence Caulkins, an acoustics researcher and sound designer, provided the auditory equivalent of Rothbloom’s visual AR. The scene was outside of the Cooper Hewitt Smithsonian Design Museum along Manhattan’s 5th Avenue.
First, Caulkins demonstrated what the scene sounds like now: internal combustion-powered traffic moves along the road while pedestrians chat and walk along the sidewalk next to Central Park. Then he showed various modeled scenarios: what if all cars were electric? How noisy are drones? And what if large motor vehicles were banned from the stretch of 5th Avenue altogether? Combining the visuals of AR with binaural sound design, built from BIM, recorded sounds and modeled sounds, can create a visually and auditorily augmented experience. It’s an experience that allows designers, engineers, clients, and workers to better construct our future. Designers, architects, and engineers are interested in a process, an experience, that AR is uniquely situated to handle. VR, on the other hand, is better suited to an end product: a show for a client or a way to help stakeholders visually understand a new space. AR is a tool. VR is a medium for display. If VR is the future of the clay model or plywood mockup, AR is the future of the pen, paper, and rulers that first turned dreams into design.
https://charliefink.medium.com/architecture-design-and-ar-30ebbe8cff4e
['Charlie Fink']
2019-03-10 16:21:00.719000+00:00
['Technology', 'Books', 'Charlie Fink', 'Virtual Reality', 'Augmented Reality']
Will Your Soccer Club Ever Meet Again? A Guide to Outdoor Sports This Summer.
The risk factors around tennis, swimming, basketball, running races, and more Photo: Jeffrey F Lin/Unsplash Social distancing and lockdowns have shattered racing plans and competition schedules for many athletes — pros and weekend warriors alike. The Boston Marathon is canceled. The Tour de France has been postponed, and pro and amateur sporting events from soccer tournaments to triathlons have been put on hold. But this weekend, one event is bucking the cancellation trend: a 5K Spartan obstacle course race in Jacksonville, Florida. “We have procedures in place to make a Spartan event safer than going grocery shopping, going to Starbucks, or going in an elevator,” Spartan race founder Joe De Sena told Obstacle Racing Media. “We are expecting 4,000 Spartans per day for this event.” Participants are encouraged to bring friends and family to the race. Social distancing will be required, and racers will start in waves, with 24 racers starting every five minutes. Water obstacles will be removed, and participants and volunteers will be screened for Covid-19 symptoms and subjected to a touch-free temperature check. The prospect of assembling thousands of people for a recreational sporting event worries public health experts. “Right now is not the time for these large gatherings,” says Syra Madad, special pathogens specialist at NYC Health + Hospitals. Communities may be opening up again, she says, but it’s important to understand that the risks of Covid-19 remain. “Nothing has changed. The virus hasn’t changed. Our arsenal of treatments haven’t changed.
Our biggest weapon right now is our behavior.” According to the Covid-19 Event Risk Assessment Planning Tool created at Georgia Tech, there’s a greater than 96% chance that there will be someone with Covid-19 at an event with 4,000 people. The Spartan race is taking place just after Florida set a new single-day record for coronavirus cases since reopening — 1,495 new cases reported on June 5. (The previous record was 1,416 new cases, on April 17.) Even with precautions like hand sanitizing, staggered starts, and masks for race officials, the size of an event like this is just too risky, says Zachary Binney, PhD, an epidemiologist at Emory University. “Oh, you’re doing hand sanitizing and social distancing? Whoopee! You’re still unnecessarily getting several thousand people together,” he says. What about smaller events? It might be possible to have smaller events if certain conditions are met, says Gretchen Snoeyenbos Newman, MD, an infectious disease physician at the University of Washington. The acceptable size will depend on local rules and what’s happening with case numbers in the area. If the event is held in a community with very few cases and falling numbers, it might be possible to have a small outdoor sporting event if organizers check participants for symptoms and fever and are aggressively ensuring that people remain six feet apart and aren’t touching shared objects or surfaces. The risk posed by any particular activity, sporting events included, comes down to a few important factors: proximity to other people, the intensity of the exposure (breathing hard and talking loudly increases the risk of spreading respiratory droplets), and time.
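The Georgia Tech tool's headline number follows from a simple probability argument: if each attendee independently has some chance of being infectious, the chance that at least one of 4,000 attendees is infectious grows very quickly. A minimal sketch (the prevalence value below is a hypothetical illustration, not the tool's actual input):

```python
def p_at_least_one(prevalence: float, attendees: int) -> float:
    """Probability that at least one attendee is infectious,
    treating each attendee as an independent draw."""
    return 1 - (1 - prevalence) ** attendees

# With a hypothetical prevalence of 1 infectious person per 1,000,
# a 4,000-person event almost certainly includes at least one case.
print(round(p_at_least_one(0.001, 4000), 3))  # 0.982
```

Even a prevalence well under one percent pushes the event-level risk past 96% at this crowd size, which is the point the tool makes.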
Here’s what that means for you: Practice good hand hygiene, wear a mask when you are standing around with other people, keep six feet between you and everyone else, and minimize the duration of any close contact. Among the biggest and most challenging risks posed by athletic events is not the physical activity, but the milling around that can happen before and after an event, Snoeyenbos Newman says. It can be hard to remain vigilant about social distancing when you’ve got post-event fatigue and are wandering around the vendor tents, and it’s easy to forget to not share a water bottle with a friend or give someone a high-five. A sport-by-sport breakdown If you want to engage in outdoor sports with other people (other than an organized event where you’ll come into contact with strangers), one good strategy to lower the risk is to find a partner you trust who has been socially distancing and limiting interactions with other people, Madad says. If the risk in your area is low, you might also select a small circle of trusted friends to do team sports with and agree on terms about how you’ll reduce your exposure to Covid-19 when you’re not together, Binney says. Tennis The rules about tennis vary across the country right now, but outdoor courts are opening up in many communities. If you’re sweating and touching your mucous membranes and then touching the ball, you could potentially spread the infection on the tennis ball, Madad says. In New York’s Nassau County, county executive Laura Curran announced that courts there would be opening up, but players are asked to bring their own balls so they can avoid touching their opponent’s balls. 
“You can kick their balls, but you can’t touch them,” Curran said, adding, “I’m going to blush.” The sport’s governing body, the United States Tennis Association, has issued some recommendations that include staying six feet apart, washing hands before and after playing, and avoiding touching your face after handling a ball, racquet, or other equipment. The USTA guidelines don’t suggest using separate balls, but they do recommend using a racquet or foot to pick up balls or return them to your opponent. (You’ll have to use hands to serve, however.) Basketball Even if it’s outdoors, basketball is high risk because you’re throwing a ball back and forth and breathing hard in close proximity to one another, Binney says. It’s almost impossible to play a normal game without getting close enough to breathe on other players. “I wouldn’t recommend recreational league basketball right now,” Binney says. If you really must play, do it one-on-one with a member of your household, or if you can’t do that, you could consider blending with another household that would give you enough players. But really, you need to ask yourself if this is really necessary, he says. Soccer Soccer is lower risk than basketball because you’re not touching the ball and there’s less tendency to get in one another’s faces, Madad says. Still, it’s not quite low risk. “You’re spending 90 minutes in close contact with other players,” Binney says. Not all of that time is spent within the breathing range of others, and if you can reduce the amount of time you’re in close range, that helps, he says. But it’s probably best to kick the ball around with a trusted household member or friend, rather than compete in a game or tournament. Rock climbing Indoor gyms are a pretty high risk right now, but outdoor climbing can be relatively low risk if you take some precautions. 
When you’re belaying someone on a route, you’re naturally distancing, so as long as you stay sufficiently apart the rest of the time (no lounging about at the crag), your risk is pretty low, Snoeyenbos Newman says. The Access Fund, a climbing advocacy group, has issued Covid-19 guidelines that recommend staying close to home to be respectful of vulnerable gateway communities, washing your hands before and after, and going out with only a single partner to reduce your social contact. They also urge climbers to “dial it back a notch” to reduce the risk of accidents and injury that might require the help of search and rescue teams. Running Running by yourself outside is low risk. “People should absolutely be going out and running,” Binney says. Running in a small group is pretty low risk, too, if you maintain the social distance between one another and don’t share drinks or equipment, he says. Be careful to avoid hanging around too close to one another before or after the run. Running with the same group of people is safer than running with different people all the time. The idea here is to limit your social contact to a small group as you start leaving the house again. Bicycling Cycling is one of the safest things you can do, because you’re outside and there’s lots of airflow, Snoeyenbos Newman says. It’s okay to ride with others if you take some precautions: Keep some distance between one another, and stay out of the other rider’s slipstream to avoid coming into contact with their heavy breaths and snot or other respiratory emissions. Don’t pass water bottles or food back and forth or share equipment like pumps. Swimming There’s very little risk of getting Covid-19 from water. Whether you’re at the beach or doing laps in a pool or competing at a swim meet, the danger with swimming comes from interacting with other people around the water, Snoeyenbos Newman says. 
“Pools tend to get pretty tight physically,” she says, and showers and locker rooms are confined spaces, and therefore risky. Swimming laps in a way that avoids contact with other swimmers is low risk. Holding a swim meet where competitors are milling around before or after their event is much more risky, she says. You probably want to avoid the kiddie pool, Madad says. Youngsters are notorious for pooping in the pool, and the SARS-CoV-2 virus has been found in fecal matter. It’s not clear how much it spreads this way, but why take chances?
https://elemental.medium.com/will-your-soccer-club-ever-meet-again-a-guide-to-outdoor-sports-this-summer-49b2c2bdf477
['Christie Aschwanden']
2020-06-15 14:59:50.844000+00:00
['Test Gym', 'Sports', 'Health', 'Tennis', 'Basketball']
A Simple Way to Explore the Netflix Content Using Tableau
photo from https://www.pcmag.com/reviews/netflix Are you struggling to perform EDA with R and Python? Here is an easy way to do exploratory data analysis using Tableau. Let’s dive in to learn the process and look at the analysis! Introduction Netflix is the world’s leading internet entertainment service, with 158 million paid memberships in over 190 countries enjoying TV series, documentaries and feature films across a wide variety of genres and languages. I was curious to analyze the content released on the Netflix platform, which led me to create these simple, interactive and exciting visualizations with Tableau. Data Source The dataset used in this EDA project is taken from Kaggle. Click here to download the data. Data Description The data has 12 columns, which include show id, title, type, director, cast, country, release year, description, rating and genre of the movies/TV shows. The type column indicates whether each record is a movie or TV show. The country column lists all the countries in which a movie or show was released. Each movie or show belongs to one or more genres, which are listed in the genre column separated by commas. Data Transformation In order to visualize top genres or country-wise releases, the existing data has to be transformed. No coding experience is required to do this. A simple understanding of the Alteryx tool is enough to create a pipeline that transforms the data. Let’s break the transformation down into two steps. Step 1: Split the text in the country and genre columns into multiple columns. Step 2: Transpose the columns to get one record for each genre or country. See below the screenshots of the workflow and data samples before and after transformation. Data Transformation workflow in Alteryx Data before transformation Transformed Data EXPLORATORY DATA ANALYSIS Movies vs TV shows The Netflix platform even has content that was originally released in 1965. For a better visual, I have only displayed releases from 2000 onward.
We observe that initially the percentage of movies is higher than that of TV shows, but from 2019 Netflix has started to focus more on TV shows. 2. Country-Wise Releases The United States has the most total releases (both movies and TV shows), followed by India and the United Kingdom. The interactive dashboard is available here, with dynamic parameters to see the top N countries by number of releases. Tree map showing the top 10 countries by number of releases 3. Top Genres in Movies vs TV Shows Movies and TV shows are predominantly focused on international titles, dramas and comedies. You can see below the top genres in each category. Click here for movies and here for TV shows to interact with the chart and see the top N genres in each year or range of years.
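For readers who do want a code-based route, the same two-step split-and-transpose transformation the article performs in Alteryx can be sketched in pandas. This is an illustrative sketch, not the author's workflow; the `listed_in` column name follows the Kaggle Netflix dataset, and the sample rows are made up:

```python
import pandas as pd

# A tiny stand-in for the Kaggle Netflix dataset: one row per title,
# with genres packed into a single comma-separated "listed_in" column.
df = pd.DataFrame({
    "title": ["Movie A", "Show B"],
    "listed_in": ["Dramas, Comedies", "International TV Shows, Dramas"],
})

# Step 1: split the comma-separated text into a list of genres.
# Step 2: "transpose" (unpivot) so each genre gets its own record.
long_form = (
    df.assign(genre=df["listed_in"].str.split(", "))
      .explode("genre")
      .drop(columns="listed_in")
)
print(long_form)  # one (title, genre) row per genre membership
```

Once the data is in this long form, counting titles per genre or per country is a simple group-by, which is exactly what the Tableau charts visualize.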
https://medium.com/swlh/a-simple-way-to-explore-the-netflix-content-using-tableau-5b06e17e443
['Sruthi Narapareddy']
2020-11-01 12:38:12.267000+00:00
['Netflix', 'Tableau', 'Data Visualization', 'Exploratory Data Analysis', 'Alteryx']
Why is skipping a meal worse than being morbidly obese?
What happens when you eat Contrary to popular belief, eating is not like filling up a gas tank. For a long time, we knew no better than to believe that you load your body with energy, and then you use that energy whenever you need it. Science tells a whole different story. Eating is not merely about taking in energy — it’s much more about sending and receiving signals. When you eat, depending on what type of food you eat, your body receives a signal to store energy. It enters a state in which it stores energy (and actively avoids burning energy). This state peaks and then slowly wears off over a period of a couple of hours. Nearly every type of food sends this signal, but generally speaking sweet and starchy foods send the loudest signals. I’m talking about things like pasta, donuts, candy, sugary soft drinks, rice, and potatoes, to name a few. The stronger the signal, the longer and more deeply the body will enter the energy-storing state. As you might have guessed, eating a little bit of something sends a weaker signal than eating a lot of the same thing. So, in terms of signaling, two donuts is worse than one — not just because of the caloric difference, but mainly because of how deeply your body will go into the energy-storing state, and how long it will stay there. The problem with eating often Eating often results in your body entering and trying to leave the energy-storing state throughout the day. But right when it’s about to leave that state, new food is introduced and the body is pulled back into the energy-storing state. This is today’s dietary norm. We eat from the moment we wake until shortly before we go back to bed, only to sleep and start the cycle all over again. During all that time, our bodies hardly get a chance to leave the energy-storing state and burn some body fat instead.
The underlying metabolic processes are much more complex than this, but the point is clear: our bodies need down-time from food — more than just the time we spend asleep — in order to be able to leave the energy-storing state and burn some body fat. What happens when you don’t eat When you don’t eat, the signals that make your body enter the energy-storing state fade away and eventually die out. This causes your body to slowly shift from burning energy directly from food to burning stored energy: body fat. In addition, in the absence of the energy-storing signal, your body eventually enters a state that encourages fixing cells and cleaning up waste throughout your body. This is why fasting has often been associated with cleansing. If you do not eat for some time, your body cranks up maintenance significantly. Energy abundance Another, much more noticeable thing that happens when you don’t eat is that you become incredibly alert. This is one of those things that is hard to believe for most people. The general belief is that in order to feel energized, you need to actively fuel your body. However, if this were the case, our species would not have stood a chance throughout history. It’s much more logical for our bodies (and brains!) to become more active and focused in the absence of food. Higher alertness significantly increases your chances of finding something edible, or catching prey, whereas lethargy and drowsiness would do quite the opposite. From an evolutionary standpoint, it makes a lot more sense to feel more present and alert in a fasted state than to become tired and lethargic. Where does the energy to power all this focus and alertness come from? Body fat. Any healthy adult is carrying around at least 90,000 calories worth of body fat. An overweight person might even be carrying around 250,000 calories of fat, or more. That’s at least 45 days of pure energy in any healthy adult, right there.
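The closing arithmetic is easy to check. Assuming a round-number expenditure of about 2,000 kcal per day (an illustrative figure; actual daily needs vary by person), the article's fat-store numbers translate to days of energy like this:

```python
DAILY_KCAL = 2000  # assumed round-number daily energy expenditure

def days_of_energy(fat_kcal: float) -> float:
    """How many days the given fat stores could theoretically cover."""
    return fat_kcal / DAILY_KCAL

print(days_of_energy(90_000))   # 45.0, the "at least 45 days" figure
print(days_of_energy(250_000))  # 125.0 for the overweight example
```

This is of course a back-of-the-envelope conversion, not a claim that anyone should fast for weeks.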
https://medium.com/edible-future/why-is-skipping-a-meal-worse-than-begin-morbidly-obese-1dcfc93883ca
['Reinoud Schuijers']
2019-08-11 17:42:44.515000+00:00
['Health', 'Culture', 'Lifestyle', 'Food', 'Self Improvement']
Remembering Wii Fit
The pandemic has created a dearth of workout options. Perhaps the Wii Balance Board would be handy right now? During the coronavirus pandemic, finding creative ways to get some exercise has been one of the most dynamic struggles. Gyms open and then they close back down. The weather allowed for outdoor activity in the summer, then it put us back indoors for the winter. Yet where there’s a will to stay fit, there’s a way. Still, home equipment and workout space come in short supply for many. As a gamer, I instantly thought about how Nintendo’s now (in)famous Wii Fit would have been selling like hotcakes if released at the peak of the pandemic. As it is, Ring Fit Adventure, the company’s current workout title for the Switch, was extremely hard to come by back in the spring with relatively little marketing compared to Wii Fit. The game embodied Nintendo’s innovative and casual game approach as well as any other title on the shelf, and the Wii Balance Board looked like something out of a sci-fi Richard Simmons aerobics clip. Families had enormous fun with it, and the game sold millions of copies. Still, did it actually help you get any exercise? And did the negative aspects of the experience outweigh the positive ones? Time to dive deep and find out. The super wonky BMI scale The most glaringly obvious screw-up Nintendo made in developing this game was the decision to include a body mass index measurement for each player. BMI uses a ratio of weight to height to determine whether someone is underweight, overweight, or obese. Wii Fit. Source: UK Resistance. This scale has always been considered just a baseline that should be taken with many grains of salt, as basketball legend LeBron James has a BMI of nearly 27, a number which would push him to the “overweight” category. Anyway, getting told that you are a tub of lard by Wii Fit was a rite of passage in the gaming community in the late 2000s.
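For reference, the formula behind that number is simple: weight in kilograms divided by height in meters squared. A quick sketch, using approximate publicly listed measurements for LeBron James (treat the exact figures as assumptions for illustration):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Approximate listed measurements for LeBron James: 2.06 m, 113 kg.
print(round(bmi(113, 2.06), 1))  # 26.6, which the BMI chart labels "overweight"
```

The calculation has no notion of muscle mass, which is exactly why applying it to a pro athlete, or to a child in the living room, produces absurd labels.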
Some children would shrug it off and move on with their day. For others, it was devastating. Normal-sized kids were being told to drop weight that wasn’t necessary to lose, and the fun that was taking place before the scale took hold of the living room was suddenly absent. Nintendo was forced by public criticism to apologize for the controversy around their strategy to weigh players, but they never put a warning on the game about it like many parents wanted them to do. All in all, this is a huge knock against the experience. If Miyamoto and company really wanted to create a realm in which fitness could be accurately simulated in the living room, they needed to come up with a better way to classify your healthy weight or abort that part of the experience completely. Age diversity in players As people get older, bones get brittle and joints get sorer than they should be after just getting up off the couch. Fitness for this age group is always a task that requires the perfect balance of exertion and restraint. One study published in JAMA Facial Plastic Surgery says that facial fractures in adults older than 55 increased by more than 45 percent from 2011 to 2015. It’s clear that older people are encouraged to exercise, but it often comes with a price. Wii Fit was perfect for grandparents and their little tykes to get a little healthy activity in without hurting themselves or being put into a dangerous situation in terms of exhaustion. The scene described above fits right in with what Nintendo’s marketing strategy has been for eons: appeal to the largest audience possible. Researcher Ayesha Afridi, among others, found after an experiment with 16 adults averaging 67 years of age that Wii Fit Plus (the original’s sequel) improved dynamic balance and mobility in this group after six weeks of using the game.
While it may have been fair game to mock the chances of improved fitness for an adult in their 20s using this experience, it would be foolish to do the same for older groups. These low-risk activities should have been aimed even more squarely at this demographic. As it is, Nintendo once again enticed people who normally wouldn't know what a video game does, or is capable of doing, to use its products. Age diversity was a huge boon for Wii Fit. Wii Fit had a sequel on the Wii U console, aptly titled "Wii Fit U". Source: Nintendo. Setting an example for other fitness games This is the game's lasting legacy. While it is now outdated over a decade later, Wii Fit set a standard in the industry to strive for fitness and healthy content. As I already mentioned, Nintendo improved upon its own experience with Wii Fit Plus and Ring Fit Adventure. Other companies followed suit, with Ubisoft's Xbox One exclusive Shape Up, which used the Kinect, and its multi-console hit series Just Dance. The latter's use of rhythm and music to get you moving has been a much better strategy for long-term success than Wii Fit's rigid dedication to replicating a traditional yoga or gym workout. The fitness genre is an evolving industry. Developers are still trying to nail down just the right formula that combines broad appeal with actual fitness results. It also calls into question the definition of a video game. Is a simulated workout actually a game? And what needs to be part of the experience to classify it as such? Until that fine balance is reached, hardcore gamers will view fitness games as fads and gym nuts won't take them seriously as real alternatives to the health club. Wii Fit directly encouraged players to take care of their health. It demonstrated a unique avenue for video games to pursue. Source: Nintendo. Despite these questions, it's undeniable that Wii Fit is the main reason there is any desire for working out with a game showing you the way.
Technology has evolved, and Nintendo was a pioneer in these innovative ideas for expanding what a video game can provide. I’d say this definitely makes the game an overall success despite its technical shortcomings. Guinea pig concepts are always going to be critiqued, but the legacy of Wii Fit should be positive. Do you remember playing Wii Fit back in the day? If so, tell me some of your best or worst experiences with the title and whether it inspired you to try any other exercise games on your home console. Thanks for reading.
https://medium.com/super-jump/remembering-wii-fit-2d4d96554d9f
['Shawn Laib']
2020-12-19 08:00:58.537000+00:00
['Gaming', 'Health', 'Features', 'Fitness', 'Videogames']
Interstellar’s Vision of Cosmic Hope
Love, science, courage, the universe, the sublime—these are key concepts that convey Interstellar's vision of cosmic hope for the human species. It's a vision of hope to complement the paradox of our great discovery and intellectual achievement—we have discovered a vast and majestic universe that also renders us insignificant and possibly meaningless. Like its predecessor 2001: A Space Odyssey, Interstellar takes on the great philosophical challenges we face as a species—the specter of cosmic nihilism amidst the sublime immensity of the cosmos. Released in 2014, Christopher Nolan's Interstellar is the greatest philosophical space film of the 21st century. Like 2001, Interstellar is not dumbed down to placate the masses, the faithful, or the studio execs worried about the bottom line. The film is filled with big ideas and big challenges for science and the human species. The following passages are from my forthcoming book, Specter of the Monolith (April 2017). Star-forming regions in a vast nebula, Hubble Space Telescope, image courtesy of NASA, 2009. Interstellar quote added by Barry Vacker. Looking at Stars and Dirt In an early scene in Interstellar, Cooper laments: "We used to look up in the sky and wonder at our place in the stars. Now we just look down and worry about our place in the dirt." This world-weary observation follows a long tradition of humanity looking up at the starry skies in awe and wonder but also feeling fear and terror, triggering one to look away from the stars, down into the dirt of Earth. Thus another key existential theme of Interstellar centers on the contrast between stars and dirt as guideposts and endpoints for human destiny. In the film, we experience the awe of wormholes and black holes (central to human survival) yet face the terror of our possible annihilation in an ecological apocalypse that renders us extinct beneath piles of blowing dirt.
(…) Space Spores Are the astronauts in Interstellar the equivalent of spores launched into space to avert extinction? At the height of the Cold War, the former Soviet Union and the United States launched rockets into space at the same time the human species faced the atomic annihilation of its civilization. Was the Apollo moon program a subconscious survival strategy for a species facing oblivion? In an analysis of Apollo 11, anthropologist Loren Eiseley suggested this possibility: “It is a remarkable fact that much of what man has achieved through the use of his intellect, nature had invented before him. Pilobolus, [a] fungus which prepares, sights, and fires its spore capsule, constitutes a curious anticipation of human rocketry. The fungus is one that grows upon the dung of cattle. To fulfill its life cycle, its spores must be driven up and outward to land upon vegetation several feet away, where they may be eaten by grazing cattle or horses. “When a pressure of several atmospheres has been built up chemically within the cell underlying the spore container, the cell explodes, blasting the capsule several feet into the air. Since the firing takes place in the morning hours, the stalks point to the sun at an angle sure to carry the tiny “rocket” several feet away as well as up. . . . “The tiny black capsule that bears the living spores through space is strangely reminiscent, in miniature, of man’s latest adventure. Man, too, is a spore bearer. The labor of millions and the consumption of vast stores of energy are necessary to hurl just a few individuals, perhaps eventually people of both sexes, on the road toward another planet. Similarly, for every spore city that arises in the fungus world, only a few survivors find their way into the future. “It is useless to talk of transporting the excess population of our planet elsewhere, even if a world of sparkling water and green trees were available. 
In nature it is a law that the spore cities die, but the spores fly on to find their destiny. Perhaps this will prove to be the rule of the newborn planet virus. Somehow in the mysterium behind genetics, the tiny pigmented eye and the rocket capsule were evolved together." In our quest for space exploration, are we war-mongering, planet-destroying humans little more than space spores simultaneously avoiding and leaving behind apocalypses? Was Apollo the most successful spore launch in human history, paving the way for future survival efforts in the event of an Earthly apocalypse? These are, in fact, the very questions Interstellar attempts to answer. (…) Destiny in the Voids Like the black monolith in 2001, the black hole in Interstellar symbolizes the philosophical void into which humans must hurl themselves in the quest for cosmic meaning and destiny. Upon entering Gargantua, Cooper exclaims, "Heading toward blackness. It's all black. It's all blackness!" Rather than turn away from the void in fear, Cooper hurls himself directly into it, not unlike Dave following the monolith toward the Star-Gate. In fact, the final scene of 2001 occurs after we pass through the blackness of the monolith and into our destiny as a space-faring species. Two of the key existential observations in Interstellar are offered by Romilly and Dr. Brand. Pondering the vastness outside the Endurance, Romilly remarks to Cooper, "This gets to me, Cooper. This [gesturing toward the exterior skin of the Endurance]. Millimeters of aluminum. That's it. And nothing out there for millions of miles that won't kill us in seconds." The cosmic nothingness may "get to" Romilly, but he and the Endurance crew have the courage to venture into the voids. Of course, human survival is at stake, but it still takes massive amounts of courage to hurl oneself into the cosmic and existential voids. Is Nature Evil? After discussing the loneliness of the astronauts' journeys into the wormhole, Dr.
Brand and Cooper have the following exchange: BRAND. Scientists, explorers, that's what I love. You know, out there, we face great odds. Death, but not evil. COOPER. You don't think nature can be evil. BRAND. No. Formidable, frightening, but not evil. Dr. Brand's comments counter just about every science-fiction space film since 2001, wherein the future in space is scary, filled with monsters (Alien), evil empires (Star Wars), and mass destruction (Gravity). In Interstellar, there are no monsters or evil empires, only humans struggling to survive in a vast cosmos via science, technology, and the courage to take risks. According to Jonathan Nolan (coauthor of Interstellar's screenplay), cosmic nihilism is central to the meaning of the film: "The antagonist is the void of the vacuum that we live in." That's why Interstellar is not a mere survival tale, for it also implies a quest for meaning amid the cosmos. Indeed, the universe is formidable and frightening and perhaps renders us meaningless, but it is also knowable and understandable via reason, art, science, technology, and — yes — even "love." Love and Evolution As personified by Dr. Mann (Matt Damon), evolution is often portrayed and understood in simplistic notions of "survival of the fittest," meaning that humans will kill to survive into the future. While he is trying to kill Cooper on the ice planet, Dr. Mann (surely symbolizing "mankind") explains how the fear of death in the evolutionary instinct drives individual survival and thus the perpetuation of the human species. This is fear-driven evolution. It works for individuals and species collectively: Adapt and evolve, or die and become extinct. Of course, this is true. The apes in 2001 show this, as does the history of human warfare and extinction events on Earth.
But 2001 also shows that evolution operates on multiple levels for our species, including our consciousness and the human cooperation necessary to build a technological civilization — the monolith inspires the bone technology that leads to space technology and human exploration of the cosmos. In addition to fear, love also drives human evolution. After all, the apes were soon caressing the monolith, as if in love with the mysterious sleek object and what it might represent. Dr. Brand is correct when she says, "Love isn't something we invented. It's observable, powerful. It has to mean something. . . . Love is the one thing we're capable of perceiving that transcends dimensions of time and space. Maybe we should trust that even if we can't understand it yet." I think we trust love in our evolutionary process more than we realize, for it is our love of ideas and things of value — like art, beauty, science, discovery, wonder, nature, architecture, hope for the future, and so on — that propels us forward, individually and as a species. It is love — love of life, love of existence, love of people special to us — that also fuels the evolutionary instinct and the quest to overcome the annihilation of our significance. Of course, there is craving for pure sex, sheer greed, utter gluttony, and total narcissism, too. But we must love things other than mere survival and sheer hedonism, otherwise most of the artifacts of technological civilization would not exist. Fear and love reside deep in the human psyche, side by side, especially when confronting the cosmic sublime and the annihilation of our significance and outmoded narratives. (…) Interstellar: Here's What We Can Hope For 1) In the short term, our ecological-intellectual futures are bleak. There is no better expression of this dystopian and apocalyptic vision than Professor John Brand's declaration, "We're not meant to save the world.
We’re meant to leave it.” This suggests there is no hope for protecting the planet’s ecosystems, no hope for cleaning up the oceans and environments, and no hope for a sustainable civilization. As symbolized by educators denying that Apollo missions landed on the moon, the future for science and life on planet Earth looks hopeless. 2) We face our possible extinction. Due to the blight-caused apocalypse and to anti-intellectualism, billions of humans have died off and the human species faces its possible extinction event. Given that human narcissism and consumer society may be causing a sixth extinction, it seems fitting that we might perish, too. The trash and remnants of our civilization will become fossils studied by a future species. 3) We can make the impossible possible. Humans can be audacious risk-takers capable of achieving great things with art, science, and technology. Without doubt, Interstellar presents a vision of heroic scientists and astronauts who risk everything to save the human species, made possible by love, vision, courage, creativity, technology, and an overall rationality and commitment to science and evidence. NASA’s plan to go through the wormhole is audacious, as is Cooper’s seemingly impossible quest to retrieve the quantum data in the black hole. Both examples serve as powerful metaphors. Interstellar seems to be retrieving the vision of Apollo, where the impossible was made possible for the world to see via reason, science, technology, and a risk-taking spirit. 4) We likely have a lonely journey into the vastness of space. In one conversation during their journey to Saturn, Dr. Brand explains to Cooper that the twelve previous astronauts had embarked on “the loneliest journey in human history.” Given that there is no sign of intelligent life in our tiny part of the Milky Way, our initial journeys into space will require the astronauts to be more alone than any other humans have been. 
5) There is no exit from existence, no exit from the future. Throughout Interstellar, it is clear: To save the human species, we had to follow the laws of the universe. There is no exit from this responsibility; otherwise there would be no escaping the extinction event. To save us, there is only us and our brains, with no Creators, no prayers, no miracles, and no raptures. As symbolized by Cooper inside Gargantua and the Tesseract, there is no escaping the universe, no escaping the future. But it is a future of our making. While on the space station and recovering from his journey, Cooper muses: "I don't care much for this pretending we're back where we started. I want to know where we are, where we're going." Later, Murph advises Cooper where to go: "Brand, she's out there. Setting up camp. Alone, in a strange galaxy. Maybe, right now, she's settling in for the long nap. By the light of our new sun. In our new home." 6) We have the courage to venture into cosmic nihilism and the cosmic sublime. The Endurance as a tiny speck next to Saturn and Gargantua signifies our physical insignificance, yet it is a testament to the power and sheer bravado of the human species — to use the laws of the universe to venture that far into the cosmos. In the quest for survival and meaning, we will find our destiny. We need a philosophical launch to accompany the spore launch. (…) That launch is outlined in Chapter 4 of Specter of the Monolith, and I have posted excerpts here on Medium. _______________ The above passages are from my new book, Specter of the Monolith (2017). For more information or to purchase the book on Amazon, click here. You can follow me on Facebook and at my new Twitter page.
https://medium.com/explosion-of-awareness/space-films-5-interstellars-vision-of-cosmic-hope-958047cedd0
['Barry Vacker']
2020-08-22 18:07:52.054000+00:00
['Science Fiction', 'Atheism', 'Art', 'Space', 'Science']
Instagram Shopping Will Take over E-Commerce Faster Than You Think
The importance of social media for businesses has always been a controversial subject. Many think the ROI is too small and the effort is better spent somewhere else. Of course, the matter isn't black and white. It depends on what you're selling, and how you approach it. Nevertheless, it's a valid point. Instagram and other social media are certainly not worth it for all businesses. I don't see a corporate law firm ever needing an Instagram strategy. But even Instagram-suitable e-commerce businesses, such as clothing and cosmetics brands, could've been better off focusing their efforts on things other than social media for the last ten years. Don't get me wrong, as someone who makes part of his living by helping businesses grow and increase revenue with Instagram, I know that with the proper approach the platform can be a huge asset. The problem is that it's really easy to spend a lot of time on Instagram, and still get it wrong. Other avenues are usually more straightforward. Properly integrated social commerce changes the game. An Instagram account will soon be the heart of a lifestyle brand. The major part of their revenue will come from Instagram, and no longer can the platform be an afterthought for businesses. Instagram Offers Convenience and Consistency Websites Simply Cannot Compete With Instagram introduced its checkout feature in March of 2019, which allows people to buy products they see on Instagram without leaving the app. It has been in closed beta ever since and is currently only available for people in the US. A little under 30 businesses are taking part in the beta testing. Instagram is, understandably, taking its time to get it right, because they know the crazy potential of shopping on their platform. Two key things make it a game-changer: Saved shipping and payment info — no more writing your address and credit card number.
A consistent shopping experience — choosing color, size, quantity, reading the product description, and adding to cart will be second nature to you, no matter what brand you buy from. Every business will essentially be able to offer a perfectly optimized buying experience to their customers — and with Instagram's data and resources, 'perfectly optimized' won't be far from the truth. When more and more businesses upload their entire product catalog to Instagram, it won't take long before customers are used to that level of shopping convenience. Old E-Commerce websites will work as secondary options for a few years, and then eventually die out. Now is the time to prepare for what's coming. Betting Big on Your Instagram Game It's still unclear when more businesses and countries will become eligible to use Instagram Checkout. In the meantime, there are a handful of things E-Commerce business owners can do to prepare for the new era of online selling. Obviously, start by uploading your product catalog and signing up for Instagram Shopping, if for some reason you haven't already done that. Currently, the purchases need to go through your website, so the convenience isn't there yet, but at the very least you'll get to practice selling on social media. Then move on to the following things. This is all useful stuff even today — but soon the importance will be tenfold. Grow a Loyal Audience We all know what happened with Facebook Pages. Businesses spent time and money on building huge followings, but Facebook pulled the rug from under them and killed organic reach. Facebook became pay to play. I strongly doubt the same thing is going to happen here. This time around, Instagram is getting its cut from your sales, so it's not going to handicap your ability to reach your customers.
Sure, an abundance of product posts might force some changes to the platform, and paid ads will always be around to boost your posts, but a large and engaged audience will absolutely be valuable. The best way to grow an audience like that is to be useful. Focus on the foundation of your account, not the growth hacks. Learn the game of Instagram and establish a real connection with your customers. Put Resources into Product Photography Social commerce makes product photography more important than ever. Images and product descriptions will be the only things to stand out with — or screw up — in your shop. These of course matter on websites today, so if you've been pushing this project back for a while, now is the time to act on it. In addition to the main product images, you want real-world photos and videos of your products being used. These are the posts you're going to be selling with. Learn what works now, so that when in-app shopping arrives, you have tested content ready. Build and Nurture Relationships with Influencers Instagram is already testing the feature of influencers tagging and selling products in their posts, and shoppable posts will make influencer marketing much more effective. For all the power influencer marketing has today, it hasn't reached its full potential. Influencers are mostly native to Instagram, but Instagram is not a good sales platform today. When that changes, influencers could become the retailers of the future. Even if you haven't seen good results from influencers yet, that might change with shoppable posts. Build those relationships now, so that you can win big later. Don't Take My Word for It All of this is of course speculation. I don't have a crystal ball, and much wiser people have been wrong about their predictions. Maybe Instagram Shopping fails miserably, and something new takes over. Do your own research.
At the very least I think this is something you should follow closely if you have an E-Commerce business. One thing is for sure, though. The ones who bet on an uncertain future, and happen to get it right — they win big.
https://medium.com/swlh/instagram-shopping-will-take-over-e-commerce-faster-than-you-think-37b12c81a767
['Sebastian Juhola']
2019-12-09 09:01:01.501000+00:00
['Marketing', 'Ecommerce', 'Sales', 'Business', 'Social Media']
C++ Container With Conditionally Protected Access
When we talk about processing a container's content, we usually mean two situations. The first is an iterator with full access to read and change the elements of the container. The second is a constant iterator, which has no right to change the content. But what if we need to change the value of an element while keeping some condition on it? Let's look at an example. We have a simple array-based container which should hold only values from the interval [0, 1]. If we start with the simplest form of changing the content — the index access operator (also called the random-access operator or indexator) — we quickly run into a problem: the operator returns a plain reference, so nothing stops a caller from assigning a value outside [0, 1]. You can easily imagine the same problem for the iterator and its dereference operator. Why should we bother? This is the moment you might say: "Hey, just add a specialized setter method with the built-in condition and the work is done!" Okay, but suppose we are good developers and want to create a container compatible with STL requirements, with all the matching iterators and operators. And even if not, you still need some tool to process every element, so there is no way to avoid developing an iterator. You may point to several alternatives: a very fancy template which prevents the assignment of inappropriate values — the downside is that the "bad" cases won't compile, and, I believe, we may end up with a practically unreadable template; a specialized class with a proper data type — you will be "happy" to get a lot of work with the new hierarchy, type conversions, assertions, specific constructors and so on; an external condition check — we won't even discuss this idea, because you can imagine the problems of scaling, reusing and porting such code. All we need is a simple, cleanly compiled, conversion-free solution. Proxy class Okay, we will need an additional class. It is, however, not a new data type, as we discussed earlier.
This class will be a part of our container and will act as a lightweight wrapper keeping a reference to the element and the method with the condition to be checked. It is somewhat reminiscent of the proxy class behind the bool vector (std::vector<bool>). The idea is very simple: the container's overloaded operators return a one-time proxy object; the proxy object takes the desired value and checks the condition; if the check passes, the value is stored in the container. Since the idea is so simple, we can go straight to the code (embedded as gists in the original post). The implementation is short; you may only want to add more overloads of the arithmetic operators. As you may notice, we also add a conversion operator for quick conversion back into the basic element type. You can implement any condition, checking policy, exception generation and so on. As an example I followed the idea of an IntervalContainer, so all values above the interval are saved as 1 and all values below it are saved as 0. Indexator We have already seen how we want the access to work, so now comes the implementation. It is an operator overload which returns a proxy object wrapping the real element access. To avoid problems with conversions, we separate read-only access from write access. Notice that all the functionality is kept in the proxy class, which gives the code a lot of flexibility and portability. One can easily imagine a PImpl-style design, with the condition check kept in descendant proxy classes. Iterator We want iteration over the container to compile and work the same way, so it is time for the iterator implementation. In general, the logic is the same as with the indexator overload: return an object of the proxy class with the same restrictions.
https://medium.com/swlh/c-container-with-conditionally-protected-access-9d249393183e
['Pavel Horbonos', 'Midvel Corp']
2020-02-13 09:01:01.084000+00:00
['Containers', 'Software Engineering', 'Cplusplus', 'Programming', 'Software Development']
Fitting the Curve: Methods of Reasoning and Coronavirus Modeling
Before I get started, I want to reiterate that I am a software engineer and not an epidemiologist. I broke down two widely cited models, their critiques, and how they relate to various ways of reasoning about and predicting outcomes we have never seen. The goal here is to walk through the process of learning variables, making assumptions, and building reasoning structures that can turn those values into predictions. If you are interested in infectious disease modeling, I have linked every paper I reference, and there is (of course) a Wikipedia page on it. And remember the usual George Box disclaimer: all models are wrong, but some are useful. Reasoning and Modeling Techniques Let's say I ask you to predict the 1980 population of LA, Chicago, and Kansas City based only on the first chart. How would you go about doing it? Only Cleveland and Omaha are shown through 1980. (The second chart shows the actual data.) Based on only that chart, it would be extremely hard to predict. Now if I told you: LA borders an ocean, Kansas City and Omaha are landlocked, and Chicago and Cleveland are on a lake. If I also mentioned that LA had no nearby competitors, and that Chicago had already been established as the hub of the Midwest, that might help your prediction. What about the reduction in Western birth rates as we grew wealthier? Then there are totally unpredictable events, like LA becoming the world's film capital in the 1920s. On top of that, what if the data is unreliable? Should suburbs count? Did a city recently incorporate a new area? Dynamics like these are nearly impossible to predict. While this is not a perfect analogy, it's something to consider when looking at the epidemiological models later. Reasoning Definitions There are a few different ways of reasoning toward a conclusion.
One of the key comparisons is described in this paper: The deductive method requires expert knowledge to build a mechanistic-based model and depends on a first-principles understanding of the mechanisms acting within the ecological system. In contrast, the inductive method only uses the information content of the available empirical output data of the ecological system to construct a predictive model. In other words, the deductive method reasons from known mechanisms, while the inductive method reasons from observed data. Nowadays, when we hear about statistical modeling, we are typically talking about a three(ish)-step process: model training/fitting, prediction, and evaluation. We then improve our models over time by re-training with better models, better data, or both. Fitting and training can be associated with rule-learning. This learning typically occurs with many examples used to derive an association, more akin to inductive reasoning. Predictions or calculations can be associated with applying these learned rules, more akin to (though not exactly) deductive reasoning. Every model does some combination of these steps. They do so at different levels of granularity, with different learning techniques, but they do these steps. Epidemic Modeling for Coronavirus We will look at three approaches: compartmental models, the IHME/UW approach, and the Imperial College approach. Compartmental Models Compartmental models place segments of the population into different states. I want to mention these because they have a long history of use in epidemiology. The SEIR model includes four main states: Susceptible, Exposed, Infected/Infectious, and Removed. Differential equations are then set up based on how each of these states relates to the others (the original article illustrates the flow between states with a diagram of the MIT DELPHI model). For a basic SEIR model, the equations for the Susceptible, Exposed, and Infectious compartments are: dS/dt = −βSI/N, dE/dt = βSI/N − σE, dI/dt = σE − γI. Note that these models attempt to capture the ebbs and flows of infections as people go from susceptible, to infectious, to recovered.
We see how the change in the infectious population over time (dI/dt) is governed by the number of exposed individuals (E(t)) minus the number of already infected individuals. Individuals can move between states depending on how much time has elapsed. This can result in situations where we see some sort of dampening curve or oscillation that moves toward a stable state. I would consider this more of a deductive approach, because we set up rules for the states to follow, and through these rules we arrive at the solution. Some of the parameter values that modulate these variables, however, are learned in an inductive manner.
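The deductive flavor of the compartmental approach is easy to see in code. Here is a minimal sketch that integrates the SEIR equations with simple Euler steps; the parameter values (β, σ, γ) are arbitrary illustrations, not fitted to any real outbreak:

```python
def seir_step(S, E, I, R, beta, sigma, gamma, N, dt):
    """One Euler step of the standard SEIR equations."""
    dS = -beta * S * I / N            # susceptibles become exposed
    dE = beta * S * I / N - sigma * E # exposed incubate, then become infectious
    dI = sigma * E - gamma * I        # infectious recover (or are removed)
    dR = gamma * I
    return S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt

def simulate(days=160, N=1_000_000, beta=0.5, sigma=1/5, gamma=1/10, dt=0.1):
    """Simulate an outbreak seeded with a handful of exposed people."""
    S, E, I, R = N - 10, 10, 0, 0
    history = []
    for _ in range(int(days / dt)):
        S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N, dt)
        history.append((S, E, I, R))
    return history

history = simulate()
peak_infectious = max(I for _, _, I, _ in history)
```

The rules (the equations) are fixed up front, deductively; in practice the parameters β, σ, and γ are the inductive part, estimated from data.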
https://towardsdatascience.com/fitting-the-curve-comparing-approaches-in-coronavirus-predictive-modeling-4a5f0e36c3c5
['Zahid P']
2020-05-14 21:21:40.774000+00:00
['Data', 'Data Science', 'Coronavirus']
Data analysis made easy: Text2Code for Jupyter notebook
Example of plugin in action Inspiration: GPT-3 In June of 2020, OpenAI launched their new model GPT-3, which not only has futuristic NLP (Natural Language Processing) capabilities, but was also able to generate React code and simplify command-line commands. These demos were a huge inspiration for us, and we realized that while doing data analysis we often forget less-used pandas or plotly syntax and need to search for it. Copying the code from StackOverflow then requires modifying the variables and column names accordingly. We started looking for something that generates ready-to-execute code for human queries like: show rainfall and humidity in a heatmap from dataframe df or group df by state and get average & maximum of user_age Snippets was one such extension we used for some time, but after a certain number of snippets the UI becomes unintuitive. While it is good for static templates, we needed something more to handle the dynamic nature of our use case. Snippet extension example We decided to attempt building a new jupyter extension for this purpose. Unfortunately, we didn't have beta access to GPT-3, so using that amazing model wasn't an option. Simplifying the task: We wanted to build something which runs on our desktops (with GPUs). We initially tried treating the problem as a chat-bot problem and started with Rasa, but were soon stopped short by a lack of proper training data. Having failed to build a truly generative model, we decided to develop a supervised model which works for the use cases defined in the training pipeline and can be easily extended. Taking inspiration from chatbot pipelines, we decided to break the problem into the following components: Generate / gather training data Intent matching: What is it that the user wants to do?
NER (Named Entity Recognition): Identify variables (entities) in the sentences Fill Template: Use extracted entities in a fixed template to generate code Wrap inside a Jupyter extension Generating training data: In order to simulate what end users are going to ask the system, we started with some formats we thought we ourselves would use to describe the command in English. For example: display a line plot showing $colname on y-axis and $colname on x-axis from $varname Then we used a very simple generator to replace $colname and $varname and produce variations for the training set. Example of some (intent_id, ner-formats) Intent Matching: Having generated the data, with each example mapped to a unique "intent_id" for a specific intent, we used Universal Sentence Encoder to get embeddings of the user query and compute cosine similarity against our predefined intent queries (the generated data). Universal Sentence Encoder is similar to word2vec in that it generates embeddings, but for sentences instead of words. Example of intent matching NER (Named Entity Recognition): The same generated data could then be used to train a custom entity recognition model that detects column, variable, and library names. For this purpose, we explored HuggingFace models but ended up using spaCy to train a custom model, primarily because HuggingFace models are transformer-based and a bit heavy compared to spaCy. Example of entity recognition Fill Template: Filling a template is very easy once the entities are correctly recognized and the intent is correctly matched. For example, the query "show 5 rows from df" would result in two entities: a variable and a numeric. Template code for this was straightforward to write.
df.head() or df.head(5) Integrate with Jupyter: Surprisingly, this one turned out to be the most complex of all, as it is slightly tricky to write such complex extensions for Jupyter and there is little documentation and there are few examples available (compared to other libraries like HuggingFace or spaCy). With some trial and error, and a bit of copy-paste from existing extensions, we were finally able to wrap everything up as a single Python package, which can be installed via pip install. We had to create a frontend as well as a server extension that gets loaded when the Jupyter notebook is launched. The frontend sends the query to the server to get the generated template code, then inserts it into the cell and executes it. Demo: The demo video was prepared on the Chai Time Data Science dataset by Sanyam Bhutani. Short video of supported commands Limitations: As with many ML models, intent matching and NER sometimes fail miserably, even when the intent is obvious to the human eye. Some of the areas where we could attempt to improve the situation are:
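The pipeline described above (intent matching via sentence-embedding similarity, entity extraction, template filling) can be sketched end to end. Everything here is a simplified stand-in: the bag-of-words embed function replaces Universal Sentence Encoder, the regex entity extractor replaces the custom spaCy NER model, and the intent examples and templates are hypothetical.

```python
import math
import re

# Hypothetical intent examples (stand-ins for the generated training data)
# and the code templates they map to.
INTENT_EXAMPLES = {
    "head": "show $num rows from $varname",
    "shape": "print shape of $varname",
}
TEMPLATES = {
    "head": "{varname}.head({num})",
    "shape": "{varname}.shape",
}

VOCAB = ["show", "rows", "from", "print", "shape", "of"]

def embed(text):
    # Toy bag-of-words embedding; the real pipeline would use a
    # Universal Sentence Encoder vector here.
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_intent(query):
    # Pick the intent whose example sentence is most similar to the query.
    q = embed(query)
    return max(INTENT_EXAMPLES, key=lambda k: cosine(q, embed(INTENT_EXAMPLES[k])))

def extract_entities(query):
    # Crude stand-in for NER: the first number becomes $num, and the last
    # bare identifier in the query is assumed to be the dataframe variable.
    num = re.search(r"\b(\d+)\b", query)
    varname = re.findall(r"\b[A-Za-z_]\w*\b", query)[-1]
    return {"num": num.group(1) if num else "", "varname": varname}

def query_to_code(query):
    # Intent matching -> entity extraction -> template filling.
    return TEMPLATES[match_intent(query)].format(**extract_entities(query))
```

For example, query_to_code("show 5 rows from df") matches the "head" intent, extracts num=5 and varname=df, and fills the template to produce df.head(5).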
https://towardsdatascience.com/data-analysis-made-easy-text2code-for-jupyter-notebook-5380e89bb493
['Kartik Godawat']
2020-09-06 21:12:54.717000+00:00
['Machine Learning', 'Artificial Intelligence', 'NLP', 'Data Science', 'Data Visualization']
Welcome to our board, Leah Johnson and Bonita Stewart
From left: Bonita Stewart, Aaron Skonnard and Leah Johnson Over the past months, as we've settled into our new role as a publicly held company, we've benefited from counsel and insights from our board of directors to guide our success. As we continue to grow, we recognize the importance of including new perspectives and expertise to drive our mission of democratizing technology skills across the world. So it's with great pleasure that I welcome Leah Johnson and Bonita Stewart to our board. With the addition of Leah and Bonita to our already strong leadership team, we will benefit from the discerning guidance of two trailblazers who have made a significant impact in their respective fields. Leah is the founder and CEO of LCJ Solutions, a strategic communications consulting practice specializing in reputation risk management, messaging and change management for marquee clients. Before founding LCJ Solutions, Johnson was the senior vice president of corporate affairs at Citigroup, serving as the chief communications adviser to four CEOs, and vice president of corporate communications for Standard & Poor's. Leah brings deep expertise working on policy, mergers and acquisitions, and crisis management, as well as with investors and the SEC, proficiencies that will help us on our still-new journey as a public company. Bonita currently serves as vice president of global partnerships at Google, responsible for its U.S. strategic partnerships team representing news and publishing, broadcast, media and entertainment, mobile apps, search, telecommunications and commerce. Prior to her 12 years at Google, Bonita served as director of interactive communications for DaimlerChrysler. We look forward to Bonita's wisdom from working with large global brands on behalf of one of the most respected technology giants in the world.
In particular, Bonita’s understanding of digital transformation from the customer and buyer perspectives, as well as the technology seller viewpoint, will prove invaluable. Combined, their experiences provide strategic perspectives we need as we continue expanding our global community of learners and technology leaders. Welcome, Leah and Bonita. We’re proud to have you on our board.
https://medium.com/pluralsight/welcome-to-our-board-leah-johnson-and-bonita-stewart-8754a93fad34
['Aaron Skonnard']
2018-10-31 16:28:46.605000+00:00
['Pluralsight', 'Entrepreneurship', 'Business']
Misconceptions about Software Engineers
photo courtesy of google.com On one of my random tours on Twitter, I came across this topic on what misconceptions people have about Software Engineers or Software Developers (I don't get the hullabaloo concerning the titles, but that is a story for another day…) I am a Software Engineer in Kenya and for the past two years, I have had to explain what I do to practically everybody. I do understand the confusion, especially from Generation X, i.e. my parents, who only knew medicine, teaching and accounting as professions. They have had to learn it the hard way with the pace at which technology is moving. For Millennials though, it is sheer ignorance if you don't know what a software engineer does. Spare a couple of minutes and "google" that, thank you! ☺️ For most of my career, the biggest misconception people have had is that I am the IT guy (or lady in this case). You are the first person people run to when their phones aren't working, their flash disks are unreadable or they require a quotation for a new laptop. For some, this would probably be a business opportunity but NO! We are not the IT handyman/woman whom you call when you can't switch on your computer because you forgot to plug in the monitor cable. We are the people who design, develop and maintain the websites and applications you use. You know the Facebook app you use? There is a person who created that. That person is a software engineer. I don't mind fixing the phones and laptops, but I do mind when my competence is placed on a scale for a job that is not mine. Please spare us when we don't know what the issue is with your machine. We try our best, but it doesn't mean we have all the solutions to your computer problems. The second misconception is that I am a hacker, lol! This gets funnier each time I hear it. What are Hollywood and sci-fi movies doing to you guys!? I respect hackers, especially ethical hackers whose aim is to raise awareness of cybersecurity, but not all of us are hackers.
Some of us barely know how to impersonate a user, let alone penetrate a system or create spyware. Our job involves trying to implement the best security policies for your application, not stealing people's passwords for illegal practices. The last misconception that I get to hear is that we are always coding. gif courtesy of giphy.com We indeed love our job and it's our passion, but we also have a life and other interests! If you get to know us, you will know we have a whole other life outside code. Our lives don't revolve around code, but it does occupy the majority of our time. These are the three common misconceptions that I get to hear regarding my career. I am sure there are many more that I haven't mentioned, but at the end of the day, regardless of what people think you do, find joy in doing your job.
https://medium.com/dev-genius/misconceptions-about-software-engineers-dfa9c321c67b
['Raycee Mwatela']
2020-06-19 07:35:49.771000+00:00
['Software Engineering', 'Technology', 'Software Development', 'Developer', 'Code']
Voice in Apps: Amazon Shopping
Voice Search Voice in Apps: Amazon Shopping In-depth analysis of the voice search feature in the Amazon India Shopping app. Graphic showing the new mic icon in the Amazon Shopping App in India To date, most attempts to add voice search to e-commerce apps have been half-hearted and half-baked. Most of these companies just tied the mic button in the search bar to Google's Speech Recognition and called it a night. Tata, goodbye :) Not Amazon: they went a step further and added 'Alexa' to their shopping app. Voice search is taking over India. It looks like Amazon took notice of Google's Year in Search 2018 report, which showed that 28% of all searches are happening through voice. That's not all: voice searches are growing at 270% year on year in India, and Hindi voice searches are growing at 400% YoY. It shouldn't come as a surprise; when Jio added Google Assistant to its Jio Phone, Google Assistant's usage in India jumped 6X. In this week's 'Voice in Apps', we are breaking down the Amazon Shopping App. For the uninitiated, Slang Labs has started a new (now pretty old) series called 'Voice in Apps'. Every week (we try to!) we take an app which has integrated voice and break it down. We are doing this because we think it's important to give recognition to the trendsetters. We want to surface how and why these businesses are adding voice inside their apps and what the result of it is. We have already broken down voice features in 'My Jio', 'Gaana', YouTube and Paytm Travel. Voice Shopping in Amazon Shopping App Visual Breakdown: Mic Icon Mic button placed in the top right, next to the cart We have a standard, easy-to-understand, old-school mic icon on the top right side of the title bar of the Amazon app. The icon is filled with white, which gives clear contrast with the background colour.
Placement of the mic icon at such a prominent place in the app indicates the seriousness and value Amazon believes in-app voice shopping can deliver to its customers. Difference of Mic Icon in Amazon US vs Amazon India app Amazon US app Alexa/Voice Shopping Amazon has added Alexa's logo in the US app for the voice shopping feature instead of the mic button. This is probably due to the higher brand recall of Amazon Alexa. Alexa's mindshare is much higher in the US than in India because of the high market share of Amazon's Echo devices running Alexa. Onboarding Flow: Onboarding Flow for Amazon Shopping App in India (Source: NDTV GADGETS) Users are introduced to this voice search mic by a simple coach mark. On clicking the mic button, the user sees a dialogue box which explains what they can do with voice. On pressing the continue button, the user needs to grant mic permission. Alexa is triggered after the permission screen and the characteristic blue wave appears. You can also look at the onboarding videos on our YouTube channel. 'Speak To Shop' Dialogue Box: Speak to Shop Dialogue Box during Onboarding Speak To Shop is the biggest element on the dialogue box, attracting instant attention. This helps set the context for the user and introduce them to the feature. Try Saying shows a couple of utterances as examples to train the user. This helps in setting the right expectations. Even in the Slang surface, we show these sentences to guide the user. Amazon tells the user to enable mic permissions for this feature. In Slang, we speak out the purpose of the permissions and thereafter show it. UI Breakdown It might not seem like much, but there is quite a bit to break down here. Let's get started. On clicking the mic button, the user sees a bluish-green moving wave at the bottom of the screen with a help utterance right above it. This helps in setting the context and guiding users on what they can ask Alexa.
Listening mode: While the user is speaking, the greenish wave vibrates from the centre out. Processing mode: When Alexa detects silence or the end of the utterance, the greenish waves go all the way to the end. The waves pulsate slightly as well. Speaking mode: In this mode the wave appears to be more of a horizontal line filled with moving blue colour. Auditory feedback: As soon as the mic is clicked, Alexa starts listening after a 'ting' which acts as an auditory cue. The same 'ting' is triggered when Alexa stops listening. We have implemented the same thing in Slang as well, with a different ting sound when we start and stop listening. We hope users get trained over time and become familiar with when to start and stop speaking. Error Handling: Variations of the help screen in the India Amazon Shopping app After an unrecognized utterance, Amazon shows this help screen, which tells the user Alexa couldn't find what they were looking for and shows a bunch of utterances the user can speak. This dialogue box is opaque and has an Alexa button to speak again. Food for thought: why did Amazon not add the mic button here instead? Why add a button which users haven't been exposed to yet, instead of one which they recognize? Functional Breakdown Accuracy Full points to Amazon on accuracy for Alexa. It's highly accurate even when spoken to in different accents. It even recognized long brand names and difficult product names (like mamy poko pants) with almost no recognition error, which is surprisingly good. Be it long-form product names or product names with just one simple word, they have managed to outdo themselves. Accuracy is top-notch. Speed Alexa excels here as well. Breaking down the speed of exact components like ASR and NLP is not possible, as Alexa doesn't let you look under the hood. If it did, we would have been able to get more insights. Having said that, Alexa in the Amazon Shopping app is extremely fast. Search results pop up in almost under a second.
Kudos to the team who made this happen. Natural Language Processing This is one area where Amazon is still falling behind. "I want to see toys Alexa" searched for 'Toys Alexa'. Removing fluff and stop words from an utterance is not that difficult a problem to solve. This is one area where voice search in the Amazon app has been disappointing. The lack of a good NLP engine brings down the Voice Augmented eXperience in the Amazon app. What can I do with Alexa? You can also ask Alexa what you can do with it using voice. It tells you the two things it can do via voice. Instead of just replying with a voice-only answer, Alexa should utilize the screen and show the user everything it can do visually, rather than limiting the reply to only a few choices. Limitations around Voice Search in Amazon App Alexa's boundary issues During voice search onboarding, Amazon tells us two things we can do with the mic icon: track orders and search items. Since it's Alexa that is embedded in the app and not just another shopping assistant, it can do a lot more than tracking and searching orders. This ability feels more creepy than useful. If I speak something (maybe I am talking to someone with the mic icon clicked) which is not a product in Amazon's database, Alexa starts speaking out search results, which feels awkward and weird. Sometimes bowing down and saying "I did not understand that" is just fine. Not reproducible, but it even tells me about the weather and random facts without even being asked. Imagine while shopping, Alexa speaking out… "According to the NHS, you should wash your hands for at least 20 seconds." Support for Vernacular Language Mic icon missing in Amazon App in Hindi (on the left) If you switch the Amazon India app to Hindi, the mic icon disappears. Vernacular support is still missing. India being so linguistically diverse, supporting vernacular languages should be a high priority for voice shopping.
Better UI and surface design Although this is highly subjective, in my opinion Alexa's UI could be much better. Right off the bat, the text in the help window could be made bigger to improve readability. Hints shown on the Alexa surface in the listening mode could be bigger too. Amazon's voice search is hands-down the best Voice Augmented eXperience in any eCommerce app. (Probably not for long. *Wink*). Even in its current avatar, it's miles ahead of any other competitor. Slang has recently introduced 'Slang for Grocery'. It's a VAX experience built specifically for the grocery domain. It's an off-the-shelf integration which voice-enables your app in 4 different languages: English, Hindi, Tamil and Kannada (Malayalam in beta). With 'Slang for Grocery', you can voice-enable your grocery app in less than 2 days. If you would like to add the most accurate multilingual voice search to your grocery app in just a couple of days, let us know at 42@slanglabs.in
https://medium.com/slanglabs/voice-in-apps-amazon-shopping-fc6caebf8997
['Vinayak Jhunjhunwala']
2020-04-21 14:20:36.204000+00:00
['Technology', 'Voice Assistant', 'Android App Development', 'Business', 'Amazon']
How we optimized SunRoof’s Pipedrive setup to improve reporting and sales performance
Aneta Szotek Sep 15 · 10 min read SunRoof creates beautiful, 2-in-1 smart solar roofs and innovative facades that produce power from the Sun. They operate in the Scandinavian countries, Poland, and Kenya. Their mission is to equip everyone with a solar energy source on their roof. They dream of a future where we can all thrive together with nature. SunRoof is the most efficient and environment-friendly solar roof on the market. sunroof.se When we first met, SunRoof had already implemented Pipedrive at their company. What they needed help with was the optimization of their processes and setup. They also needed to create a proper data structure. The team wanted to put in place guidelines on how to use Pipedrive that would be a reference point for anyone on the team. The team wanted access to the most accurate sales data possible. They needed to differentiate between high-value and low-value KPIs and improve reporting and sales performance. Phase 1: CRM audit The first phase included an initial diagnosis of what currently works and doesn't work in SunRoof's Pipedrive setup. The goal was to understand how the team uses Pipedrive and how they would like their CRM to support them in the sales process. The SunRoof team didn't have a full understanding of the system and didn't use all the benefits a CRM system can bring. They didn't have any guidelines or unified rules on how to use the system. Having performed the audit, we created a Pipedrive specification for the SoftwareSupp consultant who later took over the optimization project. This way, the client knew exactly what would be improved and optimized, and how. The specification included the diagnosed problems and the actions that the consultant or the team could take to optimize the Pipedrive setup. Pipedrive specification Thanks to the initial audit, SunRoof could identify the potential benefits for their organization.
In the first phase, we discovered the most important areas that should be improved to make the CRM implementation the most effective. Phase 2: Pipedrive configuration and optimization In the second phase, the SoftwareSupp consultant implemented changes to the Pipedrive setup that allowed SunRoof to optimize their usage of the system. The process included creating guidelines, educating the team on how to use the system, automation, and security improvements. During this phase, we stayed in touch with the SunRoof team to make sure the changes positively affected their workflows. We created a guide that includes all the rules and best practices for using Pipedrive at SunRoof. Deal and sales forecasting Pipedrive sales forecasting provides a comprehensive way to plan sales activities for specific timeframes. For the forecasting to work, the team needs to add probabilities and expected close dates for every deal or stage. This way the system understands how long it would take to close a deal. To make sure that the forecasting is usable, the team needs to follow specific rules while entering the data. We identified the flow and the data structure to make financial forecasting possible for SunRoof. Deal sources and lost reasons Adding deal sources and "lost reasons" to deals is a crucial element that enables more precise reporting. If you track where your leads come from, you can optimize your marketing strategy accordingly. Additionally, if each lost deal has a reason attached explaining why the client didn't decide to go with your product, it's easier for the sales team to adjust their strategy and compare it with specific lead sources. In SunRoof, the lead sources and lost reasons weren't tracked properly. They were entered manually in several languages, as the employees operate in different markets. There weren't any rules on how to use these fields. As a consequence, reporting was ineffective. The custom fields were filled in one by one.
It wasn't possible to analyze data on a bigger scale and draw conclusions. To solve the problem, our software consultant created a fixed structure for these fields. For the "deal source" fields, we agreed on a set of rules that the team can refer to when they have a new lead. Lost reasons Both fields were predefined, and the team could choose a reason from a drop-down list. The field was made obligatory, so the salesperson must fill it in before being able to change the deal stage. Accurate data provides valuable insights into sales process optimization. Labeling SunRoof didn't use the labeling system to distinguish between cold leads, hot leads, and actual customers. Labeling gives users more advanced filtering options that help them identify the leads that are most likely to buy the product. In SunRoof, the Pipedrive funnel includes additional post-sales stages. It is necessary to supervise the solar roof installation. The salesperson stays in touch with the client to support them until the roof is ready to use. They make sure that the client is content with the final outcome of the installation. For such a funnel, the labeling system is useful because it lets the sales team filter out the deals that are still in progress and the ones that are already closed. Labeling system In SunRoof, we implemented labels to differentiate between: cold leads, warm/hot leads, and customers. The label was changed automatically depending on the stage the deal entered. The team didn't need to remember any additional steps to be able to fully use this functionality. Labeling can be used to filter out the leads that are most likely to make a purchase, as well as to perform lead segmentation that can be used for future marketing campaigns. Mailing SunRoof didn't use Pipedrive's mailing system to track conversations with customers. The team needed to add all the information manually at the deal level.
This process was time-consuming and prone to human error. The SunRoof team decided to upgrade Pipedrive to a higher plan, which allowed them to connect their mailboxes to Pipedrive. Thanks to this feature, the messages to and from the customer can be saved in one place. Everyone on the team can easily track the conversations. All data in the system is always up to date thanks to this automation. Source: pipedrive.com The salesperson's mailbox is synchronized with Pipedrive, but it's also possible to access the emails through the personal mailbox. The email integration doesn't require the salesperson to use only Pipedrive to send messages. They can use their mailbox as they are used to, but they get the benefit of the messages being tracked in the system. With Pipedrive, all emails are tracked. The system monitors whether the recipient opened the email or clicked the links included. It can help the sales team follow up in a more appropriate way that maximizes the probability of closing the deal. To protect the salesperson's privacy, it's possible to adjust the visibility settings of their emails. Thanks to that, it's possible to make some emails private. Those emails won't be visible to the rest of the team. Pipedrive also offers "smart email BCC". Every user gets a unique Pipedrive email address assigned. If you add this email to BCC, the email will appear in Pipedrive and can be copied to the relevant deals. You can also use this address to forward customer emails to Pipedrive to update existing deals or create new contacts. Another benefit of using Pipedrive's mailing is the ability to create templates in the system. With these predefined emails, the salesperson can speed up their reply rate. They don't have to write every email from scratch. As SoftwareSupp, we helped SunRoof connect their mailboxes with Pipedrive and supported them in the process.
Along the way, any time a team member had a question, they were able to ask it using the ticketing system on the SoftwareSupp platform. Some of the questions that came up during the process were about the visibility of emails and email threads. The SunRoof team got full instructions on how to solve the issues they ran into during the setup process. Partner program SunRoof has a network of partners that they work with. Apart from the SunRoof team, there are additional people who are involved in the sales and installation process: affiliates, sales partners, and installation partners. Because of that, the team needed an efficient way to manage these relationships and the deals that the partners bring in. Together with SunRoof, we created a funnel that allows the team members to access partners' deals and help them with the process. Some partners have access to Pipedrive; others can submit their deals via a webform that is integrated with Pipedrive. With Pipedrive, it's easier to manage partners' deals and their commissions. Security settings To improve security and prevent data leakage, we set up Pipedrive's security settings to match the organization's data protection standards. For the partner program, we created specific groups for every type of partner. Some of them use Pipedrive, but have limited visibility of SunRoof's overall deals. Data security alerts We also showed the team how to use Pipedrive's Security Dashboard to maximize their data protection. Marketing integration SunRoof didn't use Pipedrive in the marketing process. They didn't have any integration or a way to transfer data between the sales and marketing systems. Together we decided to choose Mailchimp as the marketing tool. With this integration, the team can export specific groups of people to later add them to Mailchimp campaigns. We created an automation that adds all clients that get the label "customer" to a specific Mailchimp group.
Exporting emails to Mailchimp The labeling system also makes marketing activities easier, as the marketing team can use lead segmentation to adjust the offer that they'll send as a win-back campaign in the future. Website forms For their partner program, SunRoof uses Google Forms. To allow the data to appear in Pipedrive automatically, we created a Zapier integration. This automation creates deals based on the information that the partner provides in the form. Another web form that is necessary for the sales process is SunRoof's calculator, which estimates the cost of the project. Thanks to that, the potential customer can check how much they could save on electricity bills with SunRoof. sunroof.se With a Pipedrive automation, once the web form is filled in, the lead's details are added to Pipedrive. A new deal is created and the link to the estimate is included. This automation was created using Pipedrive's API. As SoftwareSupp, we prepared the API keys and the process to follow for the client's IT team. We needed to create specific custom fields in Pipedrive to transfer the information from the webform. Now the sales team can see the data entered through the form in one place. Data enrichment To help the sales team get to know their potential customers better, we integrated Pipedrive with Clearbit. It's a database that lets users check information about a company based on the domain used to sign up. Clearbit allows the user to understand their customers, identify future prospects, and personalize interactions. SunRoof has instant access to the following data about the potential customer: website, general email address, general phone number, sector, industry group, industry, sub-industry, estimated annual revenue, and SIC code. Having this additional information saves the sales team time that they would otherwise need to spend on research. Instead, they can prepare a personalized and more contextual message for the potential customer.
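As an illustration of the webform-to-deal automation described above, here is a minimal sketch of how a form submission could be turned into a Pipedrive deal through the REST API's deals endpoint. The form field names and the custom-field key are hypothetical (real Pipedrive custom-field keys are account-specific hashes found in the field settings), and the actual mapping would follow whatever the consultant set up:

```python
import json
import urllib.request

API_BASE = "https://api.pipedrive.com/v1"

# Hypothetical key for the custom "estimate link" field; real keys are
# long account-specific hashes.
ESTIMATE_LINK_FIELD = "estimate_link_custom_field_key"

def form_to_deal(form):
    """Map a calculator-webform submission to a Pipedrive deal payload."""
    return {
        "title": f"{form['name']} - solar roof estimate",
        "value": form.get("estimated_cost", 0),
        "currency": form.get("currency", "SEK"),
        ESTIMATE_LINK_FIELD: form.get("estimate_url", ""),
    }

def create_deal(payload, api_token):
    # POST /v1/deals creates the deal; requires a real API token.
    req = urllib.request.Request(
        f"{API_BASE}/deals?api_token={api_token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The payload builder can be exercised without credentials; create_deal would only run inside the automation with a real token.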
Lead segmentation Together with the team, we established rules about lead segmentation. Depending on the source, the lead gets a specific value assigned. This process helps the sales team to categorize the potential customers according to the probability of them buying the product. Lead sources The proper lead segmentation setup also improves the quality of reporting and data accuracy. Optimizing Pipedrive with an expert consultant The changes implemented at SunRoof transformed the way the team uses Pipedrive. With all customer information in one place, they are always up-to-date about their sales process. They have access to better analytics and use these insights to make better decisions. With improved security settings, they are sure that all the confidential data is safely stored in the system. They also know who has access to the information. The Mailchimp integration made cooperation between sales and marketing easier. Implementing Pipedrive at the company is just the first step to start optimizing the sales process. Without the proper setup, automation, and adjusting the funnels to the specific industry and the company workflow, the team can’t discover all the benefits of using the CRM system. With SoftwareSupp, you get a step-by-step guide with the actions that your team can take to improve the sales process. We can audit your current setup and match you with a consultant that will help you transform the way you operate on a day-to-day basis.
https://medium.com/softwaresupp/how-we-optimized-sunroofs-pipedrive-setup-to-improve-reporting-and-sales-performance-c0ca81ee3702
['Aneta Szotek']
2020-10-11 19:31:56.275000+00:00
['Marketing', 'Business', 'CRM', 'Customers', 'Sales']
Understanding CBAM and BAM in 5 minutes
In the case of BAM, only GAP (global average pooling) was used to get the statistics of the feature map in the spatial and channel dimensions, whereas CBAM also uses MaxPooling alongside average pooling. The authors showed that MaxPooling captures the most salient features of the feature map and complements the GAP output, which encodes the global statistics softly. In the case of BAM, the convolutional operation was done with a dilation value of d=4 to increase the receptive field as we go deeper into the network, whereas CBAM used a larger 7x7 kernel in a normal convolutional layer with d=1 to achieve the same effect. In the case of BAM, the spatial and channel attention maps were generated in parallel and later added to get the final attention map, whereas CBAM uses a sequential approach: first the channel attention map is calculated, and then the spatial map is obtained from the generated intermediate feature map (Channel Map × Input Feature Map). The order of the sequential arrangement in CBAM is Channel Attention → Spatial Attention.
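The channel-attention difference described above can be made concrete with a small NumPy sketch. The weights and inputs here are toy values, not CBAM's trained parameters; the point is the structure: both the GAP and GMP descriptors pass through a shared two-layer MLP, their outputs are summed, and a sigmoid produces per-channel scaling factors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """feat: (C, H, W); w1: (C//r, C) and w2: (C, C//r) are the shared MLP weights."""
    avg = feat.mean(axis=(1, 2))   # GAP: soft global statistics, shape (C,)
    mx = feat.max(axis=(1, 2))     # GMP: most salient activations, shape (C,)
    # Shared MLP applied to both descriptors; outputs summed before the sigmoid.
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    att = sigmoid(mlp(avg) + mlp(mx))   # per-channel weights in (0, 1)
    return feat * att[:, None, None]    # rescale each channel of the feature map
```

BAM's channel branch would use only the avg descriptor; dropping the mx term is exactly the MaxPooling compensation that CBAM adds.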
https://medium.com/visionwizard/understanding-attention-modules-cbam-and-bam-a-quick-read-ca8678d1c671
['Shreejal Trivedi']
2020-06-12 20:36:59.294000+00:00
['Research', 'Computer Vision', 'Data Science', 'Deep Learning', 'Convolutional Neural Net']
How Do Psychometric Test Results Vary Across Age, Race and Gender?
Most of us have taken a personality quiz at one point or another. Whether it was for a job, for school or just for fun, you probably remember skimming over the results to find out more about yourself and how you compare to the rest of the population. But have you ever wondered how these results correlate with other demographic factors such as race, gender, age, religion and sexual orientation? I asked myself that question recently and took a deep dive into the data to find out the answer. After exploring all of the publicly available data sets published by the Open Source Psychometrics Project, I chose three in particular (Experiences in Close Relationships Scale; Depression, Anxiety and Stress Scale; and the Rosenberg Self-Esteem Scale) with the results of more than 110,000 psychometric tests released with the consent of test takers. Based on the answers recorded in these data sets, I found that: 1. There is a relationship between age and personality trait dimensions, as well as relationship attachment styles. 2. There is a relationship between religion and stress scores. 3. There is a relationship between race and measures of mental health, such as stress, depression and anxiety. 4. There is a relationship between gender and measures of emotional stability and agreeableness. 5. There is a relationship between sexual orientation and depression, anxiety and stress scales. Although the data analysis yielded some interesting results, I must also stress that due to the nature of online personality tests and the inability to verify the accuracy of responses given, this analysis is meant to provide some food for thought and even motivation for further research on the topic, rather than results that can be generalized to the larger population. Having said that, here is a quality comparison of the data collected through the Open Source Psychometrics Project and data collected on Amazon Mechanical Turk. 
The results indicate that the former “contains less than 25% the rate of invalid responding as AMT data.” And for those interested in the process behind this analysis, the data and code can be accessed here.
https://towardsdatascience.com/how-do-psychometric-test-results-vary-across-age-race-and-gender-2651672cd96c
['Nayomi Chibana']
2019-06-28 19:33:30.691000+00:00
['Demographics', 'Psychometric Testing', 'Psychology', 'Data Science', 'Data Visualization']
Portrait of the Heart
LOST PLACES — If you could see into the hearts of the lonely, abandoned, abused, betrayed, and misled, wouldn’t the scene be the same?
https://medium.com/illumination/portrait-of-the-heart-c41aa759e000
['Lynda Coker']
2020-08-03 13:07:06.921000+00:00
['Emotions', 'Healing', 'Microfiction', 'Awareness', 'Society']
Getting Started with React + TypeScript
Getting Started with React + TypeScript The basic benefits of combining both Source: the author In this article, we will go through a few examples of how to use React & TypeScript. We look at props, interfaces, a few hooks, and form handling. Of course, we have to build a setup first. The easiest way is to use npx. npm install -g npx npx create-react-app my-app --template typescript cd my-app/ If you already have a React.js project, you can add TypeScript support to it later. npm install --save typescript @types/node @types/react @types/react-dom @types/jest Then simply rename each file extension from “js” and “jsx” to “tsx”. The good news is that TypeScript and JavaScript are very compatible. This means that we can have code in the .tsx files without problems that is nothing other than ordinary JavaScript. An example of this in our app.tsx: function displayCount(userCount : number) : number { return userCount } function add(x) { console.log(x) } function App() { return ( <div className="App"> Current Users: {displayCount(24)} <button onClick={() => add(24)}>Click me</button> </div> ); } The first function uses types, which is only possible thanks to TypeScript. The add function would work the same way in plain JavaScript. Nevertheless, we can use both together. You may need to disable strict mode in your tsconfig.json: "strict": false Creating the first component Now we can use TypeScript to write some code for React.js. Let’s create a function-based component. For this, we can optionally use the following syntax, defining a component of type React.FC, which stands for Functional Component. const UserInput : React.FC = () => { return <input type="text"/> } Thanks to TypeScript, we can now also define which data types we expect for our props. 
const UserInput : React.FC<{text: string}> = (props) => { return <input type="text" placeholder={props.text}/> } <UserInput text="Please enter"/> If we now pass a number as text, TypeScript will report an error, because we expect a string as a prop with the name text. Using interfaces for props We can also implement the whole thing with an interface. interface Props { text: string } const UserInput : React.FC<Props> = (props) => { return <input type="text" placeholder={props.text}/> } We can also combine interfaces. In this case, we specify that we expect as props the interface Car, which consists of brand and speed, both assigned a data type. interface Car { brand: string speed: number } interface Props { car: Car } const CarSeller : React.FC<Props> = (props) => { return <p>The {props.car.brand} is {props.car.speed} fast </p> } <CarSeller car={{brand: "Mercedes", speed: 150}}/> Making props optional Maybe the speed is not interesting for the potential buyer of the car. In that case, we don’t have to pass it, and we can mark it as optional in the interface with a question mark. interface Car { brand: string speed?: number } Using interfaces for our props, we can create a clean and understandable list of which props our component needs and which are optional. To clean up a bit and make the code easier to read, we can, of course, destructure and do without the props keyword in TSX: const CarSeller : React.FC<Props> = ({car}) => { return <p>The {car.brand} is {car.speed} fast </p> } We can also sort the interfaces into their own files. It makes sense to keep the interface for each component in the same file, so you can see directly which props are needed. But our Props interface ultimately accesses the Car interface, and if we need it in several other components, we should create a separate file for it. 
CarInterface.ts: export default interface Car { brand: string speed?: number } Importing it in the App.tsx: import Car from "./CarInterface" interface Props { car: Car } Hooks Using TypeScript features like types is again purely optional for hooks, but it makes sense. For example, we can specify a datatype for useState, as in the following example. function App() { const [count, changeCount] = useState<number>(0) return <p>{count}</p> } Again, we can use interfaces to define our data types. interface userInputInterface { count: number } function App() { const [userInput, changeInput] = useState<userInputInterface>({count: 0}) return <p>{userInput.count}</p> } Using Refs To do this, we need something you may already know from working with TypeScript outside of frameworks like React or Vue: the type HTMLInputElement, since we are now working with refs. const inputFieldRef = useRef<HTMLInputElement>(null) const getFocus = () => { inputFieldRef.current?.focus() } return <> <input type="text" ref={inputFieldRef} /> <button onClick={getFocus}>Click me</button> </> We pass null as a parameter to useRef, since at the time of execution, the input tag is not yet rendered in the DOM. This way, we make sure that the connection starts from our input tag. Form handling with refs & TypeScript We just used HTMLInputElement to access the DOM with a ref. Now let’s have a look at TypeScript-React with refs in handling a form. A form always has a Submit button, which makes sure that the form is sent. Without frameworks, there are some attributes for the form, e.g., to which URL the data should be sent and with which method. But we want to solve all these things manually, with React.js in JavaScript style. Therefore we write a function, which is triggered when the form is sent, and then prevents the standard procedure with preventDefault(). 
const App : React.FC = () => { const submitHandler = (event: React.FormEvent) => { event.preventDefault() } return ( <form onSubmit={submitHandler}> <input type="text" /> <input type="submit" value="Submit"/> </form> ) } As a parameter for the submitHandler function, we pass the event, which must be of type FormEvent, because it is an event from our form. const App : React.FC = () => { const textInputRef = useRef<HTMLInputElement>(null) const submitHandler = (event: React.FormEvent) => { event.preventDefault() const enteredText = textInputRef.current!.value console.log("submitted:", enteredText) } return ( <form onSubmit={submitHandler}> <input type="text" ref={textInputRef} /> <input type="submit" value="Submit"/> </form> ) } As the default value for useRef, we again take null, and as the type, we again use HTMLInputElement. Our function for handling the submit now accesses the ref. With the current! syntax (a non-null assertion), we tell TypeScript that the value will not be null at this point, even though nothing may have been typed into the field yet. In the actual input field, we just need to make the connection to the ref, and then everything is ready. If we now press submit, the handler function is executed, which gets the user input from the ref and outputs it to the console. So we can evaluate a form with TypeScript & React.
https://medium.com/javascript-in-plain-english/react-typescript-813b02ff3672
['Louis Petrik']
2020-09-30 13:18:08.260000+00:00
['Typescript', 'JavaScript', 'Reactjs', 'Programming', 'Web Development']
Technical interview on algorithms: Where should I begin?
Picture this. You’re in your 3rd job and your career has progressed a lot. In a software developer’s life, that would mean you’ve learned lots of different technologies and gained lots of experience working with various platforms. It might also mean you’ve gained non-technical experience like leading teams or client interaction. Then one day you wake up and decide that it’s time to move on. It’s time to go and look for a new job. You submit your resume to several companies and scour LinkedIn for interesting job postings. Eventually you get some favorable responses, and after a few emails, you’re now in line for a couple of interviews. Depending on the role you’re applying for, the questions will primarily be focused on your target role and technology, which is most likely closely related to your latest experience. But as developers, we all know that there is a category of questions that will almost always be asked regardless of the role you’re applying for: algorithms. Don’t lose touch with them. Let them linger at the back of your mind. Let’s face it, with all the development work you’ve done over the past years, did you really have the time to actually implement and use your own sorting algorithm for a live project? Let’s say that once in a while you had to implement your own custom sorting algorithm for very special circumstances. How many times did you actually have to do that? I’m not saying these concepts are useless. What I’m trying to say is that with years of experience and years of translating high-level business requirements into code, there is a high chance that developers get out of touch with these core concepts. But in reality, as developers, we should always keep these concepts very close to us. In fact, based on my own experience, I often apply these concepts in my daily work. I just don’t realize that I’m doing so. This is why technical interviewers want to ask you about them. They want to know if you’re still able to explain the concepts. 
Ideally, interviewers shouldn’t expect you to memorize names. They should be able to draw out your understanding of some of these concepts. At least that’s how I did it when I became a technical interviewer myself. But there are lots of them! Let me reiterate. Interviewers just want to get a glimpse of your understanding of some of these concepts. As you can see, I’m not using the word “all” here. No one should expect something like that from you. Honestly, if your interviewer seems like he knows all the algorithms in the world, there are two possibilities. Either he really practices using all of them in his daily routine (good for him!), or he’s been doing interviews for quite a while, so he regularly brushes up on them. Either way, regardless of which of the two it is, their whole strategy will most likely revolve around this: “How comfortable is this candidate when it comes to explaining or discussing algorithms?” My suggestion is to ensure that, prior to going into these types of interviews, you brush up on your algorithm knowledge. Pick a few that you can understand comprehensively. Then, do a high-level review of a couple more. Which ones should I select? This next part is based on my experience as an interviewee and, of course, as an interviewer. I’ll share with you how I choose which algorithms I usually review before going for an interview. Technical interviewers may ask you to write pseudocode, or even use a specific algorithm to solve a problem on the spot. Based on my experience, the test for algorithm knowledge usually revolves around the following categories: Sorting, Path Finding and Text Parsing. Sorting I always have Quick Sort ready. Regardless of your choice, the idea is to have a sorting algorithm ready when tackling technical interviews. Sorting-related technical problems are always around the corner. 
Honestly, with all the out-of-the-box sorting functions we use in our daily lives, studying algorithms under this category is very, very vital. You don’t want to be known as “that candidate who just uses functions without understanding how they work”. Path Finding There are lots of algorithms for path finding out there. I personally review BFS, DFS and A* before any interviews. For BFS and DFS, note that it is better if you’re able to implement them in either a recursive or an iterative approach. Some other algorithms (like A*) involve the concept of a heuristic. For these, make sure you understand how to identify the correct heuristic for a given problem. Text Parsing This can include manipulating strings or implementing your own Regular Expression engine. I personally believe this category is the trickiest among the three. In fact, with text parsing, you’re often required to mix and match other algorithms in your solution. Finally, practice! Let me emphasize one thing. Whatever set of algorithms you pick, always practice! It is very important to know the ins and outs of each algorithm. In some cases, you may even be asked to combine or expand existing algorithms to solve particular problems. Other interviewers may even ask you to compare the pros and cons of different algorithms. Additionally, you can primarily focus on the algorithms you pick based on the categories I mentioned above, but also ensure that you get a chance to gain a high-level understanding of other algorithms as well. Remember, it’s your understanding that’s being tested and not your memorization skills. It doesn’t matter if you can write the pseudocode perfectly. If you’re not able to explain it, then that’s not going to reflect well on you. Lastly, if you’re asked about an algorithm you don’t know anything about, don’t lie. Don’t make up an algorithm based just on the name. In these situations, it’s better to be honest. 
Interviewers are trained to dig deeper, especially into answers they feel are untrue. The more they dig, the more trouble you get into. Just say you don’t know that specific algorithm. More often than not, your interviewer will ask you about another algorithm or let you pick which one you want to discuss. Anyway, here’s to hoping for a successful next technical interview for you! Cheers!
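As a concrete refresher for the Sorting and Path Finding categories above, here is what a minimal Quick Sort and an iterative BFS can look like in Python (the sample list and graph are just illustrations):

```python
from collections import deque

def quicksort(items):
    """Classic quicksort: pick a pivot, partition, recurse on each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

def bfs(graph, start):
    """Iterative breadth-first search; returns nodes in visit order."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

print(quicksort([5, 2, 9, 1, 5]))      # [1, 2, 5, 5, 9]
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))                 # ['A', 'B', 'C', 'D']
```

Being able to explain each line here (why the pivot choice matters, why BFS needs the seen set) is exactly the kind of understanding interviewers probe for.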
https://projectkenneth.medium.com/technical-interview-on-algorithms-where-should-i-begin-7be348dd430c
['Project Kenneth']
2020-12-05 11:12:45.272000+00:00
['Algorithms', 'Technical Interview Tips', 'Software Engineering', 'Technical Interview', 'Interview Questions']
How to make a VR app with zero experience
Unless you’ve been living under a rock, you should be fascinated (or at least intrigued?) by VR. Check out some articles that gave me inspiration: After some investigation, turns out it’s actually not that hard to make a VR app, even if you don’t have any 3D or coding experience. Just to give you an example, here is an app I made using this method. Tools you need: The app we’ll be making is a simple virtual environment tour. You can toggle auto walk using the trigger in your VR headset. Step 1: Create a virtual environment Open Unity. Create a project in the pop up window. No need to modify any settings at this point. I’ll be using this Forest Environment free asset created by Patryk Zatylny, but you can use whatever asset you like. Open the URL, click “Open in Unity”. Unity will load it in the Asset Store panel. Then click “Download” (Unity doesn’t allow downloading asset directly from the web page). After the download is finished, you’ll see a pop up. Click “Import”. Navigate to your Project panel (If you can’t find it, go to top menu bar, Window > Layouts > Default). Double-click the demoScene_free file in the file structure (use the slider in the lower right to change thumbnail size). Now you can see the beautiful view in the Scene panel. Step 2: Set up Cardboard Unity SDK In the Hierarchy panel, delete First Person Controller and Main Camera. Unzip the Cardboard SDK you downloaded from the Github repo, you get a cardboard-unity-master folder. In the top menu bar, go to Asset > Import package > Custom package, choose CardboardSDKForUnity in the cardboard-unity-master folder. In the next pop up, click “Import”. In your Project panel, you’ll see a Cardboard folder. Go to the Prefabs subfolder, drag CardboardMain and drop it in the Scene. Test it out by clicking on the play button. Use your mouse/trackpad and alt/control keys to simulate camera pan/tilt. 
When you are not in play mode, you can modify the initial position of CardboardMain using the Transform section in the Inspector panel all the way on the right, or using the transform tools in the toolbar in the upper left. (More details on how to position things in Unity) Step 3: Add an auto walk function Unzip the auto walk script you downloaded from the Github repo, you get a Google-Cardboard-master folder. Drag the Autowalk.cs file and drop it in the Assets folder in the Project panel. Click to select CardboardMain in the Hierarchy panel, click “Add component” in the Inspector panel all the way to the right, find Autowalk and select it. You’ll see a new Autowalk section in the Inspector panel. Check “Walk When Triggered” and set the speed to 1 (or whatever you like). Now in play mode, you can use mouse click/trackpad tap to simulate the trigger to see autowalk in action! Step 4: Package the app Go to top menu bar, File > Build settings. Select Android and click on “Player settings”. Enter a Company Name (up on the top) and a Bundle Identifier (in the Other Settings section down on the bottom). In Resolution and Presentation section, change Default Orientation to Landscape Left. Scroll down to Publishing Settings. If you don’t have a keystore, check “Create New Keystore”, enter your password, and click “Browse Keystore”. In the pop up, enter a name for your keystore and click “Save”. Now you should see the file path of your keystore next to “Browse Keystore”. (More details on signing an Android app) In the “Key” section below, in the Alias dropdown, select “Create a new key”. Enter your info in the pop up and click “Create Key”. Optionally you can add your app icon in the Icon section. Click “Build” in the Build Settings window. During the process, you might get asked to select the root Android SDK folder. Unzip the Android SDK file you downloaded and select that folder. You might also get asked to update SDK. Just confirm to update. 
After the build is finished, you can install the app on your Android phone, test it out with your VR headset, or even upload it to the Google Play Store! (Sometimes when you rotate your head, the camera doesn’t rotate with you. Exiting and reopening the app should fix it. It might have something to do with the SDK version and Android version. Let me know if you find out more details about this bug.) You are done! That wasn’t too hard, was it? Rambles
https://medium.com/hackernoon/how-to-make-a-vr-app-with-zero-experience-927438e2dede
['Liu Liu']
2017-07-11 23:09:34.052000+00:00
['Design Tools', 'Virtual Reality', 'Design']
How Deep Learning Can Help Doctors Prevent Blindness In Diabetes
One day, diagnosing serious diseases may be as easy as taking a temperature or checking blood pressure. But in the near term, millions of diabetics could keep their vision thanks to an AI algorithm helping doctors quickly spot diabetic retinopathy. Retina image Table of contents Introduction Data Evaluation Metric EDA and Image Processing TensorFlow Input Pipeline Model Error Analysis References Future Work 1. Introduction The COVID-19 pandemic is stretching hospital resources to the breaking point in many countries of the world. It’s no surprise that many people hope AI could speed up patient screening and ease the strain on clinical staff. This case study is not about COVID-19 but about diabetes, which is a growing concern. With 70 million people with diabetes, India has a growing problem with diabetic retinopathy. The disease creates damage or an abnormal change in the tissue at the back of the retina that can lead to total blindness, and 18 percent of diabetic Indians already have the ailment. With 415 million diabetics at risk for blindness worldwide, the disease is a global concern. But the good news is that permanent vision loss is not inevitable. If caught early, the disease can be treated; if not, it can lead to total blindness. Examples of retinal fundus photographs that are taken to screen for DR. The image on the left is of a healthy retina (A), whereas the image on the right is a retina with referable diabetic retinopathy (B) due to a number of hemorrhages (red spots) present. One of the common ways to detect diabetic retinopathy is to have a specialist examine pictures of the back of the eye and determine the disease’s presence and its severity. Severity is determined by the type of damage present. Specialized training is required to interpret these photographs. Recent advances in Machine Learning and Computer Vision can improve the DR screening process. 
Deep Learning algorithms can interpret signs of DR in the retinal photographs, helping doctors screen more patients. 2. Data The data is obtained from the Kaggle competition APTOS 2019 Blindness Detection. The dataset contains a large set of retina images taken using fundus photography under a variety of lighting conditions. There are a total of 3662 retina images in the dataset. A clinician has rated each image on a scale of 0 to 4. 0 — No DR, 1 — Mild, 2 — Moderate, 3 — Severe, 4 — Proliferative DR A few images from the dataset 3. Evaluation Metric The evaluation metric for a multi-class classification problem could be classification accuracy or an F-score. The Kaggle competition had a defined evaluation metric: quadratic weighted kappa. Quadratic weighted kappa is a measurement of agreement that ranges from 0 (random) to 1 (perfect agreement). There is a better explanation available here. Cohen’s kappa 4. EDA and Image Processing The dataset is imbalanced. There are a lot more images of a healthy retina. Only 5% of the total images belong to class 3 (severe DR). Class distribution To correct for the data imbalance, we will use class weighting. Class weighting Weight for class 0: 1.01 Weight for class 1: 4.95 Weight for class 2: 1.83 Weight for class 3: 9.49 Weight for class 4: 6.21 Let’s use a TSNE visualization with a perplexity of 40. Class 0 is separable, but the other classes are not. 5. TensorFlow Input Pipeline 1. We define the key configuration parameters. Configurations 2. Load the data The tf.data API enables you to build complex input pipelines from simple, reusable pieces. To construct the dataset we use tf.data.Dataset.from_tensor_slices(). We then transform this dataset into a new one by chaining methods. Load the data using the tf.data API Training images count: 2929 Validating images count: 733 As we have 5 labels, we will convert these into a one-hot tensor. For example, 2 will be converted to [0, 0, 1, 0, 0]. 
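Class weights like those above are derived from the class frequencies: rarer classes get larger weights so they contribute proportionally more to the loss. A minimal sketch of the usual total / (n_classes * count) formula (the counts below are hypothetical placeholders, not the actual APTOS class distribution):

```python
def class_weights(counts):
    """Weight each class by total / (n_classes * count), so that rare
    classes contribute proportionally more to the training loss."""
    total = sum(counts.values())
    n = len(counts)
    return {label: total / (n * count) for label, count in counts.items()}

# Hypothetical counts for the five DR grades (illustrative only)
counts = {0: 1800, 1: 370, 2: 1000, 3: 190, 4: 295}
weights = class_weights(counts)
print({k: round(v, 2) for k, v in weights.items()})
```

A dictionary in this shape can be passed straight to the class_weight argument of Keras model.fit().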
Also, we have to map each filename to its label. We can do this using the following methods. Create (image, label) pairs Let’s visualize the shape of the image and label. Image shape: (320, 320, 3) Label: [1. 0. 0. 0. 0.] Let’s use buffered prefetching so we can yield data from disk without having I/O become blocking. We are using the tf.image API for data augmentation. Data augmentation Visualize the dataset after image augmentation. Visualize augmented images 6. Model 1. Define Callbacks The checkpoint callback saves the best weights of the model, so that the next time we want to use the model, we do not have to retrain it. The early stopping callback is used to stop the training process if the model starts overfitting or becomes stagnant. The reduce-LR-on-plateau callback is used to reduce the learning rate when the metric stops improving. Callbacks 2. Transfer learning for pre-trained weights We initialize the model with pre-trained ImageNet weights. For our use case, we have used accuracy as the metric, which tells us the fraction of correct predictions. Since there are 5 classes, we are using categorical crossentropy as the loss function. We have also specified class weights, as discussed earlier. Let’s plot the model accuracy and loss for the train and validation sets. We can see that the accuracy of our model is 83%. Our accuracy on the validation data is lower than on the train data, which indicates overfitting. Loss and Accuracy The confusion matrix indicates classes 1, 3, and 4 are being misclassified as class 2. Maybe our model has not been able to detect the spots/hemorrhages that are present in classes 3 and 4 (severe cases of DR). Confusion matrix for validation set 3. High Resolution Network HRNet was recently developed for human pose estimation but can be used for image classification, object detection, etc. Code is provided by the researchers here. The official code is written in PyTorch. We had to rewrite the code in TensorFlow. 
HRNet maintains high-resolution representations through the whole process by connecting high-to-low resolution convolutions in parallel, and produces strong high-resolution representations by repeatedly conducting fusions across the parallel convolutions. The research paper is linked here. High Resolution Network Architecture For image classification, we need to replace the head with a softmax layer. You can find the code in my GitHub repository. The results of this model were not encouraging. We got an accuracy of 68%. We experimented with different loss functions and optimizers, but we were not able to improve the performance. 7. Error Analysis Let’s look at the classification report. In our case, the recall score for classes 3 and 4 is very low, which means that we are misidentifying these classes, where the associated cost is very high. We need to improve our model and the recall scores for each class. Classification report Let’s visualize the images with actual and predicted labels. Actual and predicted labels with score in percent We can see the image and the model prediction with the probability score for each class. Visualize image and its predicted label with score Actual Label - Proliferative DR Predicted Label - Moderate Image and prediction score The inference time is 0.45 seconds per prediction in a Jupyter notebook. The Streamlit web app takes more time on a local machine: 10–20 seconds per prediction. Streamlit web app to upload an image and get the prediction References Future Work Use data from other sources such as eyePACS/Messidor, which could further improve our accuracy. Set up a continuous integration system for our codebase, which will check the functionality of the code and evaluate the model about to be deployed. Package up the prediction system as a REST API and deploy it as a Docker container as a serverless function to AWS Lambda. You can connect with me on LinkedIn. You can view the code in this GitHub repository.
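Per-class recall, which the error analysis above focuses on, can be read directly off a confusion matrix: the diagonal divided by the row sums. A small NumPy sketch with made-up numbers (not the model's actual confusion matrix):

```python
import numpy as np

def per_class_recall(cm):
    """Recall for class i = correctly predicted i / all true i,
    i.e. the diagonal divided by the row sums of the confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Rows = true class, columns = predicted class (illustrative numbers only)
cm = [[90,  5,  5, 0, 0],
      [10, 70, 20, 0, 0],
      [ 5, 10, 80, 3, 2],
      [ 0,  0, 15, 4, 1],
      [ 0,  0, 12, 2, 6]]
recalls = per_class_recall(cm)
print(np.round(recalls, 2))
```

In this toy matrix, most true class 3 and 4 samples land in the class 2 column, so their recall collapses, which mirrors the failure mode described above where severe DR cases are predicted as moderate.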
https://medium.com/analytics-vidhya/how-deep-learning-can-help-doctors-prevent-blindness-in-diabetes-98d94761227
['Aniket Mishrikotkar']
2020-12-21 09:46:43.256000+00:00
['Machine Learning', 'Python', 'TensorFlow', 'Deep Learning', 'Healthcare']
Getting to 100 percent: Enugu’s fight to maintain its polio free status
By Patrick Egwu (Lead Writer) Enugu State Governor, Ifeanyi Ugwuanyi, during the flag-off of one of the immunisation exercises. Photo credit: Nigeria Health Watch Veronica Asogwa is a petty trader in Abakpa Nike, one of the largest suburbs in Enugu State, southeast Nigeria. When the state announced the commencement of the sub-national polio vaccination campaign on January 17, 2020, Asogwa, 37, prepared to take her two-year-old son, Emeka, to a vaccination centre a few kilometres from where she lives with her husband. “I even told my neighbours where I do business who didn’t know about it, because I don’t want anyone to be left out,” she says, adding, “I don’t want to be a victim like my sister.” About 10 years ago, Asogwa’s elder sister, who was living in Sokoto, northwest Nigeria, lost her baby to polio. She had not taken him to be vaccinated when the state government began a vaccination programme, in collaboration with the World Health Organisation [WHO] and other global partners, to eradicate the disease. A few months later, she lost her son to polio. “We were all saddened by the loss at that time and she never fully recovered psychologically because he was the only son she had,” Asogwa recalls. “So that is why whenever I hear anything that has to do with disease or campaigns for healthy living, I don’t joke about it.” From January 18–21, 2020, Enugu State conducted the sub-national vaccination against polio. The state targeted an estimated “one million children between 0–59 months,” according to George Ugwu, the Executive Secretary of the Enugu State Primary Health Care Development Agency, which was responsible for the vaccination exercise. A record number of 479 immunisation officers across the 17 local government areas of the state were mobilised. A signpost on immunisation at one of the centres. 
Photo credit: Nigeria Health Watch When the exercise ended on January 21, 2020, a subsequent two-day mop-up exercise was carried out to cover areas that had failed to achieve the 100 percent target, to ensure that no child was left out in the process. In February, the state announced that it had achieved 100 percent success in the immunisation exercise against polio — a rung higher than the 98.5 percent it recorded in 2019. “This time the Sub-National Immunisation Plus Days we just concluded was very successful. I can categorically say that we recorded 101 percent, even exceeding the numerical mathematical 100 percent,” Ugwu said during the polio-free announcement. “Our national supervisors, partners and other stakeholders were impressed with the success and vast reach of our immunisation teams during the exercise.” One particular approach — collaboration with traditional rulers, religious leaders and community heads — greatly contributed to the success of the vaccination programme, Ugwu says. These individuals used their positions of authority and influence to mobilise their followers and to create awareness of the need to get their children immunised. Using local health coordinators at the LGAs, the agency reached out to these community influencers to help create awareness and make people aware of upcoming immunisation exercises. This is their method of achieving a great turnout in the exercises, and so far it seems to be working for them. Image credit: Nigeria Health Watch Ugwu also said that another factor that helped during the exercise was that each local government in the state engaged in a friendly competition to outdo the others in reaching all the children in their area of supervision. Meningitis, measles get similar attention In December 2019, Enugu State became the first state in Southeast Nigeria to begin mass immunisation against meningitis with a view to preventing maternal mortality. 
The immunisation was specifically targeted at children between the ages of 0 and 5. There was a high turnout of people, especially mothers and care-givers who brought their children for the immunisation. Ugwu said the government is addressing all public health concerns in the state as they affect mothers and newborns, especially with “the just concluded introduction of the second dose of measles into the Routine Immunisation for children between 15 and 23 months of age.” In November 2019, the state ensured that the Measles Second Dose Campaign was transferred into the routine immunisation exercise following campaigns by the Federal Government to immunise more than 28 million children nationwide. The immunisation programmes against public health diseases in the state, according to Ugwu, are in line with achieving one of the Sustainable Development Goals (SDGs), which is to reduce the global maternal mortality ratio to less than 70 per 100,000 live births by 2030. The UN SDGs hope to end preventable deaths of newborns and children under 5 years of age. All countries aim to reduce neonatal mortality to as low as 12 per 1,000 live births and under-5 mortality to as low as 25 per 1,000 live births. Mothers and care-givers with their children at one of the immunisation centres. Photo credit: Nigeria Health Watch Last year, Enugu State was in first place nationally in the maternal and child health indices, according to a Lot Quality Assurance survey of the National Primary Healthcare Development Agency. The survey is done every quarter to assess the performance and quality of routine immunisation programmes, in order to guide decision making across the states. A government’s determination to fund healthcare for its citizens The polio immunisation exercise was funded by the state government. Prior to this, international donors were relied upon to provide funding for activities targeted towards the elimination of polio in Nigeria. 
The Bill & Melinda Gates Foundation, Gavi, the Vaccine Alliance, Rotary International and WHO are some of the notable funding partners providing funding to eliminate polio in the country. However, Enugu State is reversing this trend and adopting a more independent approach by providing mobilisation funds to the agencies in charge, to carry out their duties. Last year, the state government provided N100 million to enable the agency to benefit from the Basic Healthcare Provision Fund (BHCPF). Another N200 million take-off grant was provided to the agency for Universal Health Coverage. These funds are being channeled into addressing public health issues in the state to avoid outbreaks, Ugwu said. Immunization officials getting ready with their equipment at one of the centres. Photo credit: Nigeria Health Watch For the past 14 years, the state has been without polio. The last polio case was in 2006. This, Ugwu says, is a result of the huge funding support from the state government in the light of dwindling funding from international health organizations. But experts are concerned about the sustainability of funding for health programmes by the state government. “We cannot deny the fact that funding support from international organisations like the WHO, Bill Gates among others are very important in the fight against killer diseases in Nigeria and Africa at large,” Dr. Michael Agada, who works at the public health department of the University of Nigeria Teaching Hospital, said. “HIV/AIDs, malaria and the likes are being eradicated across developing nations of the world because of the enormous grants from these organisations.” Another challenge, according to Ugwu, was accessing the “hard to reach” places in the rural areas. However, he said the agency succeeded in getting to those places by constituting special teams who got access to those areas and administered the vaccines to the target population. 
The mop-up vaccination also helped in reaching those left out in the first phase of the exercise, he added. Mothers with their children at the immunization centre. Photo credit: Nigeria Health Watch The state government is doing a lot to eradicate polio and address other public health concerns by releasing large funds to the agencies involved, and that is why they are seeing results, Agada said. Ugwu is optimistic that the government’s support for public health in the state will continually improve. “Successive governments in the state have been very supportive in our health campaigns and through interventions in primary healthcare programmes,” he said, adding, “We are hopeful that it continues to get better with subsequent interventions.” Asogwa agrees with Ugwu. She said many more lives would be saved if the government continues with its free public health programmes for the citizens. “I am happy that my child and other children benefited from this and I’m sure it would continue to save more lives in the state,” she said. Do you know other states providing effective health interventions for polio? Let us know on Twitter @nighealthwatch and Facebook @nigeriahealthwatch
https://nigeriahealthwatch.medium.com/getting-to-100-percent-enugus-fight-to-maintain-its-polio-free-status-e50f88bb2b61
['Nigeria Health Watch']
2020-03-02 12:43:17.295000+00:00
['Immunization', 'Routine Immunization', 'Health', 'Polio']
Get started with BigQuery and dbt, the easy way
Watch this to see how we set up and run dbt and BigQuery on the cloud shell Get started with BigQuery and dbt, the easy way Find here the quickest way to get started with dbt and BigQuery using only free offerings from Google Cloud. I’m a big fan of dbt — an open source project that helps me build data pipelines around BigQuery using only SQL. Get started with BigQuery and dbt There’s a lot already written about BigQuery and dbt. For example, there’s this official tutorial to set up dbt with BigQuery, with a lot more detail than I give here (thanks Claire Carroll). The goal of this post is to share with you some GCP secrets to make the installation as easy as possible. Step 1: Create a free Google Cloud account Good news: You don’t need a credit card to have your own Google Cloud account. You’ll get a free terabyte of queries in BigQuery every month, and also a free shell environment you can use through your browser. While creating your account, you’ll also create your first Google Cloud project. We’ll use its id later. Step 2: Welcome to the free Cloud Shell On console.cloud.google.com click on the “cloud shell” icon on top, and a shell environment will be ready for your use in a minute or so: Find the Cloud Shell button A cloud shell will open Step 3: pip3 install dbt Once in the cloud shell, installing dbt is really easy. To avoid problems, skip installing the full dbt and just install the dbt-bigquery parts with: $ pip3 install --user --upgrade dbt-bigquery Notes: pip3 instead of pip, to make sure we are in the Python 3 world. --user to avoid installing at the root level. --upgrade just in case an older version of dbt was installed. Step 3.1: debug If you get an error like /usr/local/bin/python3.7: bad interpreter: No such file or directory, uninstall dbt and reinstall. 
$ pip3 uninstall dbt-core $ pip3 install --user --upgrade dbt-bigquery Step 4: start your first dbt project $ ~/.local/bin/dbt init first_project Notes: ~/.local/bin/ is the path to the just installed dbt. Consider adding this directory to the PATH env, to avoid the need to prepend it. Step 4.1: try to run dbt $ cd first_project/ $ ~/.local/bin/dbt run That could have run! But it didn’t. We got an error message like Credentials in profile “default”, target “dev” invalid: Runtime Error instead. That means we still need to configure a way for dbt to connect and authenticate to BigQuery. The good news: This is really easy, since we are already in a Google Cloud environment. Step 5: configure dbt to BigQuery Use vim or your favorite shell editor: $ vim ~/.dbt/profiles.yml In this file, replace the default configuration (which goes to redshift… why?), and instead point it to BigQuery: default: target: dev outputs: dev: type: bigquery method: oauth project: the-id-of-your-project dataset: temp Notes: oauth is all you need to say — the authentication libraries will recognize that it’s you connecting through dbt to BigQuery within the cloud shell — no further configuration is needed. the-id-of-your-project should be the id of your project. temp is the name of a dataset you must create in BigQuery to hold your tables. Note that if you are using BigQuery without a credit card, then the tables in your datasets can only live for a certain number of days. Step 5: run dbt ~/.local/bin/dbt run If everything has gone well, now we have run dbt! That means it created a sample view and a sample table in our provided dataset: Check Claire’s tutorial for more details on what happens next. 
Step 5.1: the magic cloud shell editor While editing your dbt files and configuration, check out the ready-to-use cloud shell editor. Launch it by clicking the “open editor” button in the cloud shell: Open the cloud shell editor Or just click on this link: ssh.cloud.google.com/cloudshell/editor Step 6: the magic of docs dbt is really good about generating documentation for our pipelines, and you can generate and serve it with the following commands: $ ~/.local/bin/dbt docs generate $ ~/.local/bin/dbt docs serve The really magical part is this: If you are running dbt in the cloud shell, and dbt gives you a localhost (127.0.0.1) URL to contact, you might think it won’t be easy to connect. But it is. Just click on the cloud shell link, and magic will happen: Turns out the cloud shell can create HTTP proxies to give you direct access to any web server running on it — and this immediately gives you access to the docs being served by dbt. Isn’t that neat? Next steps Write all your SQL pipelines within the dbt environment, and make sure to keep them under version control. Find a better way to create BigQuery ML models. How to run dbt every day on a schedule, in a serverless way? Check Mete Atamel’s solution with Cloud Run and Cloud Scheduler. More resources Hamza Khan medium.com/weareservian/bigquery-dbt-modern-problems-require-modern-solutions-b40faedc8aaf Mark Rittman rittmananalytics.com/blog/2020/2/8/multichannel-attribution-bigquery-dbt-looker-segment Lak Lakshmanan medium.com/google-cloud/loading-and-transforming-data-into-bigquery-using-dbt-65307ad401cd Want more? I’m Felipe Hoffa, a Developer Advocate for Google Cloud. Follow me on @felipehoffa, find my previous posts on medium.com/@hoffa, and all about BigQuery on reddit.com/r/bigquery.
https://towardsdatascience.com/get-started-with-bigquery-and-dbt-the-easy-way-36b9d9735e35
['Felipe Hoffa']
2020-07-30 12:33:12.678000+00:00
['Data Science', 'Google Cloud Platform', 'Dbt', 'Data Engineering', 'Bigquery']
COVID-19 Data Analysis with Python
So, we can see that the graph is increasing exponentially each day. Next, we are going to see how total cases increased each week, starting from the first day, which is 22nd January 2020. To do that, we have to take every 7th day’s data and create a table from it. “copy()” will create a copy of the dataset, so the original dataset will not be changed. Now, we will plot the graph to see how confirmed cases per week increased. We plot two graphs, a line graph and a log line graph. The logarithmic scale is useful for plotting data that includes very small numbers and very large numbers, so we can see all the numbers easily, without the small numbers squeezed too closely. Alongside this, we will see how many new confirmed cases appeared each week. From the data we can see whether it has reached a peak or not. The coronavirus peak is the day/week on which there is the highest number of cases. We can see that the first week has the lowest cases, but after that they increased drastically. Now we will plot a bar graph for visualization. We will do the same for the death dataset: we create a table of total deaths each week and a count of new deaths each week. Let’s plot both line and log line graphs for new deaths each week. Now, we will see the top 10 countries that have the highest number of confirmed cases. We will use ‘groupby’ functionality for this. A ‘groupby’ operation involves some combination of splitting the object, applying a function, and combining the results. We can see the US has the highest number of cases. Total cases only give the latest confirmed count, but what if we want to know when cases started increasing in each of the top 10 countries? To find out, we will compute each week’s new confirmed cases. Let’s plot the graph to understand the pattern for each week. So after the 7th week, new cases increased every week for these countries. We can see that at the end of the 14th week, there was a major decrease in new cases for some countries.
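A minimal pandas sketch of the steps described above — the weekly table built from every 7th day, the week-over-week differences, and the groupby for top countries. The numbers and the Country/Confirmed column layout are illustrative assumptions, not the article’s actual dataset:

```python
import pandas as pd

# Illustrative daily new-case counts for 28 days, starting 22 January
# 2020 (placeholder numbers standing in for the real COVID-19 data).
dates = pd.date_range("2020-01-22", periods=28)
daily_new = pd.Series([10 * (i + 1) for i in range(28)], index=dates)
cumulative = daily_new.cumsum()

# Take every 7th day to build the weekly table; .copy() makes sure
# later changes do not affect the original dataset.
weekly = cumulative.iloc[6::7].copy()

# New confirmed cases each week = difference of consecutive totals.
new_per_week = weekly.diff()
new_per_week.iloc[0] = weekly.iloc[0]  # first week has no prior total

# Top countries by confirmed count, via groupby: split by country,
# sum each group, then sort the combined result.
df = pd.DataFrame({
    "Country": ["US", "US", "Italy", "Spain"],
    "Confirmed": [500, 300, 600, 400],
})
top = df.groupby("Country")["Confirmed"].sum().sort_values(ascending=False)
```

For the log line graph, the same `weekly` series can simply be drawn with matplotlib’s `plt.yscale('log')`, which keeps both small early counts and large later counts readable on one axis.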
https://medium.com/analytics-vidhya/understanding-coronavirus-using-data-analysis-with-python-f6cf726cbba9
['Dhruvil Shah']
2020-05-06 12:24:10.119000+00:00
['Data Science', 'Data Analysis', 'Data Analytics', 'Coronavirus', 'Covid 19']
Robinhood Had One Job To Do
This past Monday morning, as the world braced for the reopening of stock exchanges in the wake of Friday’s partial meltdown, the highly popular mobile investing startup Robinhood went offline. The official statement from the company said that the outage was caused by “instability in a part of our infrastructure that allows our systems to communicate with each other,” which, rather than calming the panic and frustration felt by users who were missing out on the largest daily point gain in the Dow Jones Industrial Average’s history, only made things worse, and led to speculation and calls for class-action suits against the company for the failure of its app. The “infrastructure” issue was resolved late Tuesday night, and the app was back up and running on Wednesday, but the damage was done: The more than 10 million active users of the app had missed out on the opportunity to make trades during the greatest period of volatility since 2009. The spectacular implosion of a startup valued at over $7 billion raises all kinds of questions about its underlying commitment to its customers. Apps fail, software breaks, servers go offline. These things happen routinely in the modern internet-enabled world we live in. So it’s not that the app went down that should concern Robinhood, or its users. It’s the fact that, so far, the company itself has shown no evidence of fixing the problem, or providing a workaround solution (which, according to this article, could potentially put it in hot water with FINRA). The issue, I would submit, is less technical and more personal. Robinhood uses AWS — Amazon Web Services — to run virtually every aspect of its platform. AWS itself never went down, so that can’t be the part of the “infrastructure” that was inoperative. Amazon, as we all know, has countless built-in redundancies and server farms to create virtually uninterruptible uptime for its servers. 
Given the numerous companies, organizations and even national defense partners that utilize AWS and its framework to store and run their websites, AWS is unlikely to be the culprit here. More likely, the problem resides in the makeup of Robinhood as a business itself. It is notoriously lean, even for a tech startup, employing fewer than 1,000 people, even six years into its existence. Its whole raison d’être was to disrupt the stuffy old elitist investing business, which required investors to pay for the expertise and licenses of brokers, traders, and financial experts to actually trade their stocks for them (of note, two of the oldest and stuffiest of the investing firms, Schwab and Fidelity, also had online trading problems, but recovered in a matter of hours, rather than days). Robinhood promised to free retail investors from the shackles of the old investment brokerage model and put power back into their hands. Even the name of the company speaks to how it has modeled itself, as the “Robin Hood” of the finance world — putting the power of trading in the hands of everyone, not just the wealthy, to paraphrase their motto. But for all that empowerment, the reality, as the past week of denial and confusion from Robinhood’s leadership revealed, remains sadly different. Robinhood succeeded spectacularly in its objective to cut out the middleman and empower the average retail trader directly. But it did so at the cost of any sort of fallback measure if its vaunted technological capability failed. If anything, Robinhood failed because it viewed redundancy of any kind as anathema. Its whole stated purpose is to “democratize the financial markets” but now it finds itself on the hot seat because it provided all the access and ease of operation, with none of the corresponding security or redundancy of systems that traditional houses, in adapting to its disruptive emergence, had to put in place. 
In the end, Robinhood failed, and is continuing to fail, not at the job of democratizing the market, or rendering access to stock exchanges open to the masses. It is failing at taking care of its customers, pure and simple. It is failing at providing basic customer service, and we shouldn’t expect to see it significantly improve on that ability to provide service, even if it fixes its “infrastructure” issue and gets its app back online. The reason we shouldn’t expect customer service to significantly improve is that, at its core, Robinhood has always been too small, too lean, and too do-it-yourself to provide any valid backup plans. The whole idea of a backup plan intimates that you believe your technology might fail at some point. And Robinhood, along with so many of the other mobile-first, DIY-centric startups that have sprung to life in the past decade, cannot admit that sometimes it takes a team of people to overcome the limits of an overdependence on technology.
https://medium.com/thinkists/robinhood-had-one-job-to-do-d279ee3f6218
['Jay Michaelson']
2020-03-06 21:11:46.617000+00:00
['News', 'Business', 'Money', 'Economy', 'Startup']
Freelance Writing Ain’t Easy, Especially When You’re Not White
Freelance Writing Ain’t Easy, Especially When You’re Not White Our Voices Only Matter On Special Occasions Image by Antonio Cansino from Pixabay If you discuss how much freelance writers make, incomes fluctuate significantly. Usually though, the further away you are from being white, male and able-bodied, the less value your voice has because society trains us to view white voices as authorities on all subjects—white men even more so. They are looked to for topics on racism, poverty, PIC (prison industrial complex) and others with which they have no personal experience—but hey, they read a book. They are rarely challenged or questioned, especially if they have a PhD before their name because that gives them carte blanche to talk, with authority, about every damn topic. Look at Elon Musk, who talks about COVID and people think since he has a business, and a degree in economics and physics, he must know what he’s talking about. As such, there is little necessity for voices outside of that, and when they do want our voices, it’s usually only one. Most publications either staff no Black writers or PoC writers, or staff one and that’s their go-to for only topics related to race, etc. They pigeonhole us. They limit us. They only look for us during times when the topic is hot or Black History Month, Indigenous Peoples Day and any special occasion where they look foolish having a white person write about it. We are constantly fighting to get our foot in the door, especially as film critics, where the voices are overwhelmingly white and thanks to the variety of roadblocks we are stifled even more. Screeners Image by Pexels from Pixabay When we request film or show screeners, it’s a roll of the dice whether we will get them, because a lot of the rules ensure that only certain voices are heard. Often, they will ask if you have a guaranteed publication that you are writing a review or feature for and if the publication is a bit smaller, it’s likely they will say no. 
If you don’t have a place but want to see the movie and then pitch it, that’s another chance you’ll get a no. Often the people who get their hands on films and shows first are staff and again—they’re usually white. So then, like the films and shows that we decry are overly saturated with whiteness, we get an over-saturation of white critics writing reviews that often sound the same as their fellow critics’, overlooking the same things as them. Some publications will even ask if you received a screener for the movie you’re pitching, so you can be caught in a catch-22, where you can’t get the screener without a publication and the publication won’t accept your pitch without you screening the film. Film Festivals Image by PublicDomainPictures from Pixabay Film festivals can be prohibitively expensive, especially for freelancers, who have to foot the cost to attend the festivals and don’t have a steady income stream. Even the application process for some of these festivals automatically shuts out freelance, diverse voices because they require the publication you are representing (because you must be part of the bigger machine), and some even require that publication to have a certain level of traffic to its site—again removing smaller publications, which usually have a more diverse group of writers. Some festivals are doing their best to be more inclusive, but it’s still not enough because more writers inevitably give up film criticism as they are tired of hounding people for screeners. Even worse, few places take the initiative to be inclusive. They are happy keeping us out until enough noise is made on social media discussing a film or show that only sent out screeners to a particular demographic, then all of a sudden they fix their “mistake”. The mistake isn’t a mistake when you hire people who only want to hear white voices review their films and shows. The mistake isn’t a mistake when it takes public pressure for it to be corrected. 
The mistake isn’t a mistake when it’s designed to force out diverse voices and it does succeed. It’s upsetting, tiring and frustrating to experience, and to hear fellow writers talk about screeners they emailed a request for and were either outright ignored, told no, or told they would hear back later—but never do. They need to question who they hire and who they allow to screen films and interview the cast, because there is no excuse that is acceptable anymore. Clueless by choice isn’t clueless, it’s plausible deniability and we are tired of hearing the lies.
https://medium.com/breakthrough/freelance-writing-aint-easy-especially-when-you-re-not-white-d3637f3e1b5d
[]
2020-10-11 16:55:50.117000+00:00
['Society', 'Equality', 'Culture', 'Race', 'Film']
Top 10 Dangerous Things Women Have Done in the Name of Beauty
Top 10 Dangerous Things Women Have Done in the Name of Beauty The pursuit of beauty throughout history Photo by Gabriel Brandton Unsplash Beauty standards have changed throughout human history, varying according to different cultures and traditions, but such standards and goals have always existed, putting a lot of pressure on women. In an attempt to reach a certain beauty standard, women have broken the bones in their feet, compressed the organs in their stomach and chest, applied poisonous makeup, and more. Here are the top ten dangerous things women have done in the name of beauty: 10. Foot binding The practice of foot binding is recorded to have begun in China around 960 A.D. and it wasn’t outlawed until 1912. Despite being forbidden, some women continued to bind their feet in secret up until the 1930s. Having tiny feet was a status symbol, and a way to secure an advantageous marriage. “First, her feet were plunged into hot water and her toenails clipped short. Then the feet were massaged and oiled before all the toes, except the big toes, were broken and bound flat against the sole, making a triangle shape. Next, her arch was strained as the foot was bent double. Finally, the feet were bound in place using a silk strip measuring ten feet long and two inches wide.” — Smithsonian Magazine The process was painful, and it resulted in terribly deformed feet that hindered mobility for the rest of the woman’s life. 9. Poisonous eyedrops In Edwardian England, it was considered fashionable to appear with dilated pupils to emulate the look of being in love. To achieve this look, women used belladonna eyedrops, extracted from the poisonous plant of the same name (also known as deadly nightshade). At best, the belladonna eyedrops caused blurred vision; at worst, irregular heartbeat and blindness. 8. 
The tapeworm diet Eating whatever you want without gaining weight, that is the dream of many women who are constantly pressured by beauty standards to remain slender at all costs. In the 1800s it was no different, and one solution for this particular problem seemed perfect: tapeworms. The idea was simple: you would ingest a pill containing the tapeworm egg, which would hatch in your intestines and a worm would grow. As you ate to your heart’s content, the worm would consume most of it, and you’d stay thin and beautiful without any effort or sacrifice. The complicated part was getting rid of the worm later since treatments for that sort of thing weren’t as effective as they are today. 7. Lead makeup Lead was used as an ingredient in makeup from Ancient Rome until the 1800s. It was used mostly to make women’s faces look paler, to cover blemishes, and hide pox scars. Queen Elizabeth I, whose face was badly marked by pox in 1562, is known to have used lead-based makeup to cover the scars and achieve her signature white-face look. Lead, however, is extremely poisonous. Applied to the skin, it slowly poisons the body and can lead to death; in the meantime, it causes dry skin, abdominal pain, constipation, and other unpleasant side-effects (Source: The Cut). 6. Face bleaching and arsenic creams White skin was a beauty standard through much of human history, and women have gone to extreme lengths to achieve that pale look, including washing their faces with bleach or ammonia and using creams made of arsenic. These products were particularly popular in Edwardian England. Before rigorous testing and labelling of beauty products were enforced, manufacturers had free rein to use any substance they wanted and claim their products were perfectly safe, which of course, they weren’t. 5. DIY Botox injections Women have been engaged in the war against aging for centuries. 
Now, neither arsenic nor lead is used in beauty creams, but toxins are still used in cosmetic procedures to prevent aging, as in Botox. Cosmetic Botox is made of a toxin, Botulinum A, which comes from the bacteria that causes botulism, a deadly disease. Mishandling Botox is, therefore, extremely dangerous. Even doctors run the risk of making a mistake and botching a procedure, so it would be dangerous for a layperson to attempt to do it herself. Yet, women buy Botox online and attempt to do it themselves. A badly applied Botox injection can have severe consequences. “If the wrong area of the face or neck is injected, the bacteria can disrupt those muscles that allow a person to breathe, chew or swallow.” — Plastic Surgery Specialists 4. Formaldehyde for straightening hair Formaldehyde is a compound used in building materials such as pressed wood products. It’s also used for preserving corpses and in taxidermy. It is considered a carcinogen, meaning that long-term exposure to formaldehyde causes cancer. Yet, women used it for straightening hair. During the hair straightening process, a solution containing formaldehyde is applied to the hair. It’s later sealed in by heating, which causes the product to evaporate. These vapors can cause breathing problems and skin and eye irritation, and prolonged exposure has been linked to certain types of cancer. 3. Tight lacing corsets In the 1800s, the desired figure for a woman was as slim-waisted as possible. To achieve the desired 17-inch waist, women began using corsets at a young age and only tightened them up more as they grew up. Some women wouldn’t even take off their corsets to sleep, or even during pregnancy. Tight-lacing corsets can cause the internal organs to be squished, and even lose some of their function. Breathing becomes more difficult, so you tire out easily, and there’s even the danger of piercing a lung with a rib. 2. 
Extreme dieting The quest for being thin at all costs didn’t end in the 19th century; it still plagues women today. Women will try all sorts of extreme diets to lose weight, from soup-based diets to simply refusing to eat, which can trigger eating disorders. Developing anorexia or bulimia is a possible consequence of extreme dieting, and anorexia is a disease that can lead to death. 1. Radioactive beauty creams Radium was considered a wonder element when it was first discovered by Madame Marie Curie. It emanated energy, glowing in the dark, and it captured the imagination of both scientists and inventors. In the beginning, the long-term dangers of radium were unknown, which opened the door for it to be used in several areas of life, including the making of fluorescent paint and, eventually, beauty creams. The idea of applying radium directly to the face seems completely insane to anyone in the 21st century, but back in the early 1900s, radium was seen as beneficial. Radium was used in all sorts of beauty products, from toothpaste to anti-aging creams.
https://medium.com/history-of-yesterday/top-10-dangerous-things-women-have-done-in-the-name-of-beauty-5332cb277095
['Renata Gomes']
2020-12-14 09:03:26.096000+00:00
['Health', 'History', 'Life Lessons', 'Beauty', 'Lifestyle']
“The most international micro-agency:” How two London bootcamp graduates built a remote collaboration with software engineers in Gaza
“The most international micro-agency:” How two London bootcamp graduates built a remote collaboration with software engineers in Gaza An interview with Joe Friel and Simon Dupree, the first pair of participants in the Founders Programme, the Founders and Coders social impact graduate fellowship When I first sat down with Joe Friel and Simon Dupree to discuss their experience on the Founders Programme, it reminded me of Atul Gawande’s New Yorker piece “Cowboys and Pit Crews,” in which he examines the traditional cowboy role of the doctor and what he considers the more necessary formation of teams around medical issues. In a way, Joe and Simon are two cowboys, successful professionals who through their experience working with developers in Gaza, have learned what it takes to join a “pit crew” — to effectively collaborate across geographic, linguistic, and political borders. Simon (left) and Joe (right) online with Ismail and Hansen in Gaza What is the Founders Programme? “The Founders Programme provided a runway to create a freelance portfolio and the chance to work on socially and environmentally conscious projects with developers in Gaza. It was pretty much an opportunity I couldn’t walk away from!” — Simon Dupree Rebecca: What were you doing before Founders and Coders, the London-based coding boot camp from which you graduated in October 2018? Joe: Most recently, I ran a social media marketing business connecting influencers with brands; before that, I worked for a children’s media brand. Simon: I was living in Berlin, where I was doing research at the University of Berlin in agronomy and environmental data science. Rebecca: With such interesting careers, why make the switch to web development? 
Joe: All my roles have had a strong digital focus and building things — most recently I oversaw the development of a platform that enables brands to identify, measure and manage influencers — which meant I was spending a lot of time managing digital projects and working with developers. I found this both rewarding and frustrating. Rewarding because it piqued my curiosity about web development, and I started teaching myself to code on the side. Frustrating, because I was relying on people who weren’t necessarily as invested in the ideas as I was, and I just really wanted to be able to build things myself! At the same time, my work was focused on engaging young people, and I saw first hand how the growth of technology — which is, ultimately, morally neutral — was far outpacing regulation. I grew concerned that companies weren’t taking responsibility for their social impact and decided I wanted to work as a developer and focus on projects with a positive social impact. Founders and Coders is an amazing combination of those things, and I was thrilled to be accepted into the programme. Simon: Like Joe, I first started coding on the side, in my case, a bit of Python while working with a team of developers on some data science work. I thought to myself, wow, it’s crazy how much freedom developers have in their way of thinking! You just bring them ideas and they can create something from scratch. I wanted to do more of that. Rebecca: In your own words, what exactly is the “Founders Programme?” Simon: The Founders Programme is an opportunity for a pair of graduates from Founders and Coders to collaborate with developers in Gaza (graduates of the Gaza Code Academy) on pro bono projects for nonprofit organisations and social entrepreneurs. The developers earn a small stipend while building a freelance portfolio that provides them with a runway into founding their own agency or social impact startup. Rebecca: Why did each of you apply to the programme? 
Joe: I knew I wanted to use my skills as a developer to make a social impact, but I figured I wouldn’t be able to afford this until I did my time in a commercial company. So when the Founders Programme came along, I seized the opportunity to fast-track my desire to get into impact-oriented work. The Founders Programme is a continuation of client projects that we do on the bootcamp through Tech for Better. I’d been quite involved in getting the clients in, and I was fascinated by the kinds of problems these nonprofits and social entrepreneurs brought into the space. Simon: I’ve long dreamed of having the freedom to work independently, so throughout the course, I’d been wrestling with the question of whether to take a full-time job or pursue a career as a freelancer. When I met some people from previous cohorts who shared their successful experiences with freelancing, I knew that was what I wanted to do. Joe and Simon team up “Someone asked me the other day if I would have done this with many other people, and I said probably not. With a learning curve this steep, you need a partner you can trust with your life!” — Simon Dupree Ramy and Marwa with Max Gerber (a Founders and Coders alumnus who was visiting Gaza to mentor on the Code Academy); Joe, Simon, and Elysabeth, the Connect 5 product owner, are online in London Rebecca: When did you realise you wanted to work together? Joe: If you still get on when pairing with someone using a German keyboard, it’s a good sign you’re pretty good at pairing together! When we both got into the programme, though we applied individually, we made it pretty clear we wanted to work together. We even worked on our applications together. Based on our feedback, Founders and Coders has tweaked the programme and people now apply as pairs. Simon: Someone asked me the other day if I would have done this with many other people, and I said probably not. With a learning curve this steep, you need a partner you can trust with your life! Joe: Yeah.
The Founders and Coders experience was already intense. There were 16 of us in a very hot room (the infamous summer of 2018!) constantly learning all these new things, building things together, nervous about getting a job, and uncertain what we were doing with our lives! Then with the Founders Programme, it was down to the two of us, working remotely with developers in Gaza. Safe to say, from the start it was important for me to know we’d get on with each other. Rebecca: So how did you make it work? Simon: For starters, we’ve become good friends and have interests in common apart from coding. It’s important to have something else to talk about: in our case, we both produce music on the side. We’re also roughly on the same technical level. Though we have different strengths and weaknesses, we figure things out together. In my experience, if your coding partner is way more advanced, it doesn’t work as well. Joe: To be honest, we’ve had our conflicts. But even the few times we got stressed, we were good at reading each other and giving each other space. Simon: At the beginning, there was a bit of FOMO (fear of missing out), where we felt like we had to understand every part of everything. But at some point this became less important, which was helpful. Both of us got better at strategizing where to put our energy, so we could work a bit more independently while still being able to pair. So, gradually, things have gotten less intense. The Pit Crew “It quickly became clear that as a remote team we would live or die by our adherence to agile processes.” — Joe Friel Aaron and George, co-founders of Nightingale, online with Ramy in Gaza. Rebecca: Tell me about what you’ve called the “dual mission of the Founders Programme.” Joe: Well, the first part is about providing solutions for nonprofits that wouldn’t otherwise have the resources to access digital service development. The second part is about providing employment opportunities for developers in Gaza. 
It’s hard enough finding good freelance work in London, but it’s unimaginably difficult when you’re working remotely from Gaza. It’s about getting London developers comfortable with working with Gazan developers not as an outsourcing pool but as collaborators. Simon: I also hope that the Founders Programme strengthens the ties between Founders and Coders and Gaza Sky Geeks. Unless you’re one of the few mentors who’s been out there, the Gaza Code Academy is this cool thing on a slideshow. The amazing thing about the Founders Programme is that we’ve had the opportunity to make friends with people in Gaza, even though we haven’t physically travelled to Gaza. Joe: Don’t get us wrong, we’d love to visit, and we’re hoping to do so soon. But these remote relationships are important in their own right. Rebecca: Tell me more about working on remote teams with developers in Gaza. What was the setup during the Founders Programme? Simon: We worked in teams of four, so on each project Joe and I worked with a different pair of developers in Gaza. Over the course of four client projects, we worked with Ramy, Asala, Haneen, Ismail and Marwa. It has been a real journey — exciting and fruitful, at times overwhelming and always intense. The process of establishing a well-rounded remote workflow wasn’t always easy, but we’ve learned a lot. Rebecca: What are the most important lessons you’ve learned so far? Joe: Number one, processes are SO important. We learned best practices during the bootcamp — agile, writing proper GitHub issues, scrum. But it’s easy to fall into paying lip service to best practices when the person you’re working with is right there across the laptop and you know them well, having already worked together for three months. On our remote team, we not only work different hours (there’s a two-hour time difference between London and Gaza) but also on different days of the week (they work Sun-Thurs instead of our Mon-Fri). 
So there’s two days a week when we don’t overlap. And then we have a language barrier. Their English is amazing, but we still have misunderstandings. It quickly became clear that, as a remote team, we would live or die by our adherence to agile processes. Simon: We learned the hard way during our first project. We didn’t track our processes well enough, didn’t plan out the scope very well, didn’t use GitHub to plan the project properly, and so on. We also didn’t plan enough time to get to know the Gazan developers before meeting the client together. Making time to get to know each other is such an important part and beneficial for a harmonious workflow. Joe: It’s interesting. I was just looking at our Tech for Better student projects, and noticed how sporadically we used labels and how infrequently we wrote out full issues. Under those circumstances, it was absolutely fine. But if we did the same thing with our partners in Gaza, I wouldn’t know what Ramy was doing on Sunday, so I could end up working on the same code; or I’d be stalled because Ramy wouldn’t be sure what was okay to work on and couldn’t get ahold of us. Needless to say, freelancing with a remote team has turned us into massive advocates of agile process. Simon: Sometimes agile is taken as a way to make things user-centered and product owner-centered, which is certainly important, but we learned that agile processes are just as important for the team as they are for the users. Agile involves everyone on the team and provides each of us with roles. It’s a ball that bounces back and forth. Rebecca: Walk me through the most important parts of your current processes. Simon: If we’re working with a new pair, we make time for all of the developers to get to know each other. Then we define the user stories into discrete problems that people can work on independently. 
Joe: We have daily standups in person via Google Hangouts from Monday to Thursday, which go a long way towards developing a personal relationship with the developers in Gaza. We realised quickly that even with great processes, things still go wrong, so you need to have a basis for trust, so people don’t take mistakes personally. The developers in Gaza feel like the “remote” ones for the client, and can fear they are missing out on some conversations or aren’t aware of something. On one of our projects, for example, the product owner kept changing the scope, and our partners in Gaza thought we might be more aware of what was going on, but actually we didn’t know either! It’s totally understandable they could think we’re having more conversations with the client than we’re actually having. The onus is very much on us to ensure that all those conversations are minuted in GitHub so they don’t worry they are missing out. Simon: Big takeaway: a lack of communication just absolutely kills the project. The most international micro-agency “It’s so important that we can all speak frankly, so you know you’re coming to a final decision as a team, and everyone is bought in.” — Simon Dupree Cat, founder of MyPickle, with Joe, Simon, Haneen, and Ismail Rebecca: Tell me about the developers you’re working with in Gaza. Joe: Ramy Shurafa is a graduate of the Gaza Code Academy who previously studied and worked in civil engineering and GIS. Simon: Ramy is just a better developer than we are! He lives for the code and is super smart, so he’s really good at thinking through the backend and the database work. But even this doesn’t do him justice because he can really teach himself anything. Joe: And, importantly, he teaches us too! Pretty much whenever Simon or I hit a problem, Ramy is there to help us out and find the solution.
Simon: Also, what’s great is that as we’ve all gotten to know each other — we’ve worked together for almost half a year now — we’ve become much better at true collaboration in the running of projects. It’s so important that we can all speak frankly, so you know you’re coming to a final decision as a team, and everyone is bought in. Joe: Asala, who actually just recently got a full-time job, is only twenty and one of the youngest people to go through the bootcamp. She’s got a great eye for design and pairs well with Ramy. This is also important because, in a remote team, if people aren’t pairing together when they can, things can quickly become far too siloed, with only one person understanding a certain part of the codebase. Rebecca: Switching gears, tell me a bit about the products you built on the Founders Programme. Simon: We built three apps: Nightingale, an app that aims to help students reflect on their emotions and work out what’s affecting them at school; My Pickle, a connector for people offering support and individuals looking for support; and Connect 5, a mobile app that allows Connect 5 trainers easily to share survey forms with course participants and to collect results. The dashboard on the Nightingale app Rebecca: What would you say was the most important lesson to come out of this process? Joe: One of the biggest lessons I’ve learned is that it’s so important to agree to a narrow scope and stick to it. With one of our projects, our product owner was so enthusiastic that we got really invested in the project, which led us to agree to too many features, and before we knew it, we had a bloated project. We’ve also learned how important it is to make sure the client (or “product owner”) buys into the process of scoping and sticking to it! You have to sort of sell the process by telling the client, “We want to make sure we achieve for you what we agreed. 
If we add more things you may not get the functionality you want.” The last project we scoped really well and we were positive and firm, so our client understood the reasoning. It really showed us just how important experience is. It’s hard to be firm when you don’t have firsthand experience. Simon: It’s also been interesting to note how having to pay for something changes people’s behavior. On the Founders Programme, the product owners weren’t paying, so they might not have fully understood the consequences of changing the scope. On the other hand, it made it easy to be honest. On the final Founders Programme project, for example, we were building a chatbot, which we had never done before, and it was okay to admit that. The project we’re currently on is a paid project, and you can clearly see how money changes behavior. Our client checks our daily standup logs, which affects the process and makes it even more important to have a consistent team process in place. Rebecca: You guys were literally the “founders” of the Founders Programme. How did your experience inform the next iteration of the programme? Simon: In addition to people applying in pairs, as mentioned earlier, for the next pair the Founders Programme will last a bit longer. We realised pretty quickly that two projects over two months does not provide much of a runway for building a freelance practice, so now it’s three projects in three months. Joe: We’ve also documented the whole process to create an initial handbook. We realised how important the initial “set-up” before each project is. It’s so easy to just think about the app you’re looking to build. But to make the process go as smoothly as possible, there are so many things you need to do — setting up the channels of communication, building a team rapport, giving yourself time to build a pipeline. We made a lot of mistakes! But hopefully that means the next pair are now equipped to do better. 
And in turn they can iterate on the handbook, and on and on. We’ll also be around, hopefully, to be of help in any way they like. Helping each other out is core to the ethos of Founders and Coders, so it’s important to keep that in the Founders programme too. Rebecca: What’s next for the two of you? Simon: As you know, we recently founded Yalla Cooperative. We’re a collective of freelance web developers across the UK, Gaza and Germany. Currently our team consists of Joe, Michael, Ramy, Asala and myself — we all went through either Founders and Coders or the Gaza Code Academy. We’re hoping to be able to take on more projects so we can establish our agency and bring more work to Gaza! I’m positive that we’ll be able to spin off further micro-agencies such as Yalla that work remotely, diversifying tech and facilitating personal empowerment. Joe: Yeah, covering UK, Gaza and Germany we like to think of ourselves as the most international micro agency out there! Actually, we just delivered our first paid-for project for a leading UK charity called Tempo, and excitingly one of our Founders projects just secured funding off the back of the app we developed for them, so we will be working on that too. It’s kind of amazing — if you’d said that a few months after finishing the bootcamp we’d be doing paid work for our own digital agency, we probably would have laughed!
https://medium.com/free-code-camp/the-most-international-micro-agency-how-two-london-bootcamp-graduates-built-a-remote-3eeda0be1b2a
['Rebecca Radding']
2019-04-10 17:43:21.216000+00:00
['Technology', 'Programming', 'Tech', 'Coding', 'Startup']
How AI can be used to boost the impact of e-learning
In the wake of the recent social distancing guidelines and shutdowns, more and more individuals as well as educational institutions are adopting e-learning technologies. One such example is the recent rise in the use of tele-conferencing tools such as Zoom and Google Hangouts. Such tools connect students and teachers, and have helped avoid discontinuity in education because of pandemic-related lockdowns. Teachers, however, are still spending more than 50 hrs per week¹ on average, and only 49% of this time is spent on their core task: direct interaction with students to instruct, engage, coach, advise, and develop their behavioral, social and emotional skills. The remaining 51% of their time is spent on routine tasks such as preparation, evaluation and feedback, and administration. Automation of tasks using AI can save up to 30% of the time spent on routine tasks. This time could be redirected to interactions with students, increasing impact.
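The figures above imply a concrete weekly saving, which a quick back-of-the-envelope calculation makes explicit (the 50-hour week, 49% share, and 30% savings rate are the article's own numbers; the arithmetic below is just an illustration):

```python
# Rough check of the time-savings claim: ~50 working hours/week,
# 49% spent on direct student interaction, the rest on routine tasks,
# of which automation can reclaim up to 30%.
weekly_hours = 50
core_share = 0.49                    # instructing, coaching, advising
routine_share = 1 - core_share       # preparation, evaluation, admin

routine_hours = weekly_hours * routine_share   # ~25.5 hours/week
hours_saved = routine_hours * 0.30             # ~7.65 hours/week

print(f"Routine hours per week: {routine_hours:.2f}")
print(f"Hours potentially freed by automation: {hours_saved:.2f}")
```

So roughly seven and a half hours a week could move from administration back to students.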
https://medium.com/unitx-ai-magazine/how-ai-can-boost-e-learning-ec5809df3b84
['Kiran Narayanan']
2020-05-20 07:30:29.807000+00:00
['AI', 'Saudi Arabia', 'E Learning Solutions', 'Digital Transformation', 'Roboticprocessautomation']
Top Importance of Impression Management
Impression management is a key factor in improvement and success. It is also an effective instrument for handling criticism and negativity. Photo by History in HD on Unsplash Impression management refers to the behavior by which an individual tries to govern the impressions that others form of him or her. In daily life, we all manage our impressions in order to face the world, and people do so for many reasons. In his research, Identities, the Phenomenal Self, and Laboratory Research, David J. Schneider distinguished two types of impression: the calculated impression and the secondary impression. The calculated impression is the sum of the impressions a person wants the target to draw from his or her self-presentation. The secondary impression is an undesired impression arising from target inferences the person did not specifically intend. Put more simply, the calculated impression is the set of impressions a person deliberately seeks from the target through self-presentation, while the secondary impression is not desired by the person at all; it is formed by the target on the target’s own initiative. Life has two sides: personal life and professional life. We all do some roleplay depending on the situation, and here the word roleplay is a substitute for impression management. In personal life, a roleplaying attitude is not necessary; we all display our real nature. Where we can maintain our privacy from the outside world, we don’t need to play any artificial role. We can stay in our own mood, and we appear in our true nature when we stay with, meet, and communicate with our family and friends.
Professional life, by contrast, has guidelines that we all follow, and one’s profession has a direct relation to impression management: a profession changes or affects an individual’s impression. Nobody wants to present a sullen or gray impression of themselves in their professional field. In professional life, we cannot exhibit the unique nature we were born with; people have to adopt different characters to deal with the professional world. To deliver a service or hold a specific post, we all act according to our profession, and an individual is restricted from showing his or her real behavior. Consciously or unconsciously, we all manage our impression according to our trade whenever we step out of our personal life.
https://medium.com/illumination/top-importance-of-impression-management-97df574a92fa
['Mohammad Hassan']
2020-12-26 21:34:36.480000+00:00
['Philosophy', 'Education', 'Business', 'Psychology', 'Life']
Mother Is What I Do, Not What I Am
I love my child; I just don’t resonate much with that role Photo by Jordan Whitt on Unsplash I am the mother of a nearly 21-year-old son named Hugh. He’s on the autism spectrum and lives at home with me and my husband James. At this point in our journey together as mother and child, mother is a verb. It’s what I do. I don’t identify much with it as who I am. I love my son with all my heart. In fact, I’ve sacrificed a good bit of my life to ensuring that his can be better, but at this juncture, my identity is primarily in other places — being a woman, and a writer, to name just a few. Being a mother is still in the top 6 or 7 aspects of my identity, and if you look like you might be considering hurting Hugh, you will see a Mama Bear come front and center, with teeth and claws bared. But I’ve given so much to motherhood, and it’s given so comparatively little to me, I just don’t resonate with that role like some people do. Women are expected to get their fulfillment from this role of motherhood, but not only does this limit women (even the ones who find it very fulfilling), it underestimates them. I have many, many friends who are mothers of children of various ages, some young adults like Hugh and some still in elementary school. I also have several friends who have chosen to never have children. They’ve spent their entire adult lives having that choice questioned, by not only family but strangers. Because even today, motherhood is widely considered the primary reason that women are on earth. “A majority of adults, 63 percent, continue to believe that being a mother is the most important job for a woman in today’s world. This figure is virtually unchanged from last year and has remained constant over the past several years. Twenty-four percent disagree, and 13 percent are not sure,” Rasmussen said. Since Hugh is non-verbal, has high levels of anxiety and OCD, and is prone to seizures, it’s likely that he will never live alone or support himself.
He is a sweet guy, when he’s not totally anxious and in control mode trying to manage that. James and I have accepted that we’ll probably always have him under our roof — operating at the equivalent level of parenting that is required of a very active 8-year-old. It’s not that his IQ is that of an 8-year-old — Hugh is incredibly smart, but nonetheless, his ability to self-navigate in the world is about on that level. We still supervise his showers because otherwise he’d forget to wash his hair and just stay in the warm water for hours at a time, enjoying the feel of it on his skin. Mercifully, we no longer have to supervise him in the bathroom, but that only ended about 2 years ago. We love Hugh deeply. He is our only child and as hard as it’s been at times, having him in our lives has brought many gifts. We first learned how to step out of societally imposed boxes by being forced to do that with him, because that box of typical family life was not available to us. Our milestones have not been standard ones. Instead, we’ve celebrated the first time that he lied to us because not all autists are even capable of such a thing. We celebrated the first time he wrestled a friend in school when they were supposed to be sitting nicely — he has a friend; he’s initiating play. We are more present to gratitude than many people, but it’s also taken a lot out of us both; particularly from me. As the stay-at-home parent who was always the point person on Hugh’s life, schooling, therapies, etc., my entire life was consumed with his care. Up until the time that he was about 7, I did nothing else but full-time mothering. James and I put our relationship and the rest of our lives together on the back burner and we both focused on Hugh. Eventually, this wasn’t sustainable any longer. Fortunately, by this time Hugh was in a charter school that really embraced him and he was more stable.
I still got weekly calls about various things that weren’t quite working or that needed my attention, and sometimes I had to drop what I was doing and come pick him up, but at least I had a little bit of breathing room. Good thing, because I was just about ready for the funny farm. James was and is a very involved and supportive dad, but I was the one on the front lines every day. I then started to intentionally try to figure out how to put some of what I was giving out back in so as to keep myself from being depleted. James and I began refocusing more on our relationship as a couple, and I started doing a lot of personal growth work to figure out who I actually was and to get better coping skills. Although I started working part time from home doing something that I enjoy and otherwise expanded the parts of my life that are about things that nourish me, I still spent a great deal of time and effort on Hugh’s needs and care. Don’t get me wrong; I am honored to have been able to do this for my son. It’s a bit like a sacred compact that I have agreed to fulfill and I take that very seriously, but it’s more like a job than a relationship. Parents typically give a bit more than children give back, particularly in younger years, but in my relationship with Hugh, I’d say I give 95% and he gives back 5%. That 5% is very sweet and I love it when we are laughing together or he spontaneously wants to hold my hand. He loves me very deeply also, but he just doesn’t have a lot of skills for conveying that. Mostly I give and he takes because that’s what he needs from me, and I take my job seriously. A few years ago I went on a week-long vacation with some of my college friends. Hugh and I had never been apart for that long before and he spent the entire week asking about me and obsessing about when I was coming home. Once I got back, he proceeded to ignore me and then had an anxiety-induced meltdown where he yelled at me for about 2 hours.
We’ve worked a lot on his anxiety in various ways since then, and made some progress where that kind of thing doesn’t happen to that level anymore. But although Hugh is in somebody else’s care during the day, I still spend a fair amount of time coordinating his therapies, activities, and managing his day-to-day life. We still have to put him to bed each night, because if we didn’t make sure he brushed his teeth and took away his electronics, he’d stay up all night and have nasty teeth. If we want to go somewhere without him, we need to find a sitter, since he can’t be left alone. And like I said above, I’m happy to do that. It’s a sacred calling and I do it to the best of my ability. I am routinely told what a great mom I am. But, particularly the past few years when I’ve found out more and more about who I am beyond that, I feel like mothering is a role that I play and a job that I do. I take it seriously, but it’s not the lens that I look at myself through — at least not very often. I would give my life for my son. In many ways, I already have. But in order to be able to continue to do that for the foreseeable future, I embrace the other parts of myself and mother as a verb and less as a noun. I’ll always be Hugh’s mom. When he’s had a bad seizure, I find myself saying, “My baby, my baby.” It’s a completely primal connection that nothing can sever, but the cultural narrative that motherhood is the primary calling of women and the greatest source of their identity just doesn’t serve me or resonate for me.
https://medium.com/inside-of-elle-beau/mother-is-what-i-do-not-what-i-am-8a6eeb2908ce
['Elle Beau']
2020-03-26 11:27:56.615000+00:00
['Patriarchy', 'Motherhood', 'Society', 'This Happened To Me', 'Autism']
AutoAI for Data Scientists: From Beginner to Expert
Data science is a required practice for organizations accelerating their journeys to AI. Businesses are keen on hiring the right talent, acquiring the right tools and evolving the discipline. When it comes to data science projects, there are two major problems: 1) There are not enough data scientists. 2) It takes too much time for any data scientist to get to a usable, tuned model. Solving the first problem, the lack of data scientists, requires investment in our employees in terms of time and training. We can’t expect these people to just keep on learning for a year before they can be productive. We need to reach a stage where people know enough to start contributing immediately while continuing to improve their skills. As for the second problem, taking too much time to get to a usable, tuned model, we need tools to help us optimize our data scientists’ productivity. There are some tasks that are relatively mundane that could be automated, leaving the more challenging and interesting parts to the data scientist. Intelligent automation in data science and AI empowers everyone Enter AutoAI. It recently won the AIconics award for best innovation in intelligent automation. Let’s talk about how it addresses our problems. AIconic Award for AutoAI Currently, AutoAI addresses problems related to classification and prediction (regression). These types of problems are at the core of many data science initiatives. If you are an experienced data scientist, you know how to solve them. With AutoAI in Watson Studio, you can quickly see the leaderboard of the various pipelines, which helps accelerate model selection. If you are learning data science, you can see how these functions are used. AutoAI processing and leaderboard At the highest level, creating a model involves taking some data, passing it through a machine learning algorithm, and getting a resulting model. Well, it’s not always that simple. Let’s say you have your data as a comma-delimited file (.csv).
To start with, all the attributes are character strings. We need to identify all the fields that are numeric and convert them into integer, decimal or floating-point numbers. You also have to consider dealing with missing values and normalization. The character fields also have to be converted to numeric values. Typically, we are talking about categorization. For example, gender, type of payment, and so on. We must admit that this is not the most exciting part of creating a model. Being able to automate this part makes expert data scientists more efficient and helps more junior data scientists avoid mistakes while addressing the pre-processing of the data, even if they are still learning about what needs to be done. See for yourselves: You can try the AutoAI tutorial on the IBM Cloud for free. Which algorithm to use? Do we use a decision tree? An ensemble? There are so many to choose from. Which one is the best for the type of data and problem we have? Curated models available through AutoAI We also have to contend with feature engineering and hyper-parameter tuning. Which new features should you create? Based on what? This takes experience to select the right mix. As for hyper-parameter tuning, this can be tricky. You could end up with a model that works great on training data but not so much on new data. You could also end up with a less than optimal model. AutoAI addresses all those issues and allows you to make an educated decision on which model performs best. Your decision is assisted with evaluation measures such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and others on both the training and testing data (including cross-validation). You can even see the details of how feature engineering was done and the feature importance. This is an especially key part for a beginner starting to learn about data science. For expert data scientists, you can validate or adjust some of your assumptions here.
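The steps described above — converting string columns to numbers, imputing missing values, encoding categoricals, then trying several algorithms and comparing cross-validated scores — can be sketched in a few lines of scikit-learn. This is only an illustration of what AutoAI automates, with made-up column names and a tiny toy dataset; it is not AutoAI's actual search space or implementation:

```python
# Minimal sketch of an AutoAI-style workflow: pre-process a "fresh from
# .csv" table, then rank a few candidate models by cross-validated
# accuracy, as a leaderboard would.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Toy data: everything read from a .csv starts life as a string.
raw = pd.DataFrame({
    "age": ["34", "29", None, "51", "46", "38", "27", "55"],
    "income": ["52", "48", "61", None, "70", "44", "39", "66"],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "churned": [0, 1, 0, 1, 0, 1, 0, 1],   # target to predict
})
for col in ["age", "income"]:
    raw[col] = pd.to_numeric(raw[col], errors="coerce")  # strings -> numbers

X, y = raw[["age", "income", "gender"]], raw["churned"]

# Impute and normalize numeric fields; one-hot encode categorical ones.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["gender"]),
])

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Rank candidate pipelines by mean cross-validated accuracy.
leaderboard = sorted(
    ((cross_val_score(Pipeline([("prep", preprocess), ("model", m)]),
                      X, y, cv=2).mean(), name)
     for name, m in candidates.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name:20s} {score:.3f}")
```

Even this toy version makes the point: the plumbing dwarfs the modeling, and automating it frees the data scientist to judge the results instead.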
Model evaluation Once you decide on the model to use, you can save and deploy it into an IBM Watson Machine Learning service so people can score their data through a simple REST API. Saving an AutoAI model A Perfect Blend of Open-source and IBM Technology Ah, this is a proprietary solution! Not at all! Instead of saving the model to an IBM Watson Machine Learning service, you can save it as a notebook. This way, you can generate the model yourself and decide where to save and deploy it. Since it is a notebook, you can modify it for any reason, be it adding some transformation or making it fit datasets with additional attributes. And of course, you can use this with open-source or Watson Studio based tools. Generated notebook One side benefit of generating a notebook could be for education and training. It is always instructive to see how things are done, and beginner data scientists may see some transformations they did not think about for this or other projects. This becomes learning by example. IBM is committed to leading and empowering the open-source community, and data science is, of course, no exception! Giving you more time to innovate by minimizing mundane or repetitive tasks We stated that the two important problems we want to solve are making the beginner data scientist productive as soon as possible and removing some burdens from experienced data scientists so they can be more productive. With AutoAI Experiments, we remove the burden of having to deal with all the details of preparing the data. This way, a beginner data scientist does not need to know all the intricacies of data preparation right away, and the experienced data scientist does not need to spend her time on mundane tasks so she can focus on higher-value tasks. 
Since AutoAI can select the most appropriate model for classification or regression, automate feature engineering and hyper-parameter tuning, and provide measurements on the quality of models, data scientists can focus on the evaluation and selection of the model instead of the mechanics of creating one. Overall, AutoAI democratizes data science and AI — data preparation, model development and selection, execution and deployment. This addresses the shortage of data scientists and gets to a solution faster. By accelerating the data science lifecycle with AutoAI, businesses can focus more on high value-added work and innovative solutions. This is why we are focused on sharing best practices and playbooks in AI. The future of work in data science will be more exciting and dynamic, I predict. Ready to learn more about AutoAI? Check out this website, where we built an AutoAI playlist of videos, product tours, and hands-on labs. Or, join us at our live 3-part Virtual Data Science Camp Fall Edition starting on October 31, 2019. You can view the Summer Edition of this popular 3-part series here. If you are interested in other IBM Watson Studio-related webinars, please read the following blog.
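To make the automated search concrete, here is a hedged open-source sketch of the kind of loop described above: fit a few candidate pipelines with cross-validated hyper-parameter search, then rank them on held-out error. This is purely illustrative and far simpler than AutoAI's actual search:

```python
# Illustrative model-selection loop: cross-validated hyper-parameter
# search over two candidate model families, ranked by held-out RMSE.
# NOT AutoAI's implementation -- just the general shape of the idea.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "ridge": GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5),
    "forest": GridSearchCV(RandomForestRegressor(random_state=0),
                           {"n_estimators": [25, 50]}, cv=5),
}

leaderboard = []
for name, search in candidates.items():
    search.fit(X_train, y_train)        # tunes hyper-parameters via CV
    pred = search.predict(X_test)       # evaluate on held-out data
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mae = mean_absolute_error(y_test, pred)
    leaderboard.append((rmse, mae, name))

leaderboard.sort()  # best (lowest RMSE) first
best_rmse, best_mae, best_name = leaderboard[0]
```

AutoAI presents this kind of ranking as its pipeline leaderboard, with RMSE, MAE, and other measures computed for you on both training and testing data.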
https://medium.com/ibm-watson/autoai-for-data-scientists-from-beginner-to-expert-cc6a93bb5c3b
['Jacques Roy']
2020-04-13 15:05:44.089000+00:00
['Machine Learning', 'Watson Studio', 'AI', 'Editorials', 'Data Science']
Taking Down the Princess Club — part 1
Photo by Ben White on Unsplash The Princess Club was ruthless. We named them that because the group of four uppity women, who flaunted themselves in front of any wealthy bachelor in town, would step all over anyone who got in their way. “Something needs to be done about them.” I glanced over at my best friend, Tandy. “I agree, but what?” She crossed her arms and glared at the four women who were gathered around the local pediatrician. He was new in town, and the ‘deer caught in the headlights’ look he was giving told me he wasn’t enjoying this attention. “I wish I could do something, anything, to make them uncomfortable. Even if it’s just for a moment. Even if I can’t see it.” An idea started to form. “Today is Tuesday, right?” Tandy raised an eyebrow at me and nodded. “They always head to the spa, followed by brunch at the Country Club afterwards. That gives us about four hours…” “I’m starting to like the sound of this,” Tandy said, rubbing her hands together. “Go on.” “We’re going for a drive. Come on.” We arrived at the first woman’s house, Cynthia’s. A gardener was working out back, shaping her hedges. I giggled thinking how naughty that sounded. Living in a small upscale town had its advantages. For instance, I knew that during the day, no one kept their doors locked. Easy access for anyone who wanted in. Acting as though we belonged there, we walked up the stone path, across the front veranda, and into the three-story house. “What are we going to do?” I glanced around, looking for inspiration. “I don’t know. Let’s go to her room.” Cynthia lived alone, the house an inheritance from her grandmother. We made our way up the spiral staircase to the top floor. Talk about a princess in her tower. The entire top floor was dedicated as the master bedroom. We stopped and stood, our mouths agape. Everything was white and fluffy. Even the silk curtains that billowed in the windows. Next to the king-size, four-post bed was a nightstand, and that’s where my inspiration lay. 
“I need a knife.” “What are you going to do?” The plan was simple, but I wanted to keep it a surprise. “Just run down and get me a sharp paring knife.” Tandy backed out of the room and ran downstairs, returning red in the face. “No wonder Cynthia stays so thin. I couldn’t imagine having to come up two flights of stairs every time I needed to go to my bedroom. Here.” Tandy handed over the knife and bent over, trying to catch her breath. I took the small knife from her, walked to the bed, and sat down, sinking into the puffy duvet. On the nightstand was a romance novel with a bookmark in it. Cynthia had made it about three-quarters of the way through. I opened up the book and, taking my time, sliced off the last two chapters. Over in the doorway, Tandy giggled. “Oh, that’s good. There’s nothing worse than getting into a story and not being able to find out what happens in the end.” I couldn’t keep the grin off my face. “Precisely. We’re done here, on to the next one.”
https://stefanivader.medium.com/taking-down-the-princess-club-part-1-9a716577effc
['Stefani Vader']
2020-05-03 04:12:49.910000+00:00
['Class', 'Revenge', 'Society', 'Fiction', 'Short Story']
Where do we go from here?
photo by Aaron Burden on Unsplash.com The political climate in our nation has become so toxic that you have to ask: what can be done to reverse course? It’s no longer acceptable to have a viewpoint that differs from anyone else’s. Either you believe what they believe or you get shouted down and cancelled. Death threats and hate-filled rants are directed at people on both sides of the political aisle. Does anyone truly believe that by shutting down thought and ideas this nation will thrive and prosper? That it will be the beacon of light it has been for the whole world since its founding? The United States has had many difficulties and dark times in the past, to be sure. And we no doubt have much more to overcome. However, you cannot look at where we’ve been compared to where we are now and honestly say that great strides and much progress have not been made along the way. The constant bickering and nasty fighting back and forth are doing nothing but pulling us backwards. The constant need to make everything a political issue and argument is absolutely tiring. It has, in my view, alienated and silenced many in this country who just want to live a peaceful life and enjoy the many freedoms this country offers.
https://medium.com/indian-thoughts/where-do-we-go-from-here-46cd138d87bf
['Eric Allen']
2020-10-22 14:01:06.707000+00:00
['Politics', 'Society', 'Culture']
What Is The Fifth Dimension, And Where Did It Come From?
Kick a cube across the floor. Hard. In the few seconds it’s in motion, you’re seeing four dimensions in play. The first three are height, width, and depth. You can see these on the cube itself; you don’t even need to kick it to get a feel for them. The last one is time, and you see it over the progression of the cube’s movement. Although we can’t really see time itself, the progression of distance is probably the closest you’ll get, and it does make sense in this demonstration. That’s four dimensions — but most of the time, especially when discussing the properties of spacetime, there’s an additional fifth dimension added in, too. This is considered to be a micro-dimension, rather than one of the full-fledged ones you can see by kicking a cube across the floor. The four dimensions discussed so far already make up what we consider to be “the fabric of spacetime”. An example of a four-dimensional graph, which represents the three physical dimensions as well as time. Instead, the fifth dimension came around when physicists were trying to connect all parts of the universe in a way that made sense — or rather, they wanted to try and connect all the fundamental forces known in the universe in a way that would make sense. This became known as the Kaluza-Klein theory, which has the ultimate goal of connecting gravity and the electromagnetic force through a fifth dimension. The Mathematical Stance Of The Fifth Dimension Later on, these calculations were found to be slightly inaccurate, but they provided a basis for a later mathematical claim which surrounds the fifth micro-dimension. The first calculations, done in the Kaluza-Klein theory, involved rolling the fifth dimension into a dense loop, which would’ve been about 10 to the negative 33 centimeters across. From this point, Oskar Klein figured that light was something that occurred more in the fifth dimension, and what we saw of it was a more diluted version. 
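For readers who want a glimpse of the math, the Kaluza-Klein construction is usually written as a metric ansatz in which the five-dimensional metric packages the familiar four-dimensional one together with the electromagnetic potential. Conventions vary from author to author, so treat this as a schematic sketch rather than a definitive form:

```latex
% Schematic Kaluza-Klein ansatz (conventions vary):
% g_{\mu\nu} is the 4D metric, A_\mu the electromagnetic
% potential, \phi a scalar field, and the fifth coordinate
% is curled into a circle of tiny radius R.
\hat{g}_{MN} =
\begin{pmatrix}
  g_{\mu\nu} + \phi^{2} A_{\mu} A_{\nu} & \phi^{2} A_{\mu} \\
  \phi^{2} A_{\nu}                      & \phi^{2}
\end{pmatrix},
\qquad
x^{5} \sim x^{5} + 2\pi R
```

The compactification radius R is what the “dense loop” of roughly 10 to the negative 33 centimeters refers to.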
Think of it like when you swim underwater in a pool and there are ripples on the surface above. You’d perceive the ripples as shadows, rather than the ripples they actually are. This is how Klein thought of light, with the majority of it occurring in the fifth dimension. It was in this way he was able to draw up a connection between the two main forces of gravity and electromagnetism, which otherwise seem unrelated in our perceivable universe. Later, this whole idea phased into the ideas of superstring theory and supergravity, which later evolved into M-theory. Ultimately, the Kaluza-Klein theory has become more of a gauge theory, meaning it fits into a certain type of field theory in which the Lagrangian doesn’t change under local transformations. An example of how this fifth dimension might be represented. When we actually look at what evidence for a fifth dimension there is, it isn’t a lot. The fifth dimension would be incredibly difficult to see, at any rate, because we’ve already established that it isn’t “perceivable to us”. Just like swimming underwater, you don’t see the ripples themselves. The same applies here — you won’t be able to see the fifth dimension because it’s above you, on a different plane. However, that doesn’t necessarily mean that it’s time to give up all hope — rather, it’s time to check out the evidence that’s already been acquired, mostly by the Large Hadron Collider, one of the largest particle colliders in the world. Knowledge We’ve Got Suggesting The Fifth Dimension The most important thing to note here is that we’re still only suggesting a fifth dimension. While it’s fairly well accepted in the physics and mathematics communities due to the amount of sense it makes when working through the equations, we still can’t really observe and fully confirm its existence. However, with help from the Large Hadron Collider, there’s been more evidence suggesting its presence in the universe. 
A look into electric gravity, a theory which suggests the combination of these two big fundamentals. Here, there’s the idea that collisions of subatomic particles result in the production of additional particles, one of which could be the theoretical graviton. In this scenario, the graviton leaves the four dimensions and “leaks” into a five-dimensional bulk. This offers an explanation for why gravity is so weak. Although we consider gravity to be relatively strong, it’s considered weak because of how easily it can be overcome in certain circumstances, especially by other forces. Think about building an electromagnet. With the electromagnet, you can lift objects. This ultimately pulls them against gravity, making the electromagnet the stronger force. By suggesting a five-dimensional space, having gravity be the weaker force makes slightly more sense, which is why the fifth dimension is widely considered a useful theoretical construct, especially when discussing physics. Not only that, but it’s said to be a micro-dimension due to the fact that we don’t have full access to it — since we aren’t able to see it, even though we interact with it. Think of it in reference to what we know of colonists in colonial America as compared to the rule of Great Britain. In this case, colonial America represents the four dimensions which are interacted with. However, England (or GB) didn’t really have the same presence in colonial America, and although it interacted with the colonies somewhat, it wasn’t there for much of the time. Lastly, the fifth dimension makes way for other theories that do need more than one extra dimension and that need this seamless tie between the fundamental forces. Later on, the Einstein-Maxwell theory worked with the fifth dimension, trying to derive it from a distance. Initially, the idea had been to try and fit electromagnetism into its own slot in what we know as spacetime, but that didn’t quite work. 
Up to this point, the suggestion and usage of the fifth dimension is the best we’re going to get. What This All Means: A Little TL;DR The fifth dimension is a micro-dimension which is accepted in physics and mathematics. It’s there to provide a seamless tie between gravity and electromagnetism, two of the main fundamental forces, which seem unrelated in regular four-dimensional spacetime. As of now, we can’t see the fifth dimension; rather, it interacts on a higher plane than we do. It’s because of this that we can’t really study or fully prove its existence. Despite this, there are theories which have been tested at the Large Hadron Collider and have helped support the idea of having gravitons transition from the four dimensions to the fifth one. Still, the fifth micro-dimension stays, because it’s able to help along and support other physics theories which make more sense when you take a look at how the dimensions themselves are constructed. Thank you so much for reading this, and I hope you had fun getting a chance to (hopefully) learn something about the fifth dimension! If you’d like to talk (I always would love to!) please email me at amesett@gmail.com, or find me on LinkedIn under Amelia Settembre.
https://medium.com/swlh/what-is-the-fifth-dimension-and-where-did-it-come-from-1296487fafcf
['Amelia Settembre']
2020-05-03 17:44:29.375000+00:00
['Mathematics', 'Quantum Mechanics', 'Space', 'Physics', 'Science']
Flexbox — The Animated Tutorial
Flexbox — The Animated Tutorial In my previous tutorial I dumped all of the flex diagrams in one place to give you a bird’s-eye view of flexbox — but it’s not enough. Here’s a list of my best web development tutorials. Complete CSS flex tutorial on Hashnode. Ultimate CSS grid tutorial on Hashnode. Higher-order functions .map, .filter & .reduce on Hashnode. You can follow me on Twitter to get tutorials, JavaScript tips, etc. To get a good idea of how flex works, try the flex layout editor on this page. If a picture is worth a thousand words — how many more is an animation? Flex cannot be efficiently and fully explained by text or static images. To cement your knowledge of flex I created these animated samples. Notice that overflow: hidden type of behavior is the default here because flex-wrap is unset. By default flex will not wrap your items. It works a lot like overflow: hidden; The first thing you will probably learn about flex is flex-wrap. Flex Wrap Let’s add flex-wrap: wrap to see how that changes flex item behavior. Basically, it will simply expand the container height and wrap items down a line. Note: container height is not specified (auto / unset) but still expands. wrap This is a common pattern when you have an unknown number of items with unknown content size and you want to display them all on the screen. Reverse the actual order of items with flex-direction: row-reverse. row-reverse Perhaps this can be used for content with right-to-left reading order. You can “float: right” all of your items that fit on the same line with flex-end. This is different from row-reverse because the order of items is preserved. Order is reset per line whenever item breaks occur. Justify Content The justify-content property determines the horizontal alignment of flex items. It looks a lot like the previous example… except the item order is preserved. flex-end In the following example (justify-content: center) all items will naturally flock to the center of the parent container — regardless of their width. 
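For reference, the wrapping and justification behavior described above boils down to a few container properties. This is my own minimal sketch (the class name is made up), not code taken from the animations:

```css
/* Hypothetical container showing the properties discussed above. */
.container {
  display: flex;
  flex-direction: row;      /* try row-reverse to flip item order */
  flex-wrap: wrap;          /* default nowrap behaves like overflow: hidden */
  justify-content: center;  /* also: flex-end, space-between, space-around */
}
```

Swap the justify-content value to reproduce each of the animated examples.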
It’s similar to position: relative; margin: auto on regular elements. center Space between means that there is space between all inner items: space-between This next one seems almost identical to the one above. That’s because it’s an entire alphabet we’re looking at here. With fewer flex items, the effect would appear more distinct. The difference is the margin on the outside of the corner items. Property space-between (above) has no outer margins on corner items. Property space-around (below) creates equal margins around all items. space-around The next is the same example but with a wider middle element. space-around As you can see here, you still have to experiment with flex in the context of your own content in order to achieve results that make sense for your layout. This is why I decided to make this tutorial. Even animated examples are limited to the item sizes shown. Your results may differ based on your content dimensions. Align Content All of the examples above dealt with the justify-content property. But you can also align things vertically in flex, even when it comes to automatic rows. Properties justify-content (all examples above) and align-content (below) take exactly the same values. They just align in two different directions, vertical and horizontal, with respect to the number of items stored in the flex container. Let’s explore how flex handles vertical alignment… align-content: space-evenly A few observations about space-evenly: Flex automatically allocates enough vertical space. Rows of items are aligned with equal vertical margin space. Of course you can still change the height of the parent explicitly and everything will still be properly aligned. Real-Case Scenario In an actual layout you will not have a bunch of alphabet letters in a straight line. You will be working with unique content elements. I just wanted to quickly demonstrate how flex works with animated examples up to this point. 
But when it comes to actual layouts, you will probably want to start using flex with fewer items that are larger in size. Something that resembles actual website content. Let’s take a look at a few ideas… Combining Vertical Align and Justify Content At some point you’ll probably need your content to be center-justified. Space Evenly Using space-evenly for both align-content and justify-content will produce the following effect on a set of 5 square items. When it comes to out-of-the-box responsive areas with flex… first make sure to keep the width of your items the same wherever possible. Note that because the number of items in this example is odd (5), this case will not produce the ideal responsive effect you’re probably looking for. Using even numbers can solve this subtle problem. Consider the same flex properties working together with an even number of items: Responsive in a much more natural way with an even number of items. Using an even number of items you can achieve cleaner responsive-like scaling without having to use CSS Grid or JavaScript magic. Centering items inside an element vertically has been a huge problem in layout design for over a decade. Finally solved with flex. (Uhh… you can do it in CSS grid too.) But in flex, using the space-evenly value in both dimensions will automatically space your content, even with variable item height: Perfect vertical align with multiple items and varying item height. Above is the depiction of the most commonplace use of responsive flex, now and for the next 10 years (joke). If you are learning flex you will discover that this is perhaps the most useful set of flex properties in general. And finally here are all possible values in one animation: flex-direction: row; justify-content: [value];
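As a reference point, the two-axis space-evenly layout described above can be sketched like this (the selector name is mine):

```css
/* Even spacing in both axes; items stay centered with variable heights. */
.gallery {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-evenly;  /* horizontal spacing within each row */
  align-content: space-evenly;    /* vertical spacing between rows */
  height: 100vh;                  /* align-content needs free vertical space */
}
```

With an even number of equally sized items, this gives the clean responsive-like scaling described above.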
https://jstutorial.medium.com/flexbox-the-animated-tutorial-8075cbe4c1b2
['Javascript Teacher']
2020-10-24 02:11:31.099000+00:00
['CSS', 'Design', 'Web Development', 'JavaScript', 'Web Design']
Cancel the Magazine.
Cancel the Magazine. Who is watching your social media highlight reel? Photo by Alexandra K on Unsplash Hi. Hi you. Yes you. You were always over the top. You really looked like a model in your IG photos. You had the angles, the sunglasses, the big hats, and the shiny…reflector…light…disc thing. You did it all. Okay, I must admit that before the pandemic, we were all sort of like you a little bit. Things were changing, but it was slow. Your skin was always flawless, your waist snatched. You always cropped out any girls who could be considered prettier than you. There’s a photo of you at the racetrack, at the opera, in front of the Met, on a boat in front of a tropical island, and on a throne being held up by shirtless men. I just need to know: Who is this magazine for? Your determination to be perfect carried over into the pandemic — Silly photos of you baking bread, crafting your own cocktails, and, toward the end of summer, a properly-timed black tile with a simple #BLM hashtag. You know, nothing too controversial. Just something to let people know you care. You’d never think of posting anything openly critical of the police, of the military, of the government. Your social media is so…corporate. But you’re not promoting anything. You’re not making any points. From what I can tell, you’re not making any money. I’ve been around you long enough to know it isn’t for fun. At first, I thought it was just sort of like a hobby. But you don’t have fun. You take the same photo 30, 40, 50 times. It ruins your whole day. You’re snarky when you see a woman who is thinner than you are. You’re snarky when you see a woman who is bigger than you are…and is more successful. You’re not with this new body positivity movement…you talk about 90s and early 00s models with adoration, you know, “back when you had to really work for it.” By that I think you mean “starve.” Does it bother you that your attitude is…old? That your perfect social media ages you? 
That it’s a sign that you’re no longer with it? That you’re dating yourself…and becoming everything you feared you’d become? Does that ever cross your mind? Your reluctance to download new apps like TikTok is telling…you think you’ve mastered the art of social media, but now there are people getting famous just by being silly. By embracing their bodies. By being themselves. Does it make you mad that they enjoy their lives and have a following? Does it make you feel better to pretend to be above it all? The pandemic has accelerated the trend. People are more open about their beliefs, about therapy, about overcoming their eating disorders and enjoying sex. Everyone sees your account and the effort you put into it…and everyone knows it’s not real. We have all the same filters, all the same apps, all the same technology. We can all look like plastic dolls if we wanted to. For your own mental health — If this isn’t fun or stress-relieving for you, you can stop. You can stop trying to be perfect on camera. Everything is hard for everyone right now. I don’t know who or what you’re doing this for…It sort of made sense when you were in your early 20s and trying to become an IG model, but I don’t think that’s your goal anymore…and also, the market has changed. People want to see weakness, vulnerability, flaws. When everyone can be photoshopped to “perfection,” that standard becomes meaningless. This perfection is a weight you’ve been carrying that’s far more damaging than a few pounds on your hips. You have beautiful freckles that no one sees. You have a front tooth that is slightly crooked. You have great facial expressions, but you go into model-pose mode online, and all of that is lost. Basically, I miss you — the real you. Please, if you can — Cancel the magazine.
https://medium.com/are-you-okay/cancel-the-magazine-dab201bb6f9f
['Lisa Martens']
2020-10-19 14:44:18.129000+00:00
['Humor', 'Social Media', 'Beauty', 'Society', 'Satire']
Danganronpa V3: Killing Harmony’s Ending Guilts You for Playing the Game
One of the characters in the story, Tsumugi Shirogane, the Ultimate Cosplayer, reveals to the surviving cast that everything they’ve seen and experienced is fictional. While the Danganronpa series has lightly brushed up against the fourth wall, this abrupt destruction of the barrier between fantasy and reality comes harshly. When the player looks back on the story, this notion of a pseudo-reality begins to seem more plausible. One of the giveaway moments for me as a player was coming across a book discussing the plots of the first and second games. At first, it seemed like a tool for the mastermind (who seemed to be committing a copycat killing of the first two games) to slowly reveal the story. Instead, it was a loosely thrown safety net by the developers to prepare us for the incoming blow. Another attempt to warn the player of this revelation came from the existence of Rantaro Amami. This character’s “Ultimate Survivor” talent wasn’t revealed until the player neared the ending of the game, which alludes to him being a survivor of a past killing game. Danganronpa V3 does not stand for “version 3,” as many assumed. Instead, Tsumugi reveals to both the player and the cast of characters that the game they are playing is the fifty-third iteration of Danganronpa, cleverly disguising the title with a roman numeral. The killing game occurring in Danganronpa V3 was, in the story, orchestrated by the in-universe game development company Team Danganronpa. The students that participated, such as protagonists Kaede Akamatsu and Shuichi Saihara, volunteered themselves to be contestants on the fifty-third season of Danganronpa. This resulted in their memories being wiped. Students are also granted their own “ultimate” talents, such as Kaede being the “Ultimate Pianist.” While every trial in Danganronpa involves an investigation into the murder of a fellow student, the final trial becomes a fight between the surviving characters and Team Danganronpa. 
Tsumugi Shirogane, the Ultimate Cosplayer, shapeshifts into different characters from the past few games, mocking Shuichi and the rest of the surviving characters. Both Monokuma and Tsumugi mock the cast, saying things like “Yelling for help is useless for fictional characters anyway.” Tsumugi also tells Shuichi that the strand of hair sticking up on his head is an antenna for the player to control him. A horrifying declaration. Source: Spike Chunsoft. It’s at this point that Monokuma turns an eye to the players themselves, claiming that the depression the characters are put through is the appeal of Danganronpa. This is all occurring while the leftover cast is being thrown into a depressive state, “despair,” as the game puts it. This whole scene is heart-shattering, especially as Claude Debussy’s “Clair de Lune” graces the atmosphere. The game then begins to shut down as Shuichi reflects upon everything that occurred throughout the story, including his feelings for the late protagonist Kaede, realizing they were all just lies. Yet, the show goes on. The audience demands a different ending through K1-B0, the Ultimate Robot, and the game boots up again. He fights for “Hope,” while Tsumugi fights for “Despair,” two concurrent themes battling throughout the Danganronpa series. Shuichi declares that, instead of choosing hope or despair, he is going to reject both, discontinuing the killing game. This final trial in Danganronpa V3 feeds off of the excitement you, as the player, feel while playing the game. It utilizes that hype and turns it into a weapon against the player. Shuichi exclaims that even if the world around him is fictional, the sadness and pain he feels are real. He claims to reject the world (the player). There’s a constant back and forth between the protagonist and Tsumugi. At this point in the story, the audience is being played through K1-B0, fighting Shuichi for hope. 
This entire trial develops into a fight between the player and the characters within the game. The characters want to end the game because of the depression it instills in them, and by continuing to play, the player is feeding into the tragedy of the story. “We will reject Danganronpa.” — Shuichi Saihara The characters all abstain from voting, taking power for themselves and ending the killing game. The voices of the audience cross the bridge into Danganronpa and back to the player through the game. “This isn’t my Danganronpa”, “I’m gonna be pissed if there isn’t a happy ending”, “I’m not here for a damn lecture”, and “No way fiction can change the world, lol”. These statements are pushed through the screen by the audience to Shuichi, who is fighting with his entire being for his volition. As the player, you are once again controlling Shuichi, struggling with him to end the game as the rest of the world pushes for it to continue. This egregious entitlement isn’t far-fetched, either. Some of the statements made by the fictional Danganronpa audience resonated with my original thoughts as I played through this ending, but being forced to see my own behavior allowed me to shift my perception and be more empathetic towards Shuichi. This trial was nothing short of a roller-coaster for the audience. The trial ends with K1-B0 and the rest of the characters refusing to vote and destroying the school as the audience tunes out of Danganronpa, giving the characters the power to end the game once and for all. The characters convince the player and the audience to destroy Danganronpa and end the killing game. By making this choice, the characters assumed their lives were forfeit. But Shuichi, Maki, and Himiko grow comfortable with dying, knowing they were able to change the world through fiction. This sudden dissonance between the fake Danganronpa and the real world makes sense as a narrative tool when taking an objective look at the history of the series. Source: Spike Chunsoft. 
Danganronpa: Trigger Happy Havoc arguably hosts the most realistic killing game. The students of Hope’s Peak Academy in Trigger Happy Havoc look, speak, and feel more like real people. Even their ultimate talents are more grounded in reality, providing a sensible and genuinely scary atmosphere. The big secret of the story also hadn’t been revealed to the audience yet, which makes the first game’s mystery more rewarding and enticing to uncover. Source: Spike Chunsoft. In Danganronpa 2: Goodbye Despair, the tone of the story is very different. The characters take a step towards being more whimsical and exuberant. Even the setting, a tropical island getaway, is much less intimidating and believable. By the end of the story, it is revealed that the entire killing game was part of an artificial program and that the characters were actually simulating the murders. This simulation takes place within the world of the original Danganronpa. The next iteration of Danganronpa is Danganronpa Another Episode: Ultra Despair Girls, which sets out as a side-story. It plays as a third-person shooter and tells the tale of the main protagonist’s sister, Komaru Naegi. The writers of Danganronpa had to shift genres to flesh out another story within the Danganronpa universe. Finally, the Danganronpa team decided to wrap up the story of these games with not one, but two anime series that detail the events before and after the second game. This granted the utmost flexibility for storytelling, foregoing gameplay mechanics. Danganronpa’s story was able to finish cleanly through these anime seasons. Attempting to detail more of Danganronpa’s world through the same genre of visual-novel killing game wouldn’t bear great results, considering the second game stretched the world thin through a virtual program. When fan demand didn’t die down for more of the killing game, Team Danganronpa answered with Danganronpa V3: Killing Harmony. 
The commentary on the series’ oversaturation suddenly makes much more sense.
https://medium.com/super-jump/danganronpa-v3-killing-harmonys-ending-guilts-you-for-playing-the-game-e34decea2cd0
['Paul Lombardo']
2020-10-11 08:44:01.736000+00:00
['Gaming', 'Anime', 'Features', 'Creativity', 'Culture']
Photography and the Art of Seeing the World
“The camera is an instrument that teaches people to see without a camera.” — Dorothea Lange Digital life has changed everything, including photography and how we take photos. Everybody can take a photo: just pick up a plastic-cased digi-machine and point and shoot. The results appear to be astounding, flabbergasting. For a while, digital photography’s amazing capacity to balance light and darkness, to tone the colours and pick up the smallest details of an object that caught our eye, made us all go nuts and buy some sort of digital camera. Smartphone or DSLR or Mirrorless, take your choice and enjoy photographing the world as you see it. The question is, are we really photographing the world as we see it? Image: Sean P. Durham, Berlin, 2020 Do we take the time to get over the amazing technology and stop long enough to look at the object that caught our eye? The world is full of techies, gear-freaks, and early adopters who seem to be hell-bent on using the latest, bestest equipment on offer. They swear blind that their camera, the latest technology, is what you need if you want to take great shots and be taken seriously as a photographer. Digital photography has overcome many problems that photographers used to love working out for themselves: how to set the shutter speed, the aperture size and the distance, the shadow, the light, the trick of adjusting the ASA (ISO) rating up a notch to trick the camera into working as if you had a fast film roll inside. Selfie of You and Me Now, all you do is twist the dial and set the whole camera onto the “P” setting if you don’t want to control things. It’ll take great shots when you push the button. My question is, why are you and me taking photographs? The camera type doesn’t matter so long as you have a lens that leads to a box of tricks that will capture the moment. Many a street photographer has proved her or his worth by using a camera that most gear-heads would sneer at.
Today, many photographers find that they can get the shot with a high-quality smartphone. There are many good examples of street photographers’ and portrait photographers’ work taken with simple cameras. The lens on the phone was just right, and the basic settings were enough to get the shot. The real point is that you are seeing something important. You feel something about a little corner of the world that you bumped into, and so your mind begins to focus, to engage with the object, and when you feel that you have understood it, you frame it and keep it — in your thoughts. Man attempting to Capture a Thought Photography allows us to frame the moment that we have considered deeply and capture it so that we can go back and take another look. This allows us to ponder our own ability to see things correctly, to ask questions about why we thought the object or person in the photo was worth keeping. Maybe we even get to the point where we ask “what on earth was I thinking?” when we realise that our momentary ‘stopping’ and focusing was probably a stupid idea at the time. Taking photos changes our view of the world. A perspective is a mentality, and the person involved in looking makes decisions about what is important to them based on their biases. Digital photography has led to the snap-shooter who can click off several hundred shots in a very short time. There are no processing costs to think about, and if you shoot enough rolls of pixels, a pretty interesting shot might show up amongst the several hundred shots that day. A photographer soon discovers that cheap shots can lead to a lot of work, sorting and culling photos, hoping that the little gem is in there someplace. And when the little gem turns up in your workflow, you tag it with glee and start adjusting the colours and the exposure and the rest. For some reason, you saw something interesting, the little gem, and took a shot that turned out to be interesting.
Then, during the after-shot workflow process, you decided to change what you saw. We have a tough time just looking at things. The world is an ever-changing, fast-flowing place. To stop and look is hard. When we have an interesting shot and can remember the reason why we stopped to photograph it, we should accept the decisions that we made at that moment, the framing decisions and the feeling about it, and keep it. It’s when we sit down at the computer and see the photo that seemed so interesting that we allow a new judgement to kick in. Our mind becomes objective and critical about the shot taken. The new thought is a piggyback idea of why we took the shot. We start to judge our own ability to make a decision about what’s a good photo and what’s not. We then begin to experiment with the controls in the software and try to bend the world into our own biased opinion of what we saw. At that point, we are in danger of becoming fiction makers. Photography has always been a principal medium for recording images of the world. Today, we look at photos that were taken during the Great Depression, and other photos taken of small moments of joy in the lives of normal, everyday people. We hope that what we see in these photos is an honest recording of history. “Migrant Mother” by Dorothea Lange Photo: Sean P. Durham, Berlin, 2020 How often have you come across a photograph of a man or a woman, an unknown person who is laughing at something out of frame, a glass on a table next to a half-full bottle of wine, and asked yourself who this person is, why they are laughing, and what they are celebrating? The photo elicited emotions in you. You began to engage and ask questions. That is the point at which the photo becomes important. The photo of cheer and joy reflects many feelings about our own lives. Joy is something that all sensible people seek. A photo of a mother and child who are starving and maybe close to their end reflects our fear of how cruel the world can be.
Flowers and Sparkling Wine, by Sean P. Durham Some people have travelled the world to discover the true meaning in a photo that they have found. Others have travelled to far and distant places in search of a meaningful photograph. Taking photographs is a passion, a hobby, and a profession. Regardless of the reason, there is a responsibility involved in what you decide is a moment worthy of memory, and in how much thought you put into framing the shot and adjusting the colour and light. All of these things will affect the outcome. The moment is fleeting, and your chance for a great shot, whether it be a wedding photoshoot, a portrait, or a major event in public, is a matter of skill. How skillful are you with your thoughts? How often have you taken a shot knowing it won’t be up to scratch, but with the backup of Photoshop to correct laziness or mistakes? Photography requires a person to be a thinker. Your main tool is the grey matter between your ears. At the time of taking a photo, the photographer should already have made the decision to save that particular moment, hoping that it went right and that the end result will do justice to it; this has a lot to do with being conscious in the moment. Digital photography encourages a person to document their daily lives; who knows, it might be worth it. But the mindless snapping of selfies isn’t necessarily a recording of the self. Doing what? Interacting with self? The self interacting with the world around it is a legitimate way of remembering one’s own experiences without having our own face in every shot. If a selfie addict were to look back at their life through photos, I think they would become bored within minutes. The photographs that we treasure have magic in them. Being conscious of the taken moment, knowing the feeling, and recognising that something is important is a reflection of the magic within ourselves.
To take a photo at this moment, to know that skill is at hand, and to put mind, body and soul into this small and fleeting moment, is to take a photograph that includes the self, somewhere, in the image too. Artists are always seeking the magical moment, and sometimes they find it and keep it. But to do this, to work magic, the magician must train herself to work the tools that transform the base idea into the splendour of light that human magic really is. Magic: that particular thing in a photograph that can’t be named with words. It’s just there — and everybody knows it. Photography is one way to look at the world we live in. There are other ways. If you take photographs and want to be better, to take great wedding shots, portraits or street shots, then developing the skill of thinking while working is as important as the basic equipment needed to technically capture the shot. All the gold in the world… Photo: Sean P. Durham, Berlin, 2020 To be a good photographer is to be a thinking person. There are people who hate thinking — they hate using their brains because it requires introspection and patience, especially when you discover that you are just as capable of being an idiot as the next idiot. The real thinker knows that mistakes are inevitable when working with a rough stone that isn’t yet ready for the building work. All the technical equipment in the world won’t make you a great photographer. Time and patience, enjoyment of the process and journey, and the ability to see deeply into your own environment and interact with it: that is the path that leads to great photographs.
https://seanpatrickdurham.medium.com/photography-and-the-art-of-seeing-the-world-1c000ca4229c
['Sean P. Durham']
2020-04-23 19:52:02.007000+00:00
['Visual Art', 'Creativity', 'Thinking', 'Portrait Photography', 'Photography']
Bonus Episode: Live from BETT 2019
Bonus Episode: Live from BETT 2019 What will it take to inspire the next Ada Lovelace? Whilst we are busy working on season 3, we wanted to share this conversation recorded live at BETT 2019. Subscribe on Apple Podcasts, Google Play, Stitcher, Soundcloud, TuneIn, or RadioPublic. Titled ‘If She Can See It, She Can Be It’, host Anjali Ramachandran chats to Chiin-Rui Tan, founder and CEO of Rho Zeta AI, and Elena Sinel, founder of Teens in AI, and asks: what will it take to inspire the next Ada Lovelace or Rosalind Franklin? What do we need to do to transform fields like AI and help create a fairer future of work? And how can educators help to recognise, develop and promote the female innovators and leaders of tomorrow? Anjali Ramachandran — Ada’s List Elena Sinel — Teens in AI Chiin-Rui Tan — Rho Zeta AI We’ll be releasing more bonus episodes throughout March and April, and season 3 will be landing this summer.
https://medium.com/nevertheless-podcast/bonus-episode-live-from-bett-2019-28132b34844f
[]
2019-03-08 12:05:21.620000+00:00
['Tech', 'Podcast', 'Education', 'AI', 'STEM']
[PostTGE] Week 20 and 21: Marketing and the Nearest Strategies of GraphGrail Ai.
21.05–03.06 Strategy On May 25, GraphGrail Ai completed another important milestone: the successful completion of its TGE. Now the next product period is about to begin. GraphGrail Ai has sent out a newsletter describing the company’s milestones over the last seven months and its plans for the future. In it, the team shared some details of future development: we plan to complete development of the main components of the GraphGrail platform by December 2018, thus delivering a ready-to-use product. What do we have to do for this? 1. To complete four active projects and to place scalable solutions on the MarketPlace. 2. To create a user-friendly interface for the application designer, making it possible to get AI solutions without any expertise in programming. 3. To build a system of platform scalability with new AI solutions. 4. To create a model for attracting new users to the platform. Currently, the team is actively working on its further investor relations strategy. Marketing Before completing the TGE, GraphGrail Ai carried out a token airdrop: tokens were sent to about 2,000 of the most active addresses. This distribution of a small amount (1 token) to wallets was part of advertising the closing Tokensale and an invitation to invest. To attract customers online, the team launched contextual ads in Google AdWords. Now a search query such as “GAI token” returns the needed information: gai.graphgrail.com, “What is GAI Token (GraphGrail AI)” publications, the official website of GraphGrail Ai, etherscan.io data, etc. The team has also met with the CEO of a large marketing agency to negotiate the project promotion strategy in Russia and worldwide. Important aspects of the negotiations included product positioning at the post-TGE stage, market segmentation and identifying key products, and developing a communications strategy for interacting with the platform community and attracting users.
After all tokens and bonuses are paid out, all tokens that were not sold will be burnt according to the TGE terms and conditions. To recap, the first wave of token burning has already taken place: 715 million tokens were burnt before the end of the TGE. Thanks to all participants of the Tokensale! The soft cap has been reached successfully! Access to the exchange is expected any time soon! Always yours, GraphGrail Ai!
https://medium.com/graphgrailai/posttge-week-20-and-21-marketing-and-the-nearest-strategies-of-graphgrail-ai-f68573b12f32
['Graphgrailai Llc.']
2018-06-16 20:20:19.882000+00:00
['AI', 'Weekly Digest', 'Blockchain', 'ICO', 'Bitcoin']
Building a culture around metrics and anomaly detection
Anomaly detection is a very broad term. Usually it means that you want to see whether things are running as usual. This could range from your business metrics down to the lowest level of how your systems are running. Anomaly detection is an entire process; it’s not just a tool that you get out of the box that measures time series data. Similar to DevOps, anomaly detection is a culture of different roles engaging in a process that combines tooling with human analysis. “Are our expectations wrong or has the world around us changed?” says Alexander Pucher, an expert in modern anomaly detection at big internet companies. I recently had a chance to interview Alexander on an episode of The Little Tech podcast about a unified, comprehensive process for reporting and data analysis. Anomaly detection is not just a single level of insights. As you go down the hierarchy of events and metrics, different parts of an organization are interested in different insights. Eventually, you get down to the desire for something that does anomaly detection on real-time data. Anomaly detection is part of a bigger process. For example, let’s say I have an organization and there’s a definition of business as usual. Then suddenly, a problem comes along. Whether or not that problem is being monitored, you get a smell of smoke coming from either customers or users. This is the first step of knowing that something is off, and it is often the slowest part of the whole problem resolution process. We can consider this “smell of smoke” to be the first step of anomaly detection, and it can be costly without the right culture and tools to be aware of early indicators that lead to problems.
Alexander is a researcher and open source developer who helped create a tool called ThirdEye for anomaly detection at LinkedIn. ThirdEye is part of the Apache Pinot ecosystem of projects, both of which came from early lessons learned at LinkedIn. Investigating anomalies using the ThirdEye tool Alexander says that “you need an additional tool that helps you understand whether a change in time series data is actually meaningful.” ThirdEye is a platform that allows you to integrate your metrics (quantitative information) with events (knowledge, or qualitative information) and combine the two so you can distinguish between meaningless anomalies and the ones that matter. As a business, overall, you need to make sure that you’re making the progress you’re expected to make. The business starts with an approximate expectation of where things are going. The insights that go along with these metrics are observed by business folks at many different levels. When those folks find something in the metrics that has diverged from these expectations, you’ll get questions about why these anomalies happened. “At LinkedIn, our data analysts or a dedicated ops team would be the ones that have to answer these questions,” said Alexander. “The answers are often not satisfying or clear enough to be worth implementing a change to avoid the issue.” The goal of much of Alexander’s work at LinkedIn was discovering which kinds of answers could be automated and which required creative exploration. Spending less time on repetitive analysis for the things that can be automated allows engineers to stay focused on creating differentiated value for a business.
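The core idea of combining quantitative metrics with qualitative events can be sketched in a few lines. To be clear, this is not ThirdEye's actual API; it's a toy illustration in which the page-view numbers, the z-score threshold, and the "marketing email sent" event are all invented for the example:

```python
from statistics import mean, stdev

def detect_anomalies(series, threshold=2.5):
    """Flag indices whose z-score exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if sigma and abs(x - mu) / sigma > threshold]

def explain_anomalies(anomaly_indices, events):
    """Split anomalies into 'explained' (they coincide with a known
    event) and 'unexplained' (worth a human's attention)."""
    explained = {i: events[i] for i in anomaly_indices if i in events}
    unexplained = [i for i in anomaly_indices if i not in events]
    return explained, unexplained

# Hourly page views (quantitative), with a spike at hour 8.
views = [100, 103, 98, 101, 99, 102, 97, 100, 250, 101]
# Qualitative knowledge about the business at that hour.
events = {8: "marketing email sent"}

anomalies = detect_anomalies(views)
explained, unexplained = explain_anomalies(anomalies, events)
```

Here the spike at hour 8 is statistically anomalous but carries an explanation, so no alert fires; an empty `unexplained` list is the healthy case, and anything left in it is what the "collaborative process of a machine and multiple humans" would investigate.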
Alexander goes on to say, “whenever you look at data, it’s extremely important to know or try to understand the process that generated the data that you’re looking at.” If you take this process of interpreting data in a business sense, one thing that Alexander has learned, perhaps the most critical thing, is “to understand whether or not an anomaly actually has an impact on the business”. Alexander points out that domain expertise is an enormously important part of how different groups and roles understand the meaning of a metric as well as an anomaly. “You have to include the human element of observing and interpreting the meaning of that event. It’s a collaborative process of a machine and multiple humans to figure out what is going on. If we can keep the process online and find the root cause early, it’s much less stressful for everyone,” says Alexander. When it comes to the COVID-19 pandemic that has continually surprised U.S. politicians, scientists, and the public since the first infection was reported, time series data and charts have largely dominated the conversation around public policy. The news media has focused much of its scrutiny on the charts, and they have become politicized to justify narratives around reporting and public policy. “What does a case actually mean? The cases are defined differently for every U.S. state,” says Alexander. “Usually there is one entity that controls the decision making for what a page view is. Many different parts of the business have different definitions for what an entity is.” Here, Alexander hearkens back to some of the fundamental ideas behind domain-driven design, which is an entire process and culture around how the business contextualizes the meaning of certain domain entities used in APIs. As a part of my conversation with Alexander, there are some key takeaways worth mentioning.
There seems to be a poorly understood organizational process and methodology around designing metrics for analysis. In software engineering, we have methodologies, such as DDD or DevOps, that help developers understand how to collaborate with the business when developing software. When applied to the process of measuring and analyzing data, these organizational practices are left to experimentation and self-guided research. Perhaps what we need to improve analytics across the board is a broader approach to designing, collecting, and reporting metrics. The entire conversation with Alexander can be listened to on The Little Tech Podcast. To chat with Alexander and other members of the Apache Pinot community, please join the conversation on Slack.
https://medium.com/apache-pinot-developer-blog/building-a-culture-around-metrics-and-anomaly-detection-da740960fcc2
['Kenny Bastani']
2020-07-27 18:31:14.866000+00:00
['Software Engineering', 'Software Development', 'Data Science', 'Programming', 'Analytics']
Is Q4 A Good Time To Start An e-Commerce Business?
4 reasons why Q4 2020 is perfect for starting an e-Commerce business 1.) Demand for online shopping is high The Q4 period (Oct-Nov-Dec) is traditionally touted as a haven for retailers: major holidays are coming up and the winter season is around the corner. Folks in both the Northern and Southern Hemispheres are in a festive, giving mood, shopping for gifts for their loved ones, friends, and themselves. And with the coronavirus pandemic still seeing no end in sight, one can expect this behavior to surge higher and move online. 2.) Easier to test products and concepts With higher demand and plenty of ready buyers, it’s also easier to test any products and marketing strategies I may have. This helps me learn buyer behaviors and get ready for 2021. 3.) There is no such thing as a bad time I’ve learned that when it comes to online buying and selling, there is never a bad time. Because the e-Commerce market operates 24 hours a day, 7 days a week, there will always be someone buying and someone selling. Even a global pandemic has not stopped online purchases. The only scenario I can think of where it would be a bad time to begin is if all the network servers in the world crashed at the same time! Talk about a real virus! 4.) When the student is prepared, the teacher will appear I have been preparing myself for drop-shipping by reading and watching hundreds of hours of the real stories behind many successful drop-shippers, in particular a really successful drop-shipper on YouTube, for the past five months (due to privacy reasons, I shall not disclose the person’s name). And besides watching almost every single video and reading every article my brain can handle, I had also set my mind on getting into their mentorship program to accelerate my learning curve.
My experience with e-Commerce 13 years ago taught me that the right mentors and the right marketing are the critical factors in the success of a drop-shipping business. That is what propelled me to apply for the mentorship the moment I got the chance.
https://medium.com/an-idea/is-q4-a-good-time-to-start-an-e-commerce-business-bd3c20a66a62
['Yan H.']
2020-11-17 17:38:44.874000+00:00
['Retail', 'Business', 'Ecommerce', 'Online Shopping', 'Entrepreneurship']
Do we Need Math? Imagine our Life Without It
Do you remember your math classes in school? I definitely do, and the question on my mind was always: “Why do I need this formula? Where will I use that theorem?” You’ve probably heard, or even asked, the same questions. After that, the teacher would explain that we can use an integral to find an area and a derivative to get acceleration from speed. “And what?” — the question always appeared but was never asked. Most people will never try to find a minimum or maximum of a function in daily life. However, all of us need math, and it’s essential to your career, your decision-making, and every field of life. The reason is abstractions Do you remember the day when the teacher started writing letters instead of numbers? Was this easy for you? Photo by Roman Mager on Unsplash I was always pretty good at math, but it took me a lot of effort to stop fearing letters in formulas. Why do I write fancy letters in formulas instead of numbers? A familiar feeling? But the idea is that everything in math is an abstraction. You understand what an apple is, but what is a number? Right, it’s an abstraction that helps us count apples. The same goes for any other idea in math, whether it’s multiplication or the Newton–Raphson root-finding algorithm. Photo by Ben White on Unsplash Math is abstract because numbers are not real entities. They are purely imaginary concepts. We cannot experience numbers. We can make up stories about them, such as “1+1=2”. But we can never experience such an operation, since there is no such thing as ONE of anything in our experience. If there is no such thing as ONE of anything in our experience, there can be no such thing as TWO of anything in our experience, and so on. When we do math, we are playing a game in a world of imagination. Like fairy tales, numbers are the characters in that imaginary world, while the operations are the activities that those fairy tale characters perform. And just like fairy tales, math can be beneficial, even though neither numbers nor fairy tale characters really exist.
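The Newton–Raphson algorithm mentioned above is itself a nice example of abstraction at work: one generic routine finds a root of any differentiable function you hand it. A minimal sketch (the square-root example, starting guess, and tolerance values here are my own, not from the article):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Repeatedly improve a guess x via x_new = x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # converged: the correction is negligible
            break
    return x

# The abstraction at work: the same routine works for any f.
# Here f(x) = x^2 - 2, so the root it finds is sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Just as you don't reason about individual noodles, the routine never cares which function it is solving; that indifference to specifics is exactly the power the article is describing.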
Berj Manoushagian, Philosopher. Why are abstractions crucial? Because they are an amazingly powerful tool. Even more powerful than mathematics itself. Photo by Hunter Harritt on Unsplash Let me back up a little bit. Forget mathematics. When was the last time you made pasta? You know how to make pasta, right? Boil water, add pasta, wait a little bit, then take out the pasta. If you’re feeling fancy, add some other stuff. Already, you’re taking advantage of abstractions. How do you know how long to boil that pasta? Isn’t it conceivable that one noodle of pasta differs in some way from another noodle of pasta, even in the same box? But you avoid those questions by abstraction. You don’t worry about the specific noodles of pasta you have; you treat them as an abstract collection of stuff that you only need to boil for, say, 10 minutes. Photo by Christine Sandu on Unsplash As you go forward with your pasta making, you learn that there are yet other shapes. Maybe you boil those shapes differently, but you realise that they’re pretty interchangeable from a culinary point of view. In other words, if you have a recipe for bucatini all’amatriciana, and you have all the ingredients for the dish except you have spaghetti instead of bucatini, by the power of abstraction you can still make an excellent dish: spaghetti all’amatriciana. It goes on: what if your all’amatriciana recipe calls for guanciale and you have none? You know that pancetta is pretty close, while tofu is not close at all. And on, and on. When you study cooking, you don’t even understand a recipe in terms of ingredients; you understand it in terms of roles. You identify the balance of sweet, salty, acid, fat, spiciness, umami, or other flavour categories. In other words, you understand a recipe abstractly. The abstract approach lets you identify solutions quickly.
https://medium.com/swlh/do-we-need-math-imagine-our-life-without-it-c458fda152b3
['Andrew Zhuravchak']
2020-01-19 18:08:02.315000+00:00
['Technology', 'Mathematics', 'Tech', 'Learning', 'Science']
What Is The Daily Life of a Sex Worker Really Like?
It’s the same with emails and updating statuses on Twitter and any other social media, even sites like OnlyFans and ManyVids. Sure, I need to be discreet, and I’m good at that. It’s a must when you’re a sex worker. What’s shocking to most people who don’t have any experience with sex work is the fact that actual sex isn’t always a part of what I do. This isn’t the case with everyone who identifies as a sex worker, of course. Everyone is different. And there was a time when sex was part of what I did, at least sometimes. But these days, physical intimacy is something that happens only with myself and my husband, though we’re actively engaging with a woman who we hope will become a part of our lives in that way soon. My sex work today is more about providing intimate experiences for my clients: conversations, photographs, videos, and education. It’s no less work than when I was a Professional Dominatrix, or when I was a booking agent for an escort service or a phone sex operator. I have to market myself and my products on social media, constantly hone my skills as a writer, practice customer service skills on a daily basis, interact with clients regularly and maintain relationships, and flex my creative muscles, all while being a wife and mom who takes care of her family. Because don’t get it twisted and think I have the ability to hire a nanny or maid to take care of my home in the process of doing everything else. Maybe one day! My husband works full-time outside the home, and is a wonderful help when it comes to keeping our son occupied so I can create adult content when he’s home. He’s an exceptional partner and we share the household duties, but there’s a lot I do during the day while still working. Being a sex worker isn’t always sexy. I don’t sit around in lingerie all day touching myself. Though I will admit, on the days my son is with his aunt or grandmother, I get a lot of masturbation done! Sometimes, I even record it for posterity.
https://medium.com/sexography/what-is-the-daily-life-of-a-sex-worker-really-like-357045e8bdc1
['Demeter Delune']
2020-10-15 22:13:56.062000+00:00
['Work', 'Women', 'Sexuality', 'Society', 'This Happened To Me']
3 Obstacles That Are Holding Back Your High-Performing Team
Obstacle 2: Authority alone Second, those who practice using their title to achieve results needlessly add risk to their organizations. Authority alone is not enough to achieve the long-term results we crave. Authority alone . . . it doesn’t scale. It can, however, lead to increased suspicion among employees and even tempt us into using coercive practices. Photo by Matt Benson on Unsplash Increased Suspicion Employees are suspicious of leaders today because they see us intervene at the slightest hint of a problem. Some even swoop in and take over in what I’ve been told is called a seagull attack. This is where the leader flies in and back out, leaving the team in a worse condition. The saddest part of the story? They thought they were helping. Coercion Worse, some managers coerce others to get things done their way. Most of us have encountered this type. Make no mistake: under the leader who is looking to grow their kingdom through the accumulation of power, we will pay, one way or another. Over time their behavior infects the work environment. When the organization becomes too corrosive, people leave, just as Christie Lindor, author of The MECE Muse, asserts. In a recent interview published by Bentley University, she says, “Most turnover is the direct result of a broader system, the organizational culture.” Good News: The environment is changing It is no surprise that those who choose to micro-manage in the face of the tidal wave of change have become increasingly ineffective. Those who have read Team of Teams by General Stanley McChrystal (McChrystal Group) understand that even the military has given up on this approach. They are trading the strict authoritative approach for decentralized decision-making. In his book Turn the Ship Around, David Marquet (L. David Marquet) provides a new framework for decision making called leading by intention, which drives decisions to the lowest possible level.
He writes, “Leadership should mean giving control rather than taking control and creating leaders rather than forging followers.” ~ David Marquet In my experience, when leaders begin to see their people differently and trust them to step up, many are ready. They are just waiting for the right leader.
https://medium.com/swlh/3-obstacles-blocking-high-performing-teams-shepherd-leadership-5bf6ca264eba
['Eric Peterson']
2020-11-16 15:02:31.334000+00:00
['Leadership', 'Team Building', 'Business', 'Personal Development', 'Entrepreneurship']
Karen O, the Yeah Yeah Yeahs, and a night spent in the cold, curious glow of remembrance
Before the second song of their set had ended, the lead singer of one of my favorite bands boldly stepped toward the edge of the Aragon Ballroom stage, where, in one motion, she spit-spewed her drink into the air and stomped on a device that blew forth a flurry of miniature pink Y’s that began slowly descending into, and onto, the audience. It was my first time seeing the Yeah Yeah Yeahs in person, and all I could think was, F*** yeah. I spent a good portion of the next day in a coffee shop in Chicago, seated at a table that was screwed into the floor, fumbling over how best to describe the experience of the previous night. Total sensory overload, in the best sense. It was one of the Yeah Yeah Yeahs’ last stops on their Fever to Tell anniversary tour, and it was a night during which I was fighting off the remnants of my own flu-based fever, which by the start of the concert had downgraded into sporadic sniffling fits. I came to Chicago in search of an Experience. I needed verification of the reason I’d decided to embark upon what I’ve decided to call the Year of Living Vaingerously™️: that the only way to face up to the fact that in less than nine months I will turn 30 (Yipes!) is to try and remember everything that was good about being young. It’s not so much a nostalgia tour as an attempt to ground the next stage of my life in the virtues of youth, which I feel I would be loath to forget. It’s what James Murphy said he did before re-starting LCD Soundsystem last year: what would 15-year-old me have wanted present-day me to do? To keep those two personalities in balance, to recognize that I need not bristle at the fact that I am growing older, and that there are responsibilities attendant to that fact. To not lament the thought that I was a boy not long ago, wondering all the while where the time went when it seemed as if it had been going oh, so slowly. Sizable chunks of life are measured in this manner.
I found myself grappling with versions of a younger me throughout this concert. I stared at me in high school; I was angry at the fact that I’d not allowed this wonderful band’s music to impact me when it might have mustered the greatest possible resonance. When I was more of a kid. When it could have fueled a tank of courage with which I could have pursued different dreams. I can only really recall their music hitting me ‘round the head in my final days of college, when Skeletons filtered onto a YouTube playlist as I worked on my final project in the Journalism lab on the top floor of the main academic building. It was a Sunday, I think; if so, it was sunny outside. So I was kicking myself in the foot, watching the kids clustered around me toward the front of the venue, the experience a more weighty one for them surely. They’d parse over the tiniest details and revel in the experience on the way home. Then I’d snap out of my reverie, realizing that maybe this didn’t really matter all that much, anyhow. I was not at the age of ultimate serendipity, as far as this concert was concerned—like I’d been when the sixth Harry Potter book was released, and Harry’s age in the story coincided with my own in real life. This was different, but in some ways, maybe it was better. I thought of the film Only Yesterday, about a Japanese woman in her mid- to late-twenties, stalled in life and circling back to memories of youth as she figured out what route forward to take. How that dynamic could lend added resonance to this occasion, for me. It filled me with the first frisson of hope I’d felt in months. Outside of the euphoria attendant in the concert experience, I’ve been dangerously drifting, as if real life wasn’t enough. It’s something I’ve always respected Christopher Nolan for grappling with in his films, namely The Prestige and Inception — the perfection one senses in the presence of pure creation, and how re-entry into normal existence feels like life slapping a duller filter onto you. 
The first time it, whatever this it was that night, though it most definitely was an It, hit me came during the performance of Maps. As I’d checked the group’s set lists during their recent string of performances, I’d wondered why they played the song midway through the show. Wasn’t it a show-stopper ripe for the encore? I found out why. It allowed Karen O a comfortable time frame with which she could provide an extended introduction, explaining how it had changed shape in the decade and a half of its existence. Maps has become a teenager steeped in wisdom and experience. She dedicated the song not just to past loves unrequited, its initial vestige, but also to the different, powerful forms she’d found that love assumes. The love she felt in the venue that evening, perhaps most palpable in the ever-present waves of adulation rising toward a group that has been such an instrumental force in people’s lives. The love she feels as a mother, and the respect she has gained for all mothers; the love she felt for the dearly departed, the most poignant example being Stewart Lupton, lead singer of Jonathan Fire Eater, who’d died the day before the show at the age of 43. It put me in mind of the film Her, for which Karen O provided a song. How that film was directed by Spike Jonze, one of her past beaus, and how the message at the end of that film, earned because of the turmoil throughout it, explained something truly eternal about this force, love. We have such a paltry understanding of it. It is the most powerful element. Seeking to control it, or mold it to the preconceived notions and expectations we bring into relationships, is to do it the ultimate disrespect. That’s the way we stumble through life, bottling up what we feel, denying ourselves a chance at completion because of our petty jealousies. The way they poison good things because we cannot comprehend that love cannot be put in the cage of our own expectations. 
How love, when felt and provided in its purest form, allows the heart to expand. When that lesson is learned, the heart does not stop expanding. It was the perfect progression from the Fever to Tell documentary the group had made for this string of shows, chronicling an early-aughts tour through the United Kingdom. More so than the raw energy of what were then more intimate settings to play in, I was struck by the unstinting portrayal the band allowed of themselves: Karen O’s heavy drinking, the band’s utter bewilderment in the face of stream-of-consciousness questions from journalists, the first iterations of some of the interdynamic-al fissures that would almost break the band apart, a few years down the line. This is a glimpse into who we were, they were telling us. This was three kids who loved music trying to figure out their place in this whirlwind that’s blown up around them. This is mixing growing up with a heartfelt desire not to sell out. I could feel my eyes brimming. The seamless segue from Modern Romance, which plays as the documentary ends and the live show begins, drew a line between the then and now. So much time has passed, and yet it seems as if it were a crumpled paper that someone snatched from your hand. The concert feels like a blur, as only the best shows do. I remember standing, I remember jumping, I remember pumping my fist at lyrics and sound that meant a whole lot to me. As I’d walk home later, to a cozy B&B near Andersonville, I’d find myself incapable of putting my finger on the sense of loss that hovered alongside like a sickening cloud. I love walking home after a show for this exact reason. There are so many thoughts and feelings that come spilling out of the venue alongside you; so much to parse over. The steady rhythm of one foot after the other is a continuation of the sound of the concert, a rhythmic backdrop accompaniment. 
I fell under a spell, noticing after several blocks that I’d been alternating between humming Poor Song and Soft Shock. I remembered walking past a young woman who’d been standing against a wall, simply smiling. That was all she needed to say about how much this concert had meant to her. Something about being on the cusp of 30; not yet old enough to be taken under by nostalgia’s undertow; still close enough to my youth to feel, perhaps most acutely because the memories are still rather fresh, what it was I was struggling with. To quote a Stars song, it’s what happens when you realize you must say goodbye to the youth you think you had. Who I’ve become, and whether a rock show can, or should, still move mountains within me as music once was able to do. I used to spend afternoons swimming in the thoughts and emotions drawn from a fantastic song. Something that Will Sheff said resonates: that nostalgia can be a response to the fear we feel in modern society. But it does not have to consume you. You can enjoy the way it makes you feel without wanting to cry over opportunities missed. Then, it simply becomes something you keep in a corner of the room of your mind’s eye, and you are able to move forward. I noticed at the concert that there were kids wholly present, seizing experiences the way I wish I could still do. But that’s alright. It’s good to see the torch get passed. The kids always know what to do with it. And it’s never too late to dream. It’s never too late to remember why music is paramount. It can still help you through dark times. It can help you find the nerve to do something new. I once mused upon this quality of great art as I left a cafe in Paris in spring, that eternal bastion of hope, walking on air thanks to something brilliant I’d just read. This is enough, I thought then. I thought of my parents, both doctors. I thought of everything practical people do in terms of professions. 
I had so much respect for the immediate ways they impacted and aided society. But I knew then, and I think I’ve always known, that I cannot do things like that. I wanted instead to write stuff, to make stuff, that makes people feel like I felt right then. That might give them the courage to turn away from darkness and live. Like what the YYYs made me feel on a warm Tuesday night in Chicago. I was thousands of miles from home, and yet I felt totally safe. I felt good. I remembered the reason Karen O’s power finally came through, for me. She could detail the pains of love, the desire to be left alone (Body, on her solo album Crush Songs, is one of my all-time favorites), while always making known that just as it is necessary to allow those feelings their space and time, you cannot let them overcome you. You must bring yourself back. So, as I made my way home that warm Tuesday night, I thought of this utter contentment I was feeling. I wanted to delve right back into this band’s catalog. I wanted to listen to them as I wrote what I was going to write next. How this thing, this very good thing, had filled me with a belief. That this is a step toward the next stage of something good. There’s a photo Nick Zinner, the YYYs guitarist, took of Karen O backstage before the show. She’s sitting on a couch, quietly reflective, a bit of sunlight streaming past. That she can still muster the energy to be everything an audience needs her to be...No, that’s not right. Like love, it can’t be on our terms; rather, it should be a celebration of what she decides to be. Which is always so cool. To grow with her and this band as they enter into the next stage of their existence. Isn’t that exciting? An ability to not turn away from the pain attendant in life; but rather, to stare it full in the face and say, Yes…that gives me the desire to live.
https://alleywhoops.medium.com/karen-o-the-yeah-yeah-yeahs-and-a-night-spent-in-the-cold-curious-glow-of-remembrance-7c11537f6a44
['Alley Whoops']
2018-06-07 16:50:54.327000+00:00
['Karen O', 'Concerts', 'Music', 'Yeah Yeah Yeahs', 'Culture']
Sharpe Ratio, Sortino Ratio and Calmar Ratio
Sharpe Ratio Revisited

The Sharpe ratio is the average return divided by the standard deviation of returns, annualized. We had an introduction to it in a previous story. Let’s take a look at it again with some test price time series.

import pandas as pd
import numpy as np
from pandas.tseries.offsets import BDay


def daily_returns(prices):
    # Simple daily returns; drop the leading NaN row.
    res = (prices / prices.shift(1) - 1.0)[1:]
    res.columns = ['return']
    return res


def sharpe(returns, risk_free=0):
    # Annualized mean excess return over its standard deviation
    # (assuming 252 trading days per year).
    adj_returns = returns - risk_free
    return (np.nanmean(adj_returns) * np.sqrt(252)) \
        / np.nanstd(adj_returns, ddof=1)


def test_price1():
    # A straight line: the price rises by 0.1 every business day.
    start_date = pd.Timestamp(2020, 1, 1) + BDay()
    n = 100
    bdates = [start_date + BDay(i) for i in range(n)]
    price = [10.0 + i / 10.0 for i in range(n)]
    return pd.DataFrame(data={'date': bdates, 'price1': price}).set_index('date')


def test_price2():
    # The same line, but flat between days 40 and 60.
    start_date = pd.Timestamp(2020, 1, 1) + BDay()
    n = 100
    bdates = [start_date + BDay(i) for i in range(n)]
    price = [10.0 + i / 10.0 for i in range(n)]
    price[40:60] = [price[40] for i in range(20)]
    return pd.DataFrame(data={'date': bdates, 'price2': price}).set_index('date')


def test_price3():
    # The same line, but sloping down between days 40 and 60.
    start_date = pd.Timestamp(2020, 1, 1) + BDay()
    n = 100
    bdates = [start_date + BDay(i) for i in range(n)]
    price = [10.0 + i / 10.0 for i in range(n)]
    price[40:60] = [price[40] - i / 10.0 for i in range(20)]
    return pd.DataFrame(data={'date': bdates, 'price3': price}).set_index('date')


def test_price4():
    # The same line, with a slightly steeper downward slope.
    start_date = pd.Timestamp(2020, 1, 1) + BDay()
    n = 100
    bdates = [start_date + BDay(i) for i in range(n)]
    price = [10.0 + i / 10.0 for i in range(n)]
    price[40:60] = [price[40] - i / 8.0 for i in range(20)]
    return pd.DataFrame(data={'date': bdates, 'price4': price}).set_index('date')


price1 = test_price1()
return1 = daily_returns(price1)
price2 = test_price2()
return2 = daily_returns(price2)
price3 = test_price3()
return3 = daily_returns(price3)
price4 = test_price4()
return4 = daily_returns(price4)

print('price1')
print(f'sharpe: {sharpe(return1)}')
print('price2')
print(f'sharpe: {sharpe(return2)}')
print('price3')
print(f'sharpe: {sharpe(return3)}')
print('price4')
print(f'sharpe: {sharpe(return4)}')

As you can see in this example, I have four test price time series. The first, price1, is simply a straight line (never mind the wiggles; they’re due to weekends being left out). The second, price2, has a flat region. The third, price3, has a downward-sloping region. The last, price4, has a slightly larger downward slope. The Sharpe ratio for each price series is printed below:

'''
price1
sharpe: 78.59900981328562
price2
sharpe: 7.9354707022912825
price3
sharpe: 3.61693599695678
price4
sharpe: 3.151996500460301
'''

As you can see, price1 has a very high Sharpe ratio, due to its almost non-existent volatility. price2’s Sharpe ratio is almost 10 times lower, due to the presence of the flat region. price3 and price4 come in about two times lower again than price2 because of the downward slope, with price4’s Sharpe ratio slightly lower than price3’s. Looking at these numbers, one thing that might seem a little strange is how much the Sharpe ratio decreased simply because of the flat region. After all, the total return didn’t change, and there is no drawdown at all. For an investor, price1 and price2 are not all that different. On the other hand, price3 and price4 have sizable drawdowns, yet their Sharpe ratios only dropped by about half relative to price2, compared to the 10-times drop between price1 and price2. This points to a deficiency in the Sharpe ratio’s ability to tell us how desirable a price time series is.
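This deficiency is the usual motivation for the Sortino ratio (one of the two alternatives named in the title): it penalizes only downside volatility, so a flat region contributes nothing to the risk term. Here is a minimal sketch in plain NumPy; the function name and signature are my own, and the 252-day annualization is carried over from the sharpe function above:

```python
import numpy as np

def sortino(returns, risk_free=0):
    # Same numerator as the Sharpe ratio: annualized mean excess return.
    # The denominator is the downside deviation, so only returns below
    # the risk-free target contribute to the risk term.
    adj_returns = np.asarray(returns, dtype=float) - risk_free
    downside = np.minimum(adj_returns, 0.0)
    downside_dev = np.sqrt(np.nanmean(downside ** 2))
    return np.nanmean(adj_returns) * np.sqrt(252) / downside_dev
```

Note that on price2’s returns this ratio diverges: no daily return is negative, so the downside deviation is zero. That makes explicit that, by this measure, a series with a flat region but no losses carries no risk at all.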
https://towardsdatascience.com/sharpe-ratio-sorino-ratio-and-calmar-ratio-252b0cddc328
['Shuo Wang']
2020-11-18 05:35:28.107000+00:00
['Portfolio Management', 'Quantitative Analysis', 'Python', 'Data Science', 'Data Visualization']
Migrating React Native app to React Hooks
React Hooks are slowly taking over the frontend world. They have a straightforward syntax and you can:
- use them inside functional components
- mix them with the old syntax inside class components

Until recently the new syntax wasn’t available to React Native developers. Luckily, with React Native 0.59 we can use hooks on mobile as well. If you want to start using them today, just follow along! At Onfido we use React Native for our ‘Demo App’, which is a multi-platform application showcasing our SDK. It helps clients integrate with our identity check solutions. Its small codebase was the perfect place to battle-test React Native 0.59 as soon as it was released.

Updating to React Native 0.59

The update itself is mostly straightforward but requires a manual update of some of the Android project files. In addition, you should update your Babel preset (if used), and if you test your app using Detox, you need to update to at least version 12.3.0.

Updating Android files

To use React Native 0.59 you must update com.android.tools.build to version 3.3.0 in your build.gradle file. You must also update Gradle to 4.10.1 in both build.gradle and gradle/wrapper/gradle-wrapper.properties.

Updating iOS files

You don’t need to do anything 🎉.

Enabling ‘inline requires’ in Metro bundler

React Native 0.59 introduced an experimental feature called ‘inline requires’. When enabled, Metro will analyze your package in order to slice it into lazy-loadable packages. It should speed up the launch time of the app. In the case of our Onfido Demo App it didn’t make any difference — but in our case the app is relatively small.

Updating deprecated modules

Several native modules were extracted into separate community repositories in order to reduce the size of the base React Native repository (as part of the “Lean Core” effort). 
Depending on which module you use, updating to the community package might be as straightforward as adding a new dependency and linking it, or it might require much more manual work. This time six modules have been deprecated and will be removed from the core repository soon: AsyncStorage, ImageStorage (replaced with expo-file-system or react-native-fs), MaskedViewIOS, NetInfo, Slider, and ViewPagerAndroid. If you are using any of these, you will now see a deprecation warning when you try to use them. Next up: some of the problems we had.

Issues with the migration

Migrating some code to the new community equivalents might be harder than you think. In our case we couldn’t update AsyncStorage yet because of unresolved issues the package has on iOS (recursive dependencies in Podfiles being one of them). Not updating the package resulted in a strange issue we started encountering during our tests. Because the package has been deprecated, the app started showing a warning at the bottom of the screen. Unfortunately, these warnings were also rendered during the Detox automated tests, covering the button our tests were trying to click.

Our automated tests trying to click the button.

To detect issues like that in failing tests you can add --record-videos=failing to your detox test command. It will generate a video file for each failing flow so you can replay it and see exactly what happened. Once we located the issue, we decided to turn warnings off in the testing environment. You can do this easily by setting the environment variable IS_TESTING=1. You can also disable specific warnings instead. I am not recommending using deprecated methods in general, but if there is no way to migrate off them yet, that’s the way to go.

Creating an ESLint linter rule

Even though hooks are really easy to use, there are several rules you have to follow. 
You can use them only inside React components, and they cannot be invoked inside conditional logic (the official Rules of Hooks explanation is a really good source if you want to learn why). Mistakes like these are really hard to debug. Fortunately, there is an eslint-plugin-react-hooks package that can notify you whenever you make one. Each time you call useState in an incorrect place, you will be warned. The first rule makes sure you don’t make the mistakes described above. The exhaustive-deps rule is more experimental: it makes sure that all the variables used inside a useEffect callback function are included in the list of dependencies (the second argument). For me, it works around 90% of the time. In the rest of the situations, I have to explicitly disable it for a specific effect because I know I don’t want to rerun the function on a change of a certain parameter - but having to add a comment makes these decisions more explicit.

React Hooks can be used alongside regular stateful components, but if you decide to go all in, I recommend setting up linter options to prevent you from using class components anymore. This way you can enforce consistency in the project and make sure all other collaborators adapt to the convention. Currently there’s no plugin dedicated to that, but we can simply implement it using the generic no-restricted-syntax option to define a custom rule.

ESLint will make sure you are not using the old syntax anymore

This rule disallows inheriting from either Component or React.Component. It is helpful both while refactoring an app to React Hooks and for preventing the team from adding new class-based components in the future. The selector uses esquery syntax, which is a really comprehensive way to query the AST (Abstract Syntax Tree) nodes that ESLint computes. You can read more about that in the ESLint Developer Guide.

Not all lifecycle methods are covered

You can disable the rule if you want to make an exception. 
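For reference, the linter setup described above — the two plugin rules plus a no-restricted-syntax ban on class components — can be collected in one config. This is only a sketch, not our exact file: the selector strings and the warning message are illustrative, and it assumes eslint-plugin-react-hooks is installed:

```javascript
// .eslintrc.js (sketch)
module.exports = {
  plugins: ['react-hooks'],
  rules: {
    // Enforce the Rules of Hooks and the dependency-list check.
    'react-hooks/rules-of-hooks': 'error',
    'react-hooks/exhaustive-deps': 'warn',
    // Ban class components: match `extends Component` and
    // `extends React.Component` with esquery selectors.
    'no-restricted-syntax': [
      'error',
      {
        selector:
          "ClassDeclaration[superClass.name='Component']," +
          "ClassDeclaration[superClass.object.name='React'][superClass.property.name='Component']",
        message: 'Use function components with hooks instead of class components.',
      },
    ],
  },
};
```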
React Hooks are great but they are not the solution for all problems. They do not cover (at least not yet) all the component lifecycle methods. If you use methods like componentDidCatch or getSnapshotBeforeUpdate , you cannot rewrite your component to use only hooks just yet. If you want to implement linting anyway, you will have to explicitly disable the rule for such components. Next Steps
https://medium.com/onfido-tech/migrating-react-native-app-to-react-hooks-12915844e50
['Kacper Kula']
2019-05-22 08:01:00.883000+00:00
['React Native', 'Mobile', 'React Hook', 'React', 'JavaScript']
Why Civility Matters
I left the US eighteen years ago this month. I mark every June with a shout out to my former life, and a grateful pat on the back for past me who made the decision to go. It wasn’t an easy decision. Leaving behind our lives, careers, friends and family to start anew in New Zealand was damned risky and more than one person told us so. “You’ll be back in a year,” a couple smug would-be soothsayers claimed. Their prognostications proved faulty. One of those self-same soothsayers contacted me a while back to ask me what it would take for them to immigrate to New Zealand. I mercifully did not remind them of their snarky prediction so many years ago, but instead pointed them at the NZ Immigration website with some helpful tips. They, like me, have watched in dismay the fundamental breakdown of civility and fair treatment in US politics and public life, and the advent of a viciousness and indelicacy which is not only patently embarrassing, but is tearing the US apart.

Be civil to all; serviceable to many; familiar with few; friend to one; enemy to none. — Benjamin Franklin

Why Did We Leave?

There are any number of reasons we decided to say our goodbyes to the United States. My partner and I had both lived outside the US before. I lived in countries all around Asia; she had traveled extensively through Europe. Before our kids were born, we lived on an island in the Cyclades (Greece). Our kids had reached school age, and we again longed for adventure rather than a mortgage. We had saved enough money to either indenture ourselves to a bank for a mortgage, or take the money and use it to set off on a new journey. We chose the latter. (I really, really, REALLY don’t like owing money to banks.) Career opportunity was a small but contributing factor in our decision as well. While the pay in New Zealand was, with the exchange rate, significantly less, skilled labourers (particularly in our fields) were in short supply. 
San Francisco was (and I understand still is) so overrun with job applicants that it was an ever-losing uphill battle to get and keep stable work. Sure, those were all contributing factors, and I don’t dismiss them out of hand. Yet in the end, the biggest factor by far was the unsettling trend we saw taking hold in America. The populace — particularly where politics and opinion were concerned — was growing foul-tempered and growling with invective. Civility, it seemed, was on its way out. Discourse and rational debate were in increasingly short supply, quickly being replaced with a nasty spite and a ravenous need to shout down everyone who failed to agree. I was raised in the 60s, 70s, and 80s in Northern CA. While politics had its ugly sides, there was this thing called civil discourse. My parents and teachers showed me — in word and in deed — that someone who debased themselves by resorting to name-calling and expletives was hardly worthy of calling themselves a member of a democratic society. I was taught that while arguments will always take place, it is our duty as citizens and as people to carry ourselves with as much calm and intelligence as possible, to listen to an opponent’s arguments fairly, and to debate and discuss with equanimity. My partner and I were at a crossroads in American history, and somehow we instinctively knew this. Something was coming, something which we wanted no part of, a destructive, unstoppable force growing in the hearts and minds of our fellow Americans. We could see how public debate was fast devolving into slander and name-calling. Sides were being chosen. Elections were growing nastier by the year. Political ads were devolving into trashy videos bashing opponents, rather than information on a candidate’s positions and qualifications. Politics in Washington were turning into a bad episode of Jerry Springer. The Rush Limbaughs were getting louder and growing in popularity. 
The importance of decorum was already starting to fall by the wayside. It was well and truly the time to skedaddle.

Speak not injurious words neither in jest nor earnest; scoff at none, although they give occasion. — George Washington

All the Worse Now

That was eighteen years ago. President George W Bush was in office, and the Iraq War had a lot of tempers high. Yet by today’s standards, things were damned placid and restrained. Presidents acted presidential. Debate, while nasty, was still debate, and the rules of political and public engagement were much the same as they’d been since the founding of the US. What had seemed to us at the time churlish and uncouth, nasty and disrespectful now seems like the pinnacle of gentility. The ugliness we had seen was, we were to learn, only the beginning of the fear and loathing that would become the American public stage. These days, much of the world looks on in baffled horror (and no little bemusement) while present-day America appears to be coming unhinged. To the rest of the globe, what is happening is little more than screeching and clawing at one another over every slight imaginable. It looks from the outside like children squabbling on the playground. Not that this was unexpected. The election of Donald Trump was a clear sign of things to come, and of a direction that was already well under way when Trump blustered his way to power. Beyond all his other failings as a businessman and a human being, Trump is the picture of boorish loudmouthery, a man who has no time or inclination for etiquette or courtesy. He is anathema to common decency and good manners. Hardly the picture of a statesman or a diplomat, Trump fundamentally lacks the qualities so needed when negotiating on the international stage. And indeed he is neither statesman nor diplomat. Trump was elected by his base fundamentally because he was not diplomatic or civil. 
They wanted a swaggering braggadocio, a man who would say and do anything, who would put everyone — including other nations — in their place. That is precisely how he has acted, still the hollering star of his own “reality” show, but now at the head of the most powerful nation on earth. Most world leaders think little to nothing of Trump, as was shown when a group of them at the G8 mocked the man while they thought no one was listening. Most are likely waiting in eager anticipation for Trump and his ilk to disappear into their sordid chapter in history. Trump and those like him have managed in less than four years to isolate the US on the international stage. His lack of good grace and tact has turned the US into the 800-pound gorilla no one wants to have around. Trump’s followers — and the miasma of far-right celebrity goons who rile them nightly — trample every modicum of decency and restraint one can put in their paths. Such TV “personalities” as Alex Jones, Sean Hannity, Tucker Carlson and many more fill the airwaves with hate speech and bald-faced fabrications in an attempt to whip their viewers into a lather at every turn. Such blatant and mean-spirited demagoguery is inconceivable in other democratic nations. It’s all but impossible to explain to non-Americans what the hell is happening. And let us not heap it all on the conservatives, either. Progressives gleefully hope for the suffering and failure of any and all conservatives. Left-leaning television personalities are more interested in mocking and deriding for ratings, chasing laughs at the expense of anyone and everyone, than in actually creating a real sense of engagement, of presenting the complexities of issues. Far left-leaning news outlets paint everyone right of center as fascists, dictators, racists, warmongers, capitalist pigs. No one is spared the wrath of the high priests and priestesses of Political Correctness and Cancel Culture. 
Step off the approved liberal line, and you’re one of the enemies of equality and social justice. Real impartial journalism struggles these days, since truth impartially told requires readers who can accept complexity and varied positions. Journalism requires an acceptance of and respect for rational facts. Those have been thrown out in favour of what looks and sounds like intentional equivocation and propaganda. Social media is one big screaming match over the top of a digital divide. I’ve seen more politesse at house parties than what we’re seeing today on Facebook and Twitter. Most conservatives and progressives don’t even bother talking anymore. Insulting tweets and comments lobbed over the line are the primary means of engagement, adding more angry fuel to the fire. Even a pandemic has failed to unite America. While other nations are rallying together to weather this painful and frightening time in history, America appears to be carrying on with its perpetual mudslinging. Few are speaking rationally, and those who are might as well be talking to a stone. The art of listening has lost its place in post-modern America. Logic, rationality, and yes, civility appear to have packed their bags and emigrated elsewhere in a hurry. Meanwhile, most Americans are fist-fighting on the deck of the Titanic.

Civility is not a tactic or a sentiment. It is the determined choice of trust over cynicism, of community over chaos. — George W. Bush

Why Civility Matters

I’ve learned a lot from my adopted homeland of New Zealand. I understand Rugby now, and I can cook a pavlova. I even know a few phrases of Maori. Yet one of the most important lessons my fellow Kiwis have taught me is that for all their disagreements — across all aspects of society — civility and decorum are never abandoned. It is civility in NZ society which allows such a wide number of political parties to work together to form a coalition government. 
It is decorum which keeps the debates rational and without histrionics or cheap theatrics. It is this very civility and trust in our government and in each other (no matter our political differences) which have helped us to weather the covid-19 pandemic so successfully. Was there disagreement in parliament and in society as to Prime Minister Jacinda Ardern’s decisions on the handling of the pandemic? Of course, but it never turned nasty. Letters to the editor and commentary appeared in newspapers. MPs spoke their minds against her approach. Yet it never took on the kind of invective we see pouring out of the US in nigh endless supply. Do we have uncouth and uncivil windbags in New Zealand? We do, of course. No society is complete without some percentage of indelicate oafs. Yet theirs is a minority voice, one which is looked upon with distaste and even condescension. Such a lack of propriety and reasonableness is not given much credence, since why diminish yourself and your society by giving such behaviour a place at the grownup table? Civility matters because it is the only place from which one can find compromise. Restraining ourselves so that we always treat our opponents with inherent dignity is how we begin to accept that another’s opinion is as valid as our own. Decorum matters because it is the line in the sand beyond which we do not cross. Not because we can’t, but because to do so sullies and debases everyone, ourselves included. I cannot say that I’ve never been discourteous or rancorous in the public forum. I have — for any number of banal or petty reasons — made cantankerous or vindictive remarks both on and offline. Yet I’m reformed. I’ll debate. I’ll argue. I’ll point out hypocrisy and lack of ethics where I see them. I may even quip and have a go, but no longer with any intent to harm or humiliate. My adopted country has reminded me of what my parents and teachers taught me all those years ago. 
I have learned once more the value of hearing others, and letting them have their voice same as I wish to have mine. I have been reminded of how important it is to live in and be a member of a civil society. Will I always be heard? No. Will I be shouted down by loudmouths and cacophonous boors? Likely, especially in the short term. Yet this is a long game we’re playing. By bringing civility and decorum back, we’re seeking to restore human dignity and rational debate to all levels of society. We are using the age-old weapons of rationality, courtesy, and logic to trounce the nincompoops and louts by showing everyone how this game is really supposed to be played. Not cheating, not throwing our toys when we don’t get our way, but by engaging in the real game of intellect, of thought and ethical reflection. In bringing back etiquette and grace to our social contract, we are helping to make our voices heard and in doing so help others to be heard as well. What more do any of us want than that?
https://medium.com/intelligence-challenged/why-civility-matters-32f9f88f9606
['Christopher Laine']
2020-05-26 11:17:17.947000+00:00
['Philosophy', 'Social Media', 'Debate', 'Society', 'Politics']
Dennis Wilson’s Best Beach Boys Songs: 10 Overlooked Classics
Jamie Atkins Photo © Capitol Records Archives So much more than just The Beach Boys’ drummer, Dennis Wilson (born 4 December 1944) contributed raw ballads and charged blasts of rock’n’roll that were highlights of the group’s albums from the late 60s until his untimely death, at just 39 years old, on 28 December 1983. While initially underestimated thanks to his pin-up looks and penchant for mischief, Dennis’ early songwriting and production demonstrated a deep and instinctive talent, which developed as his elder brother Brian’s influence on the group waned. Celebrating some lesser-known corners of The Beach Boys’ work, here are Dennis Wilson’s ten best Beach Boys songs. Think we’ve missed some of yours? Let us know in the comments section, below. Listen to the best of The Beach Boys on Apple Music and Spotify, and scroll down for our ten best Dennis Wilson songs. Dennis Wilson’s Best Beach Boys Songs: 10 Overlooked Classics 10: ‘Do You Wanna Dance?’ (1965) Early in The Beach Boys’ career, it had become apparent that the majority of the lustful energy stirred up at their gigs was aimed squarely at the animated figure behind the drum kit. It made sense, then, to take advantage of Dennis’ heartthrob status by having him sing lead vocals on a 1965 single that would open the album — a stomping version of Bobby Freeman’s 1958 hit ‘Do You Wanna Dance?’ The band harnessed the power of Phil Spector’s Wrecking Crew — all crashing drums, surging saxophones and surf guitar solos — for a backing track that was nearly as exciting as hearing Dennis sing, “Squeeze me, squeeze me, all through the night.” 9: ‘In The Back Of My Mind’ (1965) Dennis was also called upon to sing lead on …Today!’s closing song proper, the meandering and lovely ballad ‘In The Back Of My Mind’. Dennis was an inspired choice: his soulful, plaintive vocals bring added depth to one of the group’s most vulnerable early songs.
Fans who screamed to ‘Do You Wanna Dance?’ swooned to this one, an early indication of the two sides of Dennis that would be revealed as his writing developed. 8: ‘Little Bird’ (1968) The first Dennis-penned song to be released (initially as the B-side to ‘Friends’, in May 1968, and, the month after, on the album), ‘Little Bird’ was a co-write with the poet Stephen Kalinich and featured an uncredited helping hand from Brian. Musically, it’s brooding, with sunny intervals, and owes a clear debt to ‘Child Is The Father Of The Man’, a song from the group’s SMiLE sessions. Kalinich’s lyrics are a joyful celebration of nature, sung tenderly and with heart by Dennis. The surfer of the group was growing up quick. 7: ‘(Wouldn’t It Be Nice) To Live Again’ (1971) Unreleased until the 2013 box set Made In California, ‘(Wouldn’t It Be Nice) To Live Again’ should have graced 1971’s Surf’s Up. An alleged disagreement with Carl over the album’s running order, together with pressure to keep material for a solo album that was allegedly close to completion, meant that this sumptuous wonder was shelved. From pastoral beginnings (with shades of The Beatles’ ‘Fool On The Hill’) and a peaceful vocal from Dennis, to a grandstanding, emotive chorus, the fact this song remained shelved for so long beggars belief. 6: ‘Slip On Through’ (1970) The opening track of Sunflower was a heady, soulful rocker that saw Dennis deliver one of his finest non-ballad vocals for the group over an energetic, irresistible groove. The lyrics may amount to one massive come-on, but when it’s this much fun, we’re not complaining. 5: ‘Celebrate The News’ (1969) While the June 1969 single ‘Breakaway’ was a hit for The Beach Boys, its B-side, ‘Celebrate The News’, is arguably the better song.
Co-written by Dennis and his pal, songwriter Gregg Jakobson, it shifts masterfully through the gears until the ecstatic mantra, “I’ve got news for you, there ain’t no blues,” beckons in a rampaging end section, complete with exuberant, gospel-tinged vocals. 4: ‘It’s About Time’ (1970) A propulsive, funk-driven stormer of a song, with lyrics by Bob Burchman, a poet acquaintance of Dennis’, ‘It’s About Time’ was the first song recorded after the band’s new label, Warner Reprise, had rejected an early iteration of the album that would become Sunflower. Concerns that the group were not “contemporary” enough were quickly disproven by Dennis’ dynamic production work here — all stinging guitars and frenzied percussion — that brings out the best in his brother Carl’s gutsy vocals. 3: ‘Cuddle Up’ (1972) Originally recorded for a 1971 solo album that failed to materialise, ‘Cuddle Up’ was re-recorded when The Beach Boys — short of material for a follow-up to Surf’s Up — came calling. One of a batch of songs co-written with Daryl Dragon, of Captain And Tennille, ‘Cuddle Up’ was a highlight of The Beach Boys’ 1972 album, Carl And The Passions — “So Tough” (along with Dennis’ other contribution to the album, ‘Make It Good’). The song begins in intimate fashion, with softly-played piano and Dennis’ careworn vocals to the fore, before stirring strings and background vocals build to a climax steeped in equal parts anguish and ecstasy. Never one to shy away from wearing his heart on his sleeve, the sumptuously melodramatic ‘Cuddle Up’ might be the song that best sums up the incurable romantic inside Dennis. 2: ‘Be With Me’ (1969) The Beach Boys’ 1969 album, 20/20, is a disjointed affair perhaps best thought of as a collection of distinct and disparate songs pulled together. ‘Be With Me’ was the pick of Dennis’ contributions: an opulent arrangement pulled down to Earth by a tender vocal that announced the arrival of a remarkable talent.
Check out the 2001 rarities set, Hawthorne, CA, for the staggering backing track to the song. 1: ‘Forever’ (1970) This stand-out from The Beach Boys’ 1970 album, Sunflower, sees Dennis at his most direct and loveable — a puppy-dog-eyed declaration of eternal faithfulness. The production is elegant and sumptuous, with beatific backing vocals from The Beach Boys (Brian, in particular — just check the fade-out). But it’s Dennis’ lead vocal that steals the show. When he sings, “If the song I sing to you/Could fill your heart with joy/I’d sing forever,” he sounds every inch the vulnerable romantic, convinced he can make it all better with the sheer beauty of his music. All these years later, his songs still touch hearts everywhere. Looking for more? Discover the best Beach Boys songs. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/dennis-wilsons-best-beach-boys-songs-10-overlooked-classics-759c4a441f9
['Udiscover Music']
2019-12-09 09:50:41.085000+00:00
['Pop', 'Music', 'Lists', 'Culture', 'Pop Culture']
SPACs are the Future of AV Startups
LIDAR is a core technology that powers self-driving cars. It’s also what I’m naming my new cover band where we do 90’s themed remixes of Lionel Richie songs. LIDAR systems are usually (not always) mounted on top of self-driving cars/other autonomous vehicles (AVs): Besides being a very fashionable top hat for your car, LIDAR systems are core to helping an AV “see”. A LIDAR system is like a group of TIE fighters strapped to your car roof — they’re constantly shooting out lasers. Most used to spin, but modern ones are smaller and still provide a 360° view. “Pew pew pew” Thousands of laser pulses are sent out every second. Every SECOND. When they hit an object and bounce back to the LIDAR, the reflection points are recorded to build a 3D point cloud. This works because we know the speed of light, so the round-trip time of each pulse tells us exactly how far away the reflecting object is. That cloud can then be turned into a 3D representation of the car’s surroundings. Here’s a look at how Uber tackles the problem: Velodyne is a huge player in the LIDAR market. They’re one of the OGs in the space and they’ve kept up with industry innovations via sexy, smaller lidars. As you can see, LIDAR is pretty great at telling the car how far away things are, but not really great at telling the car what they are. Because of this, most current AV approaches combine LIDAR data with camera + visual recognition data.
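The distance math behind that point cloud is plain time-of-flight, and it can be shown concretely. A minimal sketch (the pulse timings below are made-up illustrative values, not real sensor output):

```python
# Sketch of the time-of-flight math behind a LIDAR point cloud.
# The pulse round-trip times below are made-up illustrative values.
C = 299_792_458  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object: the pulse covers the
    range twice (out and back), so halve the total flight path."""
    return C * t_seconds / 2

# A pulse returning after ~200 ns hit something roughly 30 m away.
for t_ns in [200.0, 66.7, 13.3]:
    print(f"{t_ns:>6.1f} ns -> {range_from_round_trip(t_ns * 1e-9):5.2f} m")
```

Repeat that for thousands of pulses per second, each tagged with the laser's firing angle, and you have the 3D point cloud the article describes.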
https://medium.com/datadriveninvestor/spac-secures-self-driving-car-accessory-startup-800d2bfd659d
['Murto Hilali']
2020-07-21 16:16:53.310000+00:00
['Finance', 'Technology', 'Business', 'Startup', 'Lidar']
Hamilton — Mythology, Misogyny, and Martyrdom
Mythology For nearly 250 years, efforts have been made to elevate key figures such as George Washington, Thomas Jefferson, and even Benjamin Franklin into an American Pantheon. These ‘gods’ have states and cities named after them, statues erected in their likeness, and have found their way into our textbooks as infallible heroes. Statue of Ben Franklin. The Washington Monument. The Jefferson Memorial. Increasingly, I’d like to imagine that the American collective is in a phase of re-examining the character of our Founding Fathers. There is more discourse surrounding their status as slave-owners or wealthy tycoons. This deconstruction also has to do with disassembling the fundamental American values of ambition, free-market enterprise, and even racism. Instead of challenging the character of our Founding Fathers, I believe that Hamilton positions itself as a love letter to them, particularly to its namesake. The main characters are portrayed as heroic, ready to lay their lives down for the revolution. But what exactly does their revolution stand for? Hamilton implies it has something to do with ‘freedom,’ a stance most Americans would corroborate. In The Story of Tonight, some of the more fanciful and heart-warming lyrics of the play are sung, “Raise a glass to freedom / Something they can never take away / No matter what they tell you.” This is even reprised in Hamilton’s last words later on. But that isn’t true, is it? Freedom is something that can be taken away. The Founding Fathers who are imagined to stand for the value of ‘freedom’ in their mythologies do not in fact believe in absolute freedom. They owned slaves for their benefit, or at the very least, supported the system that perpetuated slavery. Washington’s teeth were slaves' teeth. Another value of The Story of Tonight is one of ‘legacy’. Around a table, Hamilton and his friends enjoy some drinks. Mulligan and Lafayette suggest having ‘another round,’ seemingly content and in-the-moment.
Hamilton, instead of agreeing, imagines how future Americans will remember this evening (They’ll tell the story of tonight). This suggests that Hamilton, and perhaps some of his political peers such as Washington who makes similar comments, care more about how well people will remember them than the actual experience of life now. It’s an ascetic obsession. This asceticism also ties into another value perpetuated in general American mythology which is ‘unquenchable ambition’. Our beloved Founding Fathers were ambitious, nothing was enough for them. They only wanted the best for our country. We are told to be like them, to want to grow up to be President or a CEO or marry rich. Even though it is ambition that kills Hamilton, it remains a noble trait. These popular characters and popular values are integral to the mythological ‘canon’ of the founding of the United States of America. Likewise, these stories, people, and values seem to sneak their way into Hamilton in a fairly positive light. This is an American musical after all. However, I would expect a modern production to reexamine the problematic dimensions of this hypocrisy, self-obsession, and insatiability. Instead, Hamilton, the focal character of the play, doesn’t critique any of these values. He is the living embodiment of them. It is difficult to watch the Hamilton production without seeing it as an affirmation of this hazy, shallow value-system perpetuated by the mythology of America’s founding. The Schuyler Sisters from the Broadway production of Hamilton. Misogyny I watched this play for the first time with my partner. After Satisfied was performed, I turned to him and said, “I sure hope there are female characters that do something other than swoon over Hamilton.” I was sort of hoping for a Founding Mothers' storyline that would support the Founding Fathers' backbone we are familiar with. But instead, my hunch was correct. 
The women in Hamilton are used purely to support or contrast the men in each scene. Eliza loves Hamilton, marries him, has a kid with him, and ‘lives to tell his story’. She is not important because she built the first orphanage. She is important because she’s Hamilton’s wife. Angelica loves Hamilton, and Hamilton sorta has the hots for her too, but they agree not to do anything about it. Angelica is almost given a feminist storyline (because I guess she’s done some Thomas Paine reading), but instead, she gets married off to a man she doesn’t like, moves to England, and comes back to fully illustrate how bad Hamilton ends up getting. Maria Reynolds literally just has an affair with him. One second she’s weak and helpless, next, she’s a lustful temptress. Peggy is there for comic relief and apparently doesn’t have a personality. It would be one thing if the play had no women… perhaps I could justify that this was because they were seemingly absent from this part of history. Maybe Hamilton is trying to criticize this absence of women in our accounts. But there are women. And, unfortunately, they’re all incredibly shallow characters, unable to stand on their own two feet. Burr Shoots Hamilton. Internet Archive Book Images. Martyrdom There are so many parallels between Hamilton and my favorite play of all time, Jesus Christ Superstar. There are two main characters. One of them is good, a savior figure, blessing mankind. One of them is bad, short-sighted, and kills the other. Like Hamilton, Jesus is obsessed with self-image and legacy until the final moment. Like Burr, Judas, is tempted by the new looming ‘institution’ which corrupts him, and subsequently kills his counterpart. They both begin as a sort of friend and later become enemies. In Jesus Christ Superstar, both Jesus and Judas become martyrs. Jesus dies because the State does not like the idea of a charismatic citizen with the ability to inspire hope in subjugated peoples. 
Judas dies because he cares for his homeland so much that he would rather push Jesus to ‘stop his shenanigans’ than see Jerusalem in flames. It is this decision, which arguably he had no choice in, that wracks him with so much pain that he hangs himself. Hamilton is also depicted as a martyr from the beginning of the play. The play condenses the historical Hamilton’s 47 years into a 2.5-hour production: an honor to his legacy, a good legacy. He repeatedly tells us he is willing to die for his country — though, he doesn’t deserve to die. In the beginning, it is in the context of literal war. In the end, it is a type of ideological war. He is more or less upset at Burr’s ambition; Hamilton and Burr have swapped places. Even though Hamilton fires into the air, Burr still shoots him and he dies. He’s not throwing away his shot. But aren’t martyrs supposed to be role models? Saints? People with strong, selfless values? As discussed before, Hamilton instead is the embodiment of some of the key American mythological tropes: a shallow belief in freedom, an obsession with self-image and legacy, and an unquenchable ambition. It is these unhealthy attributes that kill him — not some noble cause. And in the end, he does not die for anyone or anything. He puts himself in this position because he can’t get over himself. Even in a few of his last thoughts: “If I throw away my shot, is this how you’ll remember me? / What if this bullet is my legacy?” Luckily, right before he dies, he is able to wise up and think about more pressing things… like his wife. Hamilton does not seem to acknowledge the toxicity of this martyrdom complex. The music tempo, tone, pitch — everything — seems to indicate that we are meant to feel an astounding loss, a loss likened to the killing of a saint. Burr is told he better hide.
Similar to Judas, who historically we know very little of, Burr tells the audience, “History obliterates / In every picture it paints / It paints me and all my mistakes… He may have been the first one to die / But I’m the one who paid for it.” Because of the martyrdom complex, not only are the lives of his loved ones shaken but so is the legacy of his comrade.
https://medium.com/interfaith-now/hamilton-mythology-misogyny-and-martyrdom-3fa8274f5585
['Allison J. Van Tilborgh']
2020-07-11 16:40:21.857000+00:00
['Religion', 'Music', 'History', 'Politics', 'Film']
Unlock a GitHub Secret: Profile Bio Readme
So What’s the Secret? If you create a repository named after your own GitHub username, it automatically turns into a special repository: Isn’t that cool? Once you initialize this repo with a README.md, whatever you write in the README file will turn into a Readme for your whole GitHub profile. Make sure you create a public repository for this to work. So when someone visits your GitHub account, they will see a bio-like description. This means that you can simply turn your GitHub account into a resume: You can give an overview of your projects, talk about your work/education, boast about your achievements, mention your social media accounts, or write literally anything you want! Do you know about any other GitHub secrets that are still waiting to be discovered? Let me know in the comments below.
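The steps above boil down to a few commands. A sketch, assuming a hypothetical username of "octocat" (the repository name must match your username exactly, and the final push to GitHub is left commented out so this runs locally):

```shell
# Local mock-up of the special profile repository.
# Assumes the GitHub username "octocat": the repo name must match it exactly.
cd "$(mktemp -d)"
mkdir octocat && cd octocat
git init -q
git config user.email "you@example.com"
git config user.name "Octocat"

cat > README.md <<'EOF'
### Hi there, I'm Octocat 👋
- 🔭 I'm currently working on ...
- 📫 How to reach me: ...
EOF

git add README.md
git commit -qm "Add profile README"

# The real steps end with creating the *public* repo octocat/octocat
# on GitHub and pushing:
#   git remote add origin https://github.com/octocat/octocat.git
#   git push -u origin main
```

Once pushed, the README.md renders at the top of your profile page.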
https://medium.com/better-programming/unlock-a-github-secret-profile-bio-readme-8035dec3b3bb
[]
2020-07-15 17:38:27.510000+00:00
['JavaScript', 'Python', 'Github', 'Programming', 'Git']
Things About Menopause That Have Made My Life Better
Renewed and Rejuvenated It’s amazing what stabilized hormones and no period can do for a girl! American anthropologist and author, Margaret Mead, defined it as “post-menopausal zest”, or PMZ. Studies have shown that many women experience newfound energy and resolution. Dr. Mead described PMZ as “a physical and psychological surge of energy.” She wrote, “there is no more creative force in the world than a menopausal woman with zest.” During menopause, the last thing I was feeling was “zesty”. It was a crazy time for me and everyone around me. But now that I have made it through, I have a renewed outlook on life. For a change, I am able to pursue my interests, and the time I have for them makes me happy, so I don’t feel a sense of depression as often. It’s amazing what a good night’s sleep can do for you too. Raising six children means many years and nights of broken sleep. When I was finally able to sleep a full night without a child waking me up, I started waking up in the middle of the night feeling like I was going to spontaneously combust. For a period, I was waking up at 1:30 a.m. every single night. It didn’t matter if I went to bed early or late: at 1:30 a.m., my eyes would open, my brain would flip on, and it was off to the races. I get more sleep now. I can go to bed when I want to, which is usually as soon as it gets dark. Not kidding. I no longer wake up sweating, making my poor husband sleep in a room with the heat off and a fan blowing. I’ve noticed the dark circles under my eyes have lightened. I’m not as fatigued all the time. I’ve become more creatively inspired. Post-menopausal zest appears to be both circumstantial and biological. Kids grow up and move on at about the same time our hormones begin to stabilize. Dr. Marilyn Glenville, author of Natural Solutions to Menopause, explains that “after menopause, there is a change in the balance of hormones.
Estrogen drops and testosterone becomes more dominant.” The two hormones coming into balance gives us more energy, drive, and motivation. More “zest”. A Time to Take Stock Menopause signifies mid-life, and what more fitting time to take stock of our lives. Mortality is imminent and in closer view. I started counting on my fingers how many more years I may possibly have here on this earth. In the perfect scenario, with no unforeseen issues, living to the age that my people typically live to, I sadly realized it was less than I have already lived. There are things I want to do, see, and learn. There’s a mountain of books I want to read. Reaching the half-way point of life has given me a new perspective and made me assess where I am in this life of mine. It’s made me question what more I want to accomplish. Time really is of the essence. What I’ve really started taking a good look at is how I spend my time. Is it meaningful? Am I putting my energy into things that are worthwhile and interesting? Are the things I’m doing with my time beneficial? Who am I spending my time with? Are my relationships beneficial or detrimental to my well-being? We all have people in our lives who can suck the life right out of us. I’ve found myself becoming less available to those types and have pursued connections with more like-minded, emotionally mature people. I reassessed my profession of 30 years and made a career change. When I turned 50, I had an opportunity to make a career change for significantly more money. I was worried that I couldn’t learn this new job. It was in an industry I knew nothing about. But I took the leap and got an excellent job with a great company that is benefiting the environment for our future and the future of our kids and grandkids. I believe in what this company is doing and am happier in my career than I’ve been in a long time. One of the most difficult things I’ve had to assess is my marriage. Sadly, we aren’t in a great place right now.
There’s a lot that goes into that, better suited for another story, but recently, I’ve decided that the only thing I can do is focus on what I can do to make things better on my end. I really was in Mean-O-Pause. I felt angry and mean most of the time. My unpredictability and anger alienated me from my husband. Sometimes I would say or do things just to scratch some nasty little itch I had. I can’t explain it, but misery loves company I guess, and I really was miserable. I’m so glad that nonsense is over. We don’t fight as much anymore, but we have definitely grown apart. More to come on that.
https://medium.com/the-chronicles-of-menopause/things-about-menopause-that-have-made-my-life-better-36eb22c48461
['K Bennett']
2020-01-14 19:40:10.751000+00:00
['Health', 'Womens Health', 'Ninja Writers', 'Women', 'Menopause']
IBM’s Watson joins doctors in fighting lung cancer with cloud-based medical app
IBM’s AI supercomputer, Watson, is now being used to help doctors treat patients with lung cancer. IBM has partnered with Wellpoint and Memorial Sloan-Kettering Cancer Center to “teach” Watson how to understand and interpret complex clinical information. This includes “more than 600,000 pieces of medical evidence, two million pages of text from 42 medical journals and clinical trials in the area of oncology research.” The goal is not to replace the doctor, but to arm them with the information they need to make the best decisions possible. Read more at theverge.com.
https://medium.com/sample-3/ibms-watson-joins-doctors-in-fighting-lung-cancer-with-cloud-based-medical-app-776f416f4511
['Patients Loremipsum']
2017-01-05 16:48:39.275000+00:00
['IBM', 'Healthcare', 'Watson', 'AI', 'Tech']
What Is the World Happiness Index?
The World Happiness Report ranks 156 countries by how happy their citizens perceive themselves to be. It is an annual publication of the United Nations Sustainable Development Solutions Network. It contains articles and rankings of national happiness based on respondent ratings of their own lives, which the report also correlates with various life factors. Now the next question is: “How is happiness calculated across 156 different countries?” The rankings of national happiness are based on a Cantril ladder survey. Nationally representative samples of respondents are asked to think of a ladder, with the best possible life for them being a 10, and the worst possible life being a 0. They are then asked to rate their own current lives on that 0 to 10 scale. The report correlates the results with various life factors. If you want the full code or dataset, click on this link: https://www.kaggle.com/dgtech/world-happiness-with-basic-visualization-and-eda Which parameters or factors are used to calculate a country’s Overall rank (the rank of the country based on the Happiness Score)? Score: A metric measured in 2015 by asking the sampled people the question: “How would you rate your happiness on a scale of 0 to 10 where 10 is the happiest.” GDP per capita: The extent to which GDP contributes to the calculation of the Happiness Score. Healthy life expectancy: The extent to which life expectancy contributed to the calculation of the Happiness Score. Freedom to make life choices: The extent to which freedom contributed to the calculation of the Happiness Score. Perceptions of corruption: The extent to which perception of corruption contributes to the Happiness Score. Generosity: The extent to which generosity contributed to the calculation of the Happiness Score. World Happiness 2019 dataset In the data, only ‘Country’ is text; all other features are numerical.
So let’s calculate the mean, standard deviation, five quantiles, etc. Statistics of numerical data How is the Happiness Score distributed? As you can see below, the happiness score ranges between 2.85 and 7.76, so there is no single country with a happiness score above 8. Distribution of Happiness Score Let’s see the relationship between different features and the happiness score. 1. GDP per capita GDP per capita (the economy of a country) has a strong positive relationship with the happiness score. So if a country’s GDP per capita is high, its Happiness Score is also likely to be high. Happiness Score vs GDP per capita (Economy) Top 10 Countries with high GDP (Economy) Top 10 Countries with highest GDP 2. Perceptions of corruption The distribution of Perceptions of corruption is right-skewed, which means very few countries have a high perceptions-of-corruption score. In other words, most countries have a corruption problem. As we know, corruption is a very big problem for the world. Corruption is a cancer: a cancer that eats away at a citizen’s faith in democracy, diminishes the instinct for innovation and creativity; already-tight national budgets, crowding out important national investments. It wastes the talent of entire generations. It scares away investments and jobs. -Joe Biden How can corruption impact the Happiness Score? The perceptions-of-corruption data is highly skewed, so it is no wonder it has a weak linear relationship. But as you can see in the scatter plot, most of the data points are on the left side, and most countries with a low perceptions-of-corruption score have a happiness score between 4 and 6. Countries with a high perception score have a high happiness score, above 7. Top 10 Countries with high Perceptions of corruption 3. Healthy life expectancy He who has health has hope, and he who has hope has everything. -Arabian Proverb Healthy life expectancy has a strong, positive relationship with the happiness score.
So if a country has a high life expectancy, it is also likely to have a high happiness score. This makes sense, because anyone who has a very long, healthy life is obviously happy. I would also be happy with a long, healthy life. Wouldn’t you? Top 10 Countries with high Healthy life expectancy 4. Social Support When a person is down in the world, an ounce of help is better than a pound of preaching. -Edward G. Bulwer-Lytton Countries’ social support also has a strong, positive relationship with the happiness score. This relationship makes sense: the more you help socially, the happier you will be. Top 10 Countries with high Social Support 5. Freedom to make life choices We are the creative force of our life, and through our own decisions rather than our conditions, if we carefully learn to do certain things, we can accomplish those goals. — Stephen Covey Freedom to make life choices has a positive relationship with the happiness score. This relationship makes sense: the more freedom you have to make decisions about your life, the happier you will be. Top 10 Countries with high Freedom to make life choices 6. Generosity Generosity has a very weak linear relationship with the Happiness score. A question immediately comes to mind: why doesn’t generosity have a linear relationship with the happiness score? The generosity score is based on which countries give the most to nonprofits around the world. Countries which are not generous are not necessarily unhappy. Top 10 Countries with high Generosity How is one feature related to another? The heatmap below shows the correlation between features. The happiness score is most correlated with GDP per capita > Social support == Healthy life expectancy > Freedom to make life choices > Perceptions of corruption. The happiness score is not much correlated with Generosity. Top 10 Countries with high Happiness Score So Finland is the world’s happiest country.
Currently Finland’s government is considering a new idea: working only 4 days per week (for more detail click here). No wonder Finland is the world’s happiest country. Maybe this idea needs to be applied in every country, what do you say? 😆 You can see above the world’s top 10 countries in happiness; there is not much difference between their happiness scores. Now let’s compare the top 5 countries in happiness across different features. As you can see above, the first country in happiness, Finland, has higher generosity, GDP, and healthy life expectancy than the other 4 countries. To visualize happiness scores from 2016 to 2019 on a geographic map, go here: https://www.kaggle.com/dgtech/world-happiness-with-basic-visualization-and-eda
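The summary statistics and correlations discussed above can be sketched with pandas. The four rows below are made-up stand-ins (the real 2019 CSV from the Kaggle link has all 156 countries), but the call pattern is the same:

```python
import pandas as pd

# Toy stand-in for the 2019 World Happiness CSV; these four rows are
# illustrative only (the real file from the Kaggle link has 156 countries).
df = pd.DataFrame({
    "Country": ["Finland", "Denmark", "India", "Brazil"],
    "Score": [7.769, 7.600, 4.015, 6.300],
    "GDP per capita": [1.340, 1.383, 0.755, 1.004],
    "Healthy life expectancy": [0.986, 0.996, 0.588, 0.802],
    "Generosity": [0.153, 0.252, 0.221, 0.099],
})

# Mean, standard deviation, and quartiles of the numeric columns:
print(df.describe())

# Pairwise correlations, the numbers behind the article's heatmap:
corr = df.drop(columns="Country").corr()
print(corr["Score"].sort_values(ascending=False))
```

Swapping the toy frame for `pd.read_csv("2019.csv")` reproduces the article's analysis on the full dataset.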
https://medium.com/analytics-vidhya/what-is-world-happiness-index-f5744490701f
['Denil Gabani']
2020-01-10 08:18:40.926000+00:00
['Machine Learning', 'Country', 'Data Science', 'Data Visualization', 'Exploratory Data Analysis']
Understanding Git under the hood
Adding Sequential Commits & Nested Directories / Files So far the repo had just one file, and no directories. Let’s add more blobs & trees to give a clearer understanding of how files & directories are stored in the object database. $ mkdir files $ echo "aaa" >> files/aaa.txt $ echo "bbb" >> files/bbb.txt $ git add . $ git commit -m "Add files" $ git log It is intentional that the aaa content is also used for one of the new files. Let’s check the commit content by reading that git object → git cat-file 5c45ebb -p . tree 21e6981939eae6277ad2128753e2984b552868cf parent 239d1a0f75b596d7d67e23721f11066abf144982 Important changes include the parent attribute, which refers to the parent commit. Also observe that the tree has changed, since the files and directories changed. Let’s go one by one and check the content of the trees & blobs that build up the files & directories. 100644 blob 72943a16fb2c8f38f9dde202b7a70ccc19c52f34 a.txt 040000 tree 75d27669a0c4e9dd702c71c6ac3307d533493ba5 files The Git object for a.txt remains the same, as its content is unchanged. But within the parent tree object there is another tree, which represents a directory. Checking the content of this nested tree: git cat-file 75d2766 -p 100644 blob 72943a16fb2c8f38f9dde202b7a70ccc19c52f34 aaa.txt 100644 blob f761ec192d9f0dca3329044b96ebdb12839dbff6 bbb.txt If the blobs are checked, they will have the expected content. Note that aaa.txt shares the same object as a.txt, because both files have the same content, aaa. The file name change didn’t affect this: the name is not stored in the blob itself but in its containing tree. Git Object Model It can be seen how Git reused the common blob containing aaa between the two commits.
Only 7 objects .git/objects/ ├── 21 │ └── e6981939eae6277ad2128753e2984b552868cf ├── 23 │ └── 9d1a0f75b596d7d67e23721f11066abf144982 ├── 37 │ └── 057b2e8a9041ef88b805a5b7c4e0e668a03be4 ├── 5c │ └── 45ebb2052428ce037e2fbf760cb0ec9a18f6e2 ├── 72 │ └── 943a16fb2c8f38f9dde202b7a70ccc19c52f34 ├── 75 │ └── d27669a0c4e9dd702c71c6ac3307d533493ba5 ├── f7 │ └── 61ec192d9f0dca3329044b96ebdb12839dbff6 Interpreting the working directory from a commit The repository now has two commits, and the user can check out either of them. Each commit has its own view of the files and directories: to materialize it, git ignores the other commits and builds up the working directory from the trees and blobs that this commit points to. Here is how the working tree looks from each of the two commits
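The walkthrough above can be reproduced end to end in a scratch repository. A sketch, assuming git is on your PATH: the commit and tree hashes on your machine will differ from the article’s, because they include author and timestamp, but blob hashes depend only on content and should match.

```shell
# Build a throwaway repo with the same two commits as the article.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "demo"
git config user.email "demo@example.com"

echo "aaa" > a.txt
git add .
git commit -qm "Initial commit"

mkdir files
echo "aaa" > files/aaa.txt
echo "bbb" > files/bbb.txt
git add .
git commit -qm "Add files"

# The second commit's tree holds the a.txt blob plus a nested tree for files/:
git cat-file -p "HEAD^{tree}"

# Identical content hashes to the identical blob, whatever the file name:
git hash-object a.txt
git hash-object files/aaa.txt
```

The two `git hash-object` calls print the same id, which is exactly why git stores only 7 objects here instead of 8.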
https://medium.com/swlh/understanding-git-under-the-hood-b1aeae1d02f5
['Udara Bibile']
2020-04-28 18:04:29.635000+00:00
['Development', 'Github', 'Git', 'Programming', 'Coding']
A Highly Biased Review of C# Changes from Version 1.0 to 9.0
A Highly Biased Review of C# Changes from Version 1.0 to 9.0 We’ve lived with C# for two decades. How much has it changed? When C# first appeared in the summer of 2000, it was immediately clear that something was different. At the time, Microsoft was a massively dominant software company, but it wasn’t known for crafting its own programming languages. Instead, Microsoft preferred to build its own implementations of standard languages (like Visual C++ for C++, or Visual J++ for Java), and amplify them with proprietary frameworks (like MFC). Truth be told, they hadn’t created a language that was entirely their own since BASIC. But Microsoft’s “great platform reset” — the shift from COM to .NET — unleashed new ambitions and gave rise to two major new languages. One (VB.NET) evolved slowly over the years before eventually slipping into near-irrelevance. The other was C#. This year is C#’s 20-year anniversary. It’s a massive milestone, two decades after the language was first introduced to an audience of mostly corporate Windows-based developers. My, how things have changed. This year is also the moment we get C# 9.0, a mature iteration of the language that continues to push forward. But to understand where the language is going, it helps to know where it’s been. Here’s the story of C#, in ten highly subjective snapshots. A humble beginning C# 1.0: Focus on OOP Release date: As a preview in 2000, as a full release in 2002 In a nutshell: Microsoft was nervous about the exploding popularity of Java, but had been barred from creating their own non-standard extensions in J++ (and sued by Sun for millions). Their solution was to build a whole new language that stole the best bits of Java. Like Java, C# had a familiar C-based syntax, type safety, a garbage collector for memory management, and a class library stocked with features. The official philosophy was to give developers a streamlined language for object-oriented programming. 
The unofficial goal was to give Windows developers all the best parts of Java without leaving Microsoft’s tight embrace. Notable features: Although C# was deep in the shadow of Java, it didn’t hesitate to make a few changes. Properties and events became official language constructs, not just naming conventions, like they are in Java. (Mads Torgersen, the current C# lead language designer, still muses about whether events were a good idea.) Hidden gem: C#’s metadata system, which allows you to extend code with attributes, was a brilliant move toward open-ended extensibility. A few years later, Java introduced a similar feature in Java SE 5.0. C# 1.2: All Quiet on the Western Front Release date: 2003 In a nutshell: C# 1.2 was released with .NET 1.1, continuing Microsoft’s long tradition of inconsistent version numbering. The changes were minor tweaks. Most notable feature: Stability. Microsoft had gained a bad reputation for frequently replacing The Next Great Developer Thing with The New Next Great Developer Thing. The fact that C# wasn’t being reworked from scratch was reassuring. C# 2.0: More Flexible Classes Release date: 2006 In a nutshell: C# 2.0 was somewhat overshadowed by the hype about .NET 2.0, a major release that added massive new feature areas to the framework (particularly in ASP.NET). But C# wasn’t sitting still. It added its own major enhancements with partial types (allowing you to split a class definition across files, which was useful for design-tool code generation), anonymous methods, and generics. Most notable feature: Generics, easily. Before C# 2.0, many projects began with a similar tradition. First you created a bunch of data classes ( Customer , Product , Order ). Then you created other classes that use these data classes, including collections (like CustomerCollection , ProductCollection , OrderCollection ). It was boilerplate, but it could swamp even trivial applications with type declarations. 
Then came generics — a way to parameterize the same class to work with any type. One List<T> later, and the collection madness was over. C# 3.0: Queries in the Code Release date: 2007 In a nutshell: Barely a year later, it was clear that C# wasn’t going to sit still. Microsoft enhanced all its .NET languages to support LINQ, allowing developers to write data queries directly in their code. At the time, some felt that LINQ was little more than a gimmick to support object-relational mapping tools, like Microsoft’s new Entity Framework. (In other words, they thought that the C# language support was simply there to make these giant code-generation tools workable. After all, who wants to use SQL-like syntax to fetch data from an ordinary in-memory collection?) But in the years since, it’s become clearer that expression support marked a shift in C#’s focus—and its future. Most hyped feature: Query expressions. Real winning feature: Lambda expressions. When C# 3.0 was released, all its expression features were lumped together. But lambda expressions show a key feature borrowed from functional languages — the ability to free code from officially declared functions and use it in more flexible ways. C# 4.0: A Brief Lull Release date: 2010 In a nutshell: Three years had passed since C#’s most ambitious enhancement, and the language developers were focusing on a small set of targeted refinements dealing with interop scenarios. Notable features: None. (Controversial, I know.) There was a nicer syntax for optional function arguments, and some improvements for managing COM data types, although most developers had left that legacy world behind years ago. Some developers argue that the dynamic keyword, introduced to give C# a small island of freedom to break the rules of type safety, was a significant enhancement. But it’s probably more honest to see it as a backdoor that’s most useful when interacting with loosely typed languages. 
C# 5.0: Elegant async code Release date: 2012 In a nutshell: Since its earliest version, C# has supported asynchronous programming patterns. In fact, the concept is built right into its delegate type, and used in the earliest version of SOAP-based web services from .NET 1.0. The problem was that asynchronous code could get messy, and messy code is opaque, and opaque code gives shelter to a thriving population of bugs. C# 5.0 cleaned things up with new language keywords for asynchronous support. Notable features: The async and await keywords. Interestingly, JavaScript borrowed the same syntax years later, in ES2017. And although C# certainly didn’t invent this pattern, it still marks a significant evolution. The language that started off echoing Java and hustling to catch up with features like generics was now confidently asserting its own path, and inspiring other languages. C# 6.0: A Grab Bag of Minor Refinements Release date: 2015 In a nutshell: Unlike C# 2.0, 3.0, and 5.0, there was no marquee feature in version 6.0. Instead, we got some quiet improvements to the way await works in exception handling blocks, a nifty little nameof operator that acts like a kind of lightweight reflection, and smart exception filters. But the most dramatic change was the C# compiler, which had finally been completely rebuilt in C# — a canonical milestone for any mature language — and released as open source. For more insight, check out Mads Torgersen’s account of the Roslyn project. Personal favorite feature: String interpolation solves no problem. In fact, it gives you a way to accomplish something (concatenate strings and variables) that you can already do at least two different ways. But it’s hard not to love the way that it makes long string manipulations refreshingly clean and readable. It just goes to show that the C# designers won’t stop tinkering with even the oldest parts of the language, and looking for ways to improve them. 
C# 7.0–7.3: The Rise of Pattern Matching Release dates: 2017–2018 In a nutshell: The .NET world was in the middle of a revolution, splitting into two branches — the traditional .NET Framework and the new open-source and cross-platform .NET Core. Amid the unrest, C# switched to a system of more frequent point releases. Notable features: Pattern matching represents the start of a new direction. Looking back, pattern matching in C# 7.x feels like half a feature. It’s still too modest to really brag about. But it established a foundation that the next two releases of .NET could build on. There are also plenty of great language trivia topics in C# 7.0 (tuples, anyone?), but nothing truly earth-shaking. C# 8.0: A Steady Trickle of Improvements Release date: 2019 In a nutshell: By C# 8.0, it’s clear that something is different. Over the last few years, the cadence of releases and the pace of language changes has stepped up. No longer are we waiting three years for a dramatic new feature. Now, there’s a steady pipeline of enhancements, often targeting different niches. For instance, C# 8 adds a default interface method feature that lays the groundwork for increased compatibility with Java, even if it risks tempting developers with easily abused power. There’s also the using declaration that automatically calls Dispose() on objects that implement IDisposable when they go out of scope. (It’s like the traditional using keyword, but with no block structure required.) Add to that more specialized features like nullable reference types and asynchronous streams, and it’s clear we’re looking at a mature language that hasn’t stopped growing. Most notable feature: Switch expressions. Borrowing an old, creaky C# keyword ( switch ), Microsoft smuggles in a completely different feature. It’s a kind of functional-programming-done-light. 
C# 9.0: Borrowing functional features Release date: 2020 In a nutshell: C# 9.0 is the first version of the language to be released with the newly reunified .NET 5. It continues to creep closer to functional programming languages like F# (the compiler teams talk often). In fact, it often seems that F# has more influence in the software development world by inspiring C# than it does as a standalone language.
https://medium.com/young-coder/c-sharp-language-changes-from-1-0-to-9-0-b2282e8e30fd
['Matthew Macdonald']
2020-10-16 19:03:37.200000+00:00
['Dotnet', 'Microsoft', 'Csharp', 'Programming', 'Programming Languages']
Excel to Python
Excel to Python Building front-end Excel workbooks for Python tools During the Build 2016 conference, Microsoft announced that 1.2 billion people around the globe were using Excel [1]. That same year, the estimated population of Earth was 7.4 billion [2]. That is 16.2% of all people on Earth. Python comparatively boasts a mere 8.2 million active developers according to a 2019 report [3] — roughly 0.1% of the Earth’s population. With those numbers in mind, it may benefit us to encourage more interactivity between Excel and Python — opening up the floodgates to a swarm of new users for Python-built tools. The opportunity for Excel front-ends for Python is huge. In this article, we will take a look at how to do this, and implement a ‘typical’ finance Excel setup sheet.
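A minimal version of the idea — a workbook cell layout acting as the front end for a Python tool — can be sketched with openpyxl. The sheet name, cell layout, and parameter names here are my assumptions for illustration, not the article’s actual setup sheet.

```python
from openpyxl import Workbook, load_workbook

def read_inputs(path):
    """Read a minimal 'setup sheet': parameter names in column A, values in column B."""
    ws = load_workbook(path)["Setup"]
    inputs = {}
    for name, value in ws.iter_rows(min_col=1, max_col=2, values_only=True):
        if name is not None:
            inputs[name] = value
    return inputs

# Build a tiny example workbook so the sketch is self-contained.
wb = Workbook()
ws = wb.active
ws.title = "Setup"
ws.append(["rate", 0.05])
ws.append(["periods", 12])
wb.save("setup.xlsx")

print(read_inputs("setup.xlsx"))
```

The payoff of this pattern: analysts edit familiar named cells in Excel, while the Python tool treats the workbook as a plain parameter file.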
https://towardsdatascience.com/excel-to-python-79b01638f2d9
['James Briggs']
2020-05-01 19:03:28.818000+00:00
['Data Analysis', 'Excel', 'Python', 'Data Science', 'Programming']
Zynga is Presenting at Game Developers Conference (GDC) 2019!
The Game Developers Conference (GDC) is one of the biggest events in the game development industry and this year it’s taking place in San Francisco between March 18–22, 2019. At GDC, there are presentations and panels about the fun and challenging aspects of game development as well as how different game development teams work and implement new ideas. Zynga GDC Presentations This year at GDC, there are six speakers from Zynga’s studios with presentations covering topics from how to drive engagement for VIPs in games, to AI and Machine Learning applications, to massive database scaling. If you’re going to GDC this year, check out the speaking sessions and presentations from Zynga:
https://medium.com/zynga-engineering/zynga-is-presenting-at-game-developers-conference-gdc-2019-a6a29359c3b5
['Zynga Engineering']
2019-03-12 20:14:30.409000+00:00
['Machine Learning', 'Game Development', 'Game Design', 'Artificial Intelligence', 'Software Development']
Monitor GC stats with a startup hook
.NET core startup hooks are a feature I really like, and I’ve had a lot of fun with them in the past. Still, I had yet to find a legitimate use for them, and the opportunity finally came a few days ago. What are startup hooks? Let’s start with a quick catch-up, for those who don’t know what startup hooks are. The feature was introduced with .net core 2.2, and allows you to execute arbitrary code in a .net process before the Main entry point has a chance to run. This is done by declaring a DOTNET_STARTUP_HOOKS environment variable, pointing to the assembly you want to inject. The target assembly must declare a StartupHook class outside of any namespace, with a static Initialize method. That method is the entry point of the hook. My use-case Back to the story. If you follow me on social media, you might know that I joined Datadog a few weeks ago. I’m working on improving the performance of the .net tracer. As with any performance work, one of the first steps is to set up tests to measure the impact of the optimizations. Datadog already has a reliability environment, where the product is tested against popular applications, and key indicators such as response time, CPU usage, or memory consumption are measured. This was a very good start, but I also wanted to get stats about GC, and more precisely the number of garbage collections. How to measure this? From inside of the process, it’s just a matter of calling GC.CollectionCount . From outside of the process it gets a bit trickier, as performance counters are not available for .net core applications. You can instead use ETW or event-pipes, as my former coworker Christophe Nasarre wrote about back in the day. But this is quite a bit of work, and I was looking for a quick win. I needed an easy and unobtrusive way to inject my code inside of the applications we test. That’s when I remembered startup hooks. 
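The shape of a hook is small. Here is a minimal sketch of the contract just described — the class name `StartupHook` (outside any namespace) and the static `Initialize` entry point are the documented requirements, but the GC-polling body is my own illustration, not Datadog’s actual code:

```csharp
// GCStartupHook.cs — the class must be named StartupHook, outside any namespace.
using System;
using System.Threading;

internal class StartupHook
{
    // Called by the runtime before the target app's Main().
    public static void Initialize()
    {
        var thread = new Thread(() =>
        {
            while (true)
            {
                // Gen 0/1/2 collection counts since process start.
                Console.WriteLine(
                    $"GC: {GC.CollectionCount(0)}/{GC.CollectionCount(1)}/{GC.CollectionCount(2)}");
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        })
        { IsBackground = true };
        thread.Start();
    }
}
```

Compile it as a class library, then point the target process at it: `DOTNET_STARTUP_HOOKS=/path/to/GCStartupHook.dll dotnet MyApp.dll`.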
Using a startup hook to monitor GC collection count The Datadog agent exposes a StatsD interface that can be used to push any arbitrary metric. My plan was to inject a thread in the target applications that would poll the number of collections and push it to the agent. Once you know about startup hooks, this is surprisingly straightforward to implement: The code makes use of the DogStatsD-CSharp-Client nuget package. From there, it was just a matter of adding a DOTNET_STARTUP_HOOKS environment variable, pointing to the hook, to start monitoring any .net core application. Or so I thought. The catch Loading an arbitrary assembly into a process that has no prior knowledge of it comes with (at least) one tricky part: handling references. My startup hook depended on the DogStatsD-CSharp-Client library, which itself had its own references, and all of those weren’t known to the target application at compilation time. This brought its fair share of dependency errors at runtime. Rather than trying to reconcile the errors on a case-per-case basis, I needed a way to isolate my dependencies from those of the target applications. .NET core does not support AppDomain , but brings a worthy successor: AssemblyLoadContext . To take advantage of it, I separated my project into two assemblies: GCCollector , which starts the background thread and pushes the metrics, and GCStartupHook , which is the entry point of the startup hook. Inside, instead of directly referencing GCCollector , I load it through a dedicated AssemblyLoadContext , so that all of its dependencies are isolated: In the implementation of the AssemblyLoadContext , I needed to load all required dependencies. Rather than re-implementing the assembly resolve logic, I took advantage of a new gem brought by .net core 3.0: AssemblyDependencyResolver . The way it works is very straightforward. The resolver is given the path to an assembly, in this case GCStartupHook.dll . 
Whenever ResolveAssemblyToPath is called, it’s going to use the associated deps.json file in order to resolve dependencies just like if that assembly was a standalone application. Incredibly convenient for plugins… or for startup hooks.
https://kevingosse.medium.com/monitor-gc-stats-with-a-startup-hook-55aa03dedea3
['Kevin Gosse']
2020-06-23 17:45:18.208000+00:00
['Software Development', 'Garbage Collection', 'Csharp', 'Dotnet', 'Dotnet Core']
How banks use AI to catch criminals and detect bias
Imagine an algorithm that reviews thousands of financial transactions every second and flags the fraudulent ones. This is something that has become possible thanks to advances in artificial intelligence in recent years, and it is a very attractive value proposition for banks that are flooded with huge amounts of daily transactions and a growing challenge of fighting financial crime, money laundering, financing of terrorism, and corruption. The benefits of artificial intelligence, however, are not completely free. Companies that use AI to detect and prevent crime also deal with new challenges, such as algorithmic bias, a problem that happens when an AI algorithm causes systemic disadvantage for a group of a specific gender, ethnicity, or religion. In past years, algorithmic bias that hasn’t been well-controlled has damaged the reputation of the companies using it. It’s incredibly important to always be alert to the existence of such bias. For instance, in 2019, the algorithm running Apple’s credit card was found to be biased against women, which caused a PR backlash against the company. In 2018, Amazon had to shut down an AI-powered hiring tool that also showed bias against women. Banks face similar challenges, and here’s how they fight financial crime with AI while avoiding the pitfalls. Catching the criminals Fighting financial crime involves monitoring a lot of transactions. For instance, the Netherlands-based ABN AMRO currently has around 3400 employees involved in screening and monitoring transactions. Traditional monitoring relies on rule-based systems that are rigid and leave out many emerging financial threats such as terrorism finance, illegal trafficking, and wildlife and health care fraud. Meanwhile, they create a lot of false positives, legitimate transactions that are flagged as suspicious. This makes it very hard for analysts to keep up with the deluge of data directed their way. This is the main area where AI algorithms can help. 
AI algorithms can be trained to detect outliers, transactions that deviate from the normal behavior of a customer. The data science team of ABN AMRO’s Innovation and Design unit, headed by Malou van den Berg, has built models that help find the unknown in financial transactions. The team has been very successful at finding fraudulent transactions while reducing false positives. “We are also seeing patterns and things we did not see before,” van den Berg explains. Instead of static rules, these algorithms can adapt to the changing habits of customers and also detect new threats that emerge as financial patterns gradually change. “If our AI flags a transaction as deviating from a customer’s normal pattern, we find out why. Based on the available information we check whether the transaction deviates from the normal pattern of a customer. If the investigation does not provide clarity about the payment, we can make inquiries with the customer,” van den Berg says. ABN AMRO uses unsupervised machine learning, a branch of AI that can look at huge amounts of unlabeled data and find relevant patterns that can hint at safe and suspicious transactions. Unsupervised machine learning can help create dynamic financial crime detection systems. But like other branches of AI, unsupervised machine learning models might also develop hidden biases that can cause unwanted harm if not dealt with properly. Removing unwanted biases Data science and analytics teams at banks must find the right balance where their AI algorithms can ferret out fraudulent transactions without infringing on anyone’s rights. Developers of AI systems make sure to avoid including problematic variables such as gender, race, and ethnicity in their models. But the problem is that other information can stand as proxies for those same elements, and AI scientists must make sure these proxies do not affect the decision-making of their algorithms. 
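In miniature, the “deviates from a customer’s normal pattern” idea above amounts to robust outlier scoring. This toy sketch is my illustration of the concept, in no way ABN AMRO’s actual model, which is unsupervised and vastly more sophisticated:

```python
from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Flag amounts far from a customer's typical spend, using the
    median absolute deviation (robust to the outliers themselves)."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no variation in history: nothing stands out
    # 0.6745 makes the score comparable to a z-score for normal data.
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# A customer with steady ~40 EUR transactions and one huge transfer:
history = [42.0, 38.5, 45.0, 40.0, 39.5, 41.0, 2500.0]
print(flag_outliers(history))  # [2500.0]
```

Using the median rather than the mean matters here: one large fraudulent transfer would drag a mean-based threshold up far enough to mask itself.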
For instance, in the case of Amazon’s flawed hiring algorithm, while gender was not explicitly considered in hiring decisions, the algorithm had learned to associate negative scores to resumes with female names or terms such as “women’s chess club.” “For instance, when AI techniques are to be used to identify clients suspected of criminal activity, it must first be shown that this AI treats all clients fairly with respect to sensitive characteristics (such as where they were born),” van den Berg says. Lars Haringa, a data scientist in van den Berg’s team, explains: “The data scientist who builds the AI model not only needs to demonstrate the model’s performance, but also ethically justify its impact. This means that before a model goes into production, the data scientist has to ensure compliance regarding privacy, fairness, and bias. An example is making sure that employees don’t develop biases as a result of the use of AI systems, by building statistical safeguards that ensure employees are presented unbiased selections by AI tools.” The department that’s responsible for the outcome of the transaction monitoring analyses also takes responsibility for fair treatment. Only when they accept the work and analyses by the data scientist can the model be used in production on client data. ABN AMRO’s transaction monitoring team measures potential bias upfront and periodically to prevent these negative effects. “At ABN AMRO, data scientists work with the legal and privacy departments to ensure the rights of clients and employees are safeguarded,” van den Berg tells TNW. Balanced cooperation One of the challenges companies using AI algorithms face is deciding how much detail to reveal about their AI. On the one hand, companies want to take full advantage of joint work on algorithms and technology, while on the other, they want to prevent malicious actors from gaming them. And they also have a legal duty to protect customer data. 
“To safeguard algorithm effectiveness, like all other models within banks, there are several critical stakeholders in model approval: besides the model initiator and developers, there is Model Validation (independent technical review of all model aspects), Compliance (e.g. application of regulation), Legal, Privacy, and Audit (independent verification of all proper processes, including the integrity of the entire chain of modeling and application),” van den Berg says. “This is standard practice for all banks.” ABN AMRO does not publish the details of its anti-crime efforts, but there is a strong culture of knowledge sharing, van den Berg says, where different departments put their algorithms and techniques at each other’s disposal to achieve better results. But at the same time, there are high restrictions on the use of customer data and statistics. ABN AMRO is also sharing knowledge with other banks with the same restrictions. Where there’s a need to share data, the data is anonymized to make sure customer identities are not revealed to external parties. Banking, like many other sectors, is being reinvented and redefined by artificial intelligence. As financial criminals become more sophisticated in their methods and tactics, bankers will need all the help they can get to protect their customers and their reputation. Sector-wide cooperation on smart anti-financial crime technologies that respect the rights of all customers can be one of the best allies of bankers around the world.
https://medium.com/abn-amro-developer/how-banks-use-ai-to-catch-criminals-and-detect-bias-4a261f6de4ad
['Abn Amro']
2020-12-07 15:57:43.076000+00:00
['Abn Amro', 'Innovation', 'AI', 'The Next Web', 'Future Of Finance']
What Is Martial Law and How Could Trump Declare It
To begin with, martial law isn’t described in the Constitution. Declaring it is a prerogative, on a Federal level, of the President and Congress, but there isn’t much more to go with. Its specifics aren’t outlined anywhere. Quoting a Syracuse Law professor, “If someone has declared martial law, they’re essentially saying that they are the law.” Martial law is intended to replace failed civilian institutions. In dire moments, when the very fabric of civil society is collapsing, when civilian jurisdictions cannot administer justice, martial law can be declared to provide the country with a structure and ensure its continuity. Martial law has been declared at least 68 times in American history. Whether to enforce Federal laws against local resistance (in the South after the Civil Rights Act), to restore peace after riots, or to ensure the safety of cities ravaged by natural disasters, the applications of martial law are broad but have always been geographically limited. Never has a state of Federal martial law been declared. The main consequence of martial law is the suspension of civil liberties. Under such a regime, “the right to be free from unreasonable searches and seizures, freedom of associations, and freedom of movement” can all be suspended. Most importantly, the writ of habeas corpus can be suspended too. This means that there would be no more legal recourse against wrongful imprisonment. Whoever declares martial law is free to arrest and detain anybody for any reason. Martial law has some limitations. The Posse Comitatus Act prevents federal troops from enforcing domestic law. A major exception to note is that National Guard units are exempt from the Posse Comitatus Act because they take their orders from the state governors. The Insurrection Act, however, provides for a loophole that allows the use of troops for federal law enforcement in cases when “rebellion against the authority of the U.S. 
makes it impracticable to enforce the laws of the U.S. by the ordinary course of judicial proceedings.” The Insurrection Act isn’t martial law. While they appear similar, they are two very different mechanisms. The Insurrection Act is designed to allow military enforcement of civilian laws and jurisdiction, not to replace them entirely. Martial law operates in a very grey area. To quote the Syracuse Law professor again, “One of the problems, of course, is that there’s nothing to prevent the president or a military commander from declaring martial law. They can do it. It’s not sanctioned by law.” A few people are key to preventing martial law. Chief among them is the Defense Secretary. Until recently, this was Mark Esper, who was quoted in a memo to the troops as saying: As citizens, we exercise our right to vote and participate in government. However, as public servants who have taken an oath to defend these principles, we uphold DoD’s longstanding tradition of remaining apolitical as we carry out our official responsibilities. Mark Esper was fired on November 11 by Donald Trump. Another key figure is the Chairman of the JCS, Mark Milley, who stated in an NPR interview:
https://ncarteron.medium.com/what-is-martial-law-and-how-could-trump-declare-it-35945135d91f
['Nicolas Carteron']
2020-12-20 20:49:20.882000+00:00
['Election 2020', 'Trump', 'Politics', 'Law', 'Society']
On the other side
I didn’t fall to break, rip your beating heart with lies and excuses; with selfish cravings like beasts of the wild. I fell to heal, to taste of wholeness’ bliss bound to you in happiness; a primal connection like finger and nail. I fell too hard, broke love’s fountain of mesmerizing purity; now to the pieces, I grovel to mend in repentance.
https://medium.com/a-cornered-gurl/on-the-other-side-1f814b94b93c
["Ngang God'Swill N."]
2020-06-01 11:16:00.879000+00:00
['Poetry', 'Healing', 'Music', 'Heartbreak', 'Love']
Adapting a Truck to Be Autonomous
Nandhini Botta, Starsky Robotics’ Electrical Engineer, on building self-driving trucks, challenges of being a woman in the tech industry, and some best practices of how to constantly grow yourself. Nandhini, you’re an electrical engineer working on autonomous trucks. Can you share a little bit about what it is that you do? What a typical day for you is like? At Starsky, our design process mostly involves adapting trucks to work with our self-driving system and that’s where I come in. As an electrical engineer, I work on the truck most of the day, doing design, researching, and repairing. We want our system to be able to work on different models of trucks, which means we need to know them really well and learn what went behind the design process of all those models. Trucks are complexly engineered systems, and even two vehicles of the same vendor might be very different. So, a huge part of my job is figuring out why that engineering design process took place. Why did they do this or that? I need to understand why it was built that exact way, learn the different features that are in the truck, and figure out how we can safely put our system on it. The opportunity provided by Starsky to adapt the different versions and models of trucks has given me a great insight into seeing the evolution of electronics in trucks. I like to look at it as an expedited education for me to learn about the different failures or issues the truck manufacturers have seen and transformed their design for it. Not only does it help me to design better but also it helps me evaluate best practices across different approaches. Electronics in an automobile are built to increase the reliability of the system. The complex electronic system that controls a truck can be broken into more simple redundant and reliable parts that would cause the truck to safely stop or raise a warning if any module fails. There is elegant simplicity in the systems that has been exciting to learn. 
Did you always know that working in the tech space was what you wanted to do? What made you decide to pursue a career in engineering? And how did you end up with driverless trucks? That was a very natural decision for me. I’ve always been inclined towards science, and math in particular. Actually, when I was a kid, I wanted to be a veterinarian, but I didn’t have a knack for biology. So, I decided to develop my math skills further and study engineering. Later, I adopted a dog and joined Starsky, which is a dog-friendly workspace. This allowed me to combine my love for animals and engineering. I had a choice between computer and electrical engineering. But I always liked building things and seeing things. So, doing electrical engineering was like a sweet spot for me because I could still code but also be able to build something tangible. That’s something I still enjoy about my job. The reason I chose to work on driverless trucks is that it’s an innovative and challenging space. It’s a completely new field, so I can’t just go online and find all the information I need. I have to figure it out by myself and then build it from scratch. The fact that I can do something that nobody else has ever done is very exciting. It’s no secret that many women in the tech industry feel their gender affects the way that they are perceived. Have you ever been in a situation like that? If so, how did you handle it? True, there are still fewer women in the tech industry than men. Actually, Starsky has more female engineers than any other company I worked at before. For example, at my previous company, I was the only woman on the team, and there were just two women in a department of about 60 engineers. Overall, I think that things are getting better in the industry. People are more careful these days. However, there are still some annoying situations happening here and there. 
For instance, if you got a job, there’s always this one person who will say, “Oh, she got the role only because she is a woman.” But that’s something that constantly forces you to work hard and prove that you got the job because you deserve it. I’m also an introvert. I speak softly, and I know I can be spoken over. That’s something I stay conscious of. One of the techniques I used a lot in my previous workplace is to find people who support you. If I need to get into an argument with someone who already has an unconscious bias because I’m a woman, it’s always good to know that I have the support of others. Over time, it really helps when people have confidence in you. Don’t shut yourself off because one person does not agree that you should be there at the table. Just keep going and gain confidence. At Starsky, it’s all very different. We have a very diverse team of professionals. There’s an inclusive company culture and a great collaborative environment where everyone’s opinions and ideas matter. What’s more, Starsky’s management models this inclusive behavior and empowers team members at every level to contribute. I think this is an amazing example of how it should be done in any company, regardless of its size.

What advice would you give to those who are considering a career in the tech industry or the autonomous space in particular? Maybe what you wish you had known?

I think it’s common that, as soon as people graduate and get out of college, they feel they’re done studying. I have to admit that I had that same assumption when I started. But very soon, I realized that you have to constantly grow and always keep learning. When you’re in college, you have pretty clear goals every year, like finishing five subjects or passing this or that exam. Graduation only means that, from now on, you need to set the same kinds of personal goals for yourself. This will help your career a lot.
Otherwise, once you join an industry and get very involved with a company, you will probably soon realize that you’re not making progress any longer. Simply doing your job well is not enough if you want to keep growing. Never forget about your own professional development. It will benefit not only your career, but also the company you are working at. That’s something I learned from personal experience. Early in my career, I was too focused on the company’s projects, going above and beyond for my team’s goals. But soon I realized that yes, my team is doing well, but personally, I’m getting too tied to the team and cutting off the options that I have out there. So, keep your experience diverse. Search for new learning opportunities at work, go to workshops, attend conferences, participate in meet-ups, take classes, keep reading research papers, and be curious about what other people are doing. There’s such a broad spectrum of opportunities out there.

Can you talk a little bit about your personal goals or projects that you’re doing?

I have a personal pro bono project where I help people with disabilities communicate. I design low-cost devices to make computers more accessible for children and adults with limited motor abilities. I’ve been doing this for a pretty long time. My mom is the principal of a school for kids with disabilities in India; she has been working there for 20 years. So, I used to spend a lot of time there. I literally grew up at that school, and I was a teacher there for six months between high school and college. So, I know the use case very well. Although I’m far away now, I still serve as a consultant and help guide engineers. For example, I give them a design and get people to connect so that they can complete the project. I do this mostly on weekends. I’m very grateful for this opportunity to use my knowledge and expertise to make a difference. I like doing these kinds of projects; it gives me happiness.
*** If you want to join the Starsky team and help us get unmanned trucks on the road, please apply here.
https://medium.com/starsky-robotics-blog/adapting-a-truck-to-be-autonomous-56cf00f2646
['Starsky Team']
2019-05-20 18:02:14.094000+00:00
['Engineering', 'Tech', 'Trucks', 'Autonomous Cars', 'Self Driving Cars']
Democracy and the Principle of Imminent Collapse
Democracy and the Principle of Imminent Collapse Avoiding The Nudge field of dominoes The Principle of Imminent Collapse states that, by circumstance or consequence, everything in our universe is on the verge of failure and all it takes is a nudge to bring it down. This principle extends upward and outward to the Universe and inward and downward to terrestrial and mundane examples of river pollution and celebrity personalities shooting their mouths off. That latter example also embodies the saying “give a man a long enough rope and he’ll hang himself with it.” The existence of democracy itself is an improbable phenomenon in a heavily populated global civilization. Today a multitude of factions vie for recognition and participation in the global forum. Even the inventors of democracy, the Greeks and later the Romans, did not allow just anyone to cast a vote for or against a law or action of the state. Athens was full of Athenians who voted and a slave class of people who did not. One did not merely move to Athens or any of the other city-states and sign up as a voting citizen. Then, there was a ruling class of learned and/or powerful people who constituted the intelligentsia who could plan and implement the business of the state. The blacksmiths, the stonemasons and the shepherds at first had no interests of state, nor any say in the matters of state. Slaves certainly were not valued as contributors to the running of the state. Today, we hold as self-evident that everyone should have a say in selecting the representatives who will ultimately conduct the state’s business. But the plebiscite is only for the lesser offices, not the decision-making process that determines when and with whom we go to war, whom we obliterate in bombing raids, and how we reduce the enemy to a quivering mass of humble flesh. Our learned and powerful representatives meet to make a show of supporting or condemning each and every law, penalty, budget item and regulation.
Men and women stand for election and re-election to be the ones who cast their votes for and against the things that are to be for the good of the people. It is difficult, however, to determine what is truly good and not good for the people. A tax might allow the community to have a safe bridge to cross into the next county or state, but the very same tax might send marginal family incomes into a fatal downward spiral toward insolvency. Reducing tax burdens may allow taxpayers to buy essential goods and services, but the lack of funding for healthcare leads to poorer aggregate health and higher costs later. This fiscal relationship may impact the same individual directly. Lack of spending on public transportation services for people who do not drive causes their friends and families to have to bear the cost of the transportation themselves. In the 4th to 3rd century BC, there were no multi-national corporations that earned multi-billion dollar profits while neglecting human needs and exploiting cheap labor and the masses’ need for some commodity. Well, there was salt; and salt mines; and people who were sent to the salt mines. People needed salt; wanted salt; were sometimes paid in salt. And if there had been wildly profitable corporations in ancient Greece, they certainly would have felt justified in paying the campaign expenses of a Senator or Caesar or two. Democracy has become synonymous with Capitalism in the American sense. Ironically, it has become synonymous with Communism and Marxism in South America. In the American sense, bringing democracy to the people of a country means deposing a dictator and opening their markets to American investors. But all of that aside, democracy must spring forth on its own from its intrinsic nature, from the unified will of those people who are the ‘demo’ of the ‘cracy’. One cannot force a people to become democratic. When a people finally do decide that a voting system for representatives is what they want, it is a fragile thing indeed.
Young democracies typically see a dozen or more parties and affiliations emerge to promote a candidate to tilt at the windmill of the incumbent, who probably seized power in a coup and held it for many years before being pressured to step down or deposed by CIA-backed insurgents. A few assassinations of prominent candidates or a couple of improvised explosive devices tossed into the polling places can undo any gains toward stabilization of a new government. Everyone has his particular agenda to promote in these latter days of democracy. Just imagine even the 13 states which were the original 13 revolutionary colonies of England trying today to agree on the Constitution, the Bill of Rights, and the whole three branches of checks and balances of the Federal government, along with its expanded powers and authorities as acquired since the First Continental Congress. Britain would have had ample opportunity to sneak back in and re-establish colonial rule while we were bickering over who had the prettiest powdered wig and curls or the flashiest shoe buckles. We name-call and degrade the personal attributes of those who would otherwise be wise statesmen and women. We mash together a bit of truth with a dash of lie, then fold in a whole lot of opinion, then blend it all with innuendo and revisionist history, and half-bake it. For all that effort we get a democracy soufflé that is fluffy, puffy and light. If we are not careful, a single nudge will make it fall in on itself. We have all been busy erecting the thin scaffold of party politics in a race to get one over on the competition, all the while neglecting its long-term stability. Nobody sneeze just now. The acid rhetoric between candidates of even the same party, let alone candidates of opposing parties, has eroded the stone foundations of our nation. Senators and Congress-persons cannot debate the merits of any proposed legislation without smearing the reputations of every legislator in opposition.
Where is the end of this contention? Will we reach Extinction or a New Equilibrium? While legislators and their backers spit vitriol at each other, the interest on the national debt grows by the hour. That doesn’t even include the increased spending and the growing obligations for budget items to which we have a long-standing commitment. The American Presidents of recent administrations, along with all of their advisors, financial backers and mouthpieces, have led the nation to the precipice, and it will take only a nudge to send us tumbling into the morass. The Principle of Imminent Collapse postulates that it takes only The Nudge to manifest the imminent collapse. The Soviets discovered the Principle of Imminent Collapse as they kept trying to compete in the Cold War against the much larger and more sophisticated economic machine of the West. They were drawn into an intractable military conflict in Afghanistan and bankrupted themselves. The Soviets put themselves in the position of imminency, and the West provided the Nudge. Today the financial sector of Western Capitalism has put itself in that very same position, and little groups of dissenters plot how to create that Nudge. There are so many possible Nudges to use. One or more in combination are so easy to employ when the tower is so flimsy and lousy with rot. Democracy and the Western financial system are extensively intertwined. The true electorate is the businesses which buy and sell influence to create and maintain a climate which is conducive to the long-term survival of enterprise life, even as the atmospheric climate becomes hostile to biological life. In the Terminator movie franchise, it is sentient machines that rise up and take over the world. Orwell wrote of Big Brother, who rules the future (now the past) of 1984. In both worlds we the people failed to heed the prime warning: make a system as powerful and extensive as you want, but do not let it have control of its own on/off switch.
Switching off the machines and enterprises upon which we have come to depend may be painful, but when they run amok it sometimes becomes necessary. Either way, we have to live with the consequences, even when the clock of progress is set back too. One significant flaw inherent in democracy is that the people can vote to do something that is totally wrong and destructive. The people can vote to stop being their “brother’s keeper” and let the less fortunate among us sink. When we allow that to happen, we have already set in motion our own decline.
https://modalchoice.medium.com/democracy-and-the-principle-of-imminent-collapse-c827cf65652b
['Robert Carlson']
2020-02-09 19:15:33.916000+00:00
['Dominoes', 'Society', 'Politics', 'Democracy']
Making Your React Rails Application Update Real Time
Many applications in common use need to update in real time, and we can’t achieve this with the HTTP protocol alone. Meet WebSockets, a communication protocol that allows real-time data transfer between the server and the client. Unlike the HTTP protocol, where communication happens via request and response, WebSockets maintains a constant connection for data transfer. This allows the server to send content to the client without it being first requested by the client, facilitating real-time data transfer. In this blog post, I will be creating a real-time message board with a React client and a Rails server, using Rails ActionCable and the native JS WebSocket class. The application will not include Redux, for simplicity. For anyone interested in integrating WebSockets with React Redux, I found this article very helpful. The application (GitHub) lists the current messages, and any new posts will be updated in real time.

What is Rails ActionCable? In Rails, ActionCable is what allows a connection to the server via WebSockets. Users connect to the ActionCable server via the route specified in config/routes.rb — we will get to that in a second. Once connected, the client (or consumer) can create multiple “subscriptions” to one or more different “channels” using that connection. For example, a user connects to the ActionCable server by visiting your website. Once connected, they can subscribe to the “RoomsChannel”, “UsersChannel”, and/or “MessagesChannel” to receive live data from them. These channels then “broadcast” data, which gets sent to everyone that is subscribed. This is called a publisher/subscriber communication paradigm: anyone subscribed to a channel receives the data when the publisher broadcasts it — there is no specification of individual recipients.

Rails API

I generated a new application called “websockets” with a resource message, which has the attribute :content. The database has been set up with basic seed data. Make sure to enable CORS.
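The publisher/subscriber paradigm described above can be made concrete with a toy in-memory sketch (illustration only; the class and method names are mine, and ActionCable of course does this over a WebSocket rather than in one process):

```javascript
// A toy publisher/subscriber bus. Subscribers register a callback for a
// named stream, and broadcast() invokes every registered callback with the
// payload. The publisher never names individual recipients: whoever is
// subscribed at broadcast time receives the data.

class TinyPubSub {
  constructor() {
    this.streams = {}; // stream name -> array of subscriber callbacks
  }

  subscribe(stream, callback) {
    if (!this.streams[stream]) this.streams[stream] = [];
    this.streams[stream].push(callback);
  }

  broadcast(stream, payload) {
    (this.streams[stream] || []).forEach(cb => cb(payload));
  }
}
```

Two subscribers to a "messages" stream both receive a single broadcast, which is exactly the behavior the MessagesChannel relies on.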
rails new websockets --api
bundle install
rails g resource message content:string

We need to tell Rails how the client will connect to the ActionCable server: /cable will be routed to ActionCable.

#config/routes.rb
Rails.application.routes.draw do
  resources :messages
  mount ActionCable.server => '/cable'
end

Next, we need to create a channel that the consumer will subscribe to: MessagesChannel.

rails g channel Messages

This will generate messages_channel.rb in the app/channels folder, with the methods subscribed and unsubscribed. This is what my MessagesChannel looks like after adding the actions I will need for subscribing and creating a new message.

#app/channels/messages_channel.rb
class MessagesChannel < ApplicationCable::Channel
  def subscribed
    stream_from "messages"
    messages = Message.all
    ActionCable.server.broadcast("messages", {type: "current_messages", messages: messages})
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def create_message(data)
    message = Message.create(content: data["content"])
    messages = Message.all
    ActionCable.server.broadcast("messages", {type: "current_messages", messages: messages})
  end
end

There are two methods to note: subscribed and create_message. subscribed() is invoked when a consumer subscribes to the MessagesChannel. It begins streaming from the stream called “messages,” which means that anything broadcast to “messages” will now be received by this subscriber. It also makes an initial broadcast, so that once the consumer is subscribed they receive all the current messages. ActionCable.server.broadcast takes as its 1st arg the stream to broadcast to, and as its 2nd arg the content. In this example, the content object specifies the “type” and “messages”. Next, the create_message action will be used to create a new message from the client’s form input, and also rebroadcast the newly updated array of messages to the “messages” stream.
This is the core of real-time updates: rebroadcasting the new instance (or whatever changed) to all the current subscribers upon creation.

Create Client

Create the React application, called ‘client’, then start the server and the client:

npx create-react-app client
rails s -p 3001
cd client && npm start

Connecting and Subscribing using WebSockets and React

The overall flow of connecting and subscribing client side: When the App component mounts, it will start connecting to Rails ActionCable via the url ws://localhost:3001/cable. The WebSocket connection will be made using the native JS class WebSocket. By attaching handlers like onopen, onclose, and onmessage, we can instruct what to do when the connection is successful or when the connection receives data from the server. Using a Promise, I will subscribe to MessagesChannel AFTER the WebSocket has successfully connected. Once subscribed to the channel, any data that gets broadcast will be handled by the onmessage handler. Here, the callback function for onmessage will set state.messages to the received data, which gets rendered by the component.

// /client/src/App.js
class App extends React.Component {
  state = {
    newMessage: "",
    messages: [],
    socket: null
  }

  submitHandler = event => {
    event.preventDefault();
    //=> send state.newMessage to channel via WebSockets - will get to this later
  }

  changeHandler = event => {
    this.setState({ newMessage: event.target.value })
  }

  render() {
    return (
      <div className="App">
        Messages Board:
        <ul>
          {this.renderMessages()}
        </ul>
        <form onSubmit={this.submitHandler}>
          <input type="text" onChange={this.changeHandler} value={this.state.newMessage} name="newMessage"/>
          <input type="submit" value="Post"/>
        </form>
      </div>
    );
  }
}

export default App;

This is the component App that handles all the connections and rendering.
I have renderMessages(), which will render state.messages as <li> elements. I have a state-controlled form that will send state.newMessage to the server via WebSockets. State has the attributes messages, newMessage, and socket.

EDIT: I realized the WebSocket JS class is not very compatible with Rails ActionCable. It wouldn’t allow for basic things like unsubscribing from an ActionCable channel. Therefore, I had to switch over to actioncable to create the connection and create subscriptions. Check out the edited component here => App.js. I also found Daniel’s blog post very helpful in utilizing actioncable in my components.

// /client/src/App.js
//... in App Component
connectToActionCable = host => {
  return (
    new Promise((resolve, error) => {
      //create and connect
      let socket = new WebSocket(host);
      //handlers
      socket.onopen = () => {resolve(socket)};
      socket.onclose = () => {console.log('ws closed')};
      socket.onerror = errors => {error(errors)};
      socket.onmessage = event => {
        let payload = JSON.parse(event.data);
        if (payload.message){
          switch (payload.message.type) {
            case 'current_messages':
              this.setState({ messages: payload.message.messages })
              break;
            default:
              break;
          }
        }
      };
    })
  )
}

componentDidMount(){
  this.connectToActionCable(`ws://localhost:3001/cable`)
    .then(socket => {
      this.setState({ socket: socket })
      const subscribe_info = {
        command: 'subscribe',
        identifier: JSON.stringify({channel: "MessagesChannel"})
      }
      socket.send(JSON.stringify(subscribe_info));
    });
}
...

# app/channel/messages_channel.rb
...
def subscribed
  # stream_from "some_channel"
  messages = Message.all
  stream_from "messages"
  ActionCable.server.broadcast("messages", {type: "current_messages", messages: messages})
end
...

On componentDidMount(), I call connectToActionCable, which returns a Promise. The promise creates a new WebSocket connection with the handlers onopen, onclose, onerror, and onmessage. Take note of onopen and onmessage, as we will need these later.
onopen will resolve the promise, passing in the socket connection. onmessage is called whenever data is sent to the connection from the server. The objects broadcast by a channel (ActionCable.server.broadcast("stream_name", data_object)) will be in event.data.message. Here, onmessage parses the data as JSON and uses data.message.type with a case statement to decide what to do — this structure is helpful later on, when you will be receiving multiple different types of data from the ActionCable server, i.e. “delete” or “create”. The “current_messages” case will set state.messages to the messages received. Once the connection is successful, the chained .then() will set state.socket for later use, and also use socket.send() to subscribe to the MessagesChannel. Remember, once we are connected to ActionCable, we must subscribe to specific channels. The initial subscription will start the stream and do an initial broadcast, which will trigger the onmessage handler, setting the state, which then gets rendered.

Creating A New Message and Rebroadcasting

// client/src/App.js
//... in App component
// onSubmit for form for new message
submitHandler = event => {
  event.preventDefault();
  const newMessageInfo = {
    command: "message",
    identifier: JSON.stringify({channel: "MessagesChannel"}),
    data: JSON.stringify({action: "create_message", content: this.state.newMessage })
  }
  this.state.socket.send(JSON.stringify(newMessageInfo))
}

# app/channel/messages_channel.rb
...
def create_message(data)
  message = Message.create(content: data["content"])
  messages = Message.all
  ActionCable.server.broadcast("messages", {type: "current_messages", messages: messages})
end
...

The submit handler sends state.newMessage and also instructs which “action” should be invoked in the channel. WebSocket.send() is used to send data over a WebSocket connection. The create_message action will create the new message in the database and rebroadcast the updated array of messages to all subscribers.
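The subscribe_info and newMessageInfo objects built in componentDidMount and submitHandler share one shape: an ActionCable frame whose identifier and data fields are themselves JSON strings. If you end up sending more actions, that shape can be factored into small helpers (a sketch; the function names are my own, not part of the original component):

```javascript
// Build the raw frames the ActionCable protocol expects. Note the double
// encoding: identifier and data are JSON strings nested inside the outer
// JSON frame, matching subscribe_info and newMessageInfo above.

function subscribeFrame(channel) {
  return JSON.stringify({
    command: "subscribe",
    identifier: JSON.stringify({ channel })
  });
}

function actionFrame(channel, action, data) {
  return JSON.stringify({
    command: "message",
    identifier: JSON.stringify({ channel }),
    data: JSON.stringify({ action, ...data })
  });
}

// e.g. socket.send(subscribeFrame("MessagesChannel"));
//      socket.send(actionFrame("MessagesChannel", "create_message", { content: "hello" }));
```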
As a result, all WebSocket connections subscribed to this channel will update their component states via the onmessage handler. Now, our application updates the message board in real time with any new messages posted by other consumers! Thanks for reading; I hope you found this helpful. Any feedback is appreciated!
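One closing note: the dispatch logic inside onmessage can be pulled out into a pure function that maps a raw event.data string to the state change it implies, which makes the switch easy to unit-test without a live socket. This is a sketch (the function name is mine); it mirrors the payload.message.type switch used in App.js:

```javascript
// Pure version of the onmessage dispatch: takes the raw event.data string
// from the socket and returns the state slice to merge, or null for frames
// we ignore (ActionCable also sends welcome/ping/confirmation frames that
// carry no "message" key).

function reduceCableEvent(rawData) {
  const payload = JSON.parse(rawData);
  if (!payload.message) return null;
  switch (payload.message.type) {
    case "current_messages":
      return { messages: payload.message.messages };
    default:
      return null;
  }
}
```

Inside the component, the handler then shrinks to: const change = reduceCableEvent(event.data); if (change) this.setState(change);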
https://arth3rs0ng.medium.com/making-your-react-rails-application-update-real-time-f02a5ff83546
['Arthur Song']
2020-06-04 17:40:04.321000+00:00
['Ruby on Rails', 'Websocket', 'Software Development', 'React', 'Webdev']
The Problem with Polarizing Content
The Problem with Polarizing Content Society always pays the price Photo by Nihal Demirci on Unsplash As a content creator, if you want to get thousands of eyeballs on your work, make sure you offend. Prime your audience for a fight by goading them into clicking, and slap their outrage awake one cliché at a time. On platforms that carry advertising, it isn’t quality or ethics that matter but how long a user stays on the site. Engagement is advertising revenue, and the more controversy a platform hosts, the more successful both it and those who supply it with the content of discontent become. But this simple math often causes non-tech folks to overlook the human aspect of the internet. Most of us in the West carry it in our pocket, and we rely on it for every aspect of our daily life, including but not limited to information, education, and entertainment. We don’t just live on the internet together; we are it.
https://kittyhannaheden.medium.com/the-problem-with-polarizing-content-68c6c297b0b0
['A Singular Story']
2020-03-10 14:06:00.624000+00:00
['Philosophy', 'Social Media', 'Tech', 'Future', 'Media']
Wishing for a Simpler Time and a Simpler Ignorance
When we all drank the Kool-Aid. I have spent a lot of time thinking about my discovery, at 61 years of age, that I was transgender. It still shocks me, even though I have spent almost every minute of every day of the four years since thinking about it. I have reached the point in my journey where I have to decide the final direction of my gender life. At 65 I don’t have an endless amount of time to ponder this question. I have exhausted all of the middle-ground solutions for my gender dysphoria and they are not enough. It is “put up or shut up” time, and I am truly afraid. I have learned a lot about myself in the four years since I started this journey. I have learned that I was born with a brain wired female into a body that was built male. I have realized that I grew up in a simpler time, the 1950s and ’60s. The world was uncomplicated; it was either male or female. You didn’t have to think. Thinking was actually frowned upon. The doctor slapped me on the butt, declared my gender at birth, put it on my certificate, and I never looked back. Friends, family and society were satisfied with the decision. I was a little confused, but I suppressed my sense of gender and went on with the rest of my life. But I wasn’t entirely settled. As a child I wished I was a girl. I wanted girl things. I was corrected and successfully funneled into “guy” stuff. As I grew older I still daydreamed of being a girl and I wanted girl stuff, but I believed that I would eventually outgrow it. I didn’t, but I eventually rationalized that I was simply a guy with a harmless fetish for women’s clothes. Not perfect, but not dangerous, entirely personal, and an easy secret to hide. For the rest of my life I thought the fetish was the source of my constant wish that I was a girl.
My defensive wall of gender denial was so cleverly constructed that it took near-suicidal panic attacks in my 61st year, followed by years of deep therapy and scathingly intense self-analysis, before I realized I had it backwards. I did not want to be a girl to satisfy a fetish; I had a desire for feminine things because I was female. My true gender was suppressed by male socialization, gender ignorance and a lifetime saturation of testosterone. I was thunderstruck by this discovery. Suddenly everything in my life clicked into place. I could not deny my reality, as much as I desperately tried to. This revelation was destroying everything in my life. And still, my journey continued. Along the way I discovered Emma. I discovered that I had been Emma all my life. She had always been a part of me. She made me who I am. She just never got the credit. She was hidden and denied all these years, but the joy of discovering her and finally embracing her has been huge and special for me. Now comes the hard part: do I stay as I physically am, or do I need to see Emma in the mirror every day for the rest of my life? Does she/I need to be seen by others? My decision changes by the second. At this moment I miss five years ago. It was a simpler time. I had this unknown gender “itch” that was still not a pain. I was happily gender ignorant. I fit in the world I grew up in. It was blissfully unaware and totally binary, and I was totally naïve. Now my world is filled with a brutally hard decision, and I truly don’t know if I have the courage to make it. I miss the simple ignorance*. Emma Holiday *Honestly, I really don’t. Please also read: Writer’s note: If you have read any of my writings on Medium you will have noticed a definite theme: the incredible pain of gender dysphoria and all the difficult aspects of just being transgender. My writing has three specific goals: 1. Writing is my therapy.
I have a very limited outlet for my thoughts, so I write to find a way to process the most profound experience of my life. I need to understand, and I need to accept myself to move forward. 2. Being transgender, for me, is a very lonely existence, and if I can share with others who are transgender some of the things that I feel and think as I go through the process of transitioning and, in some way, lessen their pain and sense of loneliness, then all of this public exposure of my personal thoughts is not a waste. 3. I write to help cisgender people understand that all trans people want is to simply be understood, accepted and treated as a normal person. We are.
https://medium.com/prismnpen/wishing-for-a-simpler-time-and-a-simpler-ignorance-a4d09c9e367
['Emma Holiday']
2020-12-15 09:07:31.694000+00:00
['Humanity', 'LGBTQ', 'Transgender', 'Society', 'Life']