Dataset schema (field: type, character-length range):
title: string, length 1–200
text: string, length 10–100k
url: string, length 32–885
authors: string, length 2–392
timestamp: string, length 19–32
tags: string, length 6–263
#KnowYourNDCs: NDC: Redefining The Transport System in Nigeria — by @JameoMac
Migration affects both rural and urban settlements and is driven by cultural, environmental, social, human, political, and economic factors; its consequences are multilayered, and they have weighed heavily on our poor transport system, characterised by dilapidated and eroding roads known for countless mishaps. With this increased vehicular movement, our transport system, backed by unimplemented policies, manned by undisciplined road marshals, and complemented by incompetent drivers, has decayed further, leaving immeasurable emissions of greenhouse gas in its wake; as such, our transport system needs a climate change re-orientation and experts to enforce policy implementation. Given the rate of deterioration of our road infrastructure, particularly our federal highways rather than intrastate roads, a NO will suffice. With the right road construction materials, civil engineering expertise, policy implementation, and a progressive maintenance plan for both our urban and rural road infrastructure, our commuter routes will accommodate any stream of vehicular traffic. Looking back at the Paris Agreement, under which NDCs commit countries to reducing greenhouse gases and increasing resilience, the transport sector was identified, and still is, a key player in mitigating climate change. Given that Nigeria’s transport sector is motorizing at full tilt, achieving NDC goals requires that sectoral and private stakeholders start and continue the implementation of the Paris Accord as it pertains to the NDC. With careful consideration of its carrying capacity, daily distance covered, the combustion properties of its engines, and the economic impact it has on Lagos State, the BRT-Lite, being the first of its kind in Sub-Saharan Africa, has improved our transport system, displacing the “Molue” with its trademark black-carbon footprint and ensuring commuters’ seating safety along its routes. Lagos Bus Services Limited (LBSL) was launched in line with the NDC targets set for 2030; agreed, it is not an immediate solution to Nigeria’s carbon emission challenges, but if implemented to the letter it will add positive gloss towards achieving our NDC. The action points:
Improve and upgrade existing infrastructure.
Review current transportation legislation for adequacy.
Invest in R&D.
Consider PPPs.
Procure and launch greener modes of public mass transit.
Train and retrain staff of the MOT.
Recommendations
Nigeria was one of the seven (7) countries whose transport systems were used as case studies in preparation for COP21, to fully understand the role of this industry in NDC implementation. Some key challenges were identified:
Lack of transport data limits the sectoral ambition.
Buy-in from key transport actors is essential for ambitious sector targets.
NDCs should be more closely linked with transport sector strategies.
Transport authorities need more climate change expertise.
Based on these identified challenges, the recommendations below, which partly run in parallel, were given:
Preparation of the NDC groundwork.
Development and negotiation of the NDC.
NDC implementation and integration in sectoral policies.
The government should also genuinely carry out rural industrialization; the installation of basic, social, and economic infrastructure in rural communities will drastically reduce rural-urban migration and in turn reduce the challenges unique to our transport sector.
It is only morally astute that citizens align with the government’s vision of reducing greenhouse gases generated by the transport sector by riding conventional commuter buses rather than driving private cars. This is a tweet-chat series on #ClimateWednesday — #KnowYourNDCs
https://medium.com/climatewed/knowyourndcs-ndc-redefining-the-transport-system-in-nigeria-by-jameomac-cb87d1e477a0
['Iccdi Africa']
2020-08-28 11:13:47.784000+00:00
['Transportation', 'Climate Change', 'Renewable Energy', 'Nigeria', 'Ndc']
MerzFiles. Stop!
One thing is true: there are too many newsletters. So don’t fill your mailbox with another one. Don’t subscribe to MerzFiles, and the following will be spared you:
Exclusive experiments with GPT-3 in various areas: the meaning of life, new art forms, abstract and weird contents.
Exclusive experiments with other AI-driven models.
All articles on Art, Science, AI, Society, Videogames, and more.
All articles on Merzazine with friend links (meaning that if you have a basic Medium account, you can spare your “3 articles per month” for other writers).
Exclusive ideas and thoughts not (yet) written in Merzazine: the possibility to follow the development of conceptions.
Hidden tweets and hidden gems I found in the WWWasteland.
Other hidden items, like interactions with AI, for you, etc.
Just don’t click on the link below, which could bring you to the newsletter MerzFiles.
https://medium.com/merzazine/merzfiles-stop-39411570172b
['Vlad Alex', 'Merzmensch']
2020-10-20 21:15:10.473000+00:00
['Artificial Intelligence', 'Art', 'Culture', 'Merzfiles', 'Newsletter']
‘WHO’: Pete Townshend And Roger Daltrey Prove Rock Isn’t Dead
Paul Sexton. Photo: Rick Guest.
The tone of Pete Townshend’s interviews of late is that everything in music has been done, and rock is dead. But it’s a thrill to be able to say that his own work contradicts him. Certainly, some of WHO, the band’s 12th studio album, released on 6 December 2019, is pleasingly familiar and exhilaratingly nostalgic. But other tracks, equally excitingly, have The Who sounding as never before. Listen to WHO on Apple Music and Spotify. Wisdom, perspective and humour It’s been 13 years since the band’s remaining core of Townshend and Roger Daltrey last convened in the company name, and while 2006’s Endless Wire contained moments of sublime glory, there were times when it felt somewhat obligatory. Nothing could be further from the truth on WHO. It may or may not be the band’s final testament as an album, but either way it’s a brilliant treatise on how the older rock g-g-generation can not only remain relevant, but impart a wisdom, perspective and humour that would have startled their younger selves. Resplendently contained within its Peter Blake-designed visual flypast of a cover, the record roars from the traps with ‘All This Music Must Fade’. One of the three opening tracks that previewed the set as what we once called singles, it’s the first signal that, if you’re going back to the well, you might as well have a proper drink and enjoy yourself. It’s aggressive (“I know you’re going to hate this song”) but playful, especially in its open embrace of the lyrical metre of ‘The Kids Are Alright’. It’s also early confirmation that, throughout the album, Daltrey is in the vocal form of his life. ‘Ball And Chain’, previewed when The Who played Wembley Stadium in July 2019, is a rumbling, grumbling indictment of “that pretty piece of Cuba”, the Guantanamo Bay detention camp. Then comes the vividly melodic ‘I Don’t Wanna Get Wise’, on which Townshend ruminates about not dying before he got old, how success was a surprise but shouldn’t have been, for those “snotty young kids”, and about the improbable acquisition of a certain sagacity. Here and often, the guitarist, with co-producer Dave Sardy, blends contemporary production touches with synthesiser nods to the Who’s Next era. A brave triumph The percussive ‘Detour’ has Daltrey alternately growling and coaxing on a song that Townshend described during its creation as being “about men needing to find new routes… to reach a decent but still honest way to approach women in our lives and our business”. The essential symbiosis between vocals, guitar and John Entwistle and Keith Moon’s erstwhile magnificence in the engine room is stirringly recreated via the invigorating contributions of Pino Palladino and Zak Starkey. The gentle, understated elegance of ‘Beads On One String’ houses an apparent anti-war call, with Townshend’s arrangement and lyric for music by Josh Hunsacker, an artist he discovered on SoundCloud. ‘Hero Ground Zero’ (also debuted at Wembley) is an archetypal Who anthem underpinned by their unrivalled use of opulent orchestration, while Daltrey soars again on ‘Street Song’, with Townshend’s distinctive harmonies and wonderful guitar textures. Then, perhaps the five most extraordinary minutes on the whole album, and a track quite unlike anything The Who have ever recorded. 
Townshend, rarely given to overt expressions of love in song, lays his emotions on the line for all to hear on ‘I’ll Be Back’, which opens with a beautiful harmonica motif and blossoms into a gorgeous declaration of devotion, with not an electric instrument in sight. “In this life you’ve so blessed me, why would I want to get free?” he asks. “I’ve been so happy loving you.” In one of the most extraordinary lyrics Townshend has ever written, he faces his mortality square on (“I must accept I might be finally dying”) with an assured serenity, as he pictures returning to his lover in the next life. With clearly Auto-Tuned vocals and its air of sophistication, some Who diehards might hate it, but others will hear it as a brave triumph. A record of true humility and intelligence Far from feeling a need to reach any rocking conclusion, the album continues with the almost poppy ‘Break The News’, another relationship song with the protagonists “watching movies in our dressing gowns like we were 24, or thereabouts”. Once again, they’re contemplating their age, but feeling no different from when they were young. ‘Rocking In Rage’ looks like a quintessential Who title, and builds into something more robust, with some classic chords, but still an episodic and pensive aura. There’s one last surprise with ‘She Rocked My World’, Daltrey close-mic’d and tender on a Latin-flavoured finale. An album, then, of no concepts, no overarching theme and only one track that even makes it to five minutes in duration, WHO is also a record of true humility and intelligence. It may be 2019’s most surprising album, and it’s certainly one of the year’s best. WHO is out now. Buy it here. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/who-pete-townsend-and-roger-daltrey-prove-rock-isn-t-dead-d368b1bd04a3
['Udiscover Music']
2019-12-09 09:48:47.665000+00:00
['Culture', 'Features', 'Rock', 'Pop Culture', 'Music']
Lose a Lot of Weight During the Pandemic (and Stop Making Excuses)
A lesson in Sci-Fi thinking for these strange times. Photo by Stefan Cosma on Unsplash.
Can you lose weight in a Pandemic? My friend Nina has always been a little on the heavier side, but her health has been declining steadily since the start of the Pandemic, and so has her attitude towards it. “When things get back to normal, I’ll lose the extra weight.” Like many people, she has been working from home since March, and this has resulted in a much more sedentary lifestyle. Nina spends most of the day in her home office. To make matters more difficult, Nina lives with her mom, who has cancer. She’s terrified about bringing home the virus, so she hasn’t been going to the gym. For Nina, and for many others like her, weight gain has become another unfortunate side effect of this shared Pandemic experience. Obesity is associated with the leading causes of death worldwide, but Nina feels weight gain is unavoidable under these crazy COVID-19 circumstances. She’s wrong, by the way.
https://medium.com/in-fitness-and-in-health/a-sci-fi-method-to-lose-pandemic-weight-bdbdff5b96de
['Keith Dias']
2020-11-21 12:01:26.265000+00:00
['Fitness', 'Health', 'Lifestyle', 'Nutrition', 'Pop Culture']
To fulfill my Soul…
If beauty will save the world, as Fyodor Dostoevsky once said, then poetry will definitely save our hearts. Do you like my little poem? Does poetry live in your heart? I will be grateful for your feedback! Below are two of my articles that I hope you may find interesting: one is about why we need relationships, and the other is about how The Love changed my life:
https://medium.com/daily-connect/to-fulfill-my-soul-6c83f42da4a
['Alexandra I.']
2020-01-27 23:42:57.276000+00:00
['Self-awareness', 'Spirituality', 'Life', 'Poetry', 'Art']
How to Find Time to Write 3–5 Articles a Week Even While Working a Full-time Job
Why Write 3–5 Stories a Week
If you want to be a top writer on Medium, you’ll need to put in the work, which includes contributing well-written stories high in both quality and quantity. Think about it. It’s just like anything else where consistency and commitment pay off. The more quality stories you write, the more people will want to read what you have to offer. Another reason is that the more active you are on the platform, the more likely the Medium algorithm will pick up your stories to show in feeds and recommendations. Also, the more you write and contribute positively to the platform with value-added content, the more positivity will come back to you. What you put in is what you can expect to get out of this. You should also be encouraged to read on the platform just as much as you write, as that is also a way to get noticed and reap the benefits of networking with your fellow writers. Writing and publishing 3–5 stories a week can help you reach your goals as a Medium writer.
https://medium.com/illumination-curated/how-to-find-time-to-write-3-5-medium-stories-a-week-while-working-a-full-time-job-6d2dad1570fa
['Audrey Malone']
2020-12-30 17:47:05.200000+00:00
['Time Management', 'Business', 'Writing', 'Self Improvement', 'Writing Tips']
I’m Watching My Neighborhood Grow Whiter Through the Window
I wonder how much longer my 13-year-old Black son can run freely through a neighborhood that is growing whiter. Photo: Brian van der Brug/Los Angeles Times/Getty Images.
Gentrification of the Black Beverly Hills did not start with Covid-19, but it feels like it. A slow whitening of Windsor Hills/View Park began in the late 1990s and then gathered steam during the Great Recession of 2008. Now, 12 years later, the area is teeming with baby strollers pushed by white hands. Before the world ground to a halt, I saw the white residents in passing — a wave here, a smile there. Though we shared a street, our lives remained separate. But then mid-March came and school closed. For the foreseeable future, we would be safer at home. The pandemic had not only brought sickness and death, but it also arrived with a Spike Lee Double Dolly shot that forced me to see our surroundings, that is, white neighbors, up close. From the 1960s through the 1990s, Black families lived and loved in this bedroom community. I grew up here, knowing my neighbors well and taking comfort in the fact that they were watching out for us, even when we couldn’t see them. Now, I am the adult on the street, peeking between blinds, keeping an eye on the younger kids, and admonishing them to look before crossing the street. But I know I’m not the only adult peeking through blinds — and increasingly, many of my neighbors keeping watch are white. Thinking of them, I wonder how much longer my 13-year-old Black son can run freely through a neighborhood that is growing whiter with each home purchase. These new residents are making it clear that these streets, my streets, are now theirs too. Needless to say, the rapid gentrification is rankling some of the older Black homeowners (though the real issue is predatory white realtors, who pretend to care about the character of View Park’s historic designation). But to my kids, a friend is a friend, and their increased free time has led them to rediscover the friendship of the white kids who live across the street. The Shaws, all 10 of them, are a nice family. The kids range in age from seven to early twenties. They are friendly, well-behaved, great athletes, and homeschooled. They have lived in Windsor Hills/View Park for more than a decade, having bought at the beginning of the Columbusing of this neighborhood. When the kids play hide and seek together, they scatter over several blocks, seeking cover behind tall trees, thick bushes, parked cars, and side yards. While they giggle and shush each other trying not to get caught, I hold my own breath, worried that one of the white neighbors will accuse the Black boys of trespassing. These violations do not apply to Meghan and Emily Shaw, the white girls from across the street. Their skin carries privilege and entitlement. They can hide where they want, and run up and down the street without a second thought. No one will look out their kitchen window and read their presence as a threat. I wish I could say the same for the Black boys, ranging in age from six to 13, who comprise this crew. My colorful seven-year-old daughter frequently joins the fun. Hands down, she is a cutie pie, but in a few years, white folks will misread her heart-shaped face as older and experience her wit as sass. Our children’s age is not the problem; it’s their race, and that keeps Black parents up at night. Racial antennae up, we Black parents talk amongst ourselves. 
Because we now live in a gentrifying neighborhood, we worry about our kids’ safety. We want to keep the tech-free fun meter high, and not burden their spirits with race-based rules that start with “you can’t do what they do. Someone might assume the worst… call the police… or hurt you.” But we can’t ignore the danger, so we have the talk anyway. White parents don’t have these conversations with their kids, even in neighborhoods where they are the minority. Oh, how we wish this fear were irrational. But 2020 is not the summer of love. This is the summer that finally (hopefully) woke white people to a terrifying truth: Black bodies are not safe in these streets. Ahmaud Arbery was killed while jogging; a white woman falsely accused a Black bird watcher of harassment; Breonna Taylor was asleep when she was murdered by the police; and George Floyd couldn’t breathe under the knee of a police officer in Minnesota and died. It’s like the bottom fell out of an already stressful situation. So we, Black parents, gear up for another conversation about the vilification of Black males, the weaponization of white female privilege, and how, even within the confines of our formerly all-Black neighborhood, they must be mindful of where they are at all times. The world has intervened again, and the pandemic, which is doing a great job of shining bright lights on racial, economic, and gender inequities in health care, education, and employment, reminds us that it is hard to escape the feeling that it is open season on Black people. When the peaceful protests for Black Lives Matter broke out, my kids and I were safe at home. When the protests turned violent, we were still safe at home. The mass gatherings occurred north and west of Black neighborhoods, spread across several days in affluent white Beverly Hills and Santa Monica. Our community was spared tear gas, looting, and rubber bullets. I was grateful not to have a front-row seat to the passion and chaos. My kids were confused and scared, and I spent hours checking in with them. I re-explained racism in America and offered empathy for citizens so tired of and angered by being abused by police and ignored by the mainstream that they resorted to violence to be heard. Overnight, our white and Black neighbors began posting Black Lives Matter signs in windows and on lawns. They seemed aware, perhaps for the first time, that truth and reconciliation, in a society that has been separate and unequal for 450 years, is necessary. Once the curfew was lifted, my son finally expressed that he felt his mortality, and that broke my heart. He needed help processing the pandemic, school closure, protests, and so much violence. Luckily, he had his Black male friend group and the school psychologist, whom I enlisted, because I didn’t want him to hold such angst inside his sensitive 13-year-old soul. He eventually rallied and carried on, though I know these events will remain with him for the rest of his life. I often wonder if our kids ever discussed what was happening in the world with the white kids across the street. I also wonder if their parents spoke with them about race, or were they silent, so as not to disrupt their childhood. Honestly, I can’t tell. The kids continued playing together and getting on each other’s nerves, but then a new thing happened. 
Meghan and Emily began riding their bikes beyond the neighborhood for donuts and tacos and our boys asked for that same freedom. My gut reaction was no — would they be safe in those other neighborhoods? But then I relented. Why shouldn’t they have the same privilege of a summer bike ride with friends? With trepidation and a cellphone, I gave my son a taste of freedom. Now on the verge of a second shutdown, we have new white neighbors. The wife is pregnant and their dog’s name is Milo. They, like the Shaws, will raise their family here, and I will keep looking out the window, watching and keeping my kids safe at home.
https://gen.medium.com/im-watching-my-neighborhood-grow-whiter-through-the-window-eb3ada5265bb
['Nefertiti Austin']
2020-07-17 05:31:01.396000+00:00
['Gentrification', 'Race', 'Family', 'Society', 'Neighborhoods']
Care From Wherever: DESIGN Canberra
The Australian design festival has gone to great lengths to accommodate online viewers so they can participate in a variety of activities, celebrating the vast forms of art and design created under the theme of Care. Emma-Kate Wilson / MutualArt. Artist Installation — Civic Square. Photo credit: 5 Foot Photography.
The program of DESIGN Canberra is a full and vivid experience celebrating the design, art and culture of Canberra. Usually, DESIGN Canberra appears to be a festival that could only occur in the capital. In times like these, however, we can look online to experience the program. Not just Australians, who are limited in travel, but international guests from further afield can engage with the theme of Care — a poignant marker to the world as 2020 draws closer to its end. The artistic director of DESIGN Canberra Festival and CEO of Craft ACT: Craft + Design Centre, Rachael Coghlan, shares her proudest highlight: “A welcome opportunity to connect in this city of design and take some time to appreciate the natural and built beauty that is all around us.” Open Studios. Photo credit: 5 Foot Photography.
When Covid lockdowns hit, the festival questioned whether it should continue. “We surveyed our community, and the overwhelming response was to proceed with DESIGN Canberra,” Coghlan says. “It’s never been more important to support the arts, to build visibility for contemporary craft and design, and help artists make a living from their practice.” The theme of Care was first proposed in 2019, as Australia faced extreme weather conditions — bushfires, hailstorms and flooding in the space of three months. No one could have been prepared for what followed as the pandemic gripped global communities. “The 2020 festival program will focus on giving back to the community, building caring and creative experiences, and producing enduring content to support our city’s and our sector’s recovery,” the festival’s director promises. CraftACT. Artist credit: Marilou Chagnaud. Photo credit: 5 Foot Photography.
“Our 2020 theme of Care seems timelier than ever. The value of care is more important than ever: for community, for creativity, for craft and for our world,” Coghlan adds. “I love that DESIGN Canberra can bring new awareness, new audiences and a new appreciation of our city of design.” For the last four years, DESIGN Canberra has nominated a designer-in-residence; this year, Kirstie Rea captures the sentiments of Care in blown-glass installations. With care (2020) was initially conceived during a residency in Calgary, Canada, in 2011. Surrounded by the cold and snow, on the other side of the world, far away from her family, the artist was left thinking about how we attribute values of care, comfort and warmth. Kirstie Rea, With care, 2020. Image: Lean Timms.
The result was a glass blanket, incredibly fragile yet strong; it has to be handled with care. The folded blanket takes shape within a timber frame — a doorway, it appears — a gentle metaphor for finding refuge in the home, offering a stirring depiction of our time in lockdown. In the capital’s CBD, fellow glass artist Hannah Gason produces a graphic intervention. Titled Glimmer (2020), the work re-imagines the glass mosaic by Frank Hinder, Star Ceiling (1963), in Monaro Mall’s iconic City Walk entry. 
“Hinder’s mosaic, located above, focuses on the vibrant night sky, [my work is] responding to the tonal shifts of the fragments within Star Ceiling; the contrasting patterns of my design draw the passerby’s eye to the pillars of Monaro Mall and up to Star Ceiling,” Gason maintains. “We are lucky in Canberra to experience wide open spaces and the natural beauty surrounding the city,” the artist continues. “My design celebrates this light and aims to bring a sense of positivity into the city in a year that has been challenging for so many.” Medusa Opening. Photo credit: 5 Foot Photography.
Like Hinder’s Star Ceiling and Monaro Mall, most of Canberra is deeply inspired by mid-century design; its most significant period of growth took place in 1950–1975 and was closely overseen by a professional planning body (the NCDC). As such, design is holistically blended into all aspects of the city. The author of Canberra House and a Canberra architecture expert, Martin Miles, will be conducting tours around the capital — something which can also be explored independently with downloadable maps and online zines. Through ‘DESIGN, Anytime’ the online visitor is privy to a full program of tours, online exhibitions and catalogue essays. Highlights also include live-streamed keynote talks by Bernard Salt on the future of suburbia and Dominic Hofstede’s reflection on Australian graphic design from 1960 to 1990, while a webinar by contemporary artist Daniel Boyd (Kudjala/Gangalu) and Edition Office discusses their sculptural pavilion, For My Country, which commemorates the military service and experiences of Aboriginal and Torres Strait Islander peoples. DC Glass. Photo credit: Anthony Basheer.
For the first time, the opening week of DESIGN Canberra also lands in NAIDOC Week — a celebration of the history, culture and achievements of Aboriginal and Torres Strait Islander peoples. To honor the first people of Australia, the festival embraces art and design by First Nations makers on Ngunnawal Country. Carrying an ever-essential message to non-Indigenous Australians, Luritja artist Kayannie Denigan transforms the foreshore of Lake Burley Griffin with her artwork My Country (2020). In graphic pops of yellow, pinks and navy against pastel blue, imprinted on the concrete by the water’s edge, the artist transports the different ecosystems — the scrub, water bodies, boulders and hills — from across her country in Central Australia to her new home in Canberra. “As I flew over the land of my ancestors, I was struck by the beauty of the harsh desert. It was the first time I had been back to Central Australia since I was a child,” Denigan explains. “Upon returning home, I set out to incorporate these separate elements in my art, inspired by my Nanna’s country in Central Australia and my upbringing on Cape York.” Artist Talk (Lucy Irvine). Photo credit: 5 Foot Photography.
Whereas Denigan celebrates the color and life of Aboriginal and Torres Strait Islander culture, in an exhibition at Canberra Contemporary Art Space, Canberra-based artist James Tylor explores the other side of this narrative. His exhibition From an untouched landscape is lacking in color: a black and white palette explores the effects of colonization and the resulting absence of Aboriginal culture. The viewer is welcome to delve into these histories and narratives from wherever they have internet access. From an untouched landscape is fully documented online, with an image gallery, catalogue, and essay. 
We can acknowledge that seeing the work in person is always better, as would be the case with My Country, where the graphic color intersects with the landscape and the earthy smells of the lake and surrounding mountains anchor one on country. Yet, in a time of lockdowns and travel restrictions, we are still privy to the artworks. From extensive online reading material to downloadable city design maps, audiences across the world are invited to engage with DESIGN Canberra. At the same time, those more local are welcome to stop and consider the programming, visiting artist talks and workshops, and remembering to care — complete with yoga, meditation, and mindfulness. DESIGN Canberra recognizes and celebrates the lasting effects of a healthy and thriving art and design community. DESIGN Canberra, November 9–29, 2020. designcanberrafestival.com.au
https://medium.com/mutualart/care-from-wherever-design-canberra-5cc811c0362d
[]
2020-11-18 08:02:18.998000+00:00
['Australia', 'Care', 'Design', 'Festivals', 'Art']
Redux vs. Context vs. State
React-Redux Connect/useSelector
To get a better idea of this system, let’s dive into some of the innards of Redux and how React-Redux uses them.
What is Redux?
Redux is a state container for JavaScript applications. Although it was initially built for use with React, it’s not tied to React. Instead, we use a library from the same team, called React-Redux, to connect Redux to React. The actual structure and shape of a Redux store are entirely up to the user. It must fit the same shape as the original passed-in state, which they refer to as the preloaded state. Traditionally, this is an object, and it is placed into the variable currentState. At this point, nothing complex is happening; we just have a single variable that stores our data. Now we want to turn this into a pub-sub. The pub-sub pattern is a form of the observer pattern where users can “publish” updates to a central system, and any subscribers listening to those events are updated accordingly. So what do these steps look like in our system? Let’s look at the events in sequential order:
1. An event takes place in the system, which dispatches an action to the store.
2. The store takes that action and updates the store through state reduction.
3. Once the store has updated, we inform all listeners to the store that the data has changed.
Note: If any new subscriptions are added while a dispatch is occurring, they will not be notified of that change while we iterate through the listeners. After the update is complete at #3, we then move any new subscriptions into our currentSubscriptions.
So, who are the listeners? In the case of Redux, they are any entity that has subscribed to it. You may have seen the syntax for it before; it happens after you create the store (see the sketch below). Middleware is built using this API, as is React-Redux. Again, it’s not React specific — absolutely anything can tie into this pub-sub and know about state changes. One thing you might notice here is that absolutely every single change to the store will call our subscribed event. Does this happen under the hood of our React components? Wouldn’t that be heavy, as each component would be updated no matter the change? Well, this is an excellent opportunity to dive into React-Redux!
What is React-Redux?
React-Redux is a binding library used to connect React to Redux. That’s not all it does, though; it also gives us several enhancements under the hood that are often overlooked. Before we look at these enhancements, let’s see how it connects the Redux pub-sub to our components. This can be done in one of two ways: with the traditional connect syntax, or with the newer useSelector and useDispatch hooks. First, let’s talk about their similarities. Both of these approaches use the React Context system to pass the store instance to any child components in the React tree. Note that this isn’t the store state itself — that is collected later. Beyond that, there are a couple of misconceptions about how Context is used in React-Redux. We saw earlier that the Context system does have a mechanism to inform child components that our values have updated. But React-Redux does not use this mechanism. Instead, it subscribes directly to the store itself and uses the store pub-sub to be notified of possible state changes. This is the store.subscribe() that we mentioned earlier. Note: This is the useSelector example. The subscription logic for connect is very similar, and can be found here. 
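To make the pub-sub concrete, here is a minimal sketch of the raw Redux subscription described above; the counter reducer and action names are illustrative assumptions, not code from the article:

// Minimal sketch of the raw Redux pub-sub (hypothetical counter example).
import { createStore } from 'redux';

// Reducer: derives the next state from the current state and an action.
const counterReducer = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
};

const store = createStore(counterReducer);

// subscribe() registers a listener; it fires on EVERY dispatched action,
// whether or not the slice of state you care about actually changed.
const unsubscribe = store.subscribe(() => {
  console.log('store changed:', store.getState());
});

store.dispatch({ type: 'INCREMENT' }); // logs: store changed: { count: 1 }
store.dispatch({ type: 'UNKNOWN' });   // state is unchanged, listener still fires
unsubscribe();

This is exactly the firehose behaviour described above: the store does not know which listeners care about which slices of state, which is why React-Redux layers its own change detection on top.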
When the store state changes, it will call checkForUpdates, which determines whether the state alteration has any effect on this component. If it does, it calls forceRender, re-rendering the component. In this entire flow, the Context is never used. So what’s the purpose of it? A special thanks to Mark Erikson, who helped me walk through the React-Redux/Context approach. He wrote a fantastic post here that goes more in-depth on these reasons. The purpose of Context within this system is to do exactly as we said above: we use the Context provider to tell React where the store is housed in the hierarchy of the React tree. Any child components of that provider have access to that store instance. This gives us the ability to have multiple stores, nested stores, and a cleaner approach to passing the store instance. Why doesn’t React-Redux use Context to update the child components? There’s a GitHub issue outlining the reasons, but I want to look at a purpose that we’ve already described in this post. Remember the Context example we used before? We were able to reduce the number of re-renders in our system to two:
1. Re-render the component housing the provider, to update the value.
2. Re-render any components housing a consumer.
LevelTwo (the component with neither provider nor consumer) was not re-rendered, because it was memoized and had no changed props that would require an update. That means that, for any children of the provider and any children of the consumers, your components should be memoized so as to reduce unnecessary re-renders — particularly children that do not require props from said state updates. Now, I’m not arguing against memoization. Memoization is a crucial tool in your belt that is under-utilized and often misunderstood (perhaps an article on that next). But that’s a pretty hefty requirement on your components and any future components. There’s also one other issue we come across, one that I left purposely vague in our list above: re-render any components that house a consumer. In the example above, we only have a single consumer, and therefore don’t see the full extent of what this means. Let’s build a case with two components with consumers instead. So, we’ve updated our context to have two consumers, and both of those components are displayed at a sibling level in LevelTwo. Now, when you click either of the buttons to update one of their states, you should see this: No matter what, both of these components are re-rendered, despite them both being memoized and at a sibling level. That’s because Context only notifies its consumers that a change has occurred; it does not know whether that change affects the component using that consumer. As our application grows, this problem also grows. There are strategies around this, which we will get into later, but this is why React-Redux does not use Context to update its components. Instead, React-Redux relies on its internal pub-sub to notify children. So how do the re-renders look when we use React-Redux? Let’s start with a code example, then walk through a diagram to explain the steps. This example will still be using useSelector, but the subscription style is similar to connect. Here’s our code now using a Redux store, connected through the React-Redux provider and hooks. Note that there is some additional boilerplate, like the reducer and actions, that isn’t present here. You can check out the code example above to see more. What happens now when we click the Change context button? 
Our output should look like this: Each component was called once for the initial render. Now, when we update our LevelThree component, only that single component re-renders. Not even our provider component updates. And if I split our tree into two LevelThree components, like in our last example, only a single one would update (assuming they were tied to different state values). How is that? Partly because of the Redux subscriptions. But you might recall from the Redux breakdown above that a subscription gets called for every single change in the store. And that does happen here, but React-Redux gives us more under-the-hood tools to help increase performance. Here are the steps it takes:
1. A state update is published.
2. All the listeners (in this case, every single rendered component using useSelector) call their checkForUpdates function.
3. Each component takes the store instance passed to it from the Context earlier, to get the current state of the store.
4. The selector passed in is used to derive a new state value for this component.
5. If the new derived state used by this component has changed from before, re-render the component. If not, do nothing.
This flow can be seen here: Note: That equality check is where connect and useSelector start to have some differences. We’ll dive into those in a moment. At a high level, though, they work similarly. An even more simplified version is broken down in this diagram: Note: any child of component three will also still be re-rendered; if that’s not desirable, it’s still on the developer to ensure that child components are properly memoized.
Differences between mapStateToProps and useSelector
Although the general ideas of useSelector and mapStateToProps are the same, there are some key differences under the hood that can throw you for a loop if you’re not ready for them. However, in my experience, these differences, or at least their significance to us developers using these tools, have been greatly exaggerated. To help break down the differences, we first need a high-level understanding of the structure of each approach. Once we know the difference in architecture, the difference in style should be clearer. So first, let’s examine each signature. Note: we are going to be writing some selectors that I wouldn’t typically build. This is to try and examine the differences between styles. Above is the Hook style, using the useSelector Hook of React-Redux. Now let’s take a look at the classic mapStateToProps. As we can see above, the Hook can tie directly into the data and pull out what’s needed. On the other hand, our HOC requires us to create a mapStateToProps function and pass it into our HOC, along with the eventual component. That component will then receive all the state data as props. The DOM output is the same, but there is a difference in the structure of our React document. ComponentWithHOC React Structure It appears that our Hook component resembles precisely the elements as they appear in our code. The connect HOC component is a little more complicated. Now, we know connect is a HOC, and if you’ve previously used HOCs, you should have some idea how this looks, but let’s diagram it. Now we have this extra component wrapping our component. It is responsible for collecting the state data and passing it to our component as a prop. And because our component is wrapped with this higher-level component, some extra optimizations are performed out of the box, which aren’t available for Hooks. 
Namely, we get to memoize both the component creation and our actual connect HOC. What does that mean for people using or migrating to Hooks? It means that where you previously had “free” memoization provided by the connect HOC, you will now have to handle it on your own. For most cases, that’s as simple as wrapping your component in a React.memo. Note: There are some interesting differences with mapDispatchToProps, which you may want to read more about in my article on understanding the useEffect dependency array.
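As a side-by-side reference for the two styles compared above, here is a minimal sketch; the component names and the count state are hypothetical stand-ins for the article’s stripped-out examples:

// Hypothetical components showing both React-Redux styles side by side.
import React from 'react';
import { connect, useSelector } from 'react-redux';

// Hook style: subscribe directly inside the component. It re-renders only
// when the selected value changes (strict-equality check by default).
export const CounterWithHook = () => {
  const count = useSelector((state) => state.count);
  return <span>{count}</span>;
};

// HOC style: connect wraps a plain component, maps state to props, and
// memoizes the wrapped component for "free".
const Counter = ({ count }) => <span>{count}</span>;
const mapStateToProps = (state) => ({ count: state.count });
export const CounterWithConnect = connect(mapStateToProps)(Counter);

// When migrating from connect to useSelector, restore that memoization
// yourself by wrapping the component in React.memo:
export const MemoizedCounter = React.memo(CounterWithHook);

The rendered DOM is identical either way; the difference is the extra wrapper component that connect introduces, which is where its out-of-the-box memoization lives.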
https://medium.com/better-programming/redux-vs-context-vs-state-4202be6d3e54
['Denny Scott']
2020-03-10 15:40:59.701000+00:00
['Redux', 'Web Development', 'React', 'Programming', 'JavaScript']
2020 Hon Hai “Scholarship Whale” (鴻海獎學鯨) Admission Experience: NCTU, Hon Hai Scholarship, Hon Hai-NCTU Joint Research Center Talent Development Scholarship, Internship
Mess up We are nobody and we always mess up.
https://medium.com/mess-up/%E7%8D%B2%E7%8D%8E%E7%B4%80%E9%8C%84-%E9%B4%BB%E6%B5%B7-%E4%BA%A4%E5%A4%A7%E8%81%AF%E5%90%88%E7%8D%8E%E5%AD%B8%E9%87%91-db3557fb5131
['Yuan Ko']
2020-11-04 01:33:00.411000+00:00
['獎學金', 'AI', 'Scholarship', '鴻海', 'Review']
Implementing a Data Vault in BigQuery
In my last post I went over how we implemented a fitness program at Pandera, the need for a custom solution to handle a little friendly competition, and an architecture that would support that. Designing a Fitness Leaderboard in GCP
In this post I want to focus on what a Data Vault model is and what it looks like in BigQuery. I am covering this first, instead of the data pipeline, as it really shapes the way I handle the messages and events that come in. First I want to start off by saying that Data Vault is extreme overkill for this scenario. It is really good when you have high data velocity, auditing, and traceability requirements. We come across a lot of these scenarios, especially in financial services, healthcare, and other regulated industries, which is why I wanted to get a better understanding of some of the gotchas during implementation. The main concepts in Data Vault are Hub, Satellite, and Link tables. Hubs primarily store business keys and a hash of the business keys. Links store a relationship between hubs by capturing the associated business keys, their hashes, and a new hash of the combined business keys from both hubs. And Satellites store additional attribution about hubs and links. This is a basic explanation of Data Vault, but what this setup allows us to do is load data without much in the way of dependencies, so tables can be loaded in parallel without having to worry about much other than getting the data loaded. So for the purposes of the Strava data we had two entities: the athlete and the activity. This is a fairly simple relationship, but it will demonstrate some of the things I learned implementing a Data Vault.
Lesson 1 — The number of tables involved. For those two entities alone I needed five tables:
activity_hub
activity_sat
athlete_hub
athlete_sat
athlete_activity_link
In most scenarios this could be as simple as one table or three. Since the scope of this implementation is very small, the additional tables are not very noticeable, but on a large-scale implementation they could be a hassle. This is typically why automation plays a key part in Data Vault implementations; since most of the patterns in these base tables repeat (hashing of business keys, hashing of attributes, inclusion of meta columns), this can be programmed with pretty low effort or by using a COTS product. But for the time being I will be doing this manually. First, though, I need to model those tables.
Data Vault Model
I had to rework this model a few times while developing it, mostly because I worked on this project between 4:45 am and 6:30 am. Which brings me to: Lesson 2 — ironing out the data model pays dividends in the long run. To do this you really need to analyze the data ahead of time and make sure you have a good understanding of the relationships, uniqueness, and attribution. Once this is all done, though, you are in a good place, because one of the principles of Data Vault is to get the whole dataset into the data vault. This means I do not have to re-engineer the pipeline just because I later need an extra field. Get it all: the cost of adding a field or ten during this phase is very low, but if you have to go back and add one later, it becomes a pain. With the model set up, you’ll notice that each table has a few common columns, primarily the ‘_seq’ and ‘_load_date’ columns. These assist in a few different ways:
Determining the latest record.
Identifying differences between records.
Focusing on inserting records rather than updating existing ones.
The ‘_seq’ columns are my hashes of the business key(s) that identify a unique instance of something. I used an MD5 hash because it is readily available in Python and BigQuery.
athlete_hub_seq — hash of athlete_id. I struggled with this one, mostly because the business key really should be the email address associated with Strava; it provides more contextual value than a random integer. However, with the scope I have for the API, it is not available. Ideally, if I did have the hash of the email, I could then relate the data to other company entities, which is powerful.
activity_hub_seq — hash of activity_id. Unlike athlete_id, I am fine with this being based on a random integer. So many elements of an activity can be edited in the application that the only constant unique identifier is this ID.
athlete_activity_seq — hash of athlete_id and activity_id. This hash captures the unique occurrence of an athlete performing an activity.
Because of the above Data Vault features, my pipeline into BigQuery can be very simple: I can focus on only inserting records, and I have a clear method of distinguishing duplicates and determining the latest version. This is where Data Vault’s speed in loading comes from. No need to look up whether a record already exists, or to search for an existing surrogate key; just insert and go. This does leave me with a bit of a gap, though, and: Lesson 3 — Data Vaults are not suitable as a reporting layer. Which means… more tables, or in my case, views. I need to build a data mart on top of the Data Vault to service my visualization tool.
Information Vault
You can see that I have two datasets: the first being my data vault, and the second being my information vault, which serves as a reporting layer. This is a very important aspect as it pertains to BigQuery and data security. Today you are unable to secure individual tables or data elements; security is all at the dataset level. This is not a huge issue, because you can simply create datasets to serve various security requirements. So my view layer consists of one singular row-level table, fact_activity, where I have applied some logic:
WITH act_sat_latest AS (
  SELECT activity_hub_seq, MAX(sat_load_date) AS latest_load_date
  FROM `strava-int.strava_datavault.activity_sat`
  GROUP BY activity_hub_seq
)
SELECT DISTINCT
  ath_s.athlete_hub_seq,
  ath_s.firstname,
  ath_s.lastname,
  act_s.*
FROM act_sat_latest asl
JOIN `strava-int.strava_datavault.activity_sat` act_s
  ON asl.activity_hub_seq = act_s.activity_hub_seq
  AND asl.latest_load_date = act_s.sat_load_date
JOIN `strava-int.strava_datavault.athlete_activity_link` aal
  ON act_s.activity_hub_seq = aal.activity_hub_seq
JOIN `strava-int.strava_datavault.athlete_sat` ath_s
  ON aal.athlete_hub_seq = ath_s.athlete_hub_seq
WHERE act_s.delete_ind = False
This will serve as a denormalized springboard for my other aggregate views, each of which serves a specific purpose: calculating activities and time spent for each week or at an overall level. Additionally, there is a stats table where I collect some of the ancillary metrics like miles travelled, calories burned, etc. A couple of quick notes on the BigQuery implementation: I am not doing any partitioning or clustering in this instance; the data is small enough that I will not get any real performance or cost savings by implementing them. With all the structures put into place I can start building out the actual pipeline, which I will cover in my next post. 
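To make the model concrete, here is a minimal sketch of what the hub/satellite/link DDL could look like in BigQuery. The column names follow the conventions described above, but the exact types and meta columns are my assumptions, not the author’s published schema:

-- Hypothetical DDL for the Strava Data Vault (column names assumed).
CREATE TABLE strava_datavault.athlete_hub (
  athlete_hub_seq STRING NOT NULL,   -- MD5 hash of the business key
  athlete_id INT64 NOT NULL,         -- business key from Strava
  hub_load_date TIMESTAMP NOT NULL
);

CREATE TABLE strava_datavault.athlete_sat (
  athlete_hub_seq STRING NOT NULL,   -- hash linking back to the hub
  firstname STRING,
  lastname STRING,
  hash_diff STRING,                  -- hash of all attributes, for change detection
  sat_load_date TIMESTAMP NOT NULL
);

CREATE TABLE strava_datavault.athlete_activity_link (
  athlete_activity_seq STRING NOT NULL,  -- hash of athlete_id + activity_id
  athlete_hub_seq STRING NOT NULL,
  activity_hub_seq STRING NOT NULL,
  link_load_date TIMESTAMP NOT NULL
);

-- The '_seq' hashes can be computed at load time, e.g.:
-- SELECT TO_HEX(MD5(CONCAT(CAST(athlete_id AS STRING), '|',
--                          CAST(activity_id AS STRING)))) AS athlete_activity_seq

Because each table carries its own hash key and load date, each load job can run as an independent, insert-only statement, which is exactly the parallelism described above.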
As a whole, I really like the Data Vault method. Just like other methods, there are clear guidelines that can make it successful, and it is important to really understand the overhead involved in that success. However, given the resources a platform like Google Cloud provides, getting massively parallel processing out of our data pipelines is becoming simpler with a method like Data Vault. If you have implemented a Data Vault on Google Cloud, let me know about your experience in the comments!
https://medium.com/swlh/implementing-a-data-vault-in-bigquery-c91a91292fbb
['Daniel Zagales']
2020-05-20 01:09:18.296000+00:00
['Data Warehouse', 'Google Cloud', 'Data Vault', 'Data Engineering', 'Bigquery']
Creating a Wildlife Camera With a Raspberry PI, Python OpenCV and Tensorflow
I am a great believer that for you to learn, you need to create. And to create, you need to have fun! First I want to give you some background on why I am trying to build a wildlife camera with a Raspberry Pi. I live in London and my garden gets visited often by local wildlife. It happens so often that, as an avid gardener, I get a bit annoyed. Broken plant pots, plants dug out, missing plants, missing fruits... In my garden, I have seen small foxes, the cutest foxes that you could ever see, big foxes, cats (not mine), and birds. I have even received a visit from a Sparrowhawk. And who knows what else shows up, lurking in the dark? You must have eagle eyes if you can identify the animal in the picture. What more excuse do I need to build a wildlife camera with a Raspberry Pi, Python, TensorFlow, and who knows what else? And it will be an awesome camera! You might argue: why not just buy a wildlife camera, one that is already on the market and should do the job pretty well? Good point. But that would take away half the fun!
Raspberry Pi Camera Modules
So let’s get started. The first thing we need to do is understand the types of cameras that are available for the Raspberry Pi. The most popular cameras are the camera modules that connect directly to the Raspberry Pi using the MIPI connector. The advantage of this type of camera is that data transfer between the camera module and the Raspberry Pi is very fast. And there is a simple-to-use Raspberry Pi camera API that I can call from Python to interact with the camera. I have tried this API and it has many useful functions; for example, I can do cool things like detecting motion and recording in parallel, using an intuitive API. There are three versions of the official Raspberry Pi Camera Module available: Screenshot from RaspberryPI.org. Beyond the official Raspberry Pi cameras, you can also get camera modules from other vendors that still work with the Raspberry Pi. The Raspberry Pi camera modules rely on the Raspberry Pi GPU to do the image processing from the camera sensor. Because the Raspberry Pi itself needs to do the image processing, the options available in terms of camera sensors are quite limited. Each camera sensor comes with a different set of APIs, so it is not straightforward to support all the different types of camera sensors.
Recording in Low Light
Since I am building a wildlife camera, it should be capable of operating during both the day and the night. It needs a camera sensor capable of recording in a low-light environment, preferably in true color. The Raspberry Pi v1 and v2 cameras don’t work well in low light. To see in low light you need to use an IR light and remove the IR filter in the Raspberry Pi camera, depending on which version you buy. But then an additional complication is that during the day you get a pinkish image. So you need a mechanism to add the IR filter when there is sunlight and automatically remove it when there is no sunlight. Picture from Raspberry Pi camera with sunlight in dark environment. There is a new Raspberry Pi camera out there: the Raspberry Pi Camera HQ. But I am not 100% certain about its low-light credentials. The Sony IMX477 is supposed to be better in low light than the previous versions, of course. Whether or not it is capable of recording in true color in the dark is something I will test very soon on my channel. My initial reading of the specs suggests this is unlikely to be the case. 
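As a quick taste of the Python camera API mentioned above, here is a minimal still-capture sketch using the picamera package that ships with Raspberry Pi OS; the resolution and output path are illustrative assumptions:

# Minimal still-capture sketch with the picamera API (illustrative values).
from time import sleep

from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1920, 1080)     # assumed resolution, adjust to taste

camera.start_preview()
sleep(2)                             # give the sensor time to adjust exposure
camera.capture('/home/pi/test.jpg')  # hypothetical output path
camera.stop_preview()
camera.close()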
Sony Starvis, a remarkable camera sensor
Sony has a special family of camera sensors, primarily used for surveillance, called the Sony Starvis. The Sony Starvis sensor is an extraordinary advance in technology and is capable of recording in true color in low light at only 0.001 lux. Lux is a measurement of how much light is available in a given environment. Just to put it into perspective: the lowest lux you can get is when it is totally dark, with no sunlight, no moon, no stars (or almost none) on an overcast day, and no artificial light. That is unbelievably dark (0.0001 lux)! No camera sensor that I am aware of will be able to capture an image when it is that dark. But if you have a clear sky, the stars will provide 0.002 lux. That’s still very dark. The Sony Starvis sensor can record in low light at 0.001 lux, half of that. Isn’t that mindblowing? See the table below, which I grabbed off Wikipedia, for a better idea of what lux really means. Wikipedia
So you will hopefully agree that the Sony Starvis camera sensor is perfect for a wildlife camera. The bad news is that the Raspberry Pi camera doesn’t support the Sony Starvis camera sensor. But the good news is that if I get my hands on a USB camera, or even an IP camera, with the Sony Starvis sensor, then I will be able to leverage it in combination with the Raspberry Pi. That’s exactly what I have done!!! But I went cheap, so I only found a decent IP camera with Sony Starvis on AliExpress. It didn’t cost me more than £20, if memory serves me right. And the difference between the Raspberry Pi camera and this camera is like the difference between night and day. Literally. Judge for yourself. Sony Starvis IMX307 Camera Sensor — Dark room. Raspberry Pi v2 Camera — Dark Room — different perspective.
For a wildlife camera, or even a security camera, it’s important to record good-quality footage in a low-light environment. You might say that it is not a big deal to record in color at low light, but in my specific situation I can’t use IR light, as I am going to be placing the camera indoors facing the garden. It is going to have a window placed right in front of it. And IR light is not capable of going through a window. Did you know that? If you shine IR light at a window, it behaves in the same way as a mirror, and the camera will go blind. Going back to my point: it is always nice to use the Raspberry Pi camera for some simple projects or just to learn. But as soon as you try to do something half-serious, you do need access to better cameras. If you are willing to use a USB camera (or even an IP camera), then you are opening up a whole new set of possibilities. You will also be freeing up the Raspberry Pi to do more important things with AI.
Installing the Raspberry Pi Camera
Now it is time to set up the Raspberry Pi Camera v2. The setup process is very simple. Raspberry Pi v2 Camera. The Raspberry Pi camera comes with a white and blue ribbon. You need to connect the ribbon to the CSI connector on the Raspberry Pi board, with the blue side facing the back of the board. I used a 3D-printed case for the Raspberry Pi camera, downloaded from Thingiverse, but you should be able to buy a Raspberry Pi camera case cheaply off Amazon. Now it is time to power on the camera. Once the Raspberry Pi is powered on, open a terminal window and execute:
$ sudo apt update
$ sudo apt full-upgrade
This ensures that the Raspberry Pi is running the latest version of Raspbian and has all the latest patches/updates available to date. After this, we also need to run in the terminal:
$ sudo raspi-config
You will need to select Interfacing Options > P1 Camera. Then select Finish and reboot.
Taking a picture using Raspistill
Now that the Raspberry Pi camera is set up, let’s do a quick test with raspistill. Open a terminal window again and try the following:
raspistill -v -o test.jpg
My Raspberry Pi camera took this decent photo:
Recording a video with Raspivid
Taking a picture is very nice, but recording a video is even better. To do that, we can use raspivid:
raspivid -o vid.h264
This records a five-second video. If you want a longer video, you need to pass the -t parameter with the number of milliseconds:
raspivid -o vid.h264 -t 30000
This records a 30-second video.
Creating a Livestream
Now the fun part starts. To see what the camera is recording in real time, we can try:
raspivid -o - -t 0 -n | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264
This creates an RTSP stream from the Raspberry Pi camera that is accessible from the local network.
Conclusion
There is so much more to do that it is not going to fit in this article. I still need to set up the Raspberry Pi 4 with TensorFlow, OpenCV, and of course Python, to start developing. If you can’t wait until my next article in this series, why not subscribe to my YouTube channel and see how I am getting on with this build? What are you waiting for? Resources:
Conclusion

There is so much more to do than will fit in this article. I also need to set up the Raspberry Pi 4 with TensorFlow, OpenCV and, of course, Python to start developing. If you can't wait until my next article in this series, why not subscribe to my YouTube channel and see how I am getting on with this build? What are you waiting for?

Resources:
https://medium.com/swlh/creating-a-wildlife-camera-with-a-raspberry-pi-python-opencv-and-tensorflow-d21280077f76
['Armindo Cachada']
2020-10-21 21:12:28.616000+00:00
['Python', 'TensorFlow', 'Raspberry Pi 4', 'Programming', 'Raspberry Pi']
What’s Happening to Our Planet?
Arabian leopards in Oman, as seen in Netflix's Our Planet

As viewership of nature documentary series continues to soar, it's clear that these types of shows are resonating with audiences and have the ability to change conversations around their subject matter. Take Our Planet, the new series from the BBC's Natural History Unit (NHU). The ambitions of this new series are clear: to "inspire and delight hundreds of millions of people across the world so they can understand our planet and the environmental threat it faces." That's what Alastair Fothergill, former head of the NHU and Our Planet co-producer, has said about this high-profile undertaking. Keith Scholey, the series' co-producer and another former NHU head, reiterated this sentiment to POV, noting that "right from the get-go we were planning a big landmark series, which shows the wonders of the world but really points out what the issues are." These claims are indicative of the noble aspirations of the filmmakers to simultaneously entertain, educate and conserve while working in the natural history and wildlife genres.

Our Planet is the newest series associated with the NHU's Planet Collection (2001–), which includes Blue Planet I and II, Planet Earth I and II, and Frozen Planet, all of which have been widely successful. The various series have been recognized with Peabodys, Emmys and BAFTAs, along with viewership that frequently breaks the 10-million mark. Our Planet is envisioned to be more successful than its predecessors, in part due to its Netflix partnership, which will simultaneously release the show in 190 countries to 139 million subscribers. The scale of this release is even more apparent with Netflix utilizing one of the coveted 2019 Super Bowl LIII advertisement spots to promote Our Planet — the only ad spot taken by the platform this year.

Adding to the success of the Planet series is narration from naturalist and filmmaker Sir David Attenborough, whose NHU work dates back to the 1950s and includes hosting the nine-part Life Collection (1979–2008). While promoting Our Planet in November 2018, Attenborough noted that the series "will take viewers on a spectacular journey of discovery showcasing the beauty and fragility of our natural world," and added that "today we have become the greatest threat to the health of our home." This journey, in which viewers can witness and learn about the natural world and see the threats it's encountering, is increasingly important considering that the deterioration of the earth, at the hands of humans, is so well documented that we may already be in a new geologic epoch — the Anthropocene.

And so, the question of whether the Planet series has lived up to the worthy goals set by the filmmakers is one that warrants inspection. How do the NHU, Netflix and Attenborough entertain viewers with the natural world? Do they educate viewers about the natural world, and to what degree? And, perhaps most importantly, in the midst of it all, do they contribute to conserving the natural world, and, if so, how?
The fallout zone around Chernobyl has become a refuge for wildlife. Kieran O'Donovan, courtesy of Netflix

ENTERTAINMENT

Mark Terry, professor of Environmental Studies at York University and director of the Youth Climate Report, believes that the Planet series can "reach people" by extending the natural world "off the screen and into their own world," because of the "stunning photography, that can't be seen anywhere else." Take for example episode four ("Forests") of Our Planet, which mesmerizes viewers with the remarkable resilience of nature as exemplified by the Red Forest's reclaiming of the Chernobyl disaster site during the 32 years since the infamous 1986 event. In one scene, weather-worn balcony doors of an apartment sway and creak in the wind, as lush green vegetation grows around the rubble and glass. Attenborough notes that "as the forest re-established itself, animals began to appear." In a following series of scenes, rabbits, foxes, moose, deer, wild horses and wolves trek through the townscape, which is now the exclusive home of thriving wildlife.

This sequence may be read as a companion piece, of sorts, to Edward Burtynsky's photography. His work beautifully and hauntingly showcases the enormous scale of anthropogenic harm to the natural world, while this sequence showcases the natural world's ability to, as Scholey says, "bounce back" if given the chance and left "alone for an extended period of time." This is a rare glimpse into a part of the world which is linked to an exclusion zone that is projected to last for another 20,000 years.

Another rare sight is the astonishing footage of a calving glacier in episode one ("Our Planet"), which is as entertaining as any in the Planet series. The subject matter is not wholly original, mind you: there's no lack of cracking iceberg formations in documentary film. In Chasing Ice (2012), photographer and filmmaker James Balog captured the record-setting break of a glacier that was five kilometres wide and one kilometre tall; for comparison, that's twice the height of the CN Tower, and the distance from the CN Tower to St. Clair Avenue in Toronto. However, not many filmmakers can relay the almost incomprehensible scale of a glacier in the same way as Balog or Viktor Kossakovsky in Aquarela (2018). The filmmakers of the Planet series can. As the kilometre-long and half-kilometre-tall (the actual height of the CN Tower) iceberg begins to emerge from the ocean after breaking from the Greenland ice sheet, it resembles an enormous sea monster, materializing from the depths, sounding enraged at those who dared to disturb it.

And yet, the methods of achieving some of that visual pleasure have been questioned. In 2016, following the release of Planet Earth II, the filmmakers were criticized for recreating sound effects, like the footsteps of a millipede, or the crunching as a jaguar eats its prey. The filmmakers have noted that, in some instances, these added sounds are a necessary technique, such as when audio can't be recorded in the field due to the distance from the species filmed, or because of ambient noise from, for example, the movement of a helicopter. Previously, in 2011, after the release of Frozen Planet, the filmmakers were criticized for combining footage of wild polar bears with that of cubs photographed in a snow den at a wildlife centre in the Netherlands.
In this instance, Attenborough responded, "We wouldn't do that now because we are being very, very meticulous to be correct and not in any way misleading." It is hoped, of course, that Fothergill, Scholey and Attenborough remain aware of those moments when entertainment value and storytelling techniques begin to border on questionable, and risk threatening the integrity of the material.

Barrie Britton films a wandering albatross parent as it returns to feed its chick. Sophie Lanfear, courtesy of Netflix

EDUCATION

It's not only the Planet series' methods of entertaining that have been questioned. In 2018, while promoting the NHU's series Dynasties, Attenborough came under fire for playing down environmentalism and playing up escapism. Although he began by stating that "we do have a problem," he was also wary of the "bell ring[ing]" each time a threatened species was on screen. Attenborough continued that these "are not alarmist programmes" or "proselytizing programmes," but "a new form of wildlife filmmaking." When asked whether the series was a form of escapism, he stated that "it is reality and has implications for our lives, but it's a great change, a great relief from the political landscape which otherwise dominates our thoughts." Unsurprisingly, these comments were met with criticism, notably by the Guardian columnist George Monbiot, who didn't find "escapism [to be] appropriate or justifiable," or for it to be "proselytizing or alarmist to tell us the raw truth about what is happening to the world, however much it might discomfort us."

According to Mark Terry, "educationally, there is something lacking" in the NHU programmes. He believes that Our Planet "might open audiences' eyes to something they have not seen before, which has value; but…that's not a complete educational package." He continued that "the Planet series gives you the salient points, and the interesting stats, but not necessarily the background which leads to those stats, or projections into the future of what these stats might mean." In other words, there's a notable lack of substantive information about the influence that humans are having on the changing landscapes and wildlife featured in the Planet series. This could be problematic considering that climate change literacy is at a low-water mark. In 2018, the CBC reported that "nearly a third of Canadians say they're not convinced that climate change is being caused by human and industrial activity."

However, this is not to say the Planet series is completely devoid of educational material. Perhaps one benefit of long-form television series is the ability to deliver overarching messages, which are broken into parts over many episodes. Scholey mentioned that one of the messages in Our Planet is to "show how the planet works, and how that functioning impacts on life." A part of this message can be found in the narration accompanying the calving glacier from the first episode of the series. Here, viewers learn that polar regions, which are home to sea and land ice, are warming faster than anywhere else on Earth.
Attenborough states that we need the ice in these regions because they "protect our planet by reflecting solar radiation away from the surface, and so preventing the earth from overheating." He continues, "and yet, the rate of loss of ice in these regions is accelerating." In this Climatology 101 moment, Attenborough certainly doesn't provide viewers with any depth as to the reasons the rate of loss is accelerating; but that's not to say there aren't lessons at play. If taken as a part of the whole, Our Planet offers a wide lesson on aspects of the global climate system. It does so by weaving an overarching theme of water from one episode to the next. The first episode discusses water in ice form, some of which is the world's stored freshwater. In the second episode, freshwater is shown to also take the form of a sky-river of vapor. In the third episode, it is noted that water in vapor/cloud form falls (or sometimes doesn't) and potentially brings life to grasslands and deserts, and so on. In doing so, the series creates a storyline grounded in basic climate knowledge and introduces a lesson that may help viewers understand a part of how the climate system operates.

Fossa in the dry forests of Madagascar. Jeff Wilson, courtesy of Netflix

CONSERVATION

It's not surprising that the conservation aspirations of the Planet series have also been a subject of debate. The series have been criticized for only explicitly addressing anthropogenic harm to the natural world in stand-alone episodes, like episode seven ("Our Blue Planet") of Blue Planet II. Here the focus was on plastic pollution in the world's oceans; and yet, although the issue was addressed, it still created frustration in many viewers, who were told they could "do something" but not what that something was. In fact, it is surprising that there has not been more of a conservation focus, considering the "Blue Planet Effect" that followed the release of Blue Planet I and II. In 2017, the United Kingdom's Environment Secretary, Michael Gove, said he was "haunted" by the images of ocean pollution featured in Blue Planet II, and that cutting plastic would be the focus of future proposals. And there are other indications that, perhaps, the series have had tangible impacts on global conservation efforts. In 2018, there was a 24-hour global boycott on plastics, as well as talks on nationwide single-use plastic bans, and an uptick in marine biology studies.

Our Planet has been explicit about its conservation goals. That is immediately apparent in the opening sequence, which features the image of the Earth rising behind the moon, a nice ode to the iconic photo taken by Apollo 8 in 1968. The original "Earthrise" photo is an environmental icon, credited with inspiring the United States Environmental Protection Agency and civil society's organization of the first Earth Day in April 1970. Attenborough notes in the introduction that "this series will celebrate the natural wonders that remain and reveal what we must preserve to ensure people and nature thrive."

Southern humpback whales feeding on krill in the Gerlache Strait, Antarctic Peninsula. Netflix

This time around, the conservation efforts are rooted in a partnership with the World Wildlife Fund (WWF), the leading global network of independently operating branches that work tirelessly on pollution, climate change, invasive species, and more as the drivers of wildlife loss.
POV asked Megan Leslie, president and CEO of WWF-Canada, how this partnership contributes to saving biodiversity and life. Her response was precise: WWF offers filmmakers locations and scientists to help ensure scientific accuracy, and it captures "the people who are inspired after watching this series" through a goal-oriented website that closes out each episode and "allows the series to live on through action." This action is important considering that in Canada, between 1970 and 2014, half of the 903 wildlife species monitored by WWF were in decline. If these stats aren't clear enough, "the whole web of life is in crisis," says Leslie. The hope for her organization is that a partnership with such a successful TV franchise will inform audiences about the planet, helping them "to understand why nature matters," and inspiring those viewers "to take action themselves" with guidance provided by WWF.

"Right now, we are facing a man-made disaster of global scale, our greatest threat in thousands of years: climate change." Those dire words were spoken by Attenborough, who was given the "People's Seat" at the United Nations Climate Change Conference (COP24) in December 2018 in Katowice, Poland. His stirring speech came just months after the October release of Special Report 1.5 (SR15) by the United Nations Intergovernmental Panel on Climate Change (IPCC), which followed up on the optimistic COP21 in Paris, at which countries pledged to keep the global average temperature rise well below 2°C, and ideally to 1.5°C. The new report warned that if we don't keep temperatures from surpassing 1.5°C within the next 12 years, the world could experience significantly increased sea-level rise, ocean acidification, coral bleaching, ice-free Arctic summers, and more extreme drought, floods and heat. Attenborough further stated that "no other creature in the world has had the effect on the planet that the human species has," and we should "recognize the responsibility that we now have in our hands."

In a time when the public, young and old, is speaking out en masse in strict condemnation of the deterioration of the natural world, it no longer seems possible for docu-series in the natural history and wildlife genres to only delicately interject climate change knowledge and conservation action. As the BBC's NHU begins its next phase of series, which includes Perfect Planet (2020), Frozen Planet II (2021) and Planet Earth III (2022), eyes will be on whether Attenborough and his associates will truly speak on behalf of the public or not.

Michael John Long, POV Magazine

POV Magazine is Canada's premier source for documentary culture. Subscribe today to read more in print or sign up for our newsletter to stay up to date!
https://povmagazine.medium.com/whats-happening-to-our-planet-660cef3e4c7a
['Pov Magazine']
2019-07-03 17:17:41.117000+00:00
['Nature', 'Netflix', 'Climate Change', 'Documentary', 'Environment']
Circular Reasoning
X and X cannot not be X and Y. Circular reasoning. You have to use circular reasoning if you want to get to the truth, and we do this all the time (X is X). Complementarity is the basis for identity because duplicity is the basis for a unit. X and X cannot not be X and Y (pi is the mediating observer) (for all observers) (the only observer).
https://medium.com/the-circular-theory/circular-reasoning-8c5da2836736
['Ilexa Yardley']
2017-07-04 20:12:43.455000+00:00
['Deep Learning', 'Artificial Intelligence', 'Venture Capital', 'Virtual Reality', 'Data Science']
Introducing the future of visual discovery on Pinterest
By Dmitry Kislyuk | Pinterest engineering manager, Visual Search

Our mission at Pinterest is to help you discover and do things you love. Under the hood, we're powering a visual discovery engine with 100B ideas saved by 150M people around the world. Today we're introducing three new visual discovery products–Lens, Instant Ideas and Shop the Look–that turn any image into an entry point to finding more ideas.

Visual search at Pinterest

In 2014 we started investing heavily in computer vision and created a small team focused on reinventing the ways people find images. Less than a year later we launched visual search, a new way to search for ideas without text queries. For the first time, visual search gave people a way to get results even when they can't find the right words to describe what they're looking for. Last summer, visual search evolved as we introduced object detection, which finds all the objects in a Pin's image in real time and serves related results. Today, visual search has become one of our most-used features, with hundreds of millions of visual searches every month, and billions of objects detected. Now, we're introducing three new products on top of our visual discovery infrastructure.

Pinterest Lens

Lens is a new way to discover ideas with your phone's camera, inspired by what you see in the world around you. Just take a photo of anything that interests you–a pair of shoes, seasonal produce, a chair you'd like to see in your home–and Lens will give you recommendations for similar objects or ways to bring the idea, recipe or style to life. If you've ever wanted an easy way to find a jacket like something you spot on the street, or to get ideas for decorating your house in the same style as your designer friend, all with a single tap of a button, Lens is for you.

Lens builds on the advancements we've made in computer vision and machine learning over the last year and goes beyond visual similarity. Lens tries to understand the objects you're looking at and how they could be useful to you. For example, if you see strawberries on sale at the grocery store, just point Lens at them, take a picture and get creative recipe ideas (like chocolate strawberry waffle balls…on a stick!?), not just strawberry images. If you point Lens at an awesome pair of sneakers you find, Lens won't just find visually similar sneakers, but will return outfit ideas with the same sneaker style. You can even point Lens at the night sky to find ideas related to constellations, galaxies and UFOs.

We'll share an in-depth technical blog post on how we built Lens in the coming days. In the meantime, we're busy rolling out Lens in beta on iOS and Android. Ultimately, Lens will get better as more people use it. If you have feedback as you use Lens, please let us know!

Inspired by the world Lens has revealed, we're also introducing new ways to find ideas on Pinterest in just a tap.

Shop the Look

Shop the Look is a new way to shop and buy products you see inside Pins. It combines our computer vision technology with human curation to recommend a variety of related products and styles you can buy on Pinterest or from a brand in just a tap. For Buyable Pins results, we'll also show you ideas for bringing a look to life. For example, if you see a scarf you like, now you can see different ways to style it right on the Pin. We'll share an in-depth technical blog post on how we engineered Shop the Look soon, and for now, you can check it out on iOS, Android and the web.
Instant Ideas

Building on the machine learning that powers 10B recommendations on Pinterest every day, today we're launching Instant Ideas, a new way to transform your home feed with similar ideas in just a tap. As you're browsing through your home feed and find something interesting, you can now get more ideas like it by tapping the circle in the bottom right corner of any Pin. We then use these signals to personalize your home feed in real time, making recommendations instantly more relevant to your tastes and interests as you scroll through your feed. Check out Instant Ideas on iOS, Android and the web starting today.

With these new products, it's now easier than ever to find ideas across Pinterest and from the world around you. We're excited to see how people use these technologies. Look for technical deep dives on these products in the coming days!
https://medium.com/pinterest-engineering/introducing-the-future-of-visual-discovery-on-pinterest-48fb469b0d67
['Pinterest Engineering']
2017-02-21 20:48:24.031000+00:00
['Pinterest', 'Deep Learning', 'Computer Vision', 'Enginering', 'Machine Learning']
Building Basic React Authentication
Wait, What Did We Do So Far?

Before we move on, let's make sure we have a fundamental understanding of SPAs and react-router. If you've worked with routers and SPAs before, you're welcome to skip over this part!

Create React App is a fantastic project that handles a lot of the heavy lifting of creating a new React application. Behind the scenes, there are a lot of different pieces that need to be put together, particularly with module bundling. That topic is one for a different day, but for our sake, it allows us to focus on the app itself without having to worry about configuration.

Now, there are several different ways we can present web pages to users. Traditionally, web pages were served to users from a web server. The user would go to a URL like http://www.dennyssweetwebsite.com/hello, the server hosting my website would get the request, figure out the page they were looking for (in this case, hello), and return hello.html, an HTML file residing on the server. As the web grew more complex, these calls would resolve to a server application, running on something like PHP, which would generate the HTML page for the user and return that data. The critical piece to note here is that the URL specified by the user was directly related to a route on a web server, so the content being generated and returned lived at an actual address on the web.
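To make that concrete, here is a tiny sketch of the traditional server-side model. It is purely illustrative and not part of this article's React stack: the framework (Python/Flask), route, and page content are all made up, but it shows how each URL maps directly to a handler on the server that builds the HTML response:

```python
# Traditional server-side routing: the URL *is* a route on the server.
# Illustrative sketch only; the route and page content are invented.
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    # The server generates and returns the HTML for this address.
    return "<html><body><h1>Hello from the server!</h1></body></html>"

if __name__ == "__main__":
    app.run()  # every request to /hello resolves to the handler above
```

Contrast that with the client-side model below, where the browser is handed the whole application up front.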
On the other hand, Create React App scaffolds a client-side Single Page Application (or SPA). Single Page Applications are web apps that reside entirely in the user's browser. When a user makes a request for www.dennyssweetwebsite.com, they are instead handed my entire application. From there, we don't actually even need URLs. What the user can view can be handled directly by the state, without ever changing the URL.

The problem is that browsers and users are still highly dependent on the URL. Browsers allow you to move back and forth in history, bookmark specific pages, etc. Users may bookmark particular pages and want to jump directly there. They may even memorize URLs. Also, in all fairness, URLs are an excellent way to separate our content, especially when it comes to things like route-based lazy loading. For that matter, many single page applications still use a routing system to split their content. All this does is read the given URL, and instead of passing that change to a server, it displays a component for that URL. Rendering a component based on the route is precisely what we did in App.js above.

OK, the history lesson is over, let's jump into building some private routes.
https://medium.com/better-programming/building-basic-react-authentication-e20a574d5e71
['Denny Scott']
2019-07-15 18:22:13.464000+00:00
['JavaScript', 'React', 'Programming', 'Web Development', 'Front End Development']
From PHP to Rust: Part I — What is a system language?
PHP has served me well, until now. I have felt that PHP has become too restrictive for my own projects. I'll get this straight right away: PHP is not a bad language. It is a very useful one, especially when paired with frameworks like Symfony and Laravel, but it still lags behind in features when compared to other languages, like JavaScript (Node.js), Python, Go, or even Rust. No amount of frameworks can fix that.

I decided to jump ship for two reasons: I needed to make something that PHP doesn't support, and that was microservice-friendly. I don't think that pulling thousands of files from the Internet for each Kubernetes Pod can qualify as friendly. In my case that translated into creating a WebSocket server. I needed to handle video calls between two peers using WebRTC, a protocol that connects both peers directly, while also transferring very important files to the other party through WebSockets to avoid reloading the browser and losing the connection. PHP wasn't up to the task. It doesn't support WebSockets natively, and for WebRTC, you can only hack your way in unreliable and resource-hogging ways. There is also the problem of not having coroutines, meaning everything blocks the request-response cycle. PHP is nice and all, but I had to dive in and create something from scratch. That's where Rust comes in.

Going deep. Going hard.

Rust is a system language. It's very different from PHP, which is an interpreted language: a binary "reads" files with PHP code, transforms the code in each file into opcodes in memory, and executes them. OPcache and the upcoming JIT engine in PHP 8 may speed things up, but PHP remains a web-bound language. With Rust, on the other hand, you can do all of that with some work, but also build a graphics engine, a web server (like NGINX), or even a transcoder. There is no limit. And that's one of the core principles of a system language like Rust: you are the pilot, and you just have to know how to drive if you want to go somewhere, anywhere.

Rust for dummies

Most of the code you write in Rust, as with C++ or any other similar language, is ultimately compiled into a single binary, which, depending on what you do, may take up some megabytes. That binary (or "executable") is bound to the system architecture (x86-64, ARM, RISC-V, for example) and the operating system kernel (Windows, Linux, macOS, Android, iOS). You would think that changing "targets" would require changing the whole code, but instead Rust comes with a switch that lets you point to where you want to deploy your software, and it supports a whole lot of targets. Unless you're really doing something very low-level, you won't need to do anything but change the destination.

The PHP stack on Rust

The "PHP" stack mostly comprises a web server, like NGINX or Apache, the PHP interpreter running behind it over a network socket, and the PHP files to read. Also, you need to get your permissions right, otherwise your application won't work, or your whole server will become vulnerable to attacks.
https://medium.com/p/871136fae31d
['Italo Baeza Cabrera']
2020-10-14 04:17:42.327000+00:00
['Rust', 'Software Development', 'Development', 'PHP', 'Programming']
5 MANTRAS TO START AN ANALYTICS REVOLUTION
Alteryx

You're amazing and you get it. You've solved challenges using every data source imaginable. You've unleashed the answers that helped your team take action, and you're sharing outcomes to make everyone more successful and productive. Truly — you've rocked it from zero to analytics hero!

But how do you turn this spark into a full-fledged analytics revolution? How do you change the mindsets and daily practices of analysts and data gurus across your company to empower every team to deliver better, faster, game-changing results? Here are five mantras to repeat while you kick-start your analytics revolution and build a sustainable, high-performing analytics culture.

WHAT TYPE OF CULTURE ARE YOU LOOKING TO GROW?

Building out a solid analytic culture is not just about finding the right questions to ask, but about understanding your data landscape well enough to know whether you have the data and the expertise to prioritize those questions — before trying to solve the problem with analytic technology. In fact, before you even take the first step on the journey to analytic transformation, there's a fork in the road and an early decision to make: what kind of traveler are you looking to be?

With analytics, there are (broadly) two ways to change your world. First, you can take a process that you work with today and try to improve it by making it faster, cheaper, or more efficient. Stephen Covey, author of The Seven Habits of Highly Effective People, would call this "sharpening the saw." We're taking an existing business process and using analytics technology to refine the underlying steps. A nip-and-tuck spreadsheet removal here, a self-service analytics application there, and success! We've taken a six-hour weekly manual slug-a-thon with your data down to 20 seconds of automated bliss! You've won your day back! Rejoice!

With Covey's method, you're helping your best people tackle their problems head-on and freeing their days to work on more valuable activities. Just imagine what a team of analysts could achieve if all that manual spreadsheet munging were history. Covey's culture is all about winning. It's about getting rapid, measurable change to existing processes and then banking the winnings to invest in new projects. Winning analytics teams quickly evolve into high-performing units that thrive on the buzz of victory, and this feeling is contagious! You'll find yourself quickly surrounded by new recruits as you scale this approach across your company, so use the tips in the rest of this article to manage that growth.

Second, we have the alternative path: disruptive transformation. Don't worry, this isn't the dark side. But it does involve shaking up the status quo and fundamentally changing much of what your company considers "normal." Disruptive transformation often looks to parts of your business where people make decisions: a bank loan approval here, a next best offer there, and so on. Analytics (especially advanced analytics) can make a huge impact by automatically driving business actions that lead to faster and more competitive outcomes. In fact, prescriptive analytics are enormously valuable — a successful deployment of analytics against a critical business process can often produce gains that pay for entire analytics divisions in a single release. Remember — disruptive transformation isn't better than saw-sharpening — your path depends on where you want to end up.
Before you start down any analytics path, remember to ask yourself what you're trying to build, and then assemble your teams with those skills in mind. Modern analytics has everything you need to play the supporting role, whichever direction you choose.

ANALYTIC CULTURE REQUIRES TIMING

It's not just about choosing the right path — a great analytic culture also needs a sense of timing. You need to know where you are in the journey in order to make the right move. If you're just embarking on your journey with self-service analytics, then you need to focus on answering those all-important business questions and learning how to get from data to insights in a fast, easy, and repeatable way. Your journey continues as you start to use these same analytics not only to confirm what's happening in your business, but to start to make models of what might happen next: forecasts, predictions, simulations. This isn't for the purpose of idle speculation; it's a means of empowering your analysts to take competitive action.

Eventually, companies like yours reach that tipping point when analytics needs to jump beyond a single user, a single desktop, or a single team, and there needs to be a way to take insights and actions and share those outcomes more widely. The discipline that comes from walking this path is really what we call analytic culture — getting analytics to power your entire business. But it's not a one-off. Successful analytics leaders will find themselves right back at the start with a new project, technology, or department and will need to begin the journey afresh — learning new practices each time.

ANALYTIC LITERACY MEANS DISCOVERY, ENABLEMENT, ENLIGHTENMENT, AND BUILDING BRIDGES

W. Edwards Deming, the famous statistician, once said, "Without data you're just another person with an opinion," and I'd agree. One of the main reasons companies need an analytic culture is to step away from random gut feelings and the opinions of the HiPPO, aka the highest-paid person in the room. But data-driven opinions? Developing analytic literacy is the best approach to developing stronger insights with your most valuable resource: data.

A huge enabler for analytic literacy is widespread access to governed and trusted company-wide data, along with technology that makes it easy to discover what's available from databases, local files, reports, dashboards, and workflows. Enable anyone who wants to learn by running office hours — fixed times every week where you show that your experts are there to answer anything that's on your community's mind. Identify missing skills or connections and build out coaching plans for those who want to improve, and use certification as a way of assessing improvement as your rock-star analysts walk the path. A mixture of core skills, collaboration, and curiosity leads to much bigger impacts for your company. Call out where your analytics tribal knowledge has led to a breakthrough and celebrate your analytic champions — inspire everyone on your teams to become THIS good! Finally, build bridges wherever you go: subject matter experts, data scientists, and especially IT. You want everyone with you on this journey.

HARNESS SOFT AND HARD SKILLS TO FIND BALANCE IN YOUR ANALYTICS CULTURE

As you build a culture of analytics, you'll be dealing with new data and new technologies, but you won't be successful if you don't understand the people who are on the journey with you.
People don't neatly fall into boxes for classification, which means that you'll need to deal with a spectrum of different behaviors if you want your analytics teams to truly perform. Personalities range from highly empathic and compassionate to results-driven and focused on hard skills. People are complex in how they take action too — for some it's all about having conversations at the water cooler or understanding how people feel, whereas for others, it's about letting the mathematics do the talking in complex machine-learning models. Finally, there are the analytic outcomes themselves, which range from pure intuition and educated guesses to a reliance on models and algorithms.

Your analytic culture needs to be a balance of these extremes. Too much weight on the softer side and you risk building a hit-or-miss analytic culture that won't deliver sustainable results. Too much weight on the hard side and you risk building a complex house of cards that's equally unsustainable (unless perhaps you're Google or Facebook and can afford those Silicon Valley salaries!). The middle ground: a balance towards curiosity, blending code-free, approachable analytics with code-friendly building blocks, and working towards actionable insights, helps generate both team satisfaction and sustainable analytic performance.

GETTING ANALYTICS ACROSS THE LINE

Remember how we talked earlier about how a strong analytic culture gets addicted to winning? You "win" at analytics by getting your model, your report, your actionable insights into the hands of your audience, be they other employees, customers, or even other applications. Without that delivery, you're not giving your teams credit for their hard work behind the scenes. Getting analytics over the line and making it actionable is often the hardest barrier for teams to cross — according to Rexer Analytics, data scientists report that only 13% of their models ever get deployed, and the problem can be just as serious for analysts in the line of business.

Building an analytics team that's obsessed with winning means that you're looking to deploy early and often — crossing the last mile of analytics often doesn't need perfection, but every release should produce new value and remove waste from your business processes. With analysts, there's waste every time they run a manual process that could be automated. Most spreadsheet power users spend north of 28 hours a week in that tool — nine of which are simply spent reworking sheets and macros to fit new incoming data every single week. To remove the waste and make that work widely accessible, consider wrapping the process into an analytic app and letting users self-serve without needing to call you every day for a newly cut report.

For data scientists and IT? Take an R or Python model into production without having to recode it in other languages such as Java or C++. Get a model operational and serving real-time results inside your business applications, products, or services.

KEY TAKEAWAYS

Deploy often. Validate requirements. Fail fast. Get feedback. Rinse and repeat these mantras. The relationship between a data-driven organization and a corporate culture of analytics is strong. Keep this core loop alive and create a winning analytics culture.
This blog was originally posted here: https://www.alteryx.com/input/repeat-after-me-5-mantras-to-start-an-analytics-revolution
https://medium.com/input-by-alteryx/5-mantras-to-start-an-analytics-revolution-559366695ee5
[]
2020-04-02 19:47:08.144000+00:00
['Big Data', 'Alteryx', 'Analytics', 'Data', 'Data Science']
Grease the wheels of Machine Learning
Machine Learning has become the most talked-about topic these days. What is it that machine learning does differently compared to traditional software applications? In traditional software applications, we developed systems and logic for existing rules, and we kept updating the systems as the rules changed. But change is the norm in business and technology. The main feature of machine learning is that it allows you to continually learn from data and predict the future. This powerful set of algorithms and models is being used across industries to improve processes and gain insights into patterns and anomalies within data.

So, what exactly is Machine Learning?

Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed. It is a technique that teaches computers to do what comes naturally to humans and animals: learn from experience. To put this in simple terms, consider a two-to-three-year-old kid. You would have to teach them how to read and write the alphabet, the basic rules of the language, and vocabulary. After a few years, the kid learns how to read and write without much help. You trained the kid to be able to read and write, and within a few years they no longer need much help. This is exactly what Machine Learning is.

Machine Learning is used in our everyday lives: for example, Google Maps, product recommendations on Amazon, Alexa, Siri, Facebook product suggestions, Snapchat filters, Instagram's auto-recommended emojis, the image search in Google Photos that enables you to find the species of a plant or insect, and many more.

How does Machine Learning do what it does?

Machine learning algorithms use computational methods to "learn" information directly from data without relying on a predetermined equation as a model. The algorithms adaptively improve their performance as the number of samples available for learning increases. A Machine Learning program is said to learn from experience E with respect to some task T and performance measure P if its performance on the task T, as measured by P, improves with experience E. Basically, this means that the computer program learns from experience and gets better as it experiences more such incidents by accumulating more data.

Categories of Machine Learning

Machine learning techniques are required to improve the accuracy of predictive models. Depending on the nature of the business problem being addressed, there are different approaches based on the type and volume of the data. In this section, we discuss the categories of machine learning.

Supervised Machine Learning algorithms

In Supervised Machine Learning, the algorithm applies what it has learned from previous experience and produces an inferred function to make predictions. The model compares its calculated output with the intended output from the existing data set and corrects its errors. The disadvantage of this approach is that the data needs to be labeled.
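As an illustration (not from the original article), here is a minimal supervised-learning sketch in Python with scikit-learn. The labels in the bundled iris data set play the role of the "previous experience" the model learns from, and accuracy serves as the performance measure P:

```python
# Minimal supervised learning example with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # features and labels (labeled data!)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=200)  # task T: classify iris species
model.fit(X_train, y_train)               # experience E: labeled examples

predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))  # performance measure P
```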
Unsupervised Machine Learning algorithms

Unsupervised learning studies the hidden patterns in an unlabeled data set and builds a function. The system does not figure out the "right" output; instead, it explores the data and can draw inferences from data sets to find hidden structures in unlabeled data. This is used when the training data set has neither been classified nor labeled. The disadvantage of unsupervised algorithms is that their application spectrum is limited.

Semi-supervised Machine Learning algorithms

As the name suggests, semi-supervised machine learning algorithms are a mix of both supervised and unsupervised algorithms. They were developed to overcome the limitations of both supervised and unsupervised learning, and typically work with a very small amount of labeled data and a huge amount of unlabeled data.

Reinforcement Machine Learning algorithms

Reinforcement Learning does not require labeled inputs or outputs. Instead, it learns from the environment by trial and error. This method allows machines to determine the ideal behavior within a specific context in order to maximize performance. A few applications of this method include game theory, aircraft control, and business strategy planning.

Why is Machine Learning so popular?

The amount of data generated per day on the internet is almost 2.5 quintillion bytes, and the amount of data generated has been steadily increasing over the years. Processing unstructured data from the internet and structuring it so that valid conclusions can be drawn from it is a huge task. Blunders like missing data or wrong interpretation can cause a business huge losses. To overcome this, Machine Learning can be used.

Machine learning enables the analysis of massive quantities of data. It generally delivers faster, more accurate results to identify profitable opportunities or dangerous risks. Combining machine learning with AI and cognitive technologies can make it even more effective at processing large volumes of information. Machine Learning aims to build a model that does the task with little to no human intervention.
https://sivashankarivaitheswaran.medium.com/grease-the-wheels-of-machine-learning-ee31d98a9fa5
['Sivashankari Vaitheswaran']
2020-11-03 13:47:41.433000+00:00
['Machine Learning', 'AI', 'Techology', 'Computer Science', 'Data Science']
This is Why Survivors Can Never Un-Know
There are certain words survivors can't ever seem to escape:

"He said, she said."
"There's her side, his side, and the truth."
"She's just in it for the fame and money. Nothing REALLY happened."
"She put herself in that situation. She should have known better."

*

How many times must we read these persistent, blame-filled statements that dismiss, minimize, and assign fault to victims of sexual assault and rape? The language here puts all the emphasis and responsibility on the victim instead of the criminal who perpetrated the crimes. This one, however, might be the worst:

"Why did she wait so long to report? It must not be true."

How does one equal the other? Many victims choose not to come forward out of shame, fear, reputation, power, loss of relationships with loved ones…any number of reasons. Sexual crimes are so incredibly invasive — people love to tell us what they would have done, yet they cannot imagine what it's like to live with these memories and triggers daily…of someone sexually invading you. Imagine what that's like. You probably can't. Which is why many people make these ridiculous statements. The truth is, there are very real reasons why survivors don't report crimes that've happened to them. Here are just a few.

Self-Blame

Many survivors feel complicit — we blame ourselves. How could we put ourselves in a situation to be assaulted? How could we 'let' it happen? Self-blame is rampant among sexual assault victims, made worse by others blaming and shaming us. Sex complicates things. It's an intimate act. For an outsider, if sex is involved, they make assumptions: How do we know she didn't want it? How do we know she didn't truly consent? How do we know she isn't lying? How do we know she doesn't have ulterior motives?

When, in actuality, sexual assault and rape aren't about intimacy — they're about violence, control, and power. Invasion. Criminal acts. We are discussing crimes, here. Why is the victim required to prove our innocence? For the victim whose entire life is now changed forever, these questions are not only invalidating as well as presumptuous, but they're also almost beside the point. Almost. Our worldview that people, especially men, are inherently good, has changed. (You can #NotAllMen us — and by all means, feel free — because we already know that there are good, even great, men who would never hurt us, and if you're reading this, you're probably one of them. I'm not talking to you or including you in this conversation. This isn't about you.)

The Rape

I haven't previously shared my own rape story; I'll do so here for the first time. *Trigger warning.

I met a man for sex, something, as an adult, I'm allowed to do. I'd separated from my ex and wanted to explore, so I did. We'd met twice before, and it was good. Complete opposite of my ex — big guy, skillful. We discussed upfront what I was okay with and what I had no interest in. He was smart — he built trust. I trusted my instincts. My bad.

When he became violent with me, engaging in painful sex acts I never consented to, I froze. I dissociated. Despite my desperate pleas to stop, he ignored me and continued on till he finished. As I lay limp as a rag doll, he picked me up and put me in the shower, washing away all possible evidence, and then walked me to my car, ordering me not to tell a soul or he would send his brothers after me and my kids. Terrified and shaking, I don't remember how I made it home — only what happened. I couldn't un-see it. I kept replaying it all in my head.
This is a common reaction to extreme trauma: our brains are flooded with chemicals during any kind of intense, traumatic situation, in particular during a sexual assault. I went into survival mode. (Source: Daily Cardinal)

I shut down. I didn't share with family or friends. I stopped dating, my body recoiling at the thought of any kind of touch. My skin crawled with every flashback. I cried constantly. It took me months to share with anyone, and even then, only my therapist. I didn't use the word 'rape,' because I blamed myself.

"I put myself in that…situation. It's my own fault," I tell her, voice shaking. "How can I call it…rape?"

"It's not your fault, honey," she soothed. "You went there for consensual sex, agreed to in advance. He gained your trust. He didn't gain consent to hurt you."

Self-blame. Humiliation. Embarrassment. This is why many of us don't disclose. I still feel like it's my own fault, and yet from a logical, even legal perspective, I know it's not. I did not rape myself. I did not perpetrate criminal acts on myself. I am not a rapist. He is. Should I have reported him? Tough one, and here's why I didn't — terror. For me and my children. Judge me if you want — you can't possibly reach the level of judgment I've already placed on myself. Sharing this here with you is like a betrayal of my heart. I'm waiting for the fallout.

Mental Health

It's no secret that many sexual abuse survivors wrestle with mental health issues as a direct result of the trauma we experienced. As a survivor of childhood sexual abuse at age eleven, I experienced panic attacks, anxiety, hypervigilance, and depression — not understanding or knowing that these terrifying feelings had names (or even a diagnosis of PTSD). What many people have no understanding of is exactly how trauma, especially sexual trauma, affects the brain of a survivor. Which is why, when the more uneducated among us tell us to "just get over it already," we'd love to. Alas, our brains, our very cells, don't comply.

"Comprehensive systematic review and meta-analysis of 37 longitudinal observational comparative studies including 3,162,318 participants found an association between a history of sexual abuse and a lifetime diagnosis of anxiety, depression, eating disorders, PTSD, sleep disorders, and suicide attempts." (Source: NIH)

Multiple studies in epigenetics show that the brains of trauma survivors actually change as a result of these experiences, and that these changes can even be passed down to future generations.

Triggers

You may be familiar with this word as a funny kind of game trolls play on social media. For survivors, it's something else entirely. We can be going about our day and then BOOM!, we're hit with a flashback, scent, or news story that brings us right back into the trauma we experienced. This is our brains on trauma. Despite every effort we've made to "just get over it and move on," we cannot control triggers that pop up out of nowhere. Officially defined, "a trigger in psychology is a stimulus such as a smell, sound, or sight that triggers feelings of trauma. People typically use this term when describing posttraumatic stress (PTSD)." (Source: GoodTherapy)

For any survivor, this makes sense. For any non-survivor, I can tell you from experience, this makes no sense. Example: my kids love horror movies. I do not. They recently played music from a horror movie after dinner, and I suddenly jumped up and yelled: "Turn it off! Turn it off right now!"

"Okay. Geez.
What's your problem?" my daughter asked, wide-eyed with incredulity (you see, I am not a yeller).

"I don't know. I don't know. I don't know! Just make it stop!" I answered, shaking uncontrollably. I left the room, bolted upstairs, focusing on my breathing till I calmed down. Even I had no idea why that bothered me so. Talking it through with my guy helped. Hearing that scary music put me right back into that mind space of waiting to be victimized as a child. Waiting for the bad guy to get me — again. Of being that innocent child who had to hold that horrible secret — again. It wasn't conscious on my part, yet that music became a trigger for me, bewildering my children. Consciously, I didn't know. Subconsciously, I cannot un-know.

The #MeToo Movement

The #MeToo movement has opened the doors for many survivors to come forward after years, even decades, of not saying a word about what they experienced. Most childhood sexual abuse victims don't disclose until well into adulthood — if they disclose at all — with a median age of 48 and an average age of 52. (Source: ChildUSA) Hearing others' stories creates a compelling sense of courageous bravery we haven't seen before, a sort of receptiveness we hope will make a difference in how we're treated. As we've seen, sadly, this is not always the case, particularly when famous men are involved (Jackson, Kavanaugh, Cosby, Weinstein)…just to name a few.

Sharing Our Stories

Yet, if you've read my previous books, Broken Pieces and Broken Places, or any of my blog or Medium posts, you know this about me: I write what scares me. I tell uncomfortable truths. I want other survivors of rape and sexual assault to know they're not alone. I'll be the voice. I am the voice.

Despite non-survivors telling us how terrible we are for coming forward years later, or not reporting, or attempting to blame us for lying or gold-digging or fame, we know. We were there. We live it daily through PTSD, flashbacks, nightmares, holding our bodies rigid and primed for another assault. Through not trusting, not giving people a chance, never fully relaxing. Through anxiety, panic, depression, migraines, and a host of immune disorders we're much more susceptible to because of extreme trauma.
https://medium.com/survivors/this-is-why-survivors-can-never-un-know-49b3d73173d5
['Rachel Thompson']
2020-03-13 02:07:48.408000+00:00
['Relationships', 'Life', 'Mental Health', 'Abuse', 'Life Lessons']
Investing in New Frontiers
Facebook was founded in 2004. Bitcoin in 2009. Old frontiers are often still very new. Knowing what they'll do next is nearly impossible.

"Abiding in the midst of ignorance, thinking themselves wise and learned, fools go aimlessly hither and thither, like blind led by the blind." — Katha Upanishad

That's the first time anyone ever said anything like "the blind leading the blind." It's from 800 BCE. That's a long time ago. Rome had not even been founded yet; that's how long ago it is. Homer had not even published the Iliad. I wonder, if Homer were alive today, what he would write about financial markets:

A millennial sits in a coffee shop
He sings with tweets
"HODL my crypto to all the moons!"
His goal is to own a lambo.

That's my attempt at writing a modern Homeric stanza. As an investor, or someone interested in markets, history is the only road map we have — even if that means studying Rome, Homer, and the Upanishads. It's also why I think it's important to be careful with things that have a short history. And a short history is anything under 100 years old. I think of cryptocurrencies, which, in the grand scheme of financial markets, are still very new. So is social media. So are VR and AR. Our news, conversations with friends, and the things we read may make them appear old over time, after a year of talking about them, but in the grand scheme of things that is very little time.

Today, there are cryptocurrency experts everywhere. They are both bearish (think it's going down) and bullish (think it's going up). They're raising money for their ICO or they're writing about the revolution taking place. However, what's fascinating to me is how someone can become so sure of one thing when that thing has less than 10 years of history behind it. The entire world is connected now through Twitter or Facebook. Maybe that's why "following" an expert is so easy. So, should we blame social media? But even that is still in its infant stages. We're debating the privacy, fake news, and data harvesting that all come with the new social media world. I always remind myself that Facebook was founded in 2004. That's less than 15 years of history. Is that really enough time to be an expert on something? And something that connects a few billion people? We still don't fully understand the ramifications of it.

When I was growing up in the Bay Area, I remember when they were getting ready to introduce FasTrak. Those were the little readers that sat on your car's dashboard and made it easy for you to drive through tolls on the Golden Gate Bridge. A machine would scan it every time you crossed the bridge, and that was your payment method. One of the main arguments against FasTrak at the time was centered around the government. People would say things like, "Now they will know where I am and when I cross a bridge!" and "The government should not be able to track us crossing the bridge!"

Let that sink in. In just a few years, people went from freaking out about their privacy crossing bridges to carrying phones with GPS, video, and apps like Facebook that suck in everything about them, their lives, and their day-to-day activities. I wonder if Mark Zuckerberg sits back in his chair at times and just says, "Holy sh*t, what have I created, how did this happen so fast?" I wonder what Satoshi would say about Bitcoin.
Maybe he's sold a few coins and is sitting somewhere on a beach with millions, thinking, "I can't believe that just happened." I think that's more probable than some revolutionary V for Vendetta figure striving to change the world. These products happened so fast, and still have so little history, that their creation and permanence in society is still inexplicable, I believe, even to those who started them. Why Facebook and not Myspace? Why Bitcoin and not Hashcash (a 1990s cryptocurrency)?

Young and exciting industries, like crypto and social media, are total game changers for society. But as much as we want to understand them, they are still so new. They are just happening, and there's no formula or anecdote that can explain it. At least not yet. I'm not saying Bitcoin or social media is a bubble, nor am I saying they're the future of our world. What I'm saying is that these topics are still too young to be an expert on. There isn't enough history for anyone to truly get it. I think someday, after many years have passed and the smoke has cleared, we may look back and realize that ultimately it was a case of "the blind led by the blind."
https://medium.com/luchini-in-the-air/are-your-leaders-blind-73471455b866
[]
2020-01-07 21:21:46.498000+00:00
['Facebook', 'Investing', 'Bitcoin', 'Social Media']
The Storyteller’s Guide to the Virtual Reality Audience
4. 360° is less than 180° The more there is to see, the less the audience remembers. In our third test, audiences with a 90° range of vision could recall nearly every event in the story, whether the information was physically in the room or relayed through the audio. However, audiences in the 360° view recalled fewer details of the story and the environment. For example, in the 90° scene, all of the participants in the debriefing referred to Taro by name. In the 180° scene, Taro was sometimes referred to by name, but was more often given descriptors like “young man.” By the 360° scene, few remembered Taro’s name; instead, they referred to him offhandedly as “the kid at the computer.” Much of the story information, including character names, was delivered through the audio. The fact that participants in the 360° scene couldn’t remember Taro’s name (among other story details) suggests that they were focusing less on the audio in 360° than in the 180° or 90° scenes. Perhaps there was too much information in 360° for the audience to process. When telling a story in 360°, we need to consider how to combine audio and visual elements without overloading the audience. …but 5. 360° is more than full circle The more complete the environment, the more it resonates. Audiences in the 360° scene were more aware of the tone of the piece, which they attributed to the pacing and shifts in the lighting. They were so attuned to the tone that when asked who was in control of the story, they described the storyteller as the mise-en-scène itself, or used some abstraction, like saying the storyteller was the “rhythm” of the scene. Audiences in the 360° scene were also more attuned to Taro’s feelings. They could clearly and unequivocally identify that Taro was feeling “lonely,” and sometimes felt that Taro’s feelings were reflected in the mise-en-scène itself, whereas those in the 90° and 180° scenes really struggled to characterize Taro, claiming that they did not have enough information to draw conclusions about him. There’s something interesting happening here. It may be that when you feel present in an experience, you are more likely to rely on abstractions and pick up on feelings, and when you are in “detective mode” you are more likely to pick up on story details but have difficulty accessing feelings. Perhaps being present and retaining story details are fundamentally at odds.
https://medium.com/stanford-d-school/the-storyteller-s-guide-to-the-virtual-reality-audience-19e92da57497
['Vr Ar Media Experiments']
2016-06-07 18:24:18.211000+00:00
['Storytelling', 'Audience Experience', 'Virtual Reality', 'Media', 'Insights']
Designing for Data Visualization
Data Visualization at IBM Our clients are from various industries as well as organizations of all sizes, from large institutions to lean start-ups. But regardless of size or industry, our users all have the same goal. They have data, they have questions, and they need an analytics tool that will help them make sense of their data and turn it into useful business insights, while reducing uncertainty. When it comes to designing the details of a data-driven product, there are a few things that we keep in mind to create the best possible experience for our users. What is the power of data visualization? Consider this: you receive a postcard from a friend in Venice. The glossy photo contains a typical Venetian scene — a view of the Grand Canal, a gondola navigated by a man in a white shirt who appears to be singing, and stone bridges that fade into the horizon. Your friend writes about how beautiful it is and ends the note with “you simply have to see it for yourself!” Suddenly you’re overcome with excitement and you begin trawling travel booking sites looking for cheap flights and accommodation. One postcard just isn’t enough of an experience for you. You want to go there yourself, explore the tunnels across the stone bridges, and hear the sounds of the gondoliers singing as you venture off the Grand Canal down the backstreets of Venice. A snapshot is simply not enough to satisfy your need to explore and see things for yourself. You’ve heard about St. Mark’s Square, and while it may not be pictured on the postcard, you really want to see it and it can’t wait any longer. It’s the same with data visualizations. Users are not typically satisfied with simple postcards, no matter how picturesque they may be. They need an experience that is as immersive as possible, while making it easy for them to uncover deeper insights and drill deeper into their data in order to make better business decisions. How can you experience your data? Our users are looking for a tool that not only presents a static view of their business, but one that also enables them to interact with that data in real time. Offering data visualizations that are flexible and change with the user’s thought process allows for true exploration. For example, in Cognos Analytics we offer an experience in the user dashboard that provides side-by-side data comparisons and methods to quickly see how these data points relate to each other and what these discoveries mean. The first dashboard of Cognos Analytics gives a good overview of data from Bikeshare Chicago’s overall ridership. By using the sorting, filtering, and brushing features, the city manager was able to see which neighborhoods have the highest percentage of subscribers in the 39 to 55 year age range from the previous year. These types of user goals show that data exploration isn’t the end, but rather a means to gain as much use out of the data as possible, whether it be applying it to business models to maximize profit or creating more accurate troubleshooting techniques. This application of data demonstrates the true power of data analysis tools. Thinking back to our Venice analogy, a tool like Cognos Analytics allows the user to really dive in and get immersed in the data, rather than only being able to see it at surface level. Guided exploration with cognitive analytics Insights have more value when you can act upon them, especially in business.
The tools we build enable users to quickly uncover interesting patterns and relationships in their data without the need for any coding. But endless exploration can sometimes lead to analysis paralysis, where the user keeps searching for every possible data correlation. This may be fun for some data scientists, but it is not something that most businesses can afford. To overcome this, a well-designed data exploration tool not only helps users explore freely, but also navigates them towards the insights they’re really looking to make. Watson Analytics allows easy data exploration through natural language processing, which means users can ask simple questions about their data regardless of their analytics expertise. In this day and age, as designers we need to reduce the gulf between man and machine. With the predictive capabilities of Watson Analytics, users can ask questions like “what drives sales?” and swiftly be presented with key sales analytics to help them make decisions. In this example, Sam, an airport operations manager, asks Watson Analytics “What’s driving overall satisfaction?” in English. Watson does the number crunching, creates a predictive model, and returns a number of different fields and graphics associated with levels of satisfaction for airport customers, in addition to displaying their predictive strength. The user is easily able to obtain this level of data analysis without needing a statistics degree. Again thinking of Venice, guided exploration in Watson Analytics is like having your very own personal tour guide who not only knows the best local restaurants to eat at, but also knows what to order and how to order it in Venetian. 3. Analytics your way When it comes to analytics, not all users’ needs are equal. One of the challenges of designing a data visualization tool is making it intuitive for anyone to use. To define universal experiences, we believe in designing for diversity. When designing our Business Analytics portfolio, we have multiple personas in mind that range from a novice analyst to a power user. This means the design needs to strike a balance between reducing the learning curve and conveying powerful analytic capabilities. Our customer experience strategy is focused on providing users with the tools and resources they need to be successful, regardless of their existing expertise in data analytics. Our research process includes many direct and indirect partnerships with our clients and users, including deep observational studies to learn about their workflow as well as how they want to use a data analytics tool. All of this hands-on research helps ensure that we design meaningful experiences, with embedded support and guidance to help users succeed. Analytics isn’t easy, and it’s not as intuitive as booking a trip to Venice. Our users have many different levels of skill, experience, and understanding. They range from new managers who want to get a better understanding of the business to power users who want to look under the hood to find out what statistical model was used in the analysis. In Watson Analytics we offer layers in our products that progressively disclose as much or as little of the magic that is used to generate visualizations. Watson Analytics doesn’t simply show a chart of Sam’s data; it highlights statistically significant numbers and results, saving him the trouble of doing the calculation himself (B). It also surfaces a series of insights and follow-on chart suggestions in the Discovery Panel on the right (C).
A data scientist working for a business professional can open up the Statistical Details panel to confirm and more closely investigate the models and parameters behind the results. Design plays a key role Designing data visualization is not just about the visuals, but about why those visuals matter in the data analysis process and how they can be of actual use to the user. We work on designing for iterative data exploration, a guided experience that helps the business user get to their business answers as quickly as possible, and a flexible workflow that supports analytics experts and novices alike. Design work in this field can have powerful implications for data users and affect how businesses operate. This is just a taste of our data visualization design approach at IBM Business Analytics. We could tell you more, but why not explore for yourself what design for data visualization can look like? First-time visitors and locals alike are invited.
https://medium.com/design-ibm/designing-for-data-visualization-c2b18359878c
['Arin Bhowmick']
2018-07-05 19:17:52.877000+00:00
['User Experience', 'Design', 'UX', 'UX Design', 'Data Visualization']
Eggs — The Perfect Weight Loss Food
Eggs — The Perfect Weight Loss Food Why it’s one of the best foods for permanent weight loss. Photo by Hannah Tasker on Unsplash I was overweight and obese for 25 years, failing at every diet you can imagine. Then, in 1985, I discovered how to succeed, lost 140 pounds, and I have kept it off for 35 years. One of my greatest discoveries was a miracle food for weight loss, the humble egg. Here’s why it is so helpful: 1) Eggs fuel you up and stay with you. Eggs are a magical high-protein food compared with other proteins in terms of carrying you through the day and eliminating hunger. If I had cereal and/or fruit for breakfast, I’d be ravenous by 10. With eggs for breakfast I would not get hungry later in the morning. Sometimes I’d work through lunch and not even notice any hunger at all during the day. This did not occur with other protein-rich breakfasts made with meats, peanut butter, or even protein shakes. Eggs worked like magic. All protein takes longer to digest and break down in your system than other nutrients like carbohydrates, and therefore it does not spike your blood sugar like carbohydrates do. Carbohydrate breaks down rapidly and raises your blood sugar level quickly. That triggers an overproduction of insulin to lower the blood sugar, and that low blood sugar triggers feelings of hunger. That’s why I got ravenous at 10. Protein does not spike your blood sugar this way, so you feel more satisfied than with other foods, and for a longer period of time. All protein has this satiety-producing effect. But eggs are special. After working for the last 35 years with thousands of weight loss clients, I’ve found that they too have experienced the same super-satiety of eggs, and those anecdotal findings are confirmed by scientific research. I’ve searched to find out why, but no one seems to know. It has been suggested that the unique amino acid makeup of egg protein is the reason why eggs are superior to other protein sources, but that is yet to be confirmed. Of course, it does not matter to us why eggs work so well. What counts is that they do. 2) Eggs are fast and easy. When talking about weight loss, one of the most important features of any food, besides its high nutrient and low-calorie content, is its convenience. We are so over-scheduled and rushed that we can frequently succumb to eating food that is fast and easy. We often start out with the best of intentions, but when we are late and rushed, we are liable to get the easiest and fastest thing, either from a drive-thru or in a package, like a snack cake or a cereal box. That can be a disaster first thing in the morning if we blow half the day’s calorie budget before we even arrive at work. Poaching an egg to put on a slice of toast takes 3 minutes, and you can be putting on your clothes while it’s poaching. And that’s only 150 calories or less. Scrambling or frying only takes a moment longer. If you plan what you are going to do before you go to bed, it’s a snap, and you arrive at work well fed and having hardly made a dent in your calorie budget. Hard-boiled eggs make an easy lunch to pack. Talk about fast and easy. Put one in a bag with some low-calorie bread and make an egg sandwich for lunch. It doesn’t even have to be refrigerated. Or just have a couple on their own while you read the paper at lunch. It’s a nice meditation, cracking and peeling the shell, sprinkling with pepper and salt and relishing the good taste and nutrition. That’s only 150 calories for lunch.
You’ll have plenty of calories left at the end of the day for a nice satisfying dinner. Eggs are versatile. You can do so many things with eggs! I make omelettes with egg substitutes, which bring the calories even lower. They are half the calories of real eggs. They are made from real eggs, using only the whites and leaving out the yolks. If you make the omelette with veggies only (I use onions, peppers, and mushrooms) you can have a very filling, nutritious omelette for a little over 100 calories. If you make egg salad with lite mayo, you can have a great egg salad sandwich for under 200 calories. If you just have the egg salad on lettuce, it’s even less. When I make scrambled eggs, I use one real egg and two portions of egg substitute (to preserve the texture of real scrambled eggs) and I have a three-egg breakfast at 150 calories. A slice of low-calorie toast adds only 45 calories to it. Eggs are cheap! Usually, when we decide to eat healthy, we have to brace ourselves for the high cost. Not so with eggs. A dozen jumbo eggs go for $0.96, under a buck! A two-egg breakfast only sets you back 16 cents! And eggs have a shelf life in the fridge of 3 to 5 weeks! How can you go wrong? Eggs are healthy! Despite the paranoia about cholesterol in eggs some years ago, it’s been found that they pose no real threat at all. If you are still fearful of the cholesterol, you can just use the whites, or the egg substitutes, which are made from the whites only. All the cholesterol in an egg comes from the yolk. If you don’t eat the yolk, there is no cholesterol to worry about. The nutrient value in eggs is spectacular. If you are looking for good protein sources, eggs are terrific, especially egg whites or egg substitutes. They are pure protein! No fat! No carbs! No cholesterol! Nothing, not even fish, compares. Even real eggs, with the yolk, are a good high-protein food. While fish and skinless chicken have a better protein-to-fat ratio, eggs are comparable to lean beef. The micronutrients in eggs are remarkable. But that makes sense, as eggs contain every single nutrient and micronutrient needed to create a healthy living creature. No wonder they are so nutrient-rich. There is no other food that is so complete. Eggs are one of my best discoveries that made permanent weight loss not only possible, but easy and a pleasure. Check out my other articles at my profile page to learn about some of my other discoveries.
https://williamandersonmalmhc.medium.com/eggs-the-perfect-weight-loss-food-bb6feb110722
['William Anderson']
2020-04-22 16:37:16.594000+00:00
['Health', 'Diet', 'Weight Loss', 'Success', 'Self Improvement']
Tummy Rolls and Shapewear
Tummy Rolls and Shapewear Learning to love my post-baby body Photo by Sean Thomas on Unsplash I have never been comfortable with my body. As a teen, I was teased for being too thin, yet despite this I was still concerned about sitting down and revealing the barely-there tummy rolls which I thought were ugly. I didn’t like the faint blue veins that were evident under my skin and I didn’t like the way I blushed from my head to my chest. I also didn’t like my frizzy hair and giant feet. There were just so many things I didn’t like about myself and my body, and these feelings remained as I grew. As I look back now I can’t believe I was so hard on my poor body during those years of my life. I look at photos of myself as a 19–23 year old and realise that I actually had a fantastic figure. What the hell was my problem? If I looked like that now I would never wear clothes again! Ok, so I am joking, but I really do regret not appreciating the body I had across those years because it certainly looks very different now. When I had my first child, my body changed dramatically overnight. I recall meeting with a physiotherapist the day after giving birth and I asked her if the skin on my tummy would eventually return to normal. Her reply? ‘Honey, your two-piece swimsuit days are certainly over.’ Ouch!! She had no idea how deep those words cut, and as a young mum I became paranoid about keeping my stomach area hidden from the world. This was particularly difficult with a breastfeeding baby, and so nursing singlets became my best friend. God forbid I should lift up my top and someone see my flabby, wrinkly baby belly that I was so ashamed of. As three more babies followed, my tummy certainly didn’t return to its firm pre-baby state. One day I discovered shapewear. I thought this an ingenious creation and stocked my cupboards full of the giant undies that sucked my gut in and smoothed my shape. The problem with this is that I grew even less comfortable with my tummy flab, because with the shapewear keeping it nicely hidden, I was afraid that if I didn’t wear them then people would think I had suddenly gained a ton of weight and be surprised at how big my belly had become. The end result? I wore shapewear every single day (and admittedly still do). The thing that then began troubling me is that I have an absolutely gorgeous 8-year-old daughter who I know looks up to me. She watches me doing my make-up in the morning and sees me putting on my giant undies. ‘Why do you wear two pairs of knickers?’ she has asked, and I dismissed her question, not wanting to explain that the second pair is designed to hide the true shape of my body. I have become acutely aware that I am likely sending negative messages to my beautiful daughter about what a body should look like. I want her to know that her body is beautiful and she shouldn’t be ashamed of it. I have realised that to teach my daughter to love her body, I would need to set an example by loving mine. I completely admit that I am not there yet in my journey of self-acceptance, but I am certainly making small steps in the right direction. I am making an effort to put on a pair of swimmers and enjoy the pool with my kids instead of sitting on the side watching. I am making an effort to talk positively about myself and my body in front of my children. I am trying to stop myself from commenting about diets and calories and instead focus on being healthy and active.
My biggest hope is that my daughter will see my new, kinder attitude towards myself and realise that if her mum can be proud of her body — saggy tummy and all — then she should be proud of her body too. Why not check out my Facebook and Instagram pages, titled The Write Book? https://www.instagram.com/the_write_book/ https://m.facebook.com/livelovereadwrite/?ref=content_filter
https://medium.com/home-sweet-home/tummy-rolls-and-shapewear-89ce2fdd604a
['Kylie Tull']
2020-12-08 21:05:48.561000+00:00
['Health', 'Motherhood', 'Life', 'Life Lessons', 'Parenting']
How do you transform values into EB actions in the Tech world?
Now, there is a big question — how do we turn these beliefs into actual actions? We have to remember that, in the end, EB’s goal is to attract great people to join us. We gathered to think about how we actually understand these quotes and to come up with a way to implement the ideas they convey. We thought that the best way to do that was by creating #PeopleofDocplanner, which encompasses our passions, everyday lives, and challenges. Second station — For us, mission is possible. At the beginning of our workshops, we wanted to figure out a short sentence that would be the opening line setting the tone of our whole statement. Last time around, we focused on “Do what you love”, because passion is very important at Docplanner. But this time we went a different way. Thinking of our “mission”, one of the first things that came to my mind was the Mission Impossible franchise. In my mind, what happens in these movies closely aligns with our values: our mission is very important and nothing can stop us. That’s how the tagline “mission is possible” came to be. We make the healthcare experience more human. We follow this mission and prove every day that achieving it is indeed possible. It’s real and we are really committed to it, because it’s unique. We have, and need, more fighters who feel the same way and will do anything to make it happen. Healthcare is a part of our lives, and making it better, in the long run, is a huge undertaking. We try to make our product better every day by creating new features and solutions. Recently, we delivered video consultations, which are available to every one of our doctors and to every one of their patients. We simply had to show it on our social media, because we are extremely proud of what we are creating. First, we announced what happened, and then we summarized it to show it to everyone — both as recognition of our teammates and to show our accomplishment to the world! On our fan pages, you can also find many success stories of our people — some of them have worked with us for a few years now. Why? Adam can answer that question.
https://medium.com/docplanner-tech/how-do-you-transform-values-into-eb-actions-in-the-tech-world-9831a26fdc64
['Ania Kalisiak']
2020-06-04 08:51:00.712000+00:00
['People', 'Values', 'Recruiting', 'HR', 'Employer Branding']
The Uncanny Valley Is Our Best Defense
The Uncanny Valley Is Our Best Defense Our bodies recognize the dangers of simulation, and we should too Photo: Coneyl Jay/Getty Images While humans are drawn to and empowered by paradox, our market-driven technologies and entertainment appear to be fixed on creating perfectly seamless simulations. We can pinpoint the year movies or video games were released based on the quality of their graphics: the year they figured out steam, the year they learned to reflect light, or the year they made fur ripple in the wind. Robot progress is similarly measured by the milestones of speech, grasping objects, gazing into our eyes, or wearing artificial flesh. Each improvement reaches toward the ultimate simulation: a movie, virtual reality experience, or robot with such high fidelity that it will be indistinguishable from real life. It’s a quest that will, thankfully, never be achieved. The better digital simulations get, the better we humans get at distinguishing between them and the real world. We are in a race against the tech companies to develop our perceptual apparatus faster than they can develop their simulations. The hardest thing for animators and roboticists to simulate is a living human being. When an artificial figure gets too close to reality — not so close as to fool us completely, yet close enough that we can’t tell quite what’s wrong — that’s when we fall into a state of unease known as the “uncanny valley.” Roboticists noticed the effect in the early 1970s, but moviemakers didn’t encounter the issue until the late 1980s, when a short film of a computer-animated human baby induced discomfort and rage in test audiences. That’s why filmmakers choose to make so many digitally animated movies about toys, robots, and cars. These objects are easier to render convincingly because they don’t trigger the same mental qualms. We experience vertigo in the uncanny valley because we’ve spent hundreds of thousands of years fine-tuning our nervous systems to read and respond to the subtlest cues in real faces. We perceive when someone’s eyes squint into a smile, or how their face flushes from the cheeks to the forehead, and we also — at least subconsciously — perceive the absence of these organic barometers. Simulations make us feel like we’re engaged with the nonliving, and that’s creepy. We confront this same sense of inauthenticity out in the real world, too. It’s the feeling we get when driving past fake pastoral estates in the suburbs, complete with colonial pillars and horse tie rings on the gates. Or the strange verisimilitude of Las Vegas’ skylines and Disney World’s Main Street. It’s also the feeling of trying to connect with a salesperson who sticks too close to their script. In our consumer culture, we are encouraged to assume roles that aren’t truly authentic to who we are. In a way, this culture is its own kind of simulation, one that requires us to make more and more purchases to maintain the integrity of the illusion. We’re not doing this for fun, like trying on a costume, but for keeps, as supposedly self-realized lifestyle choices. Instead of communicating to one another through our bodies, expressions, or words, we do it through our purchases, the facades on our homes, or the numbers in our bank accounts. These products and social markers amount to pre-virtual avatars, better suited to game worlds than real life. Most of all, the uncanny valley is the sense of alienation we can get from ourselves. What character have we decided to play in our lives? 
That experience of having been cast in the wrong role, or in the wrong play entirely, is our highly evolved BS detector trying to warn us that something isn’t right — that there’s a gap between reality and the illusion we are supporting. This is a setup, our deeper sensibilities are telling us. Don’t believe. It may be a trap. And although we’re not Neanderthals being falsely welcomed into the enemy camp before getting clobbered, we are nonetheless the objects of an elaborate ruse — one that evolution couldn’t anticipate. Our uneasiness with simulations — whether they’re virtual reality, shopping malls, or social roles — is not something to be ignored, repressed, or medicated, but rather felt and expressed. These situations feel unreal and uncomfortable for good reasons. The importance of distinguishing between human values and false idols is at the heart of most religions, and is the starting place for social justice. The uncanny valley is our friend.
https://medium.com/team-human/the-uncanny-valley-is-our-best-defense-9006f87d3647
['Douglas Rushkoff']
2020-12-17 16:09:14.194000+00:00
['Uncanny Valley', 'Society', 'Book Excerpt', 'Culture', 'Technology']
Drop Your Identity Markers To Become Real
Labels help us choose the right product off the supermarket shelf. Human beings also apply labels. This is fine as a way to communicate quickly and easily, using socially and personally agreed-upon categories. We label ourselves, we label others, and others label us. We can place identity markers on ourselves because of our perceptions of what others want or expect. Problems with creativity and growth arise when we identify too much with these labels or when we constrain ourselves to our comfortable identity markers. Drop your identity markers for a while and see what happens. Identity markers are different expressions of who we are. These labels embody characteristics that have meaning to us and the society in which we exist. You may describe or identify yourself by your age, religion, nationality or citizenship, political persuasion, or a mix of these. Why and when do you use identity markers? Your biography may use identity markers such as your vocation, hobbies, interests, character traits, or achievements. We use identity markers as parts of structured groups of people, to allow us to zoom in on significant or useful elements that we use to interact with each other. When you introduce someone, you may use identity markers based upon the person’s role and the restricted number of things you know about them. When you introduce yourself, you will probably feel obliged to say more than just your name, so you will add a few “identification labels”. When we place an emotional value on an identity marker, we may paint a labelled or identified thing as either good or bad. We are emotional beings, but we need to be careful not to tar things with one sweeping brush, either all good or all bad. For example, you may not like a particular media outlet, but perhaps it reports something true that no other outlet does, so it’s not all “bad.” We may take on identity markers when someone says something to or about us, or treats us in a certain way. But we are more than our labels. Don’t let others impose an identity upon you When I was twenty-three years old, a group of work colleagues planned to go to a Midnight Oil concert. I eagerly approached Guy (yes, his name was Guy) and told him I wanted to join the group. I won’t forget the look of dismay on his face. He cast his eyes down to his shoes, screwed up his face, then with a grimace and even a hint of derision in his voice, he said two words. “What. YOU?” I felt demoralised and even threatened. The voices in my head immediately popped up loud and clear. “You are stupid to think they would want you to go. Obviously you’re not the type to go to a Midnight Oil concert, and there’s no way that Guy and his friends would be seen with a dork like you.” Identity Marker number 201, or it may as well have been. When you are a sensitive person or have not had a lot of guidance in your Life, you are quick to take on the labels that others impose upon you. If someone derides you for what they see or don’t see in you, drop the identity that you mark yourself with. Another example: if someone mimics you, making fun of your accent or your tone of voice, don’t immediately identify yourself as being a poor speaker. Your self-worth is not dependent upon having the approval of everyone. Don’t take on someone else’s identification of you when it’s not true. But treat others as you would have them treat you Often when people are discourteous or disagreeable with you, their action is not even “personal”.
If you look at a person’s attitude or approach over time, you will carve out an identity for them in your eyes. You may brand them or identify them as being something you don’t like, for example, pushy or bossy or inconsiderate, but you need to understand that these are identity markers. Just like you, the person may sometimes be bossy or inconsiderate or something else; but at other times they won’t be. “I am a person before anything else. I never say I am a writer. I never say I am an artist…I am a person who does those things.” — Edward Gorey If someone treats you with disdain and you return this with the same, it will only make things worse. We are all continually changing or going through a personal development process. You don’t have to like a person, but you need to get on with them the best you can. Someone may act in what you identify as an arrogant or an uncaring manner because of their insecurities. They may not be exclusively focused on you; they may act that way toward everyone. Try to make them feel more secure in the long term by questioning them, then showing that you support them. Your support should be based upon both of you understanding what the other wants. Know your boundaries and stand your ground. For example, I could have squared up to Guy. “What makes you think I wouldn’t go to a Midnight Oil concert?” When he had no argument to this, I could have said “I have been to rock concerts, but this time around I’ll find someone who wants to go with me, thank you.” This would demonstrate that I understood he didn’t want me to go with him. Maybe it would even make him change his mind for next time, by challenging his reasoning. Drop your identity markers Identity markers can be helpful. They give us a sense of who we are, for better or worse, in our own eyes. We can use our sense of self to review, reclaim, and improve ourselves. Many people don’t like to be “boxed in” or labelled as only a few things. Others can misunderstand your role or your circumstances and misidentify you. It’s up to you to adopt a wholesome identity. Use your identity markers only if you have to. Work toward looking at yourself as a person with interests and skills, and with experiences, who takes actions. You are not just a teacher or a writer or a fisher-woman or a concert-goer or a mother or a husband or a singer; you are infinite. Identity is a socially and historically constructed concept. Social and cultural identity is inextricably linked to issues of power, value systems, and ideology. Personal identity is linked to your own personal circumstances and is impacted by how you identify yourself within a society. Photo by Mel Poole on Unsplash Our identity is useful when working out how we fit into things. It is useful for finding people with similar interests and values to us. We use it to let others know what we do and what we are about. But when we cling to an aspect of our identity and believe that it is a permanent fixture, we lose sight of what is possible for us. We may have an inflated sense of Self and not grow. We may feel obliged to do this or that because of labels we impose upon ourselves. If you have to work with identity markers, it is best to let your identity markers be firm positive ideas held loosely. Create your identity organically by what you do, rather than create it by affixing sticky labels. Your identity is not static, but is fluid or changeable. Your true Self is your being.
That being is free to flow through the containers that hold it in order to support and honour you. This is the real you, a person who is multi-dimensional and who does things. Drop your identity markers and your real Self will flourish.
https://medium.com/the-ascent/drop-your-identity-markers-to-become-real-d09f71c87cfd
['Celine Lai']
2020-09-01 04:15:39.437000+00:00
['Relationships', 'Society', 'Identity', 'Mindfulness', 'Self']
From Teaching Middle School Math to Being a Full Stack Software Engineer
I’ve recently received my first offer as a Software Engineer for a great company after 3 months of job searching. As I’ve finally reached my goal, I wanted to take some time to reflect on my journey leading up to this moment. Teaching I was a middle school math teacher for about 4 years. That period of my life was certainly a challenging time. Being a teacher in a public school was no joke, especially in a middle school setting. I personally enjoyed facing challenges and finding solutions to overcome them. However, at some point, I felt that the challenges I faced were no longer providing me personal growth. I felt that I needed a change and that being a middle school math teacher was probably not something I wanted to do for the rest of my life. It was around that time that a friend of mine landed his first Software Engineering position at Forbes. He was really happy with what he was doing, and that had an impact on me. I felt that I wanted that for myself, but I did not want to go back to college and spend more money to learn how to code. My friend suggested a Udemy course on Web Development by Colt Steele to get started. It was a great course, but during that time I wasn’t able to fully combine the concepts together. I needed more structure. Fortunately, my friend offered me an alternative solution for learning: a coding boot camp, Flatiron School. He did not attend the program itself, but some of his colleagues at Forbes did. Furthermore, he mentioned that his colleagues also did not have coding backgrounds and learned the core concepts from Flatiron School that let them succeed in an environment such as Forbes. That reason was enough for me to select Flatiron School and take the next step. From there, I applied for the program, passed the interviews, and got selected for their April 1, 2019 cohort. Afterwards, I made the decision to leave my role as a public school teacher and dedicate myself entirely to this program. Flatiron School My time at Flatiron School included stints as both a student and a teacher. Student Mode: Beginning When I started the program, I was both extremely excited and nervous. After introductions, we were immediately paired up and given a project to work on. The project itself wasn’t difficult, but it made me realize just how fast the pacing could be. The first couple of weeks were what’s known as Module 1, which covered concepts in Ruby. It was a challenge for me, but a good one, since I was learning and growing. I had to adjust to the pacing of the program. New concepts were constantly being introduced to us and many labs were released for us to work on. Initially, I thought that I had to complete all of the labs in order to keep up; I did not realize until way later in the program that I didn’t. Student Mode: Struggles Naturally, I felt I was falling behind early on. My imposter syndrome was at an all-time high. I felt everyone else in my cohort was keeping up with the pacing and completing all of the assignments. However, that was not the case, as there were others in the same situation as me; I just was not aware of it at the time. It was a matter of controlling my mentality and learning to adjust. It didn’t help that I was not able to pass the code challenge associated with that first module. A lot of negative thoughts came with that and made me question if programming was for me, especially since I couldn’t pass the first of many code challenges and it would only get harder from there. But it was not the end for me.
I had another opportunity to showcase my understanding of Ruby. I decided to push forward and focus on the project. Working on the project gave me the opportunity to apply the concepts that were taught to us, and it helped me make the connections. With that, I showed just enough understanding to move on to the next module. Student Mode: Project Partner When the next module began, I had a much better grasp of how to prioritize my learning. I also had a much better feel for the pacing of the program. In that second module, I passed the code challenge and had the opportunity to work with a partner on a full project. Before, I had only been paired with a partner to work on minor labs/assignments. It was my first time working collaboratively with someone on a web application. It presented me with another learning experience. We brainstormed a few project ideas and discussed different ways to implement them. Working on the project made me realize that I still did not fully grasp all of the concepts taught in that second module, but I was able to learn from my partner. Student Mode: Finish Line By the third module, I had fully adjusted to the pacing and structure of the program. More importantly, the concepts covered from then on involved the programming language I had some experience with, Javascript. I’m not saying that it was smooth sailing, especially with some of the hidden challenges Javascript provided, but I knew what to expect. It was essentially a similar experience for the fourth module, which covered the React library. For the final module, I had the opportunity to work on a full-stack application for three weeks. I went through the entire process of designing and coding the entire application, from the server to the client and everything in between. That’s where everything came together for me. I was starting to see how the pieces of code communicated with each other, and connections were being made. Creating the full-stack application, along with successfully completing the Flatiron School coursework, were two major accomplishments for me at the time. It provided me with the basic structure for learning other programming languages. Teacher Mode However, my time at Flatiron School did not end there. Shortly after finishing the coursework, I was hired as a Software Engineering Coach. I returned to teaching, but I already knew that it would be different and better. I attended lectures and re-learned everything I was taught as a student. It made me realize that there were concepts that I hadn’t fully grasped. This time around, I was continuously growing through teaching. Explaining the concepts to my students was the best way for me to test and build my own understanding of programming concepts. On top of that, I wrote blogs on Javascript, Ruby, and how to create applications from the ground up to provide an alternate resource for anyone who wanted to learn web programming. Learning didn’t just occur during my coaching hours. On the side, I continued to watch programming tutorial videos on YouTube and Udemy. Notable instructors I watched were TraversyMedia, Andrew Mead, Dev Ed, and of course, Colt Steele. I focused on Javascript concepts including NodeJS and React. I wanted to focus on mastering one programming language and its key concepts instead of learning multiple languages. I developed multiple full-stack applications to apply the concepts I learned from those resources. Looking ahead, these projects would be essential in the job searching process.
Not only would they provide samples for recruiters, but the experience itself provides great talking points during interviews. My time at Flatiron School was certainly a great experience as both a student and a coach. The learning and growth never stopped for me. Job Searching It was time to pursue my original goal and the reason I enrolled at Flatiron School: a career as a software engineer. Resume/Portfolio Before my time ended at Flatiron, I created my resume and website portfolio. I dedicated a good amount of time to ensuring that both stood out while showcasing my ability and skills. With my resume, I used a template from canva.com. They have simple yet great resume designs that appeal to the reader. The only downside is that resume parsers have trouble reading the data accurately. My website portfolio was built with React and deployed on GitHub. The focus of the portfolio was to show more of my work that could not fit on the resume. I felt that both the resume and the website portfolio were important in grabbing a recruiter’s attention and helping me land an interview with a company. Data Structures & Algorithms Once an interview was scheduled with a company, it was essential to have some understanding of Data Structures and Algorithms, which I did not fully focus on during my time at Flatiron School. It was something that I immediately shifted my attention to. I had a few resources available: a Udemy course on data structures and algorithms taught by Colt Steele (yup, this person again), Interview Cake, and of course LeetCode. My structure for learning DS&A revolved around these three primary resources. Colt designed the course so that a student can start from any topic, but he does mention the prerequisite knowledge required. I watched Colt’s course in order, even though I already had some experience with some of the concepts he taught. But it was a good review. After watching, I’d work on some problems from Interview Cake and LeetCode to practice the concepts he went over for a specific section. This was something I worked on over the course of two months. I did not attempt to rush absorbing the material. With LeetCode, you can search for interview questions specific to a company. However, you would have to purchase a LeetCode subscription. I am in no way saying that it’s a bad decision to do so. In fact, you’ll have access to more content and it’ll focus your attention on the specific company you’re interviewing for. To be transparent, I didn’t want to spend the money. Also, I wanted to first comfortably solve the questions that were already available. Working on these technical questions was certainly challenging, especially the LeetCode questions. Even the “easy” level questions gave me some trouble. It’s certainly still a challenge for me to this day. But the biggest issue for me was that I was not accustomed to the process. To get over this obstacle, consistent exposure and practice were necessary. Early on, there were moments where I spent a good amount of time attempting to solve problems without getting anywhere near a solution. Those types of problems would certainly take my confidence away. At some point, I realized that spending too much time on a problem and getting nowhere was not efficient. I learned that if I was stuck on a specific problem for too long, I would read the discussion board for how others solved the problem. The intention was not to copy the solution but to gain insight into how they came to that solution.
Interview Process As expected of the job searching process, it took a while. The responses from recruiters for an interview were few. When interviews were scheduled, it was another process to get accustomed to. The only real practice I had was during the interviews themselves. There were a couple where I made the final rounds but underperformed. Most of the time, it was automated responses stating that I was not selected to move on to the next round. It could be frustrating to not even receive an initial interview, especially in cases where I had passed the code challenge assignments. Attempting to understand how some companies select who gets to move to the next round was difficult, if not impossible. But it was all a matter of moving forward and learning from the failures. I continued to practice data structures and algorithms. Interview Preparation One day, I received an email for a phone screen technical interview. To me, it came as a shock, since I had applied not expecting a response. For this interview process, I decided to try something different on top of practicing DS&A. I did some research on the company’s interview process on Glassdoor and reviewed other candidates’ experiences to gauge what the process would be like. On top of that, I contacted the company recruiter as well to ask for advice. Equally as important, I searched LeetCode for discussions involving the company’s interview questions. There was a good selection of questions mentioned, though it probably didn’t include all of them. But my intention was not to ask for, find, or memorize all of the questions the company possibly asked all their interviewees. There was no way I could remember them all for the actual interview. There was also no guarantee that those questions were going to be the ones given to me. Instead, I used them as references for what questions I should work on to get accustomed to the type of questions I would face. For the most part, they were higher-level questions than I was used to. However, I was able to apply the strategies and information Colt taught to most of them. When I had my phone screen, I was a bit surprised at the level of the problem. It was relatively simple compared to what I had been practicing. The practice certainly paid off during that round. A week later, I was contacted again by the recruiter for the next rounds of interviews. It was three rounds, an hour each: two technical and one behavioral. In preparation for the next rounds, the recruiter gave me a heads-up mentioning a list of the data structures that might be asked about (which was most of them, including some I hadn’t reviewed yet). I continued my routine of watching Colt’s lectures and practicing until the day of the interview. The days leading up to the interviews were completely nerve-wracking. No matter how much preparation I did, I felt that it was not enough and that I needed more practice. When the day came, I spent that morning relaxing and saving my mental energy for the interviews. I felt that helped for the marathon of interviews. Interview Results When I finished my interviews, I wouldn’t say that I aced them. As expected, the questions presented to me were not on the LeetCode discussion board, but they were variations of some of the problems I had practiced, which helped tremendously in approaching them. I did enough to demonstrate my core understanding of DS&A by verbally communicating with my interviewers.
I clarified what problem(s) we were trying to solve, cleared up assumptions as to what data we were receiving/returning, and discussed how to solve the problem, mentioning tradeoffs. What helped was understanding the time and space complexities of each data structure and the purpose for which each data structure was implemented, which Colt Steele covers in detail. I passed the three interviews that day, but I had one more final round. This gave me anxiety, as this was not my first “final” round and I had underperformed in previous ones. My recruiter did mention that this one was not going to be too technical compared to the ones I had already had. It was more of a discussion of one of my projects. In preparation, I reviewed my projects and focused on their structure so that I could readily explain why I designed them the way I did. When the day arrived, I had my points prepared and ready to discuss. Instead of talking about all my projects, I was given the option to choose one. If there was a downside, it was that I had too many points prepared and I was scrambling a bit to find which points to zero in on during the discussion. It was only a 45-minute interview, so I couldn’t talk about everything. At the end of it, I felt that I was able to talk about the key points of the project’s structure and the tradeoffs I made through its design. The next day, I received a call from another recruiter stating that the company would like to extend an offer. When I received it, I was speechless. I had encountered so many rejections that I felt it was going to be another one. But thankfully, the company saw my potential and made me an offer. Job Searching Reflection The search for a software engineering position took approximately three months, and I found the period absolutely brutal. Some people in my position are still searching, or searched even longer than I did before eventually landing a position. Regardless, I realized that the job searching period can be stressful and difficult, especially in this field. The rejections that came with it were always difficult to accept. A lot of negative thoughts resulted from them, such as doubting my abilities, believing that I was not good enough to be considered, and occasionally comparing myself to others. Those types of feelings should not get the better of anyone. However, that can be easier said than done. In my case, I acknowledged those frustrations and let them out instead of bottling them up. It made it easier to move forward. Whatever the case may be for you, always move forward and continue to improve when faced with job rejection. It’s not the end, and there are many more opportunities out there. Closing Remarks Looking back on my two-year journey since I left teaching in a public school, I certainly have no regrets. I was not happy with my situation and I could not picture myself doing it for the rest of my life (at least not happily). I felt that I needed a change during that time, and I was fortunate that a specific path was shown to me. But of course, the alternative path I took to programming was not easy. I had to work even harder just to compete with other candidates in the field. But as long as I enjoyed what I was doing, I was able to give it my all and pursue my goal. So if you’re considering a career switch to software engineering and wondering if it’s possible, it is! Thank you for reading!
https://medium.com/the-ascent/from-teaching-middle-school-math-to-being-a-full-stack-software-engineer-cffc831986b0
['Reinald Reynoso']
2020-12-27 22:01:12.257000+00:00
['Algorithms', 'Career Change', 'Software Engineering', 'Career Advice', 'Programming']
A Better Authentication API
A Better Authentication API Using FastAPI to generate JSON Web Tokens Photo by Philipp Katzenberger on Unsplash Recently, I wrote a series of posts explaining how JSON Web Tokens could be utilized in an API that was written using Flask. However, a few weeks ago, I discovered how awesome FastAPI is and have been wondering if its JWT validation techniques would be a better fit for what I need.

Planning Before we get started writing any code, it’s always a good idea to do a little planning first. Thinking about the design of the API, we are going to need at least two endpoints:

POST — /authenticationapi/v2/create
GET — /authenticationapi/v2/login

The “create” endpoint adds a new user to the system, while the “login” endpoint generates a token for the user. The really cool part about FastAPI is that it has support for pydantic models. Essentially, these models define what the data being passed to the endpoint should look like. They also validate the data and return an appropriate error for invalid data. The model for the “create” endpoint will look something like this:

class NewUser(BaseModel):
    username: str
    email: str
    password: str

The model for the “login” endpoint will look like this:

class User(BaseModel):
    username: str
    password: str

Coding Time! With a plan in place, it’s time to start a little coding. I went ahead and initialized a new instance of FastAPI and added some metadata to it so that the Swagger documentation will look a little nicer. After that, I defined the API endpoints and their functions. With our template set up, we can now dive into finishing the “add” endpoint. As I mentioned earlier, this endpoint adds a new user to the system, but before we can do that, we need to hash the password. Lucky for us, there is a really awesome package called “passlib” that makes hashing very easy. To install it, use the following command:

pip3 install passlib[bcrypt]

Once installed, it can be imported using:

from passlib.context import CryptContext

To hash the password, we first need to create a context. Essentially, this context is a helper that defines a hashing algorithm and hashes passwords. In our case, the context looks like this:

passwordContext = CryptContext(schemes = ["bcrypt"], deprecated = "auto")

Using the “hash” function, our password will be hashed using the BCrypt scheme. A salt value is built into the function, so if we tried to hash the same value multiple times, we would get a different value each time. The final step to completing this endpoint is using an asynchronous function to insert the new user into the database. I highly recommend using SQLAlchemy to handle insertion into your database. The “login” endpoint is a little more complex. When first called, we will use an asynchronous function to get the user’s hashed password from the database. After that, using the “verify” function from the passlib context, we can compare the stored hashed password to the password the user provided in the API call. If the user provided the correct password, then a JWT will be created (the token expiration time is one hour). The final result should look a little something like this:

Testing With our endpoints written, we can finally do some testing. After starting up Postman, I entered the URL for the “add” endpoint:

http://<HOSTNAME>/authenticationapi/v2/add

After that, I added the JSON object that will get sent with our call:

{
    "username": "mike",
    "email": "mike@testmail.com",
    "password": "TESTPASSWORD!!"
}

Below are the results of the POST request. Successfully added user.
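Before moving on to the login test, here is a minimal sketch of how the pieces described above might fit together. It assumes FastAPI, passlib, and PyJWT; SECRET_KEY, fake_db, and the helpers insert_user and get_hashed_password are hypothetical stand-ins for a real database layer, and login is written as POST (rather than the GET from the plan) since it receives a JSON body:

from datetime import datetime, timedelta

import jwt  # PyJWT
from fastapi import FastAPI, HTTPException
from passlib.context import CryptContext
from pydantic import BaseModel

app = FastAPI(title="Authentication API", version="2.0")
passwordContext = CryptContext(schemes=["bcrypt"], deprecated="auto")
SECRET_KEY = "change-me"  # hypothetical; load from configuration in practice

# In-memory stand-ins for the real database helpers (hypothetical).
fake_db = {}

async def insert_user(username, email, hashed_password):
    fake_db[username] = {"email": email, "password": hashed_password}

async def get_hashed_password(username):
    user = fake_db.get(username)
    if user is None:
        raise HTTPException(status_code=404, detail="user not found")
    return user["password"]

class NewUser(BaseModel):
    username: str
    email: str
    password: str

class User(BaseModel):
    username: str
    password: str

@app.post("/authenticationapi/v2/create")
async def create(newUser: NewUser):
    # Hash the password before it ever touches storage.
    hashed = passwordContext.hash(newUser.password)
    await insert_user(newUser.username, newUser.email, hashed)
    return {"detail": "user created"}

@app.post("/authenticationapi/v2/login")
async def login(user: User):
    stored_hash = await get_hashed_password(user.username)
    if not passwordContext.verify(user.password, stored_hash):
        raise HTTPException(status_code=401, detail="incorrect username or password")
    # The token expires one hour after it is issued, as described above.
    token = jwt.encode(
        {"sub": user.username, "exp": datetime.utcnow() + timedelta(hours=1)},
        SECRET_KEY,
        algorithm="HS256",
    )
    return {"token": token}

Running this with uvicorn and visiting /docs should surface both endpoints and their models in the generated Swagger documentation.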
Now that our new user was successfully added, let's try logging them into the system with our "login" endpoint.

http://<HOSTNAME>/authenticationapi/v2/login

The JSON object being sent with the request is:

```json
{
    "username": "mike",
    "password": "TESTPASSWORD!!"
}
```

Sending the request gets us the following result:

Obtained token successfully.

Fortunately, we were able to successfully obtain a token. However, in order to truly test the endpoint, we need to see what happens when we use an incorrect password. The following shows us the result of that attempt.

Incorrect password test.

The last test that needs to be made is to see if the built-in Swagger documentation works. By this, I mean seeing our two endpoints as well as the objects that get passed to them. Navigating to this URL will verify its functionality.

http://<HOSTNAME>/docs

Verifying working Swagger documentation.

Final Thoughts

So far, we have a pretty good start on an authentication API. We have two simple endpoints that add a new user to the database and generate a JSON Web Token for an authenticated user. Unfortunately, this API is far from complete, because there is still a lot of work to be done. Specifically, there needs to be more error handling, logging, and a better secret when encoding the JWT, just to name a few. Feel free to leave a comment about your thoughts on this API (or any authentication API for that matter). Until next time, cheers!
https://medium.com/python-in-plain-english/a-better-authentication-api-35933c6c1058
['Mike Wolfe']
2020-12-28 08:05:25.562000+00:00
['Python', 'Fastapi', 'Json', 'Json Web Token', 'Software Development']
How to Set Up Redux-Thunk in a React Project
Today I will be discussing how to set up Redux in a React project. Yes, you have probably heard a lot of fuss about how hard it is to set up in a React project, all the boilerplate that comes with it, etc. Yet Redux is so popular today. Why? Because it allows us to manage the global state within one application, where several components can access the same piece of data within the global state.

Let's talk about the installation steps.

Step 1: If you haven't installed Node already, run the following command in your terminal:

```
curl "https://nodejs.org/dist/latest/node-${VERSION:-$(wget -qO- https://nodejs.org/dist/latest/ | sed -nE 's|.*>node-(.*)\.pkg</a>.*|\1|p')}.pkg" > "$HOME/Downloads/node-latest.pkg" && sudo installer -store -pkg "$HOME/Downloads/node-latest.pkg" -target "/"
```

Step 2: Create a new React project with the following command, with redux-project as your project name:

```
npx create-react-app redux-project
```

Step 3: Modify your index.js and replace it with the following content:

Step 4: To install Redux, you need to install several Redux packages by running the following command:

```
npm install react-redux redux redux-thunk --save
```

Step 5: Modify your app.js and replace it with the following content:

Over here, I imported the Provider and wrapped it around our application. In this case, that would be ActionComponent, even though normally it would be a bunch of routes that route to different pages. What this means is that any component wrapped by the Provider HOC (higher-order component) will have access to the global Redux store. configureStore is a function that will hook the application to the root reducer (which combines all the reducers).

Step 6: Create store.js with the following content:

Over here, we call createStore from Redux, which gives us a default store to work with for the entire application. It takes in three arguments, one being the rootReducer, which contains all the reducers combined. This reducer is where you hold your global store data, as well as a function that parses incoming commands that modify the existing state. There is another interesting piece to this, which is composeEnhancers. Even if you did not know what it is, by looking at it you can at least guess that it is related to the Redux dev tools. That's right! Redux debugging tools become available in the browser once you include composeEnhancers in the createStore call. Here is how the Redux DevTools look in the actual browser:

Step 7: Create a folder called reducers under the src directory and create a rootReducer.js with the following content:

In a regular application, you can expect to have different reducers here, each with their own local state and data. Over here, we export one root reducer with local state from simpleReducer.

Step 8: In the reducers folder, create a file called simpleReducer.js and include the following content:

Over here, the simpleReducer contains result, fruitOne, and fruitTwo: global state data that will be modified and utilized depending on the changes to the state of the application. It takes different commands, such as "SET_FRUIT_ONE", and modifies the state depending on what action.payload holds as a value.

Where are all these commands coming from? They are all coming from an actions JS file, which contains a list of functions that send commands to the reducer to modify the state.

Step 9: Under the src folder, create a new directory called actions.
In the actions folder, create a file called simpleActions.js with the following content:

Over here, we call the dispatch function with a type and a payload. The "type" is the command that the reducer picks up and uses to decide how to modify the global state. The payload is the data that will be put into the global state, depending on how the reducer handles this piece of data.

Now our Redux is set up! The question is: how do we modify the Redux store and get access to the store's data in our application? Remember that in app.js we have an action component that is wrapped by the Provider?

Step 10: Create a components folder under the src directory. Inside the components folder, create ActionComponent.js and include the following content:

Over here, we get the connect function from react-redux, which allows us to wrap ActionComponent and connect it to the Redux store. connect, in general, takes two arguments:

mapStateToProps: this allows us to access global state variables
mapDispatchToProps: this gives us access to functions that can modify the global state

In this component, we connect the functions to update the global state from mapDispatchToProps. Afterward, we access them as attributes of props and call them. What happens is it calls the function that dispatches a specific action (for example, set fruit one) with its own type (command) and payload. It then goes to the reducer and modifies the state. If you look into the Redux dev tools in Chrome, you can track all the commands and the status of the global state, as seen in one of the screenshots above.

How about getting access to variables from the state?

Step 11: Inside the components folder, create a component called FruitComponent.js with the following content:

Similar to mapDispatchToProps, you expose the field that you want to see from a particular reducer, and that variable becomes accessible as a prop. Do note that any change to the global state will cause mapStateToProps to get fruitOne and fruitTwo again. Well, that's not good, because we only want to get fruitOne/fruitTwo when the page loads and when there are changes to these two variables in the global state.

For now, this is enough to guide you through setting up your Redux. In future articles, I will talk about tools to memoize the data we are getting from the Redux store.
https://medium.com/javascript-in-plain-english/how-to-set-up-redux-thunk-on-a-react-project-79b0c29c96db
['Michael Tong']
2020-12-26 09:19:15.230000+00:00
['Redux', 'Redux Thunk', 'Front End Development', 'React', 'JavaScript']
Why Serverless Architecture Will Be The Future Of Business Computing?
Serverless is currently a trending topic and will definitely be a major hit within the next few years. In the future, you won't have to worry about the infrastructure, as your complete software life cycle will depend on cloud service providers.

WHAT IS SERVERLESS ARCHITECTURE?

Initially, the definition of serverless architecture was limited to applications that depend on third-party services in the cloud. These third-party apps or services would manage server-side logic and state. Alongside it, a related term, Mobile Backend as a Service (MBaaS), also became popular. MBaaS is a form of cloud computing that makes it easier for developers to use an ecosystem of cloud-accessible databases and platforms such as Heroku and Firebase, and authentication services like Auth0 and AWS Cognito. But now serverless architecture is defined by stateless compute containers and modeled for event-driven solutions. AWS Lambda is the perfect example of serverless architecture and employs the Functions as a Service (FaaS) model of cloud computing.

Platform as a Service (PaaS) architectures, popularized by Salesforce Heroku, AWS Elastic Beanstalk, and Microsoft Azure, simplified application deployment for developers. Serverless architecture, or FaaS, is the next step in that direction. FaaS provides a platform that allows developers to execute code in response to events without the complexity of building and maintaining the infrastructure. Thus, despite the name "serverless", it does require servers to run code. The term serverless signifies that the organization or person doesn't need to purchase, rent, or provision servers or virtual machines to develop the application. Servers still run your application, but a third-party company takes care of the grunt work of provisioning, managing, and scaling them. In serverless architecture, you manage and provision nothing.

Serverless architecture often incorporates two components: Function as a Service (FaaS) and Backend as a Service (BaaS).

FaaS is a computing service that allows you to run self-contained code snippets called functions in the cloud. Your functions remain dormant until events trigger them. Functions are self-contained, small, short-lived, and single-purpose. They die after execution.

BaaS is a cloud computing service that completely abstracts backend logic, which takes place on faraway servers. It allows developers to focus on front-end code and integrate with back-end logic that someone else has implemented. BaaS could be authentication, storage services, geolocation services, user management, and so on.

In serverless architecture, you focus on writing code only. You deploy when you're ready, without caring about what runs it or how it runs.

MICROSERVICES TO FAAS

Serverless code written using FaaS can be used in conjunction with code written in a traditional server style, such as microservices. In a microservice architecture, monolithic applications are broken down into smaller services so you can develop, manage, and scale them independently. FaaS takes that a step further by breaking applications down to the level of functions and events. There will always be a place for both microservices and FaaS. For example, the code for a web application can be written partly as microservices and partly as serverless code.
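To make the FaaS idea concrete, a function in this model is typically just a small, stateless handler that wakes up for one event and dies after execution. A minimal sketch, assuming AWS Lambda's Python runtime and a simplified event shape:

```python
import json

def handler(event, context):
    # Single-purpose and short-lived: the platform invokes this for one event,
    # we compute a response, and the function dies after execution.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```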
Also, there are some things you can't do with functions, like keeping an open WebSocket connection for a bot, for instance. Here an API/microservice will almost always be able to respond faster, since it can keep connections to databases and other things open and ready. Another interesting point: you can have a microservice by grouping a set of functions together using an API gateway. So microservices and FaaS can coexist in a nice way. The end user doesn't care whether your API is implemented as a single app or as a bunch of functions; it acts the same either way.

Why Use Serverless Architecture?

As you ponder this question, consider these three key attributes of serverless architecture.

It's scalable and highly available. Scaling traditional applications requires you to understand your traffic pattern. You estimate how much of each resource you'd need, and then you provision accordingly. Users troop in from all geographical regions to use modern applications, and a traditional application could be overwhelmed by a spike — probably on a Black Friday. In serverless, your application is highly available, and it scales automatically as your users grow and usage increases.

It costs less. One of the reasons serverless architecture is gaining popularity among startups is its pricing model. The cost of running servers 24/7 and paying for idle time is no longer an issue in serverless. You pay for usage only. Functions have an allocated time in which they run, and they die afterward. The provider charges based on the number of executions and the size of memory your workload uses. This helps you optimize costs.

The time to market is faster. Operational tasks such as server provisioning, maintenance, and monitoring infrastructure are off your shoulders. You can focus solely on your business logic (code), experiment with ideas, and hit production on time.

Examples of serverless architecture use cases are:

1. High-Traffic Websites

If you're still serving your static websites from an EC2 instance, you may be missing out on a lot. With serverless, you can host your static website on an S3 bucket and serve your assets with a global, fast cloud delivery network. Not only is it cheaper and faster, it is also highly available and scalable.

2. Multimedia Processing Applications

If your business deals with images and videos, then serverless architecture might work well for you. You can use a scalable storage service such as AWS S3 to store your data. An upload event can trigger a Lambda function after each successful upload that processes your file asynchronously.
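As a rough illustration of that flow, here is a minimal sketch of an S3-triggered handler, again assuming the Python Lambda runtime; the processing step is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

def process(data: bytes) -> None:
    # Placeholder: resize an image, transcode a video, extract metadata, ...
    print(f"processing {len(data)} bytes")

def handler(event, context):
    # Each record describes one uploaded object that triggered this invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process(body)
```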
Your users can continue to enjoy your app while a highly available and scalable back-end service processes the upload in a non-blocking way.

3. Mobile Backends

An API gateway gives you an entry point to your business functions. These functions can be exposed as a REST API that your mobile app consumes. Serverless services such as AWS AppSync allow you to securely access, manipulate, and combine data from multiple sources in real time.

4. Internet of Things (IoT)

IoT devices generate a lot of data from their environments through sensors. Organizations often struggle to process the overwhelming data coming from these connected devices in a scalable way. Using a serverless back end like AWS IoT Core, you can scale to billions of devices and trillions of messages.

5. Automate CI/CD Pipelines

Continuous Integration (CI) and Continuous Delivery (CD) are practices that empower developers to integrate and deliver code frequently and reliably. You can leverage serverless to automate your CI/CD workflows. An event from developers' code check-ins could trigger automated tests or even deploy to production.

6. Big Data Applications

Before cloud computing, the insights big data provided were available only to big enterprises, because making sense of that data required heavy infrastructure overheads. Setting up and maintaining infrastructure for big data isn't easy. With serverless computing, your app can now take advantage of several services, including Amazon S3, Amazon Athena, Amazon Kinesis, AWS Glue, and AWS Lambda, to build scalable data pipelines.

Earlier, I mentioned that serverless architecture isn't a silver bullet. There are cases where serverless might not be a good fit.

Serverless Application Monitoring

One of the critical aspects of any application in production is monitoring. Full visibility is vital for debugging and for taking proactive steps when things are about to go wrong. This is even more important in a serverless environment, where you use a backend component managed by someone else. At first, complete visibility into serverless apps was not intuitive. Over the years, as serverless architectures have gained widespread adoption, sophisticated monitoring tools have begun to emerge. Services like Scalyr, CloudWatch, Datadog, and Epsagon allow you to catch issues, with robust alerting systems, before they affect your users. These modern monitoring tools allow for seamless integrations.

Serverless Compute Platforms

We'll look at three serverless computing platforms to get you started.

1. AWS Lambda

AWS Lambda lets you run code without provisioning servers. It was introduced in 2014 and has grown ever since. According to a report, AWS Lambda has the largest market share of all serverless computing platforms. If you code in Java, Go, PowerShell, Node.js, C#, Python, or Ruby, you're in luck: AWS Lambda natively supports those runtimes.

2. Azure Functions

Azure Functions is a serverless platform offering from Microsoft. It allows you to build and run code at scale without the grunt work of server provisioning. At the time of writing this post, Azure Functions supports C#, JavaScript, F#, Java, PowerShell, Python, and TypeScript.

3. Cloud Functions

Cloud Functions is an event-driven serverless compute platform from Google Cloud. As stated on the Cloud Functions official page, you write your code and let Google take care of the operational overhead. Google Cloud Functions is one of the leading serverless platforms to run your code.
At the time of writing this post, Cloud Functions supports the Node.js, Python, and Go runtimes.
https://medium.com/dev-genius/why-serverless-architecture-will-be-the-future-of-business-computing-e99279cb298a
['Adem Zeina']
2020-06-07 17:00:26.455000+00:00
['Programming', 'Software Development', 'Productivity', 'Web Development', 'Serverless']
Facebook is shutting down Moments on Feb 25, here’s how you can save all your photos
Facebook Moments, the standalone mobile app designed to let users privately share their photos and videos, is shutting down next month: Facebook confirmed the app's services will end February 25. Facebook decided to end support for the app, which hasn't been updated in some time, because many people weren't using it.

Moments users will see a message warning them of the imminent end of the app. Below the message is an option to export photos and videos. There are two export options for Moments users, accessible through the web version of the Moments app. The user can create a private album on their Facebook account; the other option is to download everything to their preferred device. Users can start the export from any device. If the user creates private Facebook albums, they'll see a link next to each moment once it is ready to view as an album on Facebook. Users who opt to download all their files will need to enter their Facebook password when prompted. The available files will be shown along with their size, and users will be able to select the quality of the files — high, medium, or low. The Moments app will then email a link and notify the user on Facebook once the download is ready.

We believe Moments, which first launched in 2015, has seen some competition from other Facebook products in recent times, which might have led to its demise. For example, Facebook built out its new Stories feature, which has a direct sharing option. This option, which is designed for one-offs and not whole albums, allows users to skip the Moments app entirely in order to privately send or share photos with a select friend or group of friends. Facebook has communicated that it will continue to incorporate options for saving memories within the Facebook app as well. For example, as the Stories feature grows in popularity, the company is working on more efficient ways for people to save the photos and videos they share through Stories. Some of these recently launched features include Save Photos, Highlights, and Stories Archive on Facebook.

Facebook is one of the most popular social media platforms, with billions of users across the globe, which is why many people prefer to advertise their products or services there. It is equally important to communicate the right message to the right people at the right time, so consider opting for a reputed digital marketing agency for your next social media campaign; it will help increase your brand visibility and sales. Contact us to discuss your digital marketing requirements, or reach us at inquiry@techcompose.com.
https://medium.com/techcompose/facebook-is-shutting-down-moments-on-feb-25-heres-how-you-can-save-all-your-photos-3aab96cc56e1
['Jaymine Shah']
2019-02-11 06:46:36.802000+00:00
['Facebook', 'Social Media', 'iOS', 'Facebook Marketing', 'Moments']
War, Bloodshed, and the Emergence of Enlightenment in Europe
Peace talks culminated in the Peace of Westphalia (1648). This laid the groundwork for modern international relations and the Age of Enlightenment, and ended the devastating Thirty Years' War. This peace refers to a series of treaties between the major powers involved. Over 100 delegations met at different locations and different times in two major cities: Münster and Osnabrück. Within the Empire, the princes were given the right to choose the religion of their respective state, the right of people to practice their faith in private was to be respected, Dutch and Swiss independence were formally recognized, and various specific boundary issues were addressed. Enlightenment — The Potential of Rationality and Its Limits With the worst warfare over and the reality/necessity of toleration having emerged from a violent acting out revealing the excesses of rejecting that principle, the potential of reason came to the fore among the educated of Europe. The latter half of the seventeenth century saw important scientific breakthroughs associated with a group of English scholars. Upon the restoration of the English monarchy in 1660, the Royal Society was founded. This was a society in which metaphysical questions were left at the door and men of science were free to discuss their experiments. Men like Isaac Newton, Edmund Halley, Robert Boyle, and Robert Hooke made some of the most important scientific observations of their time (or any, for that matter). The Royal Society was built upon the legacy of that great English Renaissance polymath Francis Bacon. Bacon pioneered the scientific method and the philosophic outlook which came to be known as empiricism — placing an emphasis on observation rather than received wisdom. The ways of thinking explored by scientific minds quickly influenced other areas of study. The seventeenth century did not suffer from the severe compartmentalization of knowledge which plagues so many academic departments nowadays. Additionally, the polymaths of the Enlightenment were much more practically-minded. Indeed, one could say that the academics of today have more in common with medieval scholastic theologians more interested in debating how many angels could dance on the head of a pin than in anything practical — think of the absurdities of postmodernism. In the realm of politics, John Locke wrote several treatises on politics, emphasizing the importance of the individual and 'the right to life, liberty, and property.' Locke's philosophy constitutes the foundation of classical liberal political thought — that which puts the individual at the center and seeks to remove unnecessary government constraints. Just as the Enlightenment on the continent was born out of warfare, so too was the English Enlightenment. The seventeenth century saw the execution of Charles I in 1649 after years of civil war, a Puritanical republic which lasted just over a decade, and the restoration of the monarchy. Further political advances were made in the 1680s with the Glorious Revolution, limiting the monarch's power, enhancing that of Parliament, and settling religious issues related to the royal succession. The potential and limits of rationality were explored. While blind support of rationality and notions of inevitable progress may have influenced people during the French Revolution, throughout much of Europe the limits of reason were better understood. The Enlightenment was not a movement blind to realities associated with human nature.
This can, perhaps, best be appreciated in the Anglo-American world. The founding fathers of the United States constructed a government based, not on the notion of ideal men wielding power justly all the time, but upon the reality of imperfect people, tempted by power, governing. Hence, the importance of separation of powers and federalism. James Madison read widely. Among the most important of those who influenced him was the ancient Roman statesman Cicero, who argued for preservation of the Roman Republic by using the best of the three systems (rule by one, rule by some, and rule by many) to overcome the worst of each of the three systems.
https://medium.com/digital-republic-of-letters/war-bloodshed-and-the-emergence-of-enlightenment-in-europe-d89293086b4f
['Kevin Shau']
2019-09-01 16:21:00.138000+00:00
['Society', 'Enlightenment', 'Culture', 'Politics', 'History']
5 Data Storytelling Homework Assignments
Level Up Your Data Storytelling Game

What draws me to data storytelling? The opportunity to work in the space where analysis and intuition, quantitative and qualitative, logic and emotion, overlap. In my version, data and storytelling work in concert with each other. Both modes of thinking can be incorporated into every step of the process, from determining which questions to ask, to figuring out what data and methods can answer those questions, to designing and communicating the insights.

A couple of weeks ago, I had the opportunity to spend a day in Athens, Georgia with folks — designers, journalists, technologists, teachers, researchers — who share a love of data storytelling at Tapestry, a conference hosted by visualization software company Tableau. Here are five homework assignments my favorite speakers inspired me to give myself, along with a few insights:

Homework Assignment #1: Create a data graphic on your cell phone

Hannah Fairfield, designer at the New York Times, spoke about how to create data graphics that build to a reveal — just like any other good story does. This concept was echoed throughout the day: When you show your audience "2 + 2" and trust them to infer "4," they experience the information as discovery, which is stickier than simply presenting an interesting finding. As part of this talk, Fairfield shared an example on motorcycle helmets and fatalities, presented as a slideshow that steps the audience through a data story toward an insight. The slideshow was designed to be viewed on a cell phone (as is every graphic Fairfield develops for the Times). This data story from the New York Times is, effectively, sequential data art, designed with a mobile-first mindset.

While this graphic was designed for a cell phone, it wasn't designed on a cell phone. Why should you even try such a thing? Designers often don't "think in mobile" because they do their work on giant screens. Forcing yourself not only to view the graphics you make on a small screen, but to build them there as well, will help reinforce the mobile-first mindset. If, like me, you are at a loss in terms of how to actually create and edit graphics on your phone, here are some apps to get you started: iOS, Android.

Homework Assignment #2: Use a single set of data to tell 7 (or more) types of data stories

Ben Jones of Tableau Public — inspired by Kurt Vonnegut and by Edward Segel and Jeffrey Heer's must-read paper on visual narratives — presented seven common data story patterns: change over time, drill down, zoom out, contrast, intersections, factors, and outliers. To illustrate these story types, Jones used a single data set on the freedom of the press. This approach highlights something I truly believe is at the heart of data storytelling (or any kind of storytelling, really): A single data set can lead to infinite stories. This is far more exciting than simply telling the story through different channels or mediums: The insight Jones pulled from the data, as well as the presentation, was different in each example he presented. There are no doubt more than seven types of data stories, but this is a great start. Images are from Ben Jones' presentation (which I then converted to an animated GIF).

It's easy to become accustomed to telling certain types of stories with data. Those habits help us do our work better, especially when operating on a deadline. But sometimes we need to break them, as the small sketch below illustrates.
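As a toy illustration of "one data set, many stories": the same ten numbers can be framed as a change-over-time story or as an outlier story. The data here is synthetic, and matplotlib is assumed:

```python
import matplotlib.pyplot as plt

years = list(range(2010, 2020))
values = [52, 54, 53, 57, 60, 59, 63, 90, 62, 66]  # synthetic, for illustration

fig, (left, right) = plt.subplots(1, 2, figsize=(10, 4))

# Story type 1: change over time -- emphasize the trend.
left.plot(years, values, marker="o")
left.set_title("Change over time")

# Story type 2: outliers -- emphasize the point that breaks the pattern.
right.scatter(years, values)
peak = max(values)
right.annotate("outlier", (years[values.index(peak)], peak))
right.set_title("Outliers")

plt.tight_layout()
plt.show()
```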
Creativity is a never-ending dance of finding, learning, and outgrowing patterns, and this homework assignment can help you navigate that process.

Homework Assignment #3: Collaborate with a researcher and offer to visualize, for a new audience, the data they've compiled

Popular Science's Kathryn Peek works with a small team, so when she wants to do an ambitious storytelling project, she looks outside her organization for opportunities to collaborate. Often, this means teaming up with scientists and researchers to present their data in new ways for new audiences. As they work to present the information in new ways, they often end up uncovering new insights as well, as they did with this visualization of the lifecycle of scientific ideas.

Data storytelling happens in all kinds of settings — journalism, science, business, academia, education — and each of those communities has developed its own best practices, conventions, and points of view. Cross-pollinating your data storytelling with another discipline provides new inspiration and has the potential to expand your skills as a data storyteller.

Homework Assignment #4: Write down your definitions of "data," "story," and "data story"

Data designer Kim Rees of Periscopic introduced herself with a statement she knew would be provocative at a data storytelling conference: She doesn't think data and story belong together. As she spoke, it became clear that her definition and my definition of "story" didn't line up. She equates story with fiction, whereas I see story as a way of organizing information. (Many of my favorite storytellers — Joan Didion, John McPhee, Ira Glass, Jad Abumrad — deal in nonfiction.) The reason I bring this up is not to debate Rees — although that would be fun — but to encourage a thoughtful approach to the work of data storytelling. Definitions needn't be concrete — in fact, I'm sure mine will be mutable — but forcing ourselves to think about them is a great way to examine the principles and assumptions that underlie the work we do as data storytellers.

Homework Assignment #5: Tell a data story using a QUESTION → ANSWER → NEW QUESTION structure

Storytelling is an evolutionary strategy, says Newman University English professor Michael Austin. We tell stories because they help us survive; the entertainment value is a by-product. (Austin has written an entire book on the topic, Useful Fictions.) The best stories create, then mediate, anxiety. How? By posing questions, then answering them. If you can do this in a way where the answer to one question gives rise to a new question, well then you've got yourself an infinite storytelling loop. For a great story, pose a question that demands to be answered. For a never-ending story, keep doing this.

How can this help you be a better data storyteller? You know why your data is important, but your audience doesn't. Before you present the data, present a question that can be answered by the data. Create a need and then satisfy it.

What comes next? As I complete these homework assignments, I'll post the results and share them here — and I encourage you to do the same. What data storytelling homework assignments have you given yourself, and how have they helped you be a better storyteller?

Jordan Wirfs-Brock is a data journalist with Inside Energy.
https://medium.com/design-play/5-data-storytelling-homework-assignments-50f4ee03baaa
['Jordan Wirfs-Brock']
2015-03-16 04:31:30.603000+00:00
['Data Storytelling', 'Data Visualization', 'Design']
I Set a Timer for 11 Minutes to Prove that Fast Writing is Good Writing
I Set a Timer for 11 Minutes to Prove that Fast Writing is Good Writing

Then I wrote a full article during that time.

Being tired after a long day at work is not an ideal writing condition. However, I believe in fast writing. So I decided to see what happens when I set a timer and write for 11 minutes. This article is the result of that experiment. As full disclosure, I stopped the timer after nine minutes because I already had 480 words written and a complete article. I then edited the article for the next 14 minutes.

How I learned to write fast

I won my first big assignment from my first major freelance writing client. The brief? Ten thousand words in a week. I'd been asked to write a short ebook, and the seven-day deadline felt impossible. A couple of years prior, I spent 10 months writing 18,000 words during my post-graduate degree. That's 420 words per week. (I just wrote more than that in nine minutes!) Writing 10,000 words in a week meant I was being asked to write 24 times faster than I had before. Who gets 24 times better at any skill? And in only a week?

Writing 10,000 words in just one week seemed like an impossible task

I got up early on the first day. I slipped on my clothes and sat in front of my laptop. I needed to write 2,000 words. Then I'd be done for the day. I planned to research as I wrote. I started writing. I wasn't trying to write fast. I got words onto the page as they came into my mind. Then, a small miracle happened.

I was done with my word count for day one within 90 minutes

I started my working day at 7:30 am and finished at 9:00 am. "How is this even possible?" I thought. I let myself have the rest of the day off, to celebrate how simple it had been to write those 2,000 words. And I easily finished the 10,000-word ebook in a week. Of course, now I know about writers who write upwards of 45,000 words in a week, but back then, 10,000 words in seven days seemed impossibly fast. Miracles happen when you're willing to chase them. I wanted to be a writer, and here I was, being paid to write. What's more, I finished more work than I set out to finish.

Writing is a miracle

When you write fast, your thoughts come alive and you follow the spark. It's a journey of discovery for you as the writer. Readers sense your excitement in what you create. Your fingers dance on the keyboard, and magic happens. Try it: set a timer for 11 minutes, and write!
https://medium.com/2-minute-madness/i-set-a-timer-for-11-minutes-to-prove-fast-writing-is-good-writing-e0f420aad790
['David Majister']
2020-11-29 21:38:54.870000+00:00
['This Happened To Me', 'Inspiration', 'Writing', 'Writing Tips', 'Art']
Better Marketing Weekend Reads
Better Marketing Weekend Reads Clever analogies about rainforests and junk food, tips for how (if?) to Fleet a Tweet, Gen-Z marketing advice, and more. Thanks to everyone who completed the audience survey in the last newsletter! We're listening to your feedback, and we've got some new things in the works. This issue of the newsletter has a list of article highlights — from practical advice to inspirational ideas — as well as a list of some of our most-read articles. (By the way, we're not including articles about writing and/or making money on Medium in these lists—but you can find lots of resources for that in this guide).
https://medium.com/better-marketing/better-marketing-weekend-reads-e81128f2d7da
['Brittany Jezouit']
2020-11-22 13:05:47.562000+00:00
['Marketing', 'Newsletter', 'Business']
Southern Fans, Keep Your Head
10/15/94, Pelham, AL, Oak Mountain Amphitheatre I’m fascinated by the still-intact regionalism of the jamband scene. In a modern music industry where artists can globally distribute music instantaneously, radio stations are nationally homogenized, and major record labels seek maximal demographic crossover, the idea of a local scene and home-field allegiances seems quaint and antiquated. But in the jam world, the country remains carved up into fiefdoms where particular bands claim pre-eminence. For instance, Phish, the one band that could have claimed national jamfan unity post-Dead, has receded in 3.0 to their power base of New England, drawing smaller and smaller crowds outside of those markets. The Southeast remains Widespread Panic turf, the West Coast is still dominated by the extended Grateful Dead universe, Umphrey’s McGee has risen to the top rank in the Midwest, and so on. Obviously these bands still play all over the country, but reading the tea leaves of who gets billed over who at the various jam festivals and what venues these bands can fill in various cities gives you a pretty good sense of how the pecking order shifts with the map. The early 90’s were much the same, beneath the shadow of the Dead’s semi-ambulant corpse. If anything, there was even more competition and balkanization, particularly in the northeast, where alumni of the Wetlands scene pursued a friendly rivalry all the way to the mainstream charts. Phish was both a part of this battle and above it — pulling the alpha move of only playing a handful of HORDE dates in 1992 and 1993 with their peers Blues Traveler, the Spin Doctors, and Aquarium Rescue Unit. But here in fall 1994, they’re still not too big for a little bit of coalition-building away from their core market. Playing the 10,000-seat Oak Mountain Amphitheatre, just outside of Birmingham, Alabama, the same week as shows in college auditoriums, theaters, and even a parking lot, band management probably figured they could use a little help moving tickets. So the call went out to the Dave Matthews Band, for their third of six shows as Phish’s opening act in 1994. The first two shared bills were solidly on DMB’s mid-Atlantic turf: 4/20/94 in Lexington, VA and 4/21 in Winston-Salem, NC. But this Alabama one-off (and the three-show run they share in California in December) seems more like an alliance to invade enemy territory, a market where neither one could yet fill a shed the size of Oak Mountain. Expanding their fanbase horizons was especially important for both bands in 1994, as they each went for the big breakout album with Hoist and Under The Table And Dreaming, respectively. One of those records successfully crossed over, and it wasn’t the one by this night’s headliner. Under The Table had only come out the month before this show, but its lead single “What Would You Say” wouldn’t reach its peak chart position until the following summer, launching DMB into undisputed national headliner status. But in October 1994, it is assuredly not a two-headliner situation — DMB only get 45 minutes before two full sets of Phish, though Trey is quick to thank them and promise the audience that they’ll “see more of them later.” Given the relative clout of each band at this point in time, it’s perhaps surprising that their onstage collaborations tend to be covers from the DMB songbook, instead of the other way around. 
Back in April it was “All Along The Watchtower,” and in Alabama it’s Daniel Lanois’ “The Maker,” both straight-ahead pieces of folk rock with lead vocals by Matthews. The Lanois song feels particularly odd for Phish in 1994, with earnest, spiritual lyrics that they rarely attempted until later in the decade, and a laid-back smoothness that they’ve never deployed (thankfully). That uncharacteristic timidity seeps into the rest of Phish’s performance as well; perhaps the same lack of confidence that led them to book an opening act also kept their musical ambitions in check. There’s an “intro to Phish” feel to this show that puts it more in line with 1993 dates where the band was playing a town for the first time, or taking a big step up in venue size — a variety of genres, several of the usual gimmicks (trampolines, acoustic set), and concise jams. There’s no pandering to Southern jamband fan sensibilities, as they’re self-assured enough at this point to stick to what they do best. But the comfort level isn’t there for them to experiment in front of strange faces, aside from tossing in a few Parliament and Headhunters easter eggs for any attentive funk fans in the audience. Whether they’d ever feel comfortable in the deep south is an open question — off the top of my head, the only classic shows I can think of from the region are all from Atlanta. Certainly they’ve given this part of the country some love in the current era, returning to Oak Mountain again in 2012 and 2014, and doing an oddly-routed Southern swing last fall. But low attendance at these shows, and anywhere that isn’t the northeast US or a “destination” run, is a big part of the argument that Phish has returned to regional status in their advanced age. We’ll know for sure if, next time they venture out of New England, they bring back an opening act.
https://medium.com/the-phish-from-vermont/southern-fans-keep-your-head-a050fc7ac339
['Rob Mitchum']
2017-02-15 19:27:36.309000+00:00
['Phish', 'Music']
[Good Book Selection] Site Reliability Engineering — Simplicity
How do you use two-way binding in Knockout.js and ReactJS? Components built with different frameworks must behave consistently on one website. What we have to realize is how to…
https://medium.com/a-layman/%E5%A5%BD%E6%9B%B8%E9%81%B8%E8%AE%80-site-reliability-engineering-simplicity-3c9608fb06bc
['Sean Hs']
2019-09-21 11:01:47.253000+00:00
['Reliability', 'Google', 'Web Development', 'DevOps', 'Sre']
CITRIS E.D. Camille Crittenden named Chair of California Blockchain Working Group
Blockchain technology has opened up a range of possibilities for secure, immutable transactions of all kinds. It promises a safer data transfer system, yet its promises and limitations have yet to be fully explored. Human errors, transaction costs, and security attacks must also be evaluated before integrating blockchain into our vital governmental and economic systems. In response to growing interest in this technology, Assembly Bill 2658 called for the establishment of a statewide Blockchain Working Group to evaluate the risks, benefits, best practices, and legal implications of blockchain for the people of California. California Government Operations Agency Secretary Marybel Batjer has named CITRIS Executive Director Camille Crittenden chair of the Blockchain Working Group. "Distributed ledger systems hold promising opportunities not only for cryptocurrency but for areas of social impact — such as documenting land and property, ensuring chain of custody for legal evidence and supply chains, and giving consumers greater control over their financial and health data — applications in utilities like energy and water, and more," said Crittenden. Crittenden will lead a group of 20 experts with technology, business, government, and legal expertise to evaluate privacy risks, benefits, legal implications, and best practices for integrating blockchain into government and business. Her team's ultimate goal will be to gather input from a broad range of blockchain-affected stakeholders and present this information and recommendations in a report to the California Legislature. Crittenden's leadership experience uniquely prepares her for this position. After earning her Ph.D. from Duke University, she served as executive director of the Human Rights Center at Berkeley Law and as Assistant Dean for Development with International and Area Studies at UC Berkeley, and co-founded the CITRIS Policy Lab and the Women in Technology Initiative at the University of California. Along with her role as executive director at CITRIS and the Banatao Institute, Crittenden brings to this position a deep understanding of technology's applications for civic engagement, government transparency and accountability, and the digital divide. "I am honored to lead this important working group on possible blockchain applications for the state of California," said Crittenden. The group will hold a kick-off meeting next month to outline the key components that will make up the final report and recommendations. The report will be delivered to the legislature by July 1, 2020.
https://medium.com/citrispolicylab/citris-e-d-camille-crittenden-named-chair-of-california-blockchain-working-group-8fe76e60993e
['Citris Policy Lab']
2019-08-06 20:59:16.710000+00:00
['Blockchain', 'Society', 'California', 'Policy', 'Technology']
What is NLP — Neuro-Linguistic Programming
The central idea of NLP is that the components of the individual ("language", "beliefs", and "physiology") interact to create percepts with certain qualitative and quantitative characteristics, and that the individual's interpretation of this structure constitutes their world. By modifying meanings through a transformation of the perceptual structure (called the map, that is, the symbolic universe of reference), a person can undertake changes in attitude and behavior. The perception of the world, and consequently the response to it, can be modified by applying appropriate techniques of change. NLP therefore has among its aims the objective of developing successful habits and reactions, amplifying "facilitating" (i.e. effective) behaviors and decreasing "limiting" (i.e. unwanted) ones. Change can also occur by precisely reproducing the behaviors of successful people in order to create a new "layer" of experience (a technique called modeling).

NLP was originally promoted by founders Bandler and Grinder in the 1970s as an extraordinarily effective and rapid form of psychological therapy, with claims that it could aid in the treatment of phobias and learning disabilities, even through a single one-hour session. Despite the abundance of such claims at the time of publication, the authors did not provide any supporting empirical evidence. This fact, together with doubts about the validity of the mechanisms presented, meant that NLP did not receive the support of the scientific community. Neuro-linguistic programming is not considered part of the mainstream academic current of psychology today, and has had only a limited impact on some psychotherapy and counseling techniques.

NLP has for some time been associated with various psychological manipulation techniques. Milton Erickson responded to these accusations by stating that we all "manipulate" each other for different reasons and often for good, as a mother does when she takes care of her children and passes on ways, thoughts, and values, or as a teacher does with his students. Neuro-linguistic programming is today a discipline that brings together various areas of the study of human communication, and is proposed as a tool to influence factors such as education, learning, negotiation, sales, leadership, and team-building. It has also found application in decision-making and creative processes, in sports, and in counseling.

Proponents of NLP focus attention on the person targeted, arguing that humans already have all the resources they need, even if these resources are undeveloped or unexplored. The role of the "programmer", that is, of the NLP practitioner, would therefore be to help the person explore their "world map" by asking specific questions to stimulate this process in the recipient. The analysis of a problem from different points of view is used to define and eliminate beliefs considered limiting, and through the exploration of "ecology", that is, the fabric of a person's relationships, practitioners try to define the consequences that achieving predetermined goals would have on a person's well-being.

NLP is not considered a science, but a pseudoscience: its claims are not based on the scientific method, and many of the techniques it uses are based on theories that have no basis in the current body of medical and psychological knowledge.
A second frequent criticism concerns the absence of empirical evidence and structured research regarding the theories supported by NLP, some of which are contrary to current knowledge. The theories behind NLP may be ridiculous and based on made-up or outdated assumptions, but that's not in itself enough to say it doesn't work in some way. The problem is that the vague and adaptable nature of this pseudoscientific discipline makes it impossible to test in a way that is satisfactory to the practitioner. For example, when psychologist Richard Wiseman and colleagues showed that it was not possible to tell if a person was lying by their eye movements, some experts began to deny that the claim really belonged to NLP. In fact, there are few scientific studies that support NLP, and systematic reviews have regularly rejected it.

And to think that there were high expectations: even the American army (remember The Men Who Stare at Goats?) was interested in certain New Age techniques that were spreading in the 1980s, including NLP, and asked a committee of the National Research Council to evaluate them. It was particularly interested in the part of NLP that supposedly allows one to influence others, but at the end of the two-year investigation the committee's response was negative.

Even though it was born as a psychotherapy with the aspirations of a scientific revolution, the only niche where NLP could logically survive was that of self-help and coaching, and these sectors, in fact, are the only ones to enthusiastically produce evidence of its effectiveness. It is a pity, however, that this evidence is exclusively anecdotal: why does no one systematically check how a representative sample of clients reacts to NLP techniques, and whether a given effect is really attributable to NLP and not to other factors? In the meantime, anyone can enter the business, because courts have ruled that not even the founders hold exclusive rights to the name, so anyone who wants to can register their favorite variation, if they get there in time.
https://ehs-77.medium.com/what-is-nlp-neuro-linguistic-programming-90fe3430ff3f
[]
2020-12-03 11:19:25.936000+00:00
['Nlp Certification', 'Neuroscience', 'Neurolinguistics', 'NLP']
The 5 functions you need to know
Knowing the following set of functions helps you to describe the change in income, the growth of your company, the number of COVID-19 cases, …

Logarithmic Functions

Logarithmic functions are monotonically increasing, but after a quick initial phase they increase only very slowly. (Image by Martin Thoma.)

Logarithms have a base. In the example above you can see two examples with base 2 and one example with base 10. The higher the base, the slower the function grows. Logarithmic functions have three phases:

0 < x < 1: Negative phase. The closer you get to zero, the more negative the logarithm's value becomes.
x = 1: The logarithm of 1 is zero for all bases.
x > 1: The logarithm keeps growing.

One invariant to know is

log_b(b^x) = x

This means the logarithm to base b reverses the exponential function to base b. In computer science, this kind of growth behavior is associated with trees. (A binary search tree. Image by Martin Thoma.)

At every node in this tree, the left subtree holds values smaller than or equal to the node and the right subtree holds values bigger than the node. If you want to check whether a value is there, you can make a series of checks. If the tree is balanced — meaning you have roughly the same number of nodes on the left as on the right — then the number of checks you have to perform grows logarithmically with the number of elements:

1 element → 1 check
2–3 elements → 2 checks
4–7 elements → 3 checks
8–15 elements → 4 checks
16–31 elements → 5 checks

This is extremely relevant for designing fast algorithms. If you see O(log(n)), very likely a binary search or a binary tree is involved. If you see O(n log(n)), very likely sorting is done.

Linear Functions

Linear functions are described by f(x) = m⋅x, where m is called the slope. (Three linear functions with different slopes. Image by Martin Thoma.)

A slope of 2 means the value increases by 2 with every increase of x. A slope of 0.5 means the value increases by 0.5 with every step. A typical worker's wage is a linear function over time: the x-axis represents the time and the y-axis represents the money the worker earned. If they work double the time, they get double the income. If they work half the time, the income halves. As a developer, one might think that a lot of resources also show linear growth: if your website has double the amount of users, you need double the amount of machines to show similar performance. In reality, due to caching or inefficient algorithms, it might be more complicated.

A slightly more complex version of a linear function is called an affine function: f(x) = m⋅x + t — but pretty often, one does not make this distinction and calls affine functions linear functions as well. The parameter t is called the intercept. The intercept just pushes the whole function up or down. Fuel consumption for driving x kilometers is another real-world example of a linear function: if you drive double the distance, you will approximately need double the fuel.

Quadratic Functions

(Three quadratic functions. In computer science, you're usually only interested in the part x > 0 of the red line. Image by Martin Thoma.)

Quadratic functions have the form

f(x) = a⋅x² + b⋅x + c

The x² term leads to the characteristic behavior that a doubling of x makes the value increase by a factor of 4 (for the dominant term, once x is large):

x ⋅ 2 → f(x) ⋅ 4
x ⋅ 3 → f(x) ⋅ 9
x ⋅ 4 → f(x) ⋅ 16

In software development, quadratic growth typically happens when you need to look at all pairs of a set. For example, if you had a dating platform and you wanted to check for every pair of users if they are a good fit.
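To see that doubling behavior concretely for the dating-platform example, here is a toy Python sketch (the pair count is n choose 2):

```python
def pairs(n: int) -> int:
    # Number of distinct user pairs to compare: n choose 2.
    return n * (n - 1) // 2

for n in [100, 200, 400, 800]:
    ratio = pairs(2 * n) / pairs(n)
    print(f"{n:4d} users -> {pairs(n):7d} pairs; doubling multiplies the work by {ratio:.2f}")
```

The printed ratio approaches 4 as n grows, which is exactly the quadratic signature described above.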
This quickly grows out of hand, although today's computers can deal with astonishingly big numbers.

Linear and quadratic functions belong to the bigger family of polynomial functions. Polynomials are described like this:

f(x) = a_n⋅x^n + a_(n-1)⋅x^(n-1) + … + a_1⋅x + a_0

n is the degree of the polynomial. A linear function is a polynomial of degree 1; a quadratic function has degree 2. The constant a_n has to be different from zero, but can be negative. All other constants a_i can have any value. Polynomials are important in a variety of different applications in computer science and mathematics, but for describing growth it's usually fine to just know cubic, quadratic, linear, and constant functions.

Exponential Functions

(Image by Martin Thoma.)

Exponential functions multiply their value by the base with every unit step. Exponential functions are typically used to calculate the unlimited growth of bacteria and viruses under ideal conditions. For base 2 — the red line in the image above — some of the values are:

x = -1 → y = 0.5
x = 0 → y = 1
x = 1 → y = 2
x = 2 → y = 4
x = 3 → y = 8
x = 4 → y = 16

You can see how this becomes extremely huge after a while. This is also visualized by the wheat and chessboard problem.

Sigmoid Functions

(Image by Martin Thoma.)

Exponential growth only describes the growth of bacteria under ideal conditions. This is true for the starting phase, but the world's resources are limited. At some point there will not be enough food to continue growth. The curve flattens out. Sigmoid functions capture this behavior of limited growth. They have an initial phase where they grow very slowly. Close to their symmetry point — at x = 0 for the red and the blue lines and at x = 1 for the green line in the image above — they look linear. Logistic functions are a very typical subclass of the sigmoid functions. Logistic functions are described by the following equation:

f(x) = L / (1 + e^(-k⋅(x - x_0)))

L defines the upper boundary; in all shown examples, the functions approach y = 1. The function has a symmetry point at (x_0, L/2). The bigger k, the steeper the function is. Other important sigmoid functions are the hyperbolic tangent and cumulative distribution functions (CDFs) such as the CDF of the normal distribution. Both logistic functions and the hyperbolic tangent are used as a core ingredient in artificial neural networks.

Summary and Outlook

You've just seen 5 classes of functions which are relevant when talking about growth. Logarithmic functions are always increasing, but very slowly. Faster are linear and quadratic functions. Exponential growth is crazy and usually only happens for a limited amount of time until a threshold is reached. Then the curve flattens and you see a sigmoid shape. There are, of course, way more functions and more general properties. Let me know if you want to know more about function properties, distributions, or just some awesome and astonishing functions I've seen during my computer science / math classes.
https://martinthoma.medium.com/the-5-functions-you-need-to-know-602e06d6b86
['Martin Thoma']
2020-08-05 18:11:26.650000+00:00
['Mathematics Education', 'Programming', 'Software Engineering', 'Mathematics', 'Complexity']
How to Effectively Skill Up As A Developer?
Do you feel adequately skilled in the market as a software developer? Do you get lost in conversations about tech outside of your work and wonder if you are staying relevant? If your answer to these questions is yes, you are not alone!

The software development world has one constant, and that is change! New programming languages, tools, frameworks, environments, and devices are born regularly. By the time you master one framework, your friend is talking about a cool new framework. What is the strategy to stay relevant and skill up as a software developer?

I'd like to start off by sharing some of my personal experiences as a developer. I started off my tech career as a C++ developer; a few years later I was put on a team that was doing Java. A few years after that, I quit my job and joined a startup. We were evaluating technologies and picked React Native to build mobile apps. Within a span of a few years, I moved from C++, to Java, to JavaScript. I transitioned from a desktop developer to a mobile and web developer. Today I am an author of tech courses, blogger, speaker at conferences, co-host of a tech podcast and a software consultant. So how do I do this and keep up?

Note: You can follow me on Twitter @AdhithiRavi to learn more about me and keep in touch!

Don't Live in a Bubble

When we begin our careers we are enthusiastic and all pumped up to be great. As the years pass by, we get older, get married, kids arrive, and our career just becomes a job that feeds the family. This is the cycle for most of us in any career path. It is not necessarily bad, but being a software developer also means that tech changes and we have to skill up and stay relevant. Don't get sucked up within your company, over-work and miss opportunities outside of it. There are exciting opportunities and people outside of just the company that we work for, and the first step is to recognize that. It is easy to get caught up in a bubble within a company, and sometimes just stepping out of the bubble and seeing what's around can be enlightening.

Don't Learn Every New Framework

To stay relevant in the software development market, you don't have to learn every new language/framework/toolset. This would make you good at none of them. You can't be a React developer, an Angular developer and be really good at Vue. For example, you can't be great at playing every musical instrument (although there are some exceptionally talented musicians who can do that); you instead pick one and master it. The same analogy applies to tech as well. Pick a language, framework and toolset that works for you, your team and the product after comparing with what else is available in the development world. If this tech stack works well, go with it. At this point, you don't need to panic every time a new idea or tech is born. For example, if you are building React apps and it is working great for you and your clients, there is no need to jump ship to another framework unless you have a valid reason to. Pick your tech stack and master it!

I am not discouraging developers from learning and exploring new frameworks and technologies, but I am saying that we don't have to learn every new framework that comes our way. This is a huge investment in time and may not be worth it. Don't feel lost or left out if someone talks about a cool new technology you aren't using. You don't need to use it right away!

Meetups and Conferences

When you learn in isolation you may lose track of your learning and goals.
Learning in public is a very important consideration for a modern software developer. Tech meetups and conferences are organized frequently around the globe. If you are interested in a certain technology, look for meetups in your area where other developers gather to speak and learn about it. This is a great way for you to meet other like-minded developers and share knowledge. If there is a topic you would like to share, speak at the local meetup about it. Once you start getting the hang of meetups, you can try larger conferences and apply to speak there as well. This helps you learn a ton within a short span and also make connections with other developers in the industry. This is a definite win-win for skilling up and staying relevant.

Open Source Contributions

Snapshot of my GitHub account

Another way to skill up and stay relevant as a software developer is to get into open-source coding. There are plenty of cool open-source projects that need contributors. You can start by providing some help with fixing bugs, documentation and so on, and move up to creating new features. This helps you learn outside of your daily work, get in touch with developers across the world and learn from them as well. Open source contributions make your developer profile stand out while you are looking for a job. So next time you have a free hour, try to fix an open issue and submit a pull request to an open-source project. Once you get the hang of it, you'll keep doing it!

Conclusion

Alright folks, that's a wrap! I hope you enjoyed this article and some of the suggestions I shared to skill up as a developer. See you again with more articles. If you liked this post, don't forget to share it with your network. You can follow me on Twitter @AdhithiRavi for more updates. I am a Pluralsight author and you can check out my courses here:

I'd like to leave you with a quote.
https://adhithiravi.medium.com/how-to-effectively-skill-up-as-a-developer-aa4cf76727b5
['Adhithi Ravichandran']
2020-12-21 21:47:40.121000+00:00
['Software Development', 'Software Engineering', 'Career Advice', 'Developer', 'Programming']
Your Data Safe Weekly Update
It’s been one of those weeks. We started our private sale. We are working really hard behind the scenes and we are well ahead of schedule for the launch of the YDS Academy. We have hit some big numbers on Twitter: 168,000 impressions over the last 9 days in August and 5,120 profile views from 15 tweets. We have also been listed on a number of websites:

ICOBench
Coinschedule
ICOMarks
ICOAlert
TopICOList
ICOBazaar
ICOTrack
CryptoNext

To name a few, of course… Keep an eye out! We have a lot of things coming! It’s an exciting time. For more information on our private sale: www.YourDataSafe.io
https://medium.com/your-data-safe/your-data-safe-weekly-update-8e10051a4c4
['Your Data Safe']
2018-08-11 10:01:06.822000+00:00
['ICO', 'Startup', 'Twitter', 'Crypto', 'Data']
Do it with data.
The humble pie chart.

So let’s say that you’ve been tasked with producing a (suppress yawn) pie chart. If you want to do that in Excel, no problem. Insert > Chart = Done. But if you want to animate it then things become a lot less… simple. If you’re asked to update that chart at a later date, the following hours/days are likely to be interspersed with bouts of you repeatedly slamming your head into your desk. But it doesn’t have to be like that.

Take this typical data set you might receive from a client.

Equities — 53%
Bonds — 8%
Mutual Funds — 2%
Cash & MM Instruments — 9%
Other — 28%

Your job is to present this rather bland information in an engaging and digestible way. So off you go with this data and carefully construct a pie chart in your favourite animation program. You add your keyframes to each slice and lovingly adjust the easing. Maybe you get fancy and spend some time setting up a rig to run it all off one attribute. It truly is a thing of beauty. The most magnificent circle. People weep when they see it. But then…

… it’s 3 months later and the client wants to change something. In fact, they want to change all the things. Their landscape has shifted and it needs updating fast for a meeting tomorrow. Breathe…

We’ve had .csv support in Cavalry for a good while now but the recent addition of native Google Sheets support has removed a lot of the friction for a user in handling that data. It means that any changes made to that data are instantly reflected in your composition. In the example below the proportion of each slice of the pie is being set directly by the data in this Google Sheet. Because Cavalry has a Duplicator (see Ian Waters’ post here) all the animation is handled without expressions and with only a few keyframes, meaning any updates to timing are easy to make. But we’re not here to talk about all that.

A simple, data driven pie chart animation.

What we’re interested in here is the data aspect. You’ll notice that the data in that Google Sheet also includes information for the Labels and Colours. They are also being used to drive both of those elements. In the interests of clarity we’re not going to change those in this example but by simply changing the values in the Percentage column of the Google Sheet and refreshing the asset in Cavalry we get a new pie chart.

Some quick edits in Google Sheets.

Here’s that same chart updated with the data changes:

Updated animation based on new data.

All we had to do there was update the spreadsheet, refresh the link in Cavalry and hit render (although even that may not be necessary in the future — more on that another time). This technique can be applied to any attribute in Cavalry. If we change the labels in the Google Sheet the labels change in the animation. If we change the colours in the Google Sheet… you get the idea.

So not only have we saved a lot of time and energy here but by referencing the data externally all the animation is independent of the content. That means cost savings for your clients and that you can get on with focussing on the creative. That gives you (and your client) confidence that there will be no accidental timing changes and that nothing but the data itself has changed. Setting up templates like this for bigger corporate clients could drastically reduce the amount of time spent doing what is generally considered to be on the tedious end of the spectrum for a designer.
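Cavalry handles all of this natively, but the underlying idea — regenerate the chart from the data, never touch the chart by hand — can be sketched in a few lines of Python. The file name and column headers below are assumptions standing in for an export of the client’s sheet:

import csv
import matplotlib.pyplot as plt

# Hypothetical CSV export of the Google Sheet, with "Label" and "Percentage" columns.
with open("portfolio.csv", newline="") as f:
    rows = list(csv.DictReader(f))

labels = [row["Label"] for row in rows]
sizes = [float(row["Percentage"]) for row in rows]

# When the data changes, re-running this regenerates the chart; no manual edits.
plt.pie(sizes, labels=labels, autopct="%1.0f%%")
plt.show()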
https://medium.com/cavalry-animation/do-it-with-data-6c8024757513
['Chris Hardcastle']
2019-08-30 10:58:05.056000+00:00
['Animation', 'Spreadsheets', 'Design', 'Motion Design', 'Data Visualization']
Should a Good Girl Still Want Sex?
I was called fat, and I was called ugly. I can lose weight, get fit, wear nicer clothes, change my style, colour my hair, learn to do better make-up; hell, I can get plastic surgery and change myself completely. Not to please him, but I have been getting fitter and dressing better — after I was allowed to. He didn’t like me dressing up; he was way too jealous. And he would have liked me even fatter, as that meant to him that no one else would want me.

I was called stupid and obnoxious. I could change the way I speak, I could get smarter (jeez, that wouldn’t be too good, I am already smarter than most of the men I know. I should rather tone it down.) I can convince myself that he was the one who was stupid and obnoxious.

But he called me a slut. And it’s difficult to let go.

Up to this very moment, female sexual liberation is very much in its infancy. We are getting better at destigmatizing sexuality, but the harsh reality is that the classic double standards still linger: while men are praised for having sex with a lot of people, women are shamed for it. That’s wrong on so many levels, but in principle, it means that no matter how far we have come, we still have an even longer way to go before an attitude of sexual acceptance and celebration truly becomes the norm. The judgment is harshest when it comes to the number of sexual partners, but women are also judged and stigmatized for being sexually open, for being eager or horny, for wanting sex without a relationship. In my case, I was judged for even wanting sex within our relationship, as it was a clear indication that I wouldn’t be able to keep my panties on.

Being a good girl became an insult. It was thrown at me at random moments, tweaking my sentences, taking my words out of context. He expected me to be the Virgin Mary and the most experienced hooker at once. His favourite sentence was that guys want a good girl who is only bad to them, while girls want a bad guy who is only good to them. He always said that I should have married my first boyfriend — as that’s what good girls do.

I internalised his abuse, I believed that I was a slut. I apologised for it, I told him I was ashamed and I regretted it. I couldn’t help it, I couldn’t go back and unsleep with my boyfriends in my twenties. I couldn’t undo my past.
https://zitafontaine.medium.com/should-a-good-girl-still-want-sex-91d584ddb050
['Zita Fontaine']
2020-05-12 06:53:09.549000+00:00
['Relationships', 'Mental Health', 'Feminism', 'Sex', 'Abuse']
How to Make Sure Your Website Doesn’t Suck After Google’s Core Vitals Update
How to Make Sure Your Website Doesn’t Suck After Google’s Core Vitals Update

Stop what you’re doing, open Chrome, and navigate to the Lighthouse tool — you’re going to need it

Photo by Mitchell Luo on Unsplash.

Google announced back in May 2020 that “core vitals” would soon be a search ranking signal. This means that websites with good core vitals have the potential to rank above those that don’t. If you manage a website and are SEO-aware, then I’m sure you have spent an incredible amount of time trying to appease our search engine overlords — everything from keyword analysis and backlinks to sitemaps and rich snippets. The techniques used to improve your position in Google SERPs (Search Engine Results Pages) are ever-changing, and you’ll be pleased to know you’ll now need to learn three more. Let’s introduce them:

Largest Contentful Paint (LCP): This is how long it takes the page to load. It is when the main content of the page has finished rendering in the viewport (the area the user can see).

First Input Delay (FID): This is the length of time from the moment the user first interacts with an element to the moment it responds.

Cumulative Layout Shift (CLS): This is the measurement of elements that move as the page loads. A good example is when a page has loaded, and just as you go to click something, it moves.

So what gives me the right to write this? I am a web developer of 12+ years and have implemented a lot of SEO advice over the years. I recently went on a mission to get the perfect Lighthouse score and came pretty close, but several things kept hitting me in the face — and it’s these little nuggets of information that I want to share with you. I’ve kept it as high-level as possible, as there are plenty of ways to solve an issue and I don’t want to silo my advice to one technology.

Note: The technical implementation of some of these points may require you to rethink the way your whole website works.

Let’s look at the typical places where our websites will be failing us when Google Core Vitals (GCV) come into play.
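If you’d rather pull these numbers programmatically than from the Lighthouse UI, Google’s public PageSpeed Insights API exposes them. This is a sketch only — the v5 endpoint is real, but the exact audit keys below are my assumption from Lighthouse’s naming and should be checked against the current API docs:

import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
resp = requests.get(API, params={"url": "https://example.com"})
audits = resp.json()["lighthouseResult"]["audits"]

# Assumed Lighthouse audit ids; FID itself is a field metric and only
# appears under "loadingExperience" when real-user data is available.
for key in ("largest-contentful-paint", "cumulative-layout-shift"):
    print(key, "->", audits[key]["displayValue"])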
https://medium.com/better-programming/how-to-make-sure-your-website-doesnt-suck-after-google-s-core-vitals-update-f4dd66d795f8
['Stuart Costen']
2020-12-18 17:05:14.937000+00:00
['Google', 'SEO', 'Web Development', 'Programming', 'Search']
You’ve Got a Drinking Problem — Here’s Why
What does a problem look like?

The sad truth is that once you develop a tolerance, you can consider yourself a pre-alcoholic — you’re in stage one of a spiral that could terribly derail your life. That doesn’t sound that bad; we sometimes even get compliments for our ability to drink larger quantities of whatever alcohol we like. Our actions have consequences though, so let’s talk about our occasional binge, where amazing stories (and headaches) start.

What does a blackout actually mean?

I’m not going to throw an arbitrary definition at you — just Google it, or better, use PubMed. When we black out drinking, we intoxicate ourselves so much that our bodies begin shutting down vital functions. We can’t articulate, can barely walk, and lastly, our brains give up on us. Short-term memory loss has set in. Honestly, can you name one other situation where this happens? Have you ever done anything else in life that resulted in such a catastrophic event? I’m not innocent either — even after graduating in a healthcare-related field, I didn’t come to this conclusion. Blacking out is just part of drinking culture — isn’t it?

Human nature can be narrowed down to two principles: avoid pain and seek fun. Drinking alcohol seems fun to us; honestly, we even look better (through the widening of our blood vessels). The truth is that drugs merely mimic the second principle — they are not fun, they are just poison. I’m not judging you; I actually want to encourage a healthy relationship with this age-old substance. So hear me out: have you ever found yourself planning how much you want to drink? Congratulations — your body is trying to tell you something. It’s blatantly yelling one thing at you: “This is dangerous, you need more control!”
https://medium.com/in-fitness-and-in-health/youve-got-a-drinking-problem-here-s-why-59062e4920e3
['Marcelino Granda']
2020-10-15 15:19:51.070000+00:00
['Addiction', 'Prevention', 'Health', 'Thoughts', 'Alcohol']
Game the System for Good
Teenagers will always find creative ways to cut class, but the way they did it during quarantine deserves a long second look. When school closures forced students in China to complete assignments through a remote learning app, they flooded it with hundreds of one-star ratings to get it delisted from the App Store. No app means no downloads; no downloads mean no class. Eat your heart out, Ferris Bueller.

Targeted, coordinated actions like that one can shake up entire outcomes. That’s how young people showed up at President Trump’s re-election campaign rally in Tulsa over Juneteenth weekend — by not showing up. If the hundreds of thousands of ticket registrations were anything to go by, the arena should have been packed with wildly enthusiastic Trump supporters. Yet there were only 6,000 people in attendance, filling barely a third of the 19,000 seats. The curious case of the empty arena confused campaign organizers and pleased opponents alike. Chalk one up for the resourceful TikTok users who banded together to sign up for the event in droves without actually intending to go — just like skip day. In one fell swoop, teens took down a campaign rally from the comfort and safety of their homes.

Extremely Online teens these days can organize around pretty much anything they set their minds to. Armed with eye-catching content, K-pop fans have readapted the tactics that make their favorite songs and artists go viral to fight racism online. Together, they drowned out racist, anti-Black hashtags like #WhiteLivesMatter, #WhiteoutWednesday and #BlueLivesMatter by flooding them with unrelated entertaining memes and fan-cam videos until Twitter’s trending algorithm classified it as “k-pop” instead of “politics.”

Review-bombing, no-show pranks and hashtag-spamming are all guerilla tactics being deployed in a new warlike situation. As teens apply their social media know-how to scale the impact of their social activism, they’ve redirected people’s attention to matters of national interest. And most of them can’t even vote yet.

TikTok, Tinder and YouTube have become similar breeding grounds for digital natives looking to game the system for good. Take the video on TikTok of a police officer detaining a handcuffed Black woman singing “you about to lose your job!” It took off, inspiring countless #LoseYoJob remixes and dance challenges that spilled over into the streets as a protest anthem against police brutality. On Tinder, Dakota Rouse started a fundraising trend where she only responded to matches whose opening line was a screenshot of a signed petition or donation to a Black Lives Matter-related cause. Homebound activists on YouTube raised funds for the BLM movement by putting up hour-long monetized videos celebrating Black artists of all kinds, the ad revenue from which was donated to supporting nonprofits. One such video from user Zoe Amira — later pulled down for violating Google’s monetization terms — was viewed over 7.5 million times and raised more than $21,000. Someone called it “a genius way of turning capitalism towards activism.”

When the system doesn’t work for you, you work the system. A number of radical, minority-led community efforts during the 70s were born out of the same spirit. The Young Lords took over the nurses’ wing at Lincoln Hospital in the South Bronx for 24 hours to protest public health conditions and demand better care, an action that won the community a hospital reconstruction that was 20 years in the planning.
The Black Panthers fed hungry Black kids before school for free by running community programs that were eventually dismantled, but they set a precedent for free breakfast programs in schools across the country. Provocative as they may have been, both initiatives created the discomfort necessary to draw attention to inequality and resulted in better services for the people.

Malcolm Gladwell once wrote that the revolution will not be tweeted. A decade later, not only has the revolution been tweeted, but liked, hashtagged, spammed, commented on, Instagrammed, Venmo’d, TikTok’d and shared on Google Docs. The result is that social media companies must adjust to their 2020 role of facilitating important civic discourse. If American democracy is a raucous frat party, they’re the designated drivers, forced to behave responsibly in order to keep everyone else safe. If they falter, we all die, or worse, re-elect 45.

We’ve come a long way from learning how our Facebook news feeds were weaponized to circulate hate and misinformation and undermine the results of the 2016 presidential elections. Four years later, teens are bending the same platforms to their will for positive social change. Even companies have joined in. Patagonia, The North Face and Ben & Jerry’s, for instance, are calling out Facebook for “doing too little to stop hate speech on its platforms” by boycotting their ad spends as part of the #StopHateForProfit campaign.

From cutting class to selling out stadiums at campaign rallies, young vigilantes prove there’s no challenge too great when enough people come together to creatively confront the structures designed to keep them in place. How can we subvert the systems that no longer serve us? Platforms built to entertain and distract us can also be used to make us pay attention and take action. Social networks designed to profit from our time and attention can be retooled to align with their original intention: create new ways for people to build trust with each other.
https://medium.com/thoughtmatter/game-the-system-for-good-105d18c0ab42
[]
2020-06-29 16:21:52.827000+00:00
['Platform', 'Creativity', 'Social Media', 'Democracy', 'In The Know']
Staying Home. Let your fingers run wild over…
Staying Home

Let your fingers run wild over imagination’s spine in seclusion

Photo by Rowan Heuvel on Unsplash

I agree, tonight isn’t like our other weekend nights. I, too, feel its frosty coldness. My heart, just like yours, isn’t pacing with excitement for impending scenery ahead. There is neither steering nor a glass in these hands tonight. I’m not wearing a shoe, and not expecting any pre-dinner footsie too. I agree it’s hard for us who stay alone. Still, my dear dumbasses — stay the eff home. Just agree to this for some time. There will not be any first meets, and reruns of old stories will only continue online. There will not be a new skin under your nose nor a familiar kiss from a lip that’s not already in your home. There will be no warm breaths raising the heat on dance floors, and no whispering of Manto’s line to another estranged soul. No tousling of hair, and calling cutes. Stay away from flights, and don’t take any Uber too. If you are worried, your anxieties will feel lonely without you — get done with your laundry, and fix your room if you’ve to. Tonight, stay in, flirt with your memories, and leave your fingers wild over your imagination’s spine. If you feel you got some extra blues, let the nib of your pen kiss its shadows over a page or two. To my moronic friends planning their weekend escapades, pour yourself a thick poem, or feast on a thriller if you need to. For a little while, let’s agree — ‘mi casa es mi casa,’ it’s not for ‘tu.’
https://medium.com/literally-literary/on-staying-home-bcabd2b9355b
['Pratik Mishra']
2020-05-26 02:18:19.509000+00:00
['Home', 'Life', 'Self', 'Poetry', 'Creativity']
When Life Gives You Lemons, Make a Free Lemonade Stand
When Life Gives You Lemons, Make a Free Lemonade Stand

A story of hope and kindness during uncertain times

The weather was perfect in Nashville. Mid-70s. Not a cloud in the sky. My four-year-old and I set up a pretend restaurant inside, but soon after he suggested something much better on a day like this: a lemonade stand. As an alternative to charging people for lemonade in the middle of a pandemic, my wife had the idea of including an optional donation jar instead. She mentioned a few charities and my son chose the local children’s hospital as the beneficiary of the stand. We weren’t expecting anything more than a couple takers, but the response was beautiful. After being outside for just over an hour, we walked away with a jar full of generous donations from people around the neighborhood — most of whom were strangers. My son was elated.

At one point we encountered two cyclists speeding by our house, locked into their pedals and wearing proper biking outfits. As soon as the first rider saw a “Free Lemonade” sign with several young children behind it, he immediately smiled and put on his brakes. He walked up to the stand and my son grabbed two cups, then pointed to the other cyclist and said, “This one is for your buddy.” The rider replied, “That buddy is actually my son.” His son walked over and took the cup of lemonade, then the father pulled out a plastic bag from his backpack and grabbed a $20 bill to place into our donation jar. “This seems like a really good cause,” he said with a smile. Moments later, we met a woman who worked at the children’s hospital and she was ecstatic to give us a donation in support of her employer. Everyone who passed by couldn’t have been more kind and appreciative.

The total amount raised was $56.80, but it felt like a whole lot more. Our children were prepared to recruit the entire neighborhood for a free cup of lemonade. There was even a point where they proceeded to go door-to-door with some other kids trying to solicit people to buy the lemonade (forgetting that we weren’t actually selling it). Standing next to a bowl of lemons and a tray of lemonade, my son also asked at least five people: “Do you want some orange juice?” Laughter and joy will pave the path back to normalcy in this world.

The lemonade stand was a simple act that brought joy to people of all ages and gave me a renewed sense of hope. More often than not, joyful moments must be shared with other people. Our society will never heal itself in solitude. Activist Valarie Kaur offered up this powerful reminder in a recent interview with Baratunde Thurston: joy is “our greatest act of moral resistance” and “returns us to everything that is good, beautiful and worth fighting for.” When times are tough and unpredictable, we still have the ability to take actions that require minimal effort and bring smiles to those around us. My hope for a post-coronavirus America is that we learn to foster more appreciation for people we’ve never met, along with an ability to understand that everyone has a unique perspective on the world. It’s not always easy, but this mentality serves as a way to appreciate the fact that — despite all of our differences — shared joy is the most unifying force that exists today.
https://medium.com/curious/when-life-gives-you-lemons-make-a-free-lemonade-stand-be872253d3d6
['Scott Greer']
2020-10-02 05:49:16.682000+00:00
['Hope', 'Kindness', 'People', 'Joy', 'Children']
Why Facebook’s Diem Is Not a Threat to Bitcoin
Why Facebook’s Diem Is Not a Threat to Bitcoin

But it is another way to conduct secure online transactions.

Announced with great fanfare in June 2019, Facebook’s Libra digital currency project quickly triggered the wrath of the world’s major economic powers. Facebook’s Libra was initially intended to be a stablecoin backed by a basket of the world’s major currencies (U.S. dollar, euro, yen, pound sterling, …). With the Libra Association, Facebook wanted to show that this digital currency was not its own, but a currency managed by an independent entity in which we found big names like Visa, PayPal, Mastercard, Coinbase, eBay, Spotify, Lyft, Uber, … However, everyone understood that this digital currency was indeed the property of Facebook.

Facebook’s initial project with Libra has been scaled down

Under these conditions, the governments of the world’s major powers immediately saw the danger that Facebook represented for one of their core prerogatives as states: monetary creation. American regulators were particularly severe with Facebook, demanding countless guarantees that the Libra Association was slow to present. These guarantees were linked in particular to the control of the source of funds and the respect of privacy, since Facebook had planned to develop the Calibra wallet and to integrate it into all of its existing services, from WhatsApp to Facebook Messenger. The American authorities then put pressure on several big names associated with the project. Visa, PayPal, and Mastercard quickly understood that if they persisted in staying with Facebook’s Libra project, their core business would be threatened by the U.S. authorities. At the end of 2019, Visa, PayPal, Mastercard, and eBay therefore announced that they were leaving the Libra project. While the official launch was scheduled for mid-2020, Facebook was forced to revise its plans. The ambitions of the initial project were constantly revised downwards as the months went by.

Libra becomes Diem, but Facebook doesn’t fool anyone

At the end of 2020, Facebook returned to the fray by announcing the renaming of the project. Now, Libra is to be called Diem. The goal is obvious: to make the project look less tied to Facebook. The Libra name was immediately associated with Facebook, which increased the fear that Facebook would be the only real decision-making entity within the association. Exit the Libra Association; from now on we will have to talk about the Diem Association. However, the Libra logo remains the same for Diem. They try to make something new out of the old, but they keep the visual elements all the same. Facebook’s Calibra wallet is also changing its name, as it will now be called Novi. Basically, this doesn’t change anything, since Novi’s goal will still be to allow users to send money via applications such as WhatsApp or Facebook Messenger. The ambitious goal of having 100 world-renowned participants in the project has been abandoned. The Diem Association stands at 27 members.

The Diem Association will only issue the Diem dollar initially

Based in Switzerland, the association initially plans to issue a stablecoin that will be pegged only to the U.S. dollar. This first stablecoin of the Diem project will be called the Diem dollar. No more single stablecoin based on a basket of the world’s major currencies, as Facebook originally intended.
In the future, the association could then consider issuing other versions of the Diem: Diem euro, Diem yen, … Everything will depend on the success of the Diem dollar and the reception it gets from Western regulators. The group’s chief executive officer, Stuart Levey, hopes that the changes made by the Diem Association will be well received by the regulators of the world’s major economic powers: “All of these design features we think make for a project we think that regulators will welcome.” Stuart Levey even went so far as to hope that central banks might find it advantageous to use the infrastructure of the Diem project in the future for their own digital currencies. Some people’s utopianism knows no limits.

Some media want you to believe that Diem is a threat to Bitcoin

Whether the name is Libra or Diem, governments all over the world see this corporate digital currency that Facebook wants to create as a threat. With its 2 billion users and the privacy scandals in which the group is involved, Facebook is disturbing. I have read articles in the general media by journalists explaining that the Diem is going to be a threat to Bitcoin. To read such nonsense is appalling. But with time, you get used to it. How can you expect anything else from people who don’t understand what Bitcoin is all about? Attitudes about Bitcoin are changing, but some people still lag in terms of understanding what is at stake for the world of the future. With Bitcoin, Diem will have only one thing in common: it will be a currency.

Diem does not play in the same category as Bitcoin

Diem can therefore be exchanged, like Bitcoin, for the U.S. dollar. Diem can be used as a medium of exchange or as a means of payment. It will not be limited to transactions on the Facebook network. Blockchain technology will be used to manage Diem transactions. However, make no mistake about it: the blockchain of the Diem project will be centralized, since it will be controlled by Facebook and the other members of the Diem Association. In a world where everything is becoming digital little by little, the Bitcoin network will always differentiate itself by the fact that it is decentralized. Anyone can become a node of the Bitcoin network. We talk about a permissionless and trustless blockchain for Bitcoin. This is the strength of Bitcoin. So the Diem project is clearly not in the same category. Without this decentralized side, Diem is nothing more than another centralized system with leaders who can dictate their rules to users. Bitcoin has no leader. Each user has potentially the same weight as another. Decisions are made by consensus by the majority of the users in the network. Bitcoin is a true democracy, which Diem will never be.

Diem is just another stablecoin that will try to compete with PayPal

If you absolutely want to compare the Diem dollar to something from the cryptocurrency world, you should look at stablecoins like Tether or the USD Coin instead. These are very popular stablecoins backed by the U.S. dollar. At the time of writing, Tether represents a market cap of $19.4 billion, while the USD Coin represents a market cap of $2.9 billion. Another interesting example is the Celo Dollar. Its market cap is currently insignificant compared to Tether or the USD Coin, since it is only 16 million dollars. But the Celo Dollar is governed by an association in which we find most of the members of the Diem Association. Like any stablecoin, the Diem dollar will help to hedge against volatility.
De facto, the Diem dollar will not be able to protect you from the ravages of monetary inflation. Yet another difference with Bitcoin that the general media don’t understand. Developed by Facebook, the Novi wallet will allow users to use the Diem dollar as a means of payment and exchange on Facebook Messenger or WhatsApp, as explained previously. In this sense, the Diem can be seen as a real competitor to PayPal. PayPal currently has 346 million users if we take into account the 25 million users of its mobile payment solution Venmo. With its more than 2 billion users, Facebook would be a serious competitor for the years to come.

Final thoughts

Diem should therefore be seen primarily as another way to conduct online transactions quickly and securely — but with the risk of letting Facebook obtain valuable information about your financial transactions that can be cross-referenced with your other personal data in the possession of Mark Zuckerberg’s firm. Halfway between a competitor of PayPal and a stablecoin, the Diem project will not be a revolution in my opinion. It will try to compete with CBDCs, and it should remind you that Bitcoin is your only real alternative if you want to protect your future where money is concerned. If you don’t opt for Bitcoin, you’ll end up stuck between digital currencies from central banks and digital currencies from companies, like Facebook’s Diem. It’s up to you to make the right decisions to protect your future.
https://medium.com/swlh/why-facebooks-diem-is-not-a-threat-to-bitcoin-92912276e34c
['Sylvain Saurel']
2020-12-04 13:07:21.375000+00:00
['Bitcoin', 'Blockchain', 'Cryptocurrency', 'Facebook', 'Tech']
Classification with Random Forests in Python
Now let’s also look at the first five rows of data using the ‘.head()’ method:

print(df.head())

The attribute information — a set of categorical features describing each mushroom — is listed on the dataset page. We will be predicting the class for mushrooms, where the possible class values are ‘e’ for edible and ‘p’ for poisonous. The next thing we will do is convert each column into machine-readable categorical variables:

df_cat = pd.DataFrame()
for i in list(df.columns):
    df_cat['{}_cat'.format(i)] = df[i].astype('category').copy()
    df_cat['{}_cat'.format(i)] = df_cat['{}_cat'.format(i)].cat.codes

Let’s print the first five rows of the resulting data frame:

print(df_cat.head())

Next, let’s define our features and our target:

X = df_cat.drop('class_cat', axis=1)
y = df_cat['class_cat']

Now let’s import the random forest classifier from ‘sklearn’:

from sklearn.ensemble import RandomForestClassifier

Next, let’s import ‘KFold’ from the model selection module in ‘sklearn’. We will use ‘KFold’ to validate our model. Additionally, we will use the f1-score as our accuracy metric, which is the harmonic mean of the precision and recall. Let’s initialize the ‘KFold’ object with two splits (note that recent versions of scikit-learn require shuffle=True whenever a random_state is passed). Finally, we’ll initialize a list that we will use to append our f1-scores:

from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
import numpy as np

kf = KFold(n_splits=2, shuffle=True, random_state=42)
results = []

Next, let’s iterate over the indices in our data and split our data for training and testing. Since ‘KFold’ yields positional indices, we use ‘.iloc’ to select rows:

for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

Within the for-loop we will define random forest model objects, fit to the different folds of training data, predict on the corresponding folds of test data, evaluate the f1-score at each test run, and append the f1-scores to our ‘results’ list. Our model will use 100 estimators, which corresponds to 100 decision trees:

for train_index, test_index in kf.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    model = RandomForestClassifier(n_estimators=100, random_state=24)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    results.append(f1_score(y_test, y_pred))

Finally, let’s print the average performance of our model:

print("Accuracy: ", np.mean(results))

If we increase the number of splits to 5 we have:

kf = KFold(n_splits=5, shuffle=True, random_state=42)
...
print("Accuracy: ", np.mean(results))

I’ll stop here, but I encourage you to play around with the data and code yourself.

CONCLUSIONS

To summarize, in this post we discussed how to train a random forest classification model in Python. We showed how to transform categorical feature values into machine-readable categorical values. Further, we showed how to split our data for training and testing, initialize our random forest model object, fit it to our training data, and measure the performance of our model. I hope you found this post useful/interesting. The code in this post is available on GitHub. Thank you for reading!
https://towardsdatascience.com/classification-with-random-forests-in-python-29b8381680ed
['Sadrach Pierre']
2020-06-12 02:35:44.387000+00:00
['Programming', 'Software Development', 'Python', 'Data Science', 'Machine Learning']
Do books with more reviews get better ratings?
Do books with more reviews get better ratings?

How factors like number of ratings and number of reviews have an impact on book ratings.

Book-reading is an activity that a lot of people enjoy. It places one’s mind in a different setting while providing an empathetic feeling. I, myself, love reading books. My favorite genre is mystery mixed with thriller. I remember reading The Da Vinci Code by Dan Brown. It was an awesome book that instilled a love for mystery genre books in me. Everything about the book, from the plot twists to the storytelling, was epic. I was puzzled about who the actual antagonist was while reading the book. Moreover, I was amazed by how detailed the book was in its portrayal of architecture like the cathedral and of Robert Langdon’s mannerisms in solving the murders.

After I read this book, I went online to find out more about it. I discovered that this beautiful piece of work got a rating of 3.8/5 from Goodreads, which is a book collection website. Stunned by this discovery, I decided to look into the highest rated books on Goodreads. As I got higher up the list, I noticed a trend that highly rated books generally got more reviews. Therefore, it became a task for me to confirm this hypothesis of mine, which is: do books with more reviews get better ratings?

Getting the Data

The dataset that contains all the information regarding books and their ratings on Goodreads was obtained from Kaggle via this link. The dataset had multiple features such as book id, title, isbn, number of pages, ratings count, authors, average rating, language and text review count. The dataset was reduced to include the important features such as title, ratings count and text reviews count. It was further truncated to include only books that got more than 1000 people to rate them and more than 100 people to review them. The code used to carry out this task can be seen below:

## The dataset was truncated to include relevant information
df_books = df_books[df_books['ratings_count'] > 1000]
df_books = df_books[df_books['text_reviews_count'] > 100]

The newly formed dataset was further analyzed and then clustered.

A table of the first five rows of the new dataset.

Analyzing the dataset

The dataset was analyzed by arranging the books’ ratings in descending order to examine the top ten books. Then, an inference was made to see whether books’ ratings increase with text reviews count and ratings count.

The top ten books based on average ratings.

It turns out that a book’s higher rating does not necessarily mean a higher ratings count or text reviews count than the books ranked below or above it. However, the dataset has 4815 rows. Thus, making an educated guess based on ten rows is not good enough. It would be better to examine the entire dataset by clustering the rows into groups.

Clustering the dataset

The dataset was clustered to find out if there is a significant difference between groups that got a lot of ratings and reviews and those that didn’t get as much. Clustering was done using a method called k-means clustering. K-means clustering is an unsupervised machine learning algorithm that classifies data points into groups based on their similarities with other data points. The dataset can be clustered into a defined number of groups. The book dataset was clustered into three groups. After grouping the dataset, each row was given a label of either 0, 1 or 2.

The first 25 rows of the dataset after it was grouped into three groups.
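The full code is linked at the end of the article, but as a minimal sketch of the clustering step just described — the 'average_rating' column name is assumed from the Kaggle dataset, and the scaling step is my own addition since the counts dwarf the ratings in magnitude — scikit-learn’s KMeans produces the three group labels like this:

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Column names assumed from the Kaggle Goodreads dataset used above.
features = df_books[["average_rating", "ratings_count", "text_reviews_count"]]

# Scaling keeps ratings_count (in the millions) from dominating the distances.
scaled = StandardScaler().fit_transform(features)

# Three clusters, labelled 0, 1 and 2 for each book.
kmeans = KMeans(n_clusters=3, random_state=42)
df_books["group"] = kmeans.fit_predict(scaled)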
A new column was appended to the table, called group, to signify which group each book was categorized under.

The Results

Upon further examination, a 3-D plot was created to show the relationship between the books’ average ratings, the text reviews count and the ratings count. Below is a diagrammatic representation of the clustered dataset:

A 3-D representation of the dataset.

The three groups’ data points were colored purple, yellow, and green. The group labelled 0 was colored purple, while the groups with labels 1 and 2 were colored green and yellow respectively. The purple group had most of its data points spread out across the average ratings axis but had the lowest ratings count and text reviews count. Generally, the yellow group had a smaller range than the purple group in terms of ratings, but it had a higher number of ratings count and text reviews count than the purple group. The green group had a similar ratings range to the yellow group, but it had a higher number of ratings count and text reviews count than the yellow group overall. A table was created to show how the groups differ on average.

A table showing each group’s average rating, ratings count, text reviews count and graphical color.

Based on the table, it turns out that the purple group was far behind in every aspect compared to the yellow and the green group. The purple group has an average rating of 3.94, a ratings count of 25,900 and a text reviews count of 968. The yellow group has an average rating of 4.03, a ratings count of 570,000 and a text reviews count of 13,923. The green group has an average rating of 4.12, a ratings count of 2,157,575 and a text reviews count of 38,639. Below are scatter plots showing the relationships between the groups:

A scatter plot of the average rating versus ratings count for the groups.

A scatter plot of the average ratings versus text reviews count for the groups.

Based on the two scatter plots, my hypothesis appears to be confirmed: in general, the more reviews a book gets, the higher its rating. Books are rated by fans and critics. Critics are more likely to review books in a very scrutinizing manner than fans are. However, there are very few critics reviewing books compared to fans. Thus, the more popular a book is, the more fans are likely to rate it as perfect, which means popularity is likely a confounding variable. The full version of the code that was used to build this clustering model can be seen here.
https://towardsdatascience.com/do-books-with-more-reviews-get-better-ratings-f2f68b13fad8
['Mubarak Ganiyu']
2019-08-08 00:59:59.696000+00:00
['Data Science', 'Data Visualization', 'Books', 'Reading', 'Life']
Student Series: How 360 Storytelling on Women & Girls Creates Empathy
Women & Financial Independence

Produced by Jaunt VR, Women on the Move tells the story of Fatchima’s life in Niger, West Africa, where the women of her village band together to form a savings group with CARE. Fatchima is confident her granddaughter Nana will have a better life because of the work the women have done and the trust they have in each other. Jaunt calls their 360 videos “cinematic,” which shows in the quality and production style of videos such as this one. The camera positions in this experience are particularly effective at creating a sense of presence, as if the viewer were sitting beside Fatchima rather than peering down at her. This video expertly utilizes fade-out transitions by strategically pairing them with the narration, forcing the viewer to focus only on Fatchima’s voice while building anticipation for the next scene. While all the experiences included here fit the “storytelling” theme, the narration style and shooting of Women on the Move make it feel especially anecdotal.

Women & Restarting from Crisis

In contrast to Jaunt VR’s style, CNN VR’s 360 videos feel less cinematic and more journalistic. Born of War focuses on the story of 17-year-old Blessing and how she fits into the fastest-growing refugee crisis: South Sudanese people fleeing to Uganda. This is the only video out of the five included here that relies on a reporter to tell the story. The footage is less clear in this video — the stitch lines are more visible and the image is slightly blurry. But because this video is not as focused on aesthetics and instead centers on telling a more “news-focused” story, this works. This video is unique in the way it uses real stories to humanize statistics. For example, we learn that 86 percent of the refugees are women and children — an impactful figure on its own, but one that becomes even more powerful when combined with footage of Blessing and her newborn baby “War.” This video is a great example of how to respectfully honor individuals and their stories when covering human rights crises.
https://medium.com/go-fovrth/student-series-how-telling-stories-in-360-about-women-girls-in-developing-world-creates-empathy-e19d81131ff2
['Fovrth Studios']
2017-10-09 12:55:14.245000+00:00
['United Nations', 'Women', 'Student Series', 'Virtual Reality', 'Storytelling']
Facebook for diplomats
Facebook for diplomats

From Facebook Live to civic engagement, a digital diplomacy toolbox to create and nurture global communities.

In the hours after president Donald Trump announced his decision to withdraw the US from the Paris Agreement on climate, newly elected French president Emmanuel Macron went live on Facebook (and Periscope) to address the people of France and the world and express his — and the Italian and German leaders’ — disappointment and their commitment to fighting climate change.

Dès ce soir, avec l’Allemagne et l’Italie, nous avons tenu à réaffirmer notre engagement pour l’Accord de Paris. [As of tonight, with Germany and Italy, we wanted to reaffirm our commitment to the Paris Agreement.]

Shortly after going live, using hashtag #MakeOurPlanetGreatAgain, Macron published on Facebook and his social media profiles a recorded video in English to address the American people.

Now, let me say a few words to our American friends. Climate change is one of the major issues of our time. It is already changing our daily lives but it is global. Everyone is impacted. And if we do nothing, our children will know a world of uncontrolled migrations, of wars, of shortages. A dangerous world.

This is not the first time a world leader has gone live on Facebook or posted videos on the platform. After all, videos and images have become the bread and butter of engagement on social media, and Facebook has been investing quite a bit in providing a seamless and engaging experience for users on both ends, those who broadcast and the audience on the receiving side.

THE LIVE VIDEO REVOLUTION

Back in 2014, when Facebook was still toying with the idea of live videos and how to make them accessible to all its users, Indian prime minister Narendra Modi partnered with the platform to live stream his speech at Madison Square Garden in New York, during his visit to the US. Fast forward two years: in April 2016, Facebook opened live video capabilities to everyone on the platform, making Live a valuable tool not only for world leaders and politicians, but also for diplomats and all stakeholders engaging in diplomacy and foreign policy who want to better their outreach and create and nurture online communities, as well as for every government, business or private entity interested in upping its civic engagement.

“Live is like having a TV camera in your pocket,” Mark Zuckerberg, founder and chief executive officer of Facebook, wrote in a post. “Anyone with a phone now has the power to broadcast to anyone in the world.”

When you interact live, you feel connected in a more personal way. This is a big shift in how we communicate, and it’s going to create new opportunities for people to come together.

And since then, the live video shift has been fully embraced by Facebook in its efforts to gain a bigger market within politics and foreign policy. “Even just broadcasting your press conferences and your speeches, you can reach a lot more people by going live than you could maybe reach who is there in person,” Katie Harbath, Facebook’s global politics and government outreach director, said shortly after Zuckerberg opened Live to all users. Harbath said that Live represents a new way to engage audiences, citizens, and constituents who might otherwise not participate in things like public meetings or online townhalls. She also stressed that those who are drawn to the live feeds watch three times longer than prerecorded video and engage as much as 10 times more.

Obviously going live requires time and effort, but if you have never tried it you might be surprised at how easy and mindless the process is. And how engaging it can be.
Former Italian prime minister Matteo Renzi has been known to use Facebook Live throughout his administration to answer citizens’ questions, rather than as a tool of digital diplomacy. His Matteo Risponde — which translates as: Matteo Answers — eventually became so popular that he replicated it on Twitter as well.

In my experience, simply using a smartphone, you can post very engaging live videos that bring you behind the scenes of diplomacy and politics. During Renzi’s visit to Washington DC for the Obamas’ last state dinner in October last year, at the Embassy of Italy in the US we used videos, including Live, to make the experience available to all our followers and to those interested in the relations between Italy and the US. Just using a smartphone, we went live for the arrival ceremony at the White House South Lawn, the joint press conference in the Rose Garden, the prime minister’s arrival for the State Dinner, the toasts, and Gwen Stefani’s performance at the end. The engagement was much higher than usual, allowing us to reach 200,000+ users the night of the State Dinner. We repeated the experience in April during prime minister Paolo Gentiloni’s visit to the White House to meet president Donald Trump; and we keep using live videos on Facebook to nurture the conversation around Italy. And all with just a smartphone.

“Some of the most engaging videos I see are coming from people just taking them with their phone,” Harbath said. “You want people to feel they are an active participant in what you are putting up on Facebook, not a passive observer.”

Live is certainly the trend for many world leaders, ambassadors, and diplomats. “A now established trend, but one that will populate social media more is the proliferation of live ‘broadcasting’ — Facebook’s Live feature and Twitter’s Periscope have become useful tools that allow anyone to broadcast on the spot,” the 2016 Soft Power 30 report by Portland Communications and Facebook reads. It states: “The ability to create rich video content is a huge asset for savvy, well-spoken diplomats with something to say. Likewise, foreign ministries and world leaders can now open up meetings, speeches, events, and other diplomatic activities to the public with a smartphone and a Wi-Fi connection. These live video apps will likely become a regular feature of digital diplomacy practiced through social media platforms.”

Beyond the more traditional press conferences and official statements, some of the latest examples of live video use include:

Australian foreign minister Julie Bishop live on Facebook for a Q&A on Australia’s Foreign Policy White Paper. The host asked her: “What do you try to achieve with a Facebook live event?” She replied: “We think it’s very important to have as broad a consultation as possible about what Australians want to see about their foreign policy — […] to make foreign policy less foreign — so people can relate to how foreign policy relates to their day-to-day lives.” Given that social media is such an important platform — and so much about our lives — it would be great to get feedback from Facebook.

The latest visit of Argentinian president Mauricio Macri to the White House was streamed live on Facebook.

Prince William and Lady Gaga, via the British Royal Family’s Facebook page, went live to discuss the Heads Together initiative to promote openness about mental health issues.
Using just a smartphone, the president of India Pranab Mukherjee went live to celebrate the Holi festival of colors at Rashtrapati Bhavan, showing how music and traditions can engage a broad audience worldwide.

Canadian PM Justin Trudeau went live on Facebook for the speech of Nobel recipient Malala Yousafzai — the youngest person ever to address Canadian representatives and parliamentarians — at the Canadian Parliament in Ottawa.

On International Women’s Day, Swedish foreign minister Margot Wallström went live on Facebook — as well as YouTube and Periscope — to host “the first public digital meeting of foreign ministers.” The foreign ministers of Panama, Kenya, and Liechtenstein joined remotely to discuss peacebuilding, gender equality, and the role of women in the international agenda. Promoting the digital diplomacy event with their audiences on social media, the ministry highlighted the behind-the-scenes nature of the initiative:

Have you ever wished you could be a fly on the wall in high-level diplomatic meetings?

The challenge for the political and diplomatic communities now seems to be to re-tool their live presence on Facebook for public diplomacy and to explain foreign policy to their audiences at home and abroad, rather than just for campaigning and election purposes. And the same can be said for pre-recorded videos and online campaigns primarily driven by videos.

VIDEOS, VIDEOS, VIDEOS

How can we forget the video posted by the French Ministry of Foreign Affairs with edits to a previous video by the White House on the withdrawal from the Paris Agreement? That video, however, never made it to their Facebook pages, including that of their embassy in Washington DC. Posted on the ministry’s Twitter profile in English, the video went viral with more than 10,000 retweets and 13,000 likes. Here’s the original video posted on Facebook by the White House:

Some have already embraced videos, both live and pre-recorded. Back in May, for example, the ambassadors of Canada, the European Union, Sweden, and the United States launched a collaborative initiative and went live on the Facebook pages of their respective embassies to discuss press freedom and to commemorate World Press Freedom Day. To date, their videos have garnered a total of more than 30,000 views and countless interactions and comments: more than 22,000 views on the page of the US Embassy (below), 5,000 views on the page of the Canadian Embassy, almost 3,000 views on the EU page, and 500 views on the Swedish page.

After the G7 Summit in Taormina, Sicily, the Italian Ambassador to the US went live on Facebook and Twitter to talk about the highlights of the Summit, the agenda forward, and the Italian presidency of the G7 for 2017, as well as Italy-US relations. And here’s a look behind the scenes of the live shoot, leveraging Facebook’s 360 capabilities.

Videos, live or not, are gaining popularity among diplomacy players. And the more relatable and funny they are, the more engaging. Take, for example, a video posted in September 2016 by the Embassy of Canada in Myanmar, part of a series on experiencing the country’s traditions. This one was on Canadians’ reactions to betel nut!

In December last year, the German Embassy in Washington posted a video that fed on the viral #MannequinChallenge craze.

Watch our video to see diplomats, staff and interns frozen in place. Tell us, who has the best pose?
Also, the North Atlantic Treaty Organization (NATO) has recently launched #WeAreNATO, its first major communications campaign in nearly a decade, following the leaders’ summit in Brussels a few weeks back and the controversies fueled by president Trump’s comments on the Alliance. The 5-year campaign, which encompasses a wide variety of communications, public affairs, and creative media relations, embraces videos and images to build a brand that fits all of NATO’s social media channels, including Facebook. “This has been an exciting project for our team,” Doug Turner, partner at Agenda, told The Holmes Report. Together with MHP Communications, Agenda crafted the campaign. Turner described the experience: “Helping NATO reach audiences in more than 28 member countries and to explain its mission of guaranteeing peace and security for its citizens is the kind of work we love to do.” He added: “It’s an entire brand for the alliance. We built the messaging framework that each country can use and develop on their own.” “It’s crucial that all of our citizens, particularly young people who have grown up in times of peace, understand what NATO is and what we do,” said Tacan Ildem, NATO’s assistant secretary general for public diplomacy, describing the campaign. “Our continued success depends on our citizens understanding the essential role that NATO plays in our security, on which our prosperity is based. We will remain fully transparent and proactive in explaining our essential work to the outside world.” The examples are many and get more and more engaging over time, as governments, embassies, and diplomats grow more at ease with video tools and publish public diplomacy content that can be attractive to younger audiences as well as to experts and the foreign policy community at large. Videos and live videos, however, have been under scrutiny by the media and public opinion, as they make it more difficult for the company to tackle hate speech, terrorism, and violence online. “Given the importance of this, how quickly live video is growing, we wanted to make sure that we double down on this and make sure that we provide as safe of an experience for the community as we can,” Mark Zuckerberg, founder and chief executive officer of Facebook, told investors in May as he announced the company would hire 3,000 more people over the next year to speed up the removal of videos showing murder, suicide, and other violent acts. The so-called phenomenon of fake news is also of concern. In a recent interview on soft power published by Harvard University’s Weatherhead Center for International Affairs, professor Joseph Nye said: “In the past, during the Cold War, you had the Voice of America, for example. And now you have Facebook. And the interesting question will be how will social media avoid being manipulated with fake news.” He added: “We’ve seen the beginnings of efforts to counter this… by using social media for positive purposes. It’s like a game of cat and mouse; it goes back and forth. I don’t see the cat or the mouse winning the definitive battle.” FACEBOOK FOR DIGITAL DIPLOMACY As of March 2017, Facebook registered 1.94 billion monthly active users (MAUs) and 1.28 billion daily active users (DAUs). The US and Canada accounted for 234 million MAUs, Europe for 354 million, and the Asia-Pacific region for 716 million. The latest World Map of Social Networks highlights Facebook’s dominance as the platform of choice in most countries around the world.
The map, compiled by digital expert Vincenzo Cosenza since 2009, shows that Facebook is the leading social network in 119 out of the 149 countries analyzed. Among Western countries, the only one where Facebook doesn’t dominate is Japan, where Twitter is ranked first and Facebook second. According to the latest Twiplomacy study by communications and public affairs firm Burson-Marsteller, Facebook is the second most popular social media tool among world leaders and governments. It is, however, “where they have the biggest audience,” the study highlights. The study shows that “the heads of state and government, and foreign ministers, of 169 countries are present on the platform, representing 88 percent of all UN member states.” There is one world leader who’s noticeably absent from Facebook: the Pope. Despite having both a Twitter account — launched in 2012 by his predecessor Pope Benedict XVI — and an Instagram account (2016), Pope Francis has not yet launched on Facebook. “Where people are, the Church is,” said Msgr. Lucio Adrian Ruiz, secretary of the Vatican Secretariat for Communications (SPC) and a former head of the Vatican Internet Service, during a digital diplomacy workshop in Rome hosted by SPC and the British Embassy to the Holy See. “This is why the Pope is present on Twitter and Instagram.” So… why not Facebook, then? That is a question that not only world leaders, but also politicians, ambassadors, and diplomats should ask themselves. The 606 Facebook pages included in the Twiplomacy census account for a combined audience of 283 million likes. On average, Facebook pages are more popular than Twitter accounts. In fact, Facebook pages register a “median average of 38,891 likes per page, compared to 16,848 followers for each Twitter account,” the study reads. Indeed, similar numbers are mirrored when it comes to how many ambassadors and diplomats are on Facebook — or using Facebook for digital diplomacy rather than personal purposes — compared to Twitter. Embassies around the world have certainly embraced the platform, but how can an embassy be relatable? How many people know what an embassy is and does? People are interested in people. They’re interested in going behind closed doors, behind the scenes of major meetings, events, and summits; finding out what it means to represent a country abroad; sitting down at a table with an ambassador or a foreign minister; being able to relate to them and ask them questions. “Because there are more people online, because they are more empowered, they want to have these conversations,” said Harbath at the launch of the 2016 Soft Power report. “They want to have this engagement and they know that it shouldn’t be a one-way conversation.” Governments and government officials are getting better at making use of digital platforms like Facebook, but the next step should really be leveraging those platforms beyond putting out press releases and broadcasting official messages. “Success in 21st Century Statecraft will belong to those who know how to effectively identify, build, and deploy soft power via public diplomacy and the effective use of digital tools and technology,” Arturo Sarukhan, former Mexican ambassador to the US, writes in the 2016 Soft Power 30 report.
In his book The Future of #Diplomacy, Philip Seib, professor of journalism, public diplomacy, and international relations at the University of Southern California and former director of the USC Center on Public Diplomacy, asks: “Is Facebook a gimmick, a useful tool, or something more?” “It certainly cannot be ignored,” he writes. He adds: “Diplomats might be excused for dismissing Facebook as being outside their realm of concerns. But then again… Connections among more than a billion people must mean there are ways to put Facebook to work.” AMBASSADORS ON FACEBOOK I asked Ambassador Sarukhan, a pioneer in the use of social media for public diplomacy, for his thoughts on Facebook and whether ambassadors in particular are shying away from it. “I don’t think there is a one-size-fits-all response,” he said. “Clearly, Twitter has morphed into the much more popular and relevant platform for politics, diplomacy, and public policy issues, and therefore concentrates a higher number of relevant actors and opinion-makers.” He identified a few reasons: “A reason may be precisely that it forces users to engage (one would hope) succinctly and intelligently (though that is certainly not the prevailing norm, unfortunately!). Another is that many policymakers may feel that Facebook is a victim of its own success and branding, much more a truly social network, where the personal and social interconnections (whether it’s friends, travel, tastes, and general opinions) weigh more.” He continued: “When I decided to use these platforms as a digital diplomacy and public diplomacy tool, I certainly made a deliberate decision to use one platform over the other, precisely because I wanted to avoid the perception that my endeavor was anything but driven by Statecraft and street-craft. Nonetheless, I see more and more government agencies and ministries and public officials and politicians using a wider roster of social media tools, Facebook prominently amongst them, to complement reach and impact.” Similarly, Tom Fletcher, author of The Naked Diplomat and former British ambassador to Lebanon, said that he “wanted to do one medium well, rather than spread myself across several.” “If I was starting again, I would do more Facebook and Instagram,” he told me. “But Twitter still feels like the place where the best debates are, and where you can pick the right arguments. Facebook still feels a bit more lightweight and social. I’m probably wrong!” According to Jan Melissen of Clingendael, the Netherlands Institute of International Relations, “Social media make things more personal. And bring people who traditionally operate in the shadows into the limelight, giving an ambassador a face. You can find out what they are doing by following them on their social media account. People also get more ‘digital personality’.” There are some interesting examples out there of ambassadors on Facebook. Back in October last year, for example, Israeli Ambassador to UNESCO Carmel Shama-Hacohen used his personal page to look for someone to translate an article from Hebrew to French… on Shabbat! “Shabbat shalom, lovers of Israel,” he wrote. “The State of Israel needs a little help.” 240 reactions, 39 shares, and 34 comments later, we hope he found help. Another great example comes from Casper Klynge, outgoing Ambassador of Denmark to Indonesia and newly appointed — and first-ever — Danish tech ambassador to Silicon Valley.
In one of his last posts as Danish envoy in Jakarta, he posted: “As a principle, I use FB as a professional tool and thus never post anything private. I will make one exception: Friday was the last day of school for our two children after three years at Jakarta International (Intercultural) School. This is a short video from the elementary school ‘graduation’ with the obligatory Dragon Cheer. It says it all!” Ambassador-designate Callista Gingrich, whose personal page on Facebook produces high engagement, posted the same day President Donald Trump announced her nomination as US Ambassador to the Holy See. FROM DIGITAL DIPLOMACY TO GLOBAL COMMUNITIES As with every social media tool, my advice is always to be personal, intimate, and natural when it comes to communicating and engaging with your audiences. And Facebook’s added value is not only the potential size of your audience, but also the ability to tap into groups and conversations, and even move the conversation offline via events and civic engagement initiatives. After all, Zuckerberg himself has recently stressed how he wants Facebook and the Facebook community to shape themselves in the future. “Facebook stands for bringing us closer together and building a global community,” he wrote in what many called a manifesto. “Every year, the world got more connected and this was seen as a positive trend. Yet now, across the world there are people left behind by globalization, and movements for withdrawing from global connection. There are questions about whether we can make a global community that works for everyone, and whether the path ahead is to connect more or reverse course.” He added: “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” And I believe the diplomatic community and all foreign policy stakeholders, traditional and less traditional, have a great role to play in reshaping not only the way we communicate with social media tools, but also how we use them to build, nurture, and expand global — or better, glocal, if you will — communities in our own backyard and around the world.
https://medium.com/digital-diplomacy/facebook-for-diplomats-c50d1e2f890d
['Andreas Sandre']
2017-06-12 15:21:50.734000+00:00
['Technology', 'Digital Diplomacy', 'Facebook', 'Tech', 'Social Media']
Future according to Designing Data-Intensive Applications
Designing Data-Intensive Applications by University of Cambridge researcher Martin Kleppmann is a great book for those who want to understand how different databases function and how to choose between them. But there is more to it than technical details explained in a systematic way. The book also paints a high-level overview of the current state of tools and techniques for managing data, and of the emerging tendencies shaping the future. In this article we will analyze some of the interesting thoughts presented there: how the distinction between non-relational and relational databases (and even between messaging systems and databases) is becoming less sharp; what the approach of modern databases is to transactions and consistency in general; which ways of handling Big Data are no longer recommended and what to do instead; how to use polyglot persistence (different databases for different purposes) without running into problems; and how to organize client-server communication so that client applications are responsive and don’t present outdated information to the user. Converging data models Originally the term “NoSQL” was understood simply as “no SQL” or “not relational”. You could think there is hardly any place for compromise. Currently it is rather advertised as “not only SQL”, and for a good reason from the marketing point of view: there is a growing interest in databases that support more than one way of accessing the data. For example, RethinkDB, a document database, introduced a feature from the world of relational databases — table joins. The good old SQL standard gained JSON support, and you can use it in both popular commercial and open-source (MySQL, PostgreSQL) database engines. As a result, the distinction between relational and non-relational databases is no longer as sharp as before, which gives you more flexibility in modelling your data — if you have learned how those new capabilities can be applied. For a nice overview of practical case studies of storing JSON in PostgreSQL, head over to a comprehensive article by Leigh Halliday. Convergence can also be seen when we look more broadly at data-processing systems, not just at databases. There are messaging systems that offer durability guarantees just as databases do (Kafka) and databases that can be treated as message queues (Redis). In the world of messaging systems, Apache Pulsar (not mentioned in the book) claims to provide both high-performance event streaming (in the Kafka style) and traditional queueing. Consistency is taken more seriously Not long ago, when the hype around the term “NoSQL” was growing, you could easily find people blogging or tweeting about how “obsolete” relational databases were. If you had asked where to find transactions, you might have heard that you didn’t understand “modern” databases. And if you had had doubts about whether the data would be correct under high traffic, the popular answer would have been that you need to learn to live with “eventual consistency”. As Martin points out, times have changed. It turned out that with a database giving few consistency guarantees you end up solving difficult distributed programming problems on your own — and it’s too easy to make mistakes. The expectation now is that a database will take some of these problems away from you so that you can focus on your business model and leave distributed programming to people who specialize in it (i.e., database creators).
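Circling back to the JSON support mentioned above for a moment, here is a minimal sketch of querying a jsonb column in PostgreSQL from Node.js with the node-postgres client. The events table, its payload column, and the message shape are invented for illustration; ->> extracts a JSON field as text and @> tests JSON containment, both standard PostgreSQL jsonb operators:

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the usual PG* environment variables

async function findBookingCustomers() {
  // Relational engine, document-style query: filter on the JSON payload,
  // project a JSON field out as a plain text column.
  const { rows } = await pool.query(
    `SELECT payload->>'customer' AS customer
       FROM events
      WHERE payload @> '{"type": "booking"}'`
  );
  return rows;
}

findBookingCustomers().then(console.log).finally(() => pool.end());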
As the “eventual consistency” slogan ceased to be an acceptable excuse, database creators finally started to be clearer about their approach to consistency. Some, like Datomic and FaunaDB, highlight “transactions” and “consistency” when advertising their products. The first thing you could see in the announcement of MongoDB version 4 was “Multi-Document ACID Transactions”. New ways to think about consistency There is a rise of new databases attempting to provide ACID guarantees without requiring too much of the coordination that slows down distributed systems. They try different approaches than traditional database engines, for example by ensuring that transactions are very short and deterministic and executing them in a single thread, like VoltDB. There are even changes in the way we discuss consistency. Researchers have deemed the transaction isolation levels defined by the SQL standard flawed. The once-popular “CAP theorem” is also heavily criticized by Martin as confusing (there is a post on his blog, Please stop calling databases CP or AP). Things are beginning to improve, and perhaps soon database creators will adopt more precise vocabulary when describing their products, and we will be able to talk about “read your own writes” consistency, causal consistency, etc. Trust but verify Until recently the problem was not only that database producers weren’t clearly stating what consistency guarantees they provided — their claims also weren’t verified. It may seem shocking that this situation changed only recently, as we are talking about systems that can cost lots of money. The open-source tool Jepsen is used by the creators of various distributed systems, like Cassandra and Elasticsearch, to hunt bugs in their implementations. Reports from the analyses are open to the public. As mentioned before, databases are meant to take some coordination work away from us and be a proven solution to problems of data sharing. The reality is a bit horrifying: you don’t need to scatter your database over a network to run into bugs — some systems behave incorrectly even when running on a single machine. Martin Kleppmann tested popular relational databases for transaction isolation (see the results in github.com/ept/hermitage) and found multiple issues — for example, that the “repeatable read” isolation level means different things in different databases, or that some anomalies appear although by definition they shouldn’t. You might think that the relational model is so mature that everything has already been figured out, and that these days long-established SQL databases are just polishing details and adding extra features like JSON integration. The book shows that research on the topic is still active and that it has practical consequences. Not only can we learn about mistakes in transaction implementations in some products, but sometimes new ways of solving old problems are found. In 2008 a publication introduced an improvement to the well-known snapshot isolation that fixed some anomalies without the performance penalty of the locks used in typical serializable isolation. This “serializable snapshot isolation” was implemented in PostgreSQL three years later (and there are no other popular implementations as of now). The database has long advertised itself as “the World’s Most Advanced Open Source Relational Database” — but maybe now the reality is that if you remove “open source” from the statement, it will still be true.
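What does using PostgreSQL’s serializable snapshot isolation actually look like? A minimal sketch from Node.js, again with node-postgres and a hypothetical accounts table: under SERIALIZABLE, a conflicting transaction aborts with SQLSTATE 40001 (serialization_failure), and the standard remedy is simply to retry it:

const { Pool } = require('pg');
const pool = new Pool();

async function transferWithRetry(from, to, amount, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    const client = await pool.connect();
    try {
      await client.query('BEGIN ISOLATION LEVEL SERIALIZABLE');
      await client.query(
        'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
        [amount, from]
      );
      await client.query(
        'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
        [amount, to]
      );
      await client.query('COMMIT');
      return; // success
    } catch (err) {
      await client.query('ROLLBACK');
      if (err.code !== '40001') throw err; // 40001 = serialization failure: safe to retry
    } finally {
      client.release();
    }
  }
  throw new Error('transfer kept conflicting, giving up');
}

The retry loop is the price of the stronger guarantee: the database refuses to commit an anomalous interleaving instead of silently letting it through.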
Modern alternatives to MapReduce Let’s leave “typical” databases for a moment to focus on processing big amounts of data. The book highlights the limitations of the MapReduce paradigm represented by tools like Hadoop. The whole idea originated from Google, which admitted in 2014 that it no longer uses MapReduce. This way of processing forces storing each intermediate result on disk, which adds lots of overhead. The crazy thing is that MapReduce, still an immensely popular solution, has such a suboptimal design because it was created as an open-source implementation of an approach described in a Google paper, without correcting for the very specific Google environment (where MapReduce-like jobs were frequently killed to give way to more important tasks, so dumping data to disk made lots of sense). Modern Big Data processing should therefore choose more high-level and better-optimized solutions like Spark and Flink. Handling derived data When building a big system with various ways of accessing data, it’s hard to achieve high performance using just one kind of database. It’s better to use several data systems, each optimized for a particular access pattern, for example a “normal” database to store all information and Elasticsearch for efficient full-text search on some parts of it. The book strongly discourages using application code to update the various data stores. The risk is too big that, with changes happening fast, one database will apply them in a different way than another, creating two views that contradict each other. The suggested solution is to use Kafka and Change Data Capture to distribute updates. Durability and ordering guarantees from Kafka will ensure derived data is kept consistent. As there are connectors to many popular databases, streaming changes is possible without writing any custom code. Constructing derived data this way allows us to build less coupled and more performant systems in a microservices architecture by making the communication purely event-based and asynchronous. A service can then own a local database kept up to date by reading updates from Kafka. As a result, the service will be able to quickly query this local database and get data in the format ideally suited for it, instead of sending synchronous requests to other services (a minimal sketch of this pattern follows below). A nice example of this approach can be found in the presentation The Database Unbundled: Commit Logs in an Age of Microservices, which features a “refactoring” of a synchronous microservices architecture. Another benefit Martin mentions is better schema evolution. Traditionally, if a data format no longer fitted its purpose, migrating to a new one was painful. Client code had to be changed, and running the migration job sometimes required outages. To avoid this, we can create a new derived view by reprocessing a Kafka log and experiment with the view to see if it works better than the old one. Then we can gradually migrate clients, finally removing the old view once it is no longer used. Pushing the state to the client When your app gets data from a server, displays it, and allows the user to modify it, it acts similarly to a database replica — and network connectivity issues create problems similar to those from the database world (welcome, conflict resolution). Another new trend Martin mentions is applying the previous idea not just to “derived” databases, but also to client applications.
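Before getting to the client side, here is the derived-data pattern sketched with the kafkajs client. It assumes a CDC tool such as Debezium is already publishing row changes as JSON to a bookings.changes topic; the topic name, the message shape, and the in-memory Map standing in for the service’s local store are all made up for illustration:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'bookings-view', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'bookings-view' });
const localView = new Map(); // the service's own read-optimized copy of the data

async function run() {
  await consumer.connect();
  // fromBeginning lets a fresh service rebuild its view by replaying the whole log.
  await consumer.subscribe({ topic: 'bookings.changes', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return; // skip tombstones without a payload
      const change = JSON.parse(message.value.toString());
      if (change.deleted) {
        localView.delete(change.id);
      } else {
        localView.set(change.id, change); // last write wins, in log order
      }
    },
  });
}

run().catch(console.error);

Because every consumer sees the same changes in the same per-partition order, two services building views from this log converge instead of drifting apart the way independent application-code writes can.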
Instead of asking the server to do the heavy lifting of preparing a big bunch of data computed for the needs of the client, an alternative is to keep an open communication channel and send very basic data changes to the client — just as in Change Data Capture, but with something like WebSocket or Server-Sent Events instead of Kafka, and a Redux-like engine (see our post on ngrx) instead of a database with derived data. The advantage: the client does not work on an outdated view of the system, but is frequently updated as the state changes on the server. At Nexocode we use the Change Streams feature of MongoDB to send changes (added, modified, or removed documents) to our Angular-based web application, and it gives a great user experience. Conclusion We could assume that because databases have been developed for more than 50 years, we know well how to use them. But we are still learning, and we make mistakes. Fortunately, it seems that academic research helps improve existing products, we are better informed about what different solutions are capable of, and old assumptions are being revised so that systems work better in a distributed environment. For those storing data in different kinds of databases, distributed log systems like Apache Kafka emerge as a proper way of propagating changes. Similarly, Kafka-like platforms allow us to cut synchronous calls between microservices and build more responsive systems. Relational databases continue to be an important building block. NoSQL databases may adopt the results of ongoing research on offering meaningful consistency guarantees without sacrificing performance — this way existing engines could find wider adoption, or completely new ones will gain popularity. As always, there is no silver bullet. New tools and new features appear, but a solid understanding of data systems principles is needed in order to choose solutions appropriate for a concrete project.
https://medium.com/nexocode/future-according-to-designing-data-intensive-applications-44bb15e3c55e
['Piotr Kubowicz']
2020-06-17 14:52:11.347000+00:00
['Big Data', 'Database', 'Kafka', 'Software Development', 'NoSQL']
🌻The Best and Most Current of Modern Natural Language Processing
Over the last two years, the Natural Language Processing community has witnessed an acceleration in progress on a wide range of different tasks and applications. 🚀 This progress was enabled by a paradigm shift in the way we classically build an NLP system: for a long time, we used pre-trained word embeddings such as word2vec or GloVe to initialize the first layer of a neural network, followed by a task-specific architecture trained in a supervised way on a single dataset. Recently, several works demonstrated that we can learn hierarchical contextualized representations on web-scale datasets 📖 leveraging unsupervised (or self-supervised) signals such as language modeling, and transfer this pre-training to downstream tasks (Transfer Learning). Excitingly, this shift led to significant advances on a wide range of downstream applications, ranging from Question Answering to Natural Language Inference to Syntactic Parsing… “Which papers can I read to catch up with the latest trends in modern NLP?” A few weeks ago, a friend of mine decided to dive into NLP. He already has a background in Machine Learning and Deep Learning, so he genuinely asked me: “Which papers can I read to catch up with the latest trends in modern NLP?”. 👩‍🎓👨‍🎓 That’s a really good question, especially when you factor in that NLP conferences (and ML conferences in general) receive an exponentially growing number of submissions: +80% at NAACL 2019 vs. 2018, +90% at ACL 2019 vs. 2018, … I compiled this list of papers and resources 📚 for him, and I thought it would be great to share it with the community, since I believe it can be useful for a lot of people.
https://medium.com/huggingface/the-best-and-most-current-of-modern-natural-language-processing-5055f409a1d1
['Victor Sanh']
2020-08-31 15:00:46.038000+00:00
['Machine Learning', 'Deep Learning', 'NLP', 'Research', 'AI']
Apple Is Aiming To Release The Apple Car In 2024
Apple Is Aiming To Release The Apple Car In 2024 Apple Is Aiming To Release Apple Car With Next Level Battery Technology Apple has a car project by the name of Project Titan. According to different online sources, Apple started this project in 2014 and is still working on it. Apple was aiming to release the car in 2020, but amid the global pandemic and other global business pressures, that plan didn’t materialize. I already talked about Apple’s 2020 car release mission in this article: Year In Review 2020. And now Apple Inc. is looking to produce self-driving vehicles with its own breakthrough battery technology, as reported by Reuters. The Apple Car project has changed leadership several times and hundreds of employees have been laid off during the course of development, but it is now under the leadership of John Giannandrea, Apple’s AI and machine learning chief, who took over the reins from Bob Mansfield after Mansfield retired in 2020.
https://medium.com/macoclock/apple-is-targeting-to-release-the-apple-car-in-2024-f811fec9cf1
['Ghani Mengal']
2020-12-29 06:24:52.985000+00:00
['Technology', 'Tech', 'Business', 'Apple', 'Apple Car']
JavaScript. Linked Lists. Get last element in the list. Clear the list.
First of all, we need to check if there is any element; if there isn’t, we just return null. Then we create a variable node where we point to the head (the first element). We create a while loop to move to the last element. If the next element is null, we return that node. getLast() method We are using the example that I shared with you above. Example Result

class Node {
  constructor(data, next = null) {
    this.data = data;
    this.next = next;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }

  insertFirst(data) {
    const node = new Node(data, this.head);
    this.head = node;
  }

  size() {
    let counter = 0;
    let node = this.head;
    while (node) {
      counter++;
      node = node.next;
    }
    return counter;
  }

  getFirst() {
    return this.head.data;
  }

  getLast() {
    if (!this.head) {
      return null;
    }
    let node = this.head;
    while (node) {
      if (!node.next) {
        return node;
      }
      node = node.next;
    }
  }
}

const list = new LinkedList();
list.insertFirst("a");
list.insertFirst("b");
console.log(list.getLast()); // returns the last node: Node { data: 'a', next: null } ("a" was inserted first, so it ends up last)

Clear the list Function → clear() Directions Empties the linked list of any nodes Example
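The clear() example itself did not survive extraction, but given the directions above, a minimal sketch consistent with the LinkedList class is to add this method to it:

clear() {
  // Dropping the head reference abandons the whole chain;
  // with nothing pointing at the nodes, they can be garbage-collected.
  this.head = null;
}

// Usage:
list.clear();
console.log(list.size()); // 0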
https://medium.com/dev-genius/javascript-linked-lists-get-last-element-in-the-list-clear-the-list-d46a00769c51
['Yuriy Berezskyy']
2020-07-23 07:30:39.896000+00:00
['React', 'Data Structures', 'Linked Lists', 'JavaScript', 'Algorithms']
Freehand Drawing in Angular
Freehand Drawing in Angular I wanted to do something fun for the holiday season, so I decided to port a variable-width stroke from the Flex Freehand Drawing Library I created back in the early 2010s. This stroke actually has a venerable history, going back to about 1983, as an exercise I was assigned as a teaching assistant for a graduate course in computational geometry. The instructor’s company had recently obtained a very expensive tablet. This system allowed users to scan or load drawings already in electronic form into a display and annotate them with hand-drawn notes using a fixed-width stroke. The instructor had an idea for a variable-width (speed-dependent) stroke that would be the basis for a number of lab exercises. My job was to get his idea working in Fortran (yes, now you can laugh at my age). Of course, the Tektronix graphics displays we had at the university did not have the ability to input sequences of pen coordinates, so we had to simulate them with arrays of x- and y-coordinates. Now you can really laugh at my age! I breathed some life into this code when it was converted to ActionScript for use in a Flash project and then later formalized into a Flex-based drawing library. It has now been converted to Typescript and packaged into an Angular attribute directive. This directive allows you to imbue a container (primarily a DIV) with freehand drawing ability. Of course, before we begin, point your friendly, neighborhood browser to this GitHub so that you can obtain the code to use in your own projects. Drawing The Stroke A stroke in general consists of three distinct actions, the first of which is executed on the initial mouse press. The second is executed continually during mouse moves. The final action is executed on mouse up. Actions on mouse-down are largely bookkeeping: record the first mouse press, create an appropriate container in the drawing environment, and initialize all relevant computation variables. The code that accompanies this article draws into a Canvas (using PixiJS). If there is suitable interest, I’ll be glad to publish another article showing how to draw the same stroke into either Canvas or SVG and satisfy the drawing contract at runtime using Angular’s DI system. Mouse-move actions are a bit more complex. Smoothing is applied to the sequence of mouse coordinates in order to average out some of the ‘shakiness’ in the drawing. An initial width is applied to the stroke, and that width either expands or contracts with mouse speed. The current algorithm increases stroke width with higher mouse velocity, although you could modify the code to enforce the opposite condition. A minimum threshold on stroke width is enforced in the code. The stroke is divided into ‘endpoints’: the first end of the stroke and the tip. In between, opposite sides of the stroke are drawn using a sequence of quadratic Bezier curves. Each side of the stroke is essentially a quadratic spline with C-1 continuity, meaning that the spline matches coordinate values and the magnitude of the first derivative at each join point. The points through which each spline passes are determined by using the direction of the most recently smoothed segment, projected perpendicularly in opposite directions based on the variable-width criteria. Since smoothing is employed and smoothing is a lagging computation, the smoothed stroke computations run behind the current mouse position. The ‘tip’, which extends from the most recently smoothed point to the current mouse point, is drawn with a couple of straight lines and a circle.
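The geometric core of that mouse-move step can be sketched in a few lines of plain JavaScript. This is not the library’s actual code; it is an illustration, independent of PixiJS, of how one point on each side of the stroke is obtained by offsetting perpendicular to the latest smoothed segment by a speed-dependent half-width (the helper name and the width constants are invented):

// Given two consecutive smoothed points and the current mouse speed, compute
// the matching points on the left and right sides of the stroke.
function sideOffsets(p0, p1, speed, baseWidth = 2, minWidth = 1) {
  const dx = p1.x - p0.x;
  const dy = p1.y - p0.y;
  const len = Math.hypot(dx, dy) || 1;
  // Unit normal, perpendicular to the segment direction.
  const nx = -dy / len;
  const ny = dx / len;
  // Width grows with speed, but never drops below the minimum threshold.
  const halfWidth = Math.max(minWidth, baseWidth * speed);
  return {
    left: { x: p1.x + nx * halfWidth, y: p1.y + ny * halfWidth },
    right: { x: p1.x - nx * halfWidth, y: p1.y - ny * halfWidth },
  };
}

const { left, right } = sideOffsets({ x: 0, y: 0 }, { x: 10, y: 0 }, 1.5);
console.log(left, right); // { x: 10, y: 3 } and { x: 10, y: -3 }

Successive left points (and, separately, right points) then become the points each quadratic spline passes through; in a plain Canvas setting each side would be rendered with a run of ctx.quadraticCurveTo() calls.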
So, how does this all work in detail? Well, it’s like … blah, blah, math, blah, blah, API. There, we’re done :). Now, if you are a seasoned Angular developer, then you are already familiar with attribute directives. Spend five minutes on a high-level review of the demo and you are ready to drop the freehand drawing directive into an application. If you prefer a more detailed deconstruction and are just starting out with Angular, the remainder of the article discusses how the Typescript code to implement the stroke algorithm is packaged into an Angular attribute directive. Freehand Drawing Directive To conserve space, I’ll cover the high points of the directive; review the source code to deconstruct the fine details. /src/app/drawing/freehand-drawing.directive.ts The directive selector is ‘freehand’, and the directive can be applied in multiple ways, ranging from self-contained interactivity to no internal interactivity. Several parameters may be controlled by Inputs. The main app component template, /src/app/app.component.html, illustrates several use cases:

<!-- minimal usage -->
<div class="drawingContainer" freehand></div>

<!-- caching control and begin/end stroke handlers -->
<div class="drawingContainer" freehand [cache]="cacheStrokes" (beginStroke)="onBeginStroke()" (endStroke)="onEndStroke()"></div>

<!-- control some drawing properties -->
<div class="drawingContainer" freehand [fillColor]="'0xff0000'"></div>

Note that freehand drawing is applied to a container (most likely a DIV) as an attribute. The directive’s constructor obtains a reference to the container and initializes the PixiJS drawing environment. The drawing environment is tightly coupled to the directive in this implementation for convenience. Since Inputs are defined, the Angular OnChanges interface is implemented. The ngOnChanges method performs light validation of inputs. Mouse handlers are assigned or removed if interactivity is turned on or off. Caveat: if no Inputs are defined in the HTML container, ngOnChanges is not called. Ensure that all Input values have reasonable defaults. The OnDestroy interface is also implemented since mouse handlers may be defined. If so, these need to be removed when the directive is destroyed. A drawing may contain multiple strokes, so this implementation of the directive stores the containers for each stroke. The coordinates for a single stroke are cached, if desired. This makes it possible to query the x- and y-coordinates for a single stroke. The directive allows for complete external control. It is possible to load raw mouse coordinates from a server, for example (i.e., previously stored strokes), and then exercise the API as if the same coordinates were obtained via mouse motion. Previously drawn strokes may be completely redrawn in this manner. It may also be more convenient to control mouse interaction at a higher level than the container. For these reasons, the directive exposes a public API for beginning, updating, and then ending a stroke:

public beginStrokeAt(x: number, y: number, index: number = -1): void
public updateStroke(x: number, y: number): void
public endStrokeAt(x: number, y: number): void

A stroke may also be erased:

public eraseStroke(index: number): boolean

The entire stroke collection may be cleared and the drawing area made available for a new set of strokes:

public clear(): void

The bulk of the work (and the math) is performed in the updateStroke() method.
It’s really just some smoothing, analytic geometry, and a couple of quadratic splines with a dynamic tip at the end. As I mentioned at the beginning of the article, don’t credit the drawing algorithm to me; it goes back at least to 1983 and Dr. Tennyson at the University of Texas at Arlington. On the subject of credit, how about giving yourself some credit for a new dynamic drawing application in Angular? Grab the code, copy and paste, and enjoy some fun holiday coding! Good luck with your Angular efforts. EnterpriseNG is coming EnterpriseNG is a two-day conference from the ng-conf folks coming on November 19th and 20th. Check it out at ng-conf.org
https://medium.com/ngconf/freehand-drawing-in-angular-a982e36f90a2
['Jim Armstrong']
2020-09-30 19:55:08.961000+00:00
['Geometry', 'Angular', 'JavaScript', 'Math', 'Typescript']
Standing Out With Exceptional Customer Experiences: What Works For Project Army Founder Viktor Nagornyy
The Nitty-Gritty: How Project Army founder Viktor Nagornyy discovered his opportunity in the website support & hosting market The reason he decided to do the opposite of industry standards when it comes to key customer policies How he landed on competitive pricing without having to slash expenses or sacrifice customer experience Why prioritizing exceptional customer experiences has led to significant business growth I’ve been building websites with WordPress for almost 11 years now. In the beginning, I used the cheap web hosts you’re probably already familiar with — I won’t name names, though. I relied on the support those web hosts offered to teach me just about everything I know about name servers, MX records, cPanel, and common errors you get when screwing around in the backend of WordPress. I asked, they answered. Then, something changed. Over time, the support got less and less reliable. It got less and less helpful. It was less and less personable. And somewhere along the line, the support started to suck. At the same time, I started to notice I just wasn’t getting the same level of service from these companies that I had in the past. My website was down frequently. They started to tell me I needed to upgrade and then upgrade again. That’s when I jumped ship. Today’s guest noticed the same crap happening in the web support & hosting industry. Instead of pursuing a marginally better solution, he decided to take advantage of the situation and use exceptional customer experiences as a way to stand out in a very crowded market. Viktor Nagornyy is the founder of Project Army. What started as an SEO and digital marketing consultancy has blossomed into a full-service website support & hosting company that prioritizes customer service and experience. Viktor shares how doing the opposite of what everyone else is doing has led to big results, why customer service is so important to him, how prioritizing customer service has helped the company grow, and how he utilizes social media to offer help to anyone — even if they’re not a customer. Now, let’s find out what works for Viktor Nagornyy!
https://medium.com/help-yourself/standing-out-with-exceptional-customer-experiences-what-works-for-project-army-founder-viktor-574ae34cf2b5
['Tara Mcmullin']
2019-11-12 16:42:02.265000+00:00
['Business', 'Podcast', 'Small Business', 'Entrepreneurship', 'Customer Experience']
Do you want to code ‘by the book’?
I remember what it was like to be completely new to programming. I wanted to learn everything. Everything. And to do things the right way. But the worst part is probably that I thought I had to. But what’s the right way, really? When I started my education to become a web developer, I deeply believed that there was such a thing as the ‘right’ way to code. But I quickly came to other conclusions. So, what do I mean by code ‘by the book’? Writing code by the book, to me, feels more like fixating on the processes rather than the purpose or goal: a fixation on the idea that there is a textbook example of how you should do it. I remember how I, as well as the majority of my classmates, believed in this in the beginning. We were asking for the right solutions, how problems really should be solved. This is partly no big surprise, since most experienced workers in a lot of other fields have perfected their methods and ways of working, so one might easily think that you can be taught expert ways and textbook examples in this occupation too. Don’t get me wrong, there’s a lot you could learn from experienced developers, but it’s a dangerous thing in this line of work to rely on others’ solutions solely because of their expertise. So to this, I say no. Do not write code by the book. Write code by the domain. Or more specifically, write code by the domain you are working with right now. Your domain should dictate the code you write, and at the very least how you write it. Why? Because code is documentation, in a lot of ways. “We’re not engineers! We’re writers!” — Robert Tublén (my main teacher) The code you write is an explanation of your ways of solving problems in your domain. And solving them with textbook examples, which are often very generic, will most likely let the examples dictate your domain instead of the other way around. And what’s the problem with that, you might ask. Well, a lot of things. Say you own a car rental business. You have decided that you should have a website that customers can use to book cars. You come across a fantastic, widely used booking library. But as it turns out, there’s no support for big parts of your business logic: all the items that can be booked have to be yellow, have two wheels at the most, and none of the items can be booked for longer than a day. But the library is great! You could almost say that it’s a textbook example! So what do you do now? Sell your fleet of cars to buy a bunch of yellow vehicles with only two wheels? Congratulations, you’re now the owner of a bike rental business. This example is not one you’re very likely to come across in real life, but I think you get the point. Shiny processes, what others think is good, what others are using, what others are praising, or even textbook examples never come with any guarantee that they will suit your needs. So think about it: what problems am I trying to solve? What is there to suit my needs? You’ll be minimizing the risk of saying, “Well, that library didn’t have support for that, so now we can’t do that anymore.” But why should your domain dictate the way you write code? Because it will make everything easier. It will make it easier for yourself and everyone reading your code. I heard Kevlin Henney in one of his talks say, “Code in the language of your domain”. What does this mean? Your code represents a lot of things: your knowledge or skill in the language you’re writing in, but also your understanding of the domain.
So, if you write code in the language of your domain, you’re making it easier for everyone else to understand what piece of domain logic you’re covering and what problems you are solving. Take a look at the two versions sketched below. Let’s say we need to get the ID of every booking a specific user has made in our system. Yes, we’re back in the car rental business.
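The original post showed the two snippets as images that did not survive extraction; in their spirit, here is a reconstruction of a generic version versus a domain-language version (all names are invented for illustration):

// Version 1: technically correct, but written "by the book".
function getIds(items, key) {
  return items.filter(item => item.userId === key).map(item => item.id);
}

// Version 2: written in the language of the car rental domain.
function bookingIdsForCustomer(bookings, customer) {
  return bookings
    .filter(booking => booking.customerId === customer.id)
    .map(booking => booking.id);
}

Both do the same thing, but the second one documents, in its names alone, exactly which piece of domain logic it covers.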
https://medium.com/dev-genius/do-you-want-to-code-by-the-book-1ed0691b7e31
['Philip Englund']
2020-06-18 12:40:39.825000+00:00
['Software Architecture', 'Development', 'Learning To Code', 'Software Development']
Build a Bouncing Basketball App with Anime.js
Setting up our JavaScript file Now that we understand how to create a bounce effect with Anime.js, let’s work on our script.js file. We will first create a reference to the container div we created in our index.html file and store it in a variable called container. We will also create a variable called ballCount which will help us keep track of the number of balls and assign each ball a different id.

const container = document.querySelector('.container');
let ballCount = 1;

In our app, we will create a new bouncing basketball every time we click in the window. First, add a click event listener to the window. This click event will then run the following steps.

window.addEventListener('click', e => {
});

1. Create the Ball We first use document.createElement to create a div and an image. For the image element, we will set the src attribute to the route to the basketball image, and set the alt to basketball. For the div, we will set the class to ball and give it an id of 'ball' plus the ballCount number we declared earlier. This is to keep track of this ball element so we can animate it later. Finally, we will append the image as a child element to the ball div.

const ball = document.createElement('div');
const image = document.createElement('img');
image.setAttribute('src', './basketball.png');
image.setAttribute('alt', 'basketball');
ball.classList.add('ball');
ball.id = 'ball' + ballCount;
ball.appendChild(image);

2. Set the Position of the Ball We will then set the starting position of the ball to be where we clicked on the screen. Using e.view.innerHeight, we can get the height of the screen. Remember, our ball div is absolutely positioned, so we will set the bottom and left properties on it. The bottom property will be set to the screen height minus the value of e.y minus 50. We subtract an additional 50 because the height of our ball image is 100 and we want its center. Then, we set the left property of the ball to e.x minus 50.

const screenHeight = e.view.innerHeight;
const bottom = screenHeight - e.y - 50;
ball.style.bottom = `${bottom}px`;
ball.style.left = `${e.x - 50}px`;

3. Append the Ball to the Container Now that we have the ball created and set in place, we will append it as a child to the container div.

container.appendChild(ball);

4. Add the Anime.js Functions Next, we will add the bounceUp and bounceDown functions which we explained earlier. As you can see, the target for the animation will be the id of the ball we created. For the translateY values, we are using the bottom of the screen as one point and the point we clicked as the other.

const bounceUp = anime({
  autoplay: false,
  targets: '#ball' + ballCount,
  translateY: [bottom, 0],
  duration: 575,
  easing: 'easeOutQuad',
  complete: () => {
    bounceDown.restart();
  }
});

const bounceDown = anime({
  autoplay: false,
  targets: '#ball' + ballCount,
  translateY: [0, bottom],
  duration: 575,
  easing: 'easeInQuad',
  complete: () => {
    bounceUp.restart();
  }
});

bounceDown.play();

At the end of the code snippet above, we start the bounce animation by calling play on bounceDown. It then runs in a loop, because once one function completes, it calls the other.

5. Increment the Ball Count Finally, we will increment the ballCount variable, so we can create a new ball id on the next click.

ballCount++;

The final script.js file should look like below.
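The finished file was shown as an image in the original post; stitching the numbered steps above together inside the click handler gives, give or take formatting, the following:

const container = document.querySelector('.container');
let ballCount = 1;

window.addEventListener('click', e => {
  // 1. Create the ball element with its image.
  const ball = document.createElement('div');
  const image = document.createElement('img');
  image.setAttribute('src', './basketball.png');
  image.setAttribute('alt', 'basketball');
  ball.classList.add('ball');
  ball.id = 'ball' + ballCount;
  ball.appendChild(image);

  // 2. Position it where the user clicked.
  const screenHeight = e.view.innerHeight;
  const bottom = screenHeight - e.y - 50;
  ball.style.bottom = `${bottom}px`;
  ball.style.left = `${e.x - 50}px`;

  // 3. Add it to the container.
  container.appendChild(ball);

  // 4. Wire up the two halves of the bounce.
  const bounceUp = anime({
    autoplay: false,
    targets: '#ball' + ballCount,
    translateY: [bottom, 0],
    duration: 575,
    easing: 'easeOutQuad',
    complete: () => {
      bounceDown.restart();
    }
  });

  const bounceDown = anime({
    autoplay: false,
    targets: '#ball' + ballCount,
    translateY: [0, bottom],
    duration: 575,
    easing: 'easeInQuad',
    complete: () => {
      bounceUp.restart();
    }
  });

  bounceDown.play();

  // 5. The next click gets a fresh id.
  ballCount++;
});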
https://medium.com/javascript-in-plain-english/build-a-bouncing-basketball-app-with-anime-js-90eb5b4630d1
['Chad Murobayashi']
2020-12-10 09:46:46.732000+00:00
['JavaScript', 'Animation', 'Web Development', 'Programming', 'Design']
Welcome to the Uncanny Valley of ‘Reborn’ Baby Dolls
Welcome to the Uncanny Valley of ‘Reborn’ Baby Dolls Thousands of people collect hyperrealistic vinyl babies — but how do they sleep at night? I want to stroke Alma’s silky wisp of hair, put ointment on her peeling ankles, kiss the place where a drop of blood has dried on her teeny heel. I keep scrolling. Eloisa stares at me with vacant green eyes, her fists delightfully wrinkled but eerily glossy. I keep scrolling. Red-eyed and deathly pale, Isadora makes my heart stop. Beneath her button nose, the minuscule mouth dribbles blood, sports fangs. “Adopted,” the caption reads. Painted and designed by an 11-year-old — under her mother’s supervision — Vampire Isadora was sold at a discount. At reborns.com, anyone can become a happy parent. With the help of a drop-down menu, you can narrow the 657 lifelike doll options by price ($100–$5,000), ethnicity, gender, eye material (glass, acrylic, polyglass). Select “boo boo,” and the faces scrunch into pouts. Choose “realborn,” and the vampires, chimpanzees, and waxy misproportioned monstrosities all blessedly disappear — replaced by something which, in its own way, is even eerier: dolls made from 3D-printed babies. (Where do the models for the dolls come from? Bountiful Babies, the top supplier of 3D-printed doll parts, is suing dollmaker Stephanie Ortiz for libel over alleged ties to the Kingston Clan, of polygamy and child marriage fame.) Reborns.com lies deep in the uncanny valley: that terrifying twilight zone whose residents appear almost-but-not-quite human. Is this website a Toys “R” Us or a slave market? Are these dolls babies or playthings, dead or alive? Unable to settle on a characterization, my mind churns; my stomach churns with it. Not everyone feels that way, though. The community of hyperrealistic doll enthusiasts has been steadily growing since 1989, when Joyce Moreno created the first “reborn” doll. The original process of “reborning” involved stripping store-bought dolls of their paint to give them a more lifelike makeover. These days, most artists use unpainted, purpose-built doll kits instead, but the name has stuck. There are now tens of thousands of reborn artists and collectors worldwide. They chat on specialized forums and buy the dolls on eBay, Etsy, Facebook, even Walmart.com. Rather than being put off by ambiguity, the reborn community appears to thrive on it. A reborn “mother” might find her baby at a convention, displayed next to bags of disembodied, unpainted doll parts. She won’t mind knowing that the womb this doll came from was the oven that helped dry and set the paint. Or perhaps she had her baby shipped by mail from an online “nursery.” In this case, she might post a carefully choreographed unboxing video on YouTube. Like a mother at a baby shower, she’ll coo over the accessories that come with the purchase: the cardigans, onesies, and itty-bitty shoes. Then comes the birth certificate and, finally, the doll itself. Tradition dictates that the feet are unwrapped first, precious toes squeezed while the head and torso remain swaddled in a blanket. Unboxing complete, the new mom might cradle and rock the doll like a real baby, even change its diapers — only to plonk it unceremoniously to the ground, the neck lolling back as if snapped.
https://humanparts.medium.com/at-nightmares-edge-lifelike-dolls-13bf23265a79
['Eve Bigaj']
2020-11-10 17:42:37.481000+00:00
['Parenting', 'Psychology', 'Culture', 'Art', 'This Is Us']
7 Ways to Conquer the Blank Page
I’ve made a full-time living as a writer for more than eight years. During my career, I’ve published more than 4 million words. Most of those words were for clients. But I have also published three books and countless articles. Having written so many words about so many different subjects, you would think that the blank page would no longer hold any terror for me. You would be wrong. The worst part about being a writer is that every day you are starting from scratch. The blank page is there mocking you — daring you to try your feeble skills against its eternal emptiness. The best part about being a writer is that every day you are starting from scratch. You have a fresh chance to create something fantastic. I approach writing the same way whether I’m writing for a business client, a magazine, a book, or myself. Over the years, I have learned how to keep my terror of the blank page at bay long enough to get started. If you can manage to start, to write a few lines — no matter how awful — you will have gained the upper hand. Here are seven tactics I use in my daily battles against the blank page. Enter Battle with an Arsenal A blank page and a blinking cursor create a lot of pressure. It’s like when you are out with your friends and someone mentions that you’re funny, and some stranger demands that you tell them a joke. The pressure to perform can cause your brain to freeze. Many writers get stuck because they wait until they are in front of the computer to write. This is too late. If you come to the blank page with a skeletal outline of what you plan on writing that day, you take all the pressure off. You don’t have to figure out what to write. Instead, you look at your outline and get to work. Sometimes I will write an introduction to a piece in my head while doing dishes or some other mindless task. If I like what I have, I will take a moment to jot down my ideas in my phone. Show Up with Confidence Researchers recently conducted an in-depth meta-study on willpower. It used to be the consensus that willpower was finite. The meta-study showed that whether willpower is finite or not depends entirely on your beliefs. If you think willpower can be depleted by overuse, yours will be. If you don’t believe your willpower can be depleted by overuse, yours won’t be. Writer’s block works the same way. If you believe in writer’s block, you will be susceptible to it. I don’t have the luxury of believing in writer’s block. If I don’t write, I don’t make any money. When you show up with the confidence of knowing that writer’s block is a myth, you never have to worry about being stymied by it. Instead, you can start writing. Cheat If you are having a hard time starting something, cheat. Instead of writing the beginning first, start at the end or the middle. The writing police will not come bust down your door and haul you away if you start somewhere else besides the beginning. Often, my best pieces start with a conclusion. I then work backwards to construct a story and argument that leads up to the conclusion. Bivouac Rest is one of the most important parts of my writing success. I get eight hours of sleep most nights. This means that I start each day well-rested and eager to get started. I have four children. When they were younger, I rarely got eight hours of sleep. Many nights, I barely got any sleep. On days when I am tired, I take several long breaks and at least a short nap.
Even on days when I am well-rested, I still take frequent breaks, including one or more long walks. If I am struggling to finish something, I will take a break and come back to it later. If possible, I will let it sit overnight. Level Up In video games, you often have to level up your character before taking on a powerful enemy. If the blank page is regularly overpowering you, you need to level up. The best way to level up is to absorb as much creativity as you can. Read, watch TV, go to an art museum, doodle, or color. These activities fill your brain with resources that you can draw on consciously and subconsciously when you write. Don’t limit yourself to content in the same genre or niche you are writing in. The best ideas often come when two unrelated thoughts collide and form a new element in a strange nuclear reaction. Surrender is Not an Option The blank page wants you to quit. That is the only way it can win. You can win by writing something, anything. You can always edit or rewrite lousy prose. You cannot edit a blank page. If you tell yourself you can always write later, you rob yourself of the ability to write now. Writing success is more about mindset than it is about skill. Even if you are struggling, never walk away from a blank page. You may never come back. Tell yourself you will take a break after you write one paragraph or even one sentence. If you can write something down, you will have won the day. Switch Targets Because I’m a freelancer, I always have more than one project going at a time. Usually, I focus on a single client project before moving on to anything else. Sometimes when I’m stuck in the middle of a piece, instead of banging my head in frustration, I switch to something else. Often the simple act of changing topics will free your mind. Switching projects also allows you to keep your writing momentum and limits the negative self-talk that often happens when you are stumped by an assignment. Every writer starts with a blank page. It’s the great equalizer. If you are serious about your career as a writer, you will have to face the same enemy for the rest of your working life. However, the power of the blank page remains the same. If you come prepared and build your skills, you will find it easier to beat this enemy at each encounter.
https://medium.com/escape-motivation/7-ways-to-conquer-the-blank-page-1d6799d1c31a
['Jason Mcbride']
2020-10-12 12:32:48.573000+00:00
['Business', 'Advice', 'Life Lessons', 'Freelancing', 'Writing']
The Art of Getting Things Done for Developers
Don't sweat the small stuff Cycling is the new thing I have been trying to master these past few months, not for a race, and definitely not to turn pro. Don’t get me wrong, I know this is not the developer content you came here for, but hear me out, because like I always say, becoming a self-taught developer isn’t just about learning how to program; most of the battles are won inside our minds. Our ability to make good choices, our discipline, and our perseverance are more important than choosing the perfect programming stack. One of the things I’ve learned is that when you can control your mind, you can do anything you desire. If you can control your mind, you will control your body, and eventually your environment too. After pushing myself through learning and working as a developer for almost 5 years, I knew I needed a new challenge. I wanted to learn how to stop all the noises, from my legs to my whole body; I needed to silence them so I could focus on getting things done. Getting things done means you don’t stop when you’re tired, exhausted, and uninspired. You can rest, but you can’t quit, because the truth is, getting things done means you stop when you are done and not when you are tired. Getting things done and beating procrastination is a powerful combination for mastering our minds. The more you win against all your excuses, the more you exercise that strength and stay in control, the more powerful you become. After cycling outdoors every weekend since October, I added a 100km ride to my bucket list for before 2020 ends, and last weekend, I finally did it. It was exhausting, but after a night's rest, I was shocked that my body could take on another round. Here’s the crazy thing: in the days before we decided to take on the challenge, I was scared and overthinking a lot, but after finishing the 100km, none of the things I was worried about actually happened, not one. The truth is, after hitting that goal, I’ve been craving more. The day will never be perfect; there will always be problems or noises. But it doesn’t matter, because you will take each one, solve it, fix it, and stand above it. Programming, in a nutshell, is solving problems, and you should know that by now. So stop sweating the small stuff and instead happily embrace all the challenges standing in your way; one after another, you will win them.
https://medium.com/for-self-taught-developers/the-art-of-getting-things-done-developers-edition-383bff81afdb
['Ann Adaya']
2020-11-26 15:22:21.510000+00:00
['Web Development', 'Software Development', 'Software Engineering', 'Work', 'Programming']
E-book: AngularJS Notes for Professionals Book
E-book: AngularJS Notes for Professionals Book Download the AngularJS e-book free from GoalKicker.com Download here: http://goalkicker.com/AngularJSBook/ The AngularJS Notes for Professionals book is compiled from Stack Overflow Documentation; the content is written by the beautiful people at Stack Overflow. Text content is released under Creative Commons BY-SA. See the credits at the end of the book for those who contributed to the various chapters. Images may be copyright of their respective owners unless otherwise specified. The book was created for educational purposes and is not affiliated with any AngularJS group(s) or company(ies), nor with Stack Overflow. All trademarks belong to their respective company owners. 200 pages, published in January 2018 Chapters
https://medium.com/easyread/e-book-angularjs-notes-for-professionals-book-14244747dab6
[]
2018-01-29 23:41:27.996000+00:00
['Books', 'Front End Development', 'eBooks', 'JavaScript', 'Angularjs']
Why Invest in Digital Anthropology?
Why Invest in Digital Anthropology? A Q&A with Trigg Hutchinson, Data Acquisition Lead at One Concern One Concern’s best-in-class hazard models incorporate static and dynamic data from an array of different sources: public data sets from the USGS, World Bank, and U.S. Census Bureau; live updates on social media; remote sensing data; and private repositories. Together, these paint a near-complete picture of the built, natural, and human environment. Occasionally, we can’t easily get the data required for high-resolution hazard maps. Some governments may not have collected enough historical building information. Other times, there isn’t enough time to locate the best international partner before an urgent deployment. This is where digital anthropology fills in the gaps. Trigg Hutchinson leads our Data Acquisition efforts. Over the next few weeks, we’ll be highlighting his work by publishing a series of posts he’s written from the field. Today, Trigg is sharing the ins and outs of his unique line of work. What do you do for digital anthropology at One Concern? I manage all data acquisition for One Concern. Before and after natural disasters, it’s critical for us to ingest a ton of different data. That’s for two purposes. First, we have to make sure that we have the most up-to-date, accurate, and comprehensive data to model a city. Second, we also need robust training data to ensure our AI-powered models are accurate. Prior to a natural disaster, we might be deploying our product to a new city. I’ll lead the effort to acquire or create new data sets that help us better understand the built, natural, and population environment of a city. Then after a disaster happens, I’ll lead the effort to organize building tagging, which is the identification of damage data. Our product incorporates a ton of data sources to feed the machine learning model. How does your work on the ground fit into the bigger picture there? One Concern has categorized an enormous number of different data types. When we start to look at modeling a new city, we’ll first inventory what’s actually out there. So we’ll go through these dozens and dozens of categories of data explaining a single building, or a single piece of critical infrastructure. Then, we’ll identify and inventory what we have and don’t have. The gaps in those catalogs are where we start exploring for new data creation. Can you tell me about an especially challenging data acquisition project? The most complicated data acquisition process that we’ve worked on so far was Dhaka. Bangladesh is a really challenging country: in terms of density, traffic, pollution, and the city being just enormous. We were charged with collecting building damage data for a representative sample of the city, but there were small things that wouldn’t be issues anywhere else. For instance, taking a photo of a building was much harder. The roads are narrow, and traffic is crazy. It was weirdly challenging to find a camera angle where our computer vision models could see and identify features of the building. For our data acquisition team in Dhaka, we got everybody smartphones. But the app we use for tagging had issues with GPS access because coverage is really bad. So we had this smartphone-enabled methodology, but paired it with pen and paper. It’s funny, because you think this is an AI company, right? We’re applying these really high-tech solutions to complex physical problems.
And as amazing as it is to have all the power of artificial intelligence at our fingertips, sometimes these really simple solutions can also help answer these questions. What experiences and background did you have before this role? I was military before, and I’d worked in a data collection capacity for a couple of years. Then, I got a business degree at UCLA, where I focused on streamlining operations and supply chains for natural disaster response. Understanding the impact of hazards and optimizing relief has always been something that I’ve been really passionate about. What I really like about working at One Concern is that it allows me to put those interests front and center. I can take a lot of my previous skills and experiences and apply them to solving a problem that I’m really passionate about. How did your experience in the military inform your approach to your current work? I get really weird about meetings starting late. That would be the first thing — I’m insanely punctual to a fault [laughs]. But a lot of the military — especially the expeditionary side — is about working in complex, ambiguous environments. You learn very quickly that everything is about preparation and planning. The more you know about a place and the better you plan, the better off you’ll be when you actually get on the ground. At the same time, I think the military instills a sense of flexibility. As great as your plan may be, things may be totally different in practice. A lot of our data acquisition involves techniques that have never been tried before, and we’re rolling out methodologies or equipment that we’re building ourselves. So sometimes things go really, really wrong — it’s my job to just figure it out. What does ‘resilience’ mean to you? That’s a really good question. That’s a really unfair question to ask. The easy answer is robustness and rapidity. Okay, hard answer? The hard answer is keeping a family from having to sleep in a tent for six months at a time. When we use words like robustness and rapidity, we’re looking at it from a very academic perspective. It’s detached. But when you go somewhere like Lombok, you see people who own a countable number of items, who have lost all of it, and who are sleeping on a blue tarp. That gives you perspective on the importance of growing resilience in a way that looking at things on a balance sheet or making a plan in a government office never will. So, resilience to me is the real-world application of the theory and concepts that we discuss in this office. I think there’s a face to it.
https://medium.com/one-concern/why-invest-in-digital-anthropology-28b8c54b1651
['One Concern']
2019-08-23 23:36:08.961000+00:00
['Careers', 'Startup', 'Data', 'Career Advice', 'Data Science']
Opening the ‘black box’ of artificial intelligence
Opening the ‘black box’ of artificial intelligence Scientists are hoping to increase the transparency of how algorithms make decisions. by Tom Cassauwers Artificial intelligence is growing ever more powerful and entering people’s daily lives, yet often we don’t know what goes on inside these systems. Their opacity could fuel practical problems, or even racism, which is why researchers increasingly want to open this ‘black box’ and make AI explainable. In February of 2013, Eric Loomis was driving around in the small town of La Crosse in Wisconsin, US, when he was stopped by the police. The car he was driving turned out to have been involved in a shooting, and he was arrested. Eventually a court sentenced him to six years in prison. This might have been an uneventful case, had it not been for a piece of technology that aided the judge in making the decision. The court used COMPAS, an algorithm that determines the risk of a defendant becoming a recidivist. The court inputs a range of data, like the defendant’s demographic information, into the system, which yields a score of how likely they are to commit a crime again. How the algorithm predicts this, however, remains non-transparent. The system, in other words, is a black box — a practice against which Loomis made a 2017 complaint in the US Supreme Court. He claimed COMPAS used gender and racial data to make its decisions, and ranked African Americans as higher recidivism risks. The court eventually rejected his case, claiming the sentence would have been the same even without the algorithm. Yet there have also been a number of revelations which suggest COMPAS doesn’t accurately predict recidivism. Adoption While algorithmic sentencing systems are already in use in the US, in Europe their adoption has generally been limited. A Dutch AI sentencing system, which ruled on private cases like late payments to companies, was shut down in 2018, for example, after critical media coverage. Yet AI has entered other fields across Europe. It is being rolled out to help European doctors diagnose Covid-19. And start-ups like the British M:QUBE, which uses AI to analyse mortgage applications, are popping up fast. These systems run historical data through an algorithm, which then comes up with a prediction or course of action. Yet often we don’t know how such a system reaches its conclusion. It might work correctly, or it might have a technical error inside of it. It might even reproduce some form of bias, like racism, without the designers even realising it. This is why researchers want to open this black box and make AI systems transparent, or ‘explainable’, a movement that is now picking up steam. The EU White Paper on Artificial Intelligence released earlier this year called for explainable AI, major companies like Google and IBM are funding research into it, and GDPR even includes a right to explainability for consumers. ‘We are now able to produce AI models that are very efficient in making decisions,’ said Fosca Giannotti, senior researcher at the Information Science and Technology Institute of the National Research Council in Pisa, Italy. ‘But often these models are impossible to understand for the end-user, which is why explainable AI is becoming so popular.’ Diagnosis Giannotti leads a research project on explainable AI, called XAI, which wants to make AI systems reveal their internal logic.
The project works on automated decision support systems like technology that helps a doctor make a diagnosis, or algorithms that recommend to banks whether or not to give someone a loan. The researchers hope to develop the technical methods, or even new algorithms, that can help make AI explainable. ‘Humans still make the final decisions in these systems,’ said Giannotti. ‘But every human that uses these systems should have a clear understanding of the logic behind the suggestion.’ Today, hospitals and doctors increasingly experiment with AI systems to support their decisions, but are often unaware of how the decision was made. AI in this case analyses large amounts of medical data, and yields a percentage of likelihood that a patient has a certain disease. For example, a system might be trained on large numbers of photos of human skin, which in some cases represent symptoms of skin cancer. Based on that data, it predicts whether someone is likely to have skin cancer from new pictures of a skin anomaly. These systems are not general practice yet, but hospitals are increasingly testing them, and integrating them into their daily work. These systems often use a popular AI method called deep learning, which combines very large numbers of small sub-decisions. These are grouped into a network with layers that can range from a few dozen up to hundreds deep, making it particularly hard to see why the system suggested someone has skin cancer, for example, or to identify faulty reasoning. ‘Sometimes even the computer scientist who designed the network cannot really understand the logic,’ said Giannotti. Natural language For Senén Barro, professor of computer science and artificial intelligence at the University of Santiago de Compostela in Spain, AI should not only be able to justify its decisions but do so using human language. ‘Explainable AI should be able to communicate the outcome naturally to humans, but also the reasoning process that justifies the result,’ said Prof. Barro. He is scientific coordinator of a project called NL4XAI, which is training researchers on how to make AI systems explainable, by exploring different sub-areas such as specific techniques to accomplish explainability. He says that the end result could look similar to a chatbot. ‘Natural language technology can build conversational agents that convey these interactive explanations to humans,’ he said. Another method to give explanations is for the system to provide a counterfactual. ‘It might mean that the system gives an example of what someone would need to change to alter the solution,’ said Giannotti. In the case of a loan-judging algorithm, a counterfactual might show someone whose loan was denied the nearest case in which it would have been approved. It might say that their salary is too low, but that if they earned €1,000 more on a yearly basis, they would be eligible. White box Giannotti says there are two main approaches to explainability. One is to start from black box algorithms, which are not capable of explaining their results themselves, and find ways to uncover their inner logic. Researchers can attach another algorithm to this black box system — an ‘explanator’ — which asks a range of questions of the black box and compares the results with the input it offered.
From this process the explanator can reconstruct how the black box system works. ‘But another way is just to throw away the black box, and use white box algorithms,’ said Giannotti. These are machine learning systems that are explainable by design, yet often are less powerful than their black box counterparts. ‘We cannot yet say which approach is better,’ cautioned Giannotti. ‘The choice depends on the data we are working on.’ When analysing very big amounts of data, like a database filled with high-resolution images, black box systems are often needed because they are more powerful. But for lighter tasks, a white box algorithm might work better. Finding the right approach to achieving explainability is still a big problem, though. Researchers need to find technical measures to see whether an explanation actually explains a black-box system well. ‘The biggest challenge is on defining new evaluation protocols to validate the goodness and effectiveness of the generated explanation,’ said Prof. Barro of NL4XAI. On top of that, the exact definition of explainability is somewhat unclear, and depends on the situation in which it is applied. An AI researcher who writes an algorithm will need a different kind of explanation compared to a doctor who uses a system to make medical diagnoses. ‘Human evaluation (of the system’s output) is inherently subjective since it depends on the background of the person who interacts with the intelligent machine,’ said Dr Jose María Alonso, deputy coordinator of NL4XAI and also a researcher at the University of Santiago de Compostela. Yet the drive for explainable AI is moving along step by step, and it should improve cooperation between humans and machines. ‘Humans won’t be replaced by AI,’ said Giannotti. ‘They will be amplified by computers. But explanation is an important precondition for this cooperation.’ The research in this article was funded by the EU. More info: XAI, NL4XAI
https://medium.com/horizon-magazine/opening-the-black-box-of-artificial-intelligence-77fd081652d3
[]
2020-12-01 10:27:21.651000+00:00
['AI', 'Algorithms', 'Black Box', 'Artificalintelligence']
How to invest smartly in the cryptocurrency market
Investing in cryptocurrency is not a new thing these days. A lot of people invest in different currencies for different reasons. Cryptocurrencies are based on blockchain, a decentralized, distributed public ledger technology. If blockchain proves to be efficient, scalable, and secure, it could seriously disrupt the legacy payment systems operated by banks. The very idea of blockchain is to reduce the role of intermediaries in transactions and deals, making it easier for both individuals and companies to trade without the need for third parties. Everyone can use some help when it comes to investing, especially in cryptocurrencies, which are growing more and more every day. There are four essential approaches that everyone should know when investing in cryptocurrencies, and we are going to have a look at each option: 1. Mining Cryptocurrencies need miners to verify their transactions. To start mining you need hardware with high-performance processors in order to make the necessary calculations, which is why you need to pay attention to the performance, electricity consumption, and price of the hardware. Obviously it is not easy to start mining, especially if you are a beginner, but experience and knowledge can lead to earning a regular income in different cryptocurrencies. 2. Initial Coin Offerings An Initial Coin Offering (ICO) is an unregulated and controversial means of crowdfunding via the use of cryptocurrency, which can be a source of capital for startup companies. Unlike an IPO, an ICO offers no legal rights or claims to underlying assets. According to coinschedule.com, ICOs have attracted over $3bn through 234 issues in 2017 to date. It is important to learn about an ICO before you decide to invest: you have to believe the project will be successful before committing your time and money to it. 3. Trading on cryptocurrency markets The first step to start trading is choosing a crypto market. With a crypto wallet, participants can buy and sell cryptocurrencies. Trading also involves risks, as many crypto markets are located in risky jurisdictions, with no regulator to control them and guarantee trader rights, making it just as possible to lose money trading as to make a profit. There is no doubt cryptocurrencies are unusually risky compared to traditional asset classes. Hacking is one thing people are always susceptible to, especially after the now-defunct Mt Gox exchange was hacked in 2014 and around 850,000 bitcoins went missing. 4. Trading cryptocurrencies using AI Trading these days can get quite overwhelming: technical analysis, multiple exchanges, different types of stocks or currencies, different sources of news. You can literally find yourself paralyzed, not knowing which financial asset to trade or hold for the long term. This is exactly what AI can help us with. Able to handle many of these tasks better than a human can, AI has the capability to learn from your trading patterns and habits and ease the process of trading every day. For example, the AI chatbot developed by AiX incorporates all of these features and many more. Powered by cognitive reasoning technology, AiX’s chatbot executes trading calls/bids and provides historical analysis of trading through the evidence tree. It can incorporate many exchanges in one interface according to your preference, and provides news and recommendations on which financial assets you should trade and which you should avoid, making your trading easier, cheaper, and more efficient.
https://medium.com/ai-x/how-to-invest-smartly-in-the-cryptocurrency-market-c260af3e60c5
['Ana Podrimaj']
2018-03-30 11:10:17.380000+00:00
['Mining', 'ICO', 'Cryptocurrency', 'AI', 'Bitcoin']
Follow these steps to build production-grade workflow with Docker and React
Recently, I decided to extend my skills to include creating software images using Docker containers. I struggled a bit to connect the different pieces of the puzzle, but finally, thanks to Stephen Grider’s Udemy course “Docker and Kubernetes: The Complete Guide,” I got it to work. So, as usual, I am going to document what I learned so it will be easy for me, and for anyone else, to implement the same concept in their application. In this post, I will teach you how to configure the Dockerfile for both the development and production environments. Also, I will walk you through setting up the Travis file to use the Travis-CI integration tool. The project folder is available on my GitHub account for your reference. What is Covered in this tutorial? We will cover the following topics: 1. Building a single Docker container for the development environment — including port mapping — using the create-react-app npm package. Configure the Dockerfile.dev file Configure the docker-compose.yml file to map the services inside the container. Configure the travis.yml file for testing the codebase as a preparation step for deployment. 2. Deploying the application image to AWS cloud services; this includes: Create a Dockerfile for the production environment. Create an account on AWS Configure travis.yml for deploying. This tutorial doesn’t cover building sophisticated multi-container images — that will be covered in a separate post. Pre-requisites This tutorial assumes you have a good knowledge of the following: npm — with the create-react-app package already installed; if not, download it here. Familiarity with React-js Git and GitHub familiarity. Docker basics and how it works. Docker downloaded on your local machine — including the Docker CLI; download here. A DockerHub account — sign up here. YAML syntax; if not, you can refer to this site. A Travis-CI account — you can sign up here using your GitHub account. An AWS account — here if you don’t have one. What is Docker? Since I first heard of Docker I was curious to learn about it and use it. Docker is not strictly necessary for any application; however, it is a tool in the normal development flow that makes the developer’s life much more comfortable. Not only developers benefit: end users who are supposed to use a piece of software, and who are familiar with Redis, know how painful it is to install it on a local machine without using Docker. Also, Docker allows you to set up the application infrastructure pipeline once, and then any small changes can be made quickly without the need to re-architect your application infrastructure. In short: Docker makes it easy to install and run software without worrying about setup or dependencies. ONE: Generate the React project In this tutorial, I prefer to use the npm package (create-react-app) because it has a lot of pre-configured files, which will let us focus on Docker as a concept. So, start by opening up your terminal and typing create-react-app project_name. Then make that directory your current working directory. TWO: Dockerfile for Development In this tutorial, there are three commands we have to deal with: npm run start: valid only for the development environment — starts the server at localhost. npm run test: runs any tests associated with the codebase. npm run build: builds a production version of the application suitable for production — bundling the entire application into one folder suitable for deployment. I am going to deploy to the AWS cloud, so I will follow the approach of creating a separate Dockerfile for the development environment. It will be responsible for running the image container in the development environment. The other Dockerfile will be responsible for building the image for the production environment — we will create this later. For the npm run test command, we will write a configuration for Travis-CI to handle it. NOW, create a new file in the root directory of the project and name it Dockerfile.dev — remember, this is only to run our server inside the container (a sketch of its typical contents appears at the end of this section). Note: the configuration steps are almost the same for applications written in other languages. THREE: Fetching the necessary images To run the application as a Docker container image on any computer without errors, we have to instruct Docker which base image to use. Think of it as choosing which OS and runtime should be used to run the image successfully. Go to Docker Hub (hub.docker.com) and sign in, then navigate to the Explore button — it will show all the available images. Search for the node repository image. When you click on the official image repo, scroll down to the title “How to use this image”; there you will see that this image is based on the popular Alpine Linux project.
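Following from the node:alpine base image above, here is a minimal sketch of what the Dockerfile.dev and docker-compose.yml files from the outline typically look like in this kind of workflow. Treat it as a starting point under those assumptions (the port and paths come from create-react-app defaults), not the article's exact files:

# Dockerfile.dev: runs the create-react-app dev server inside the container
FROM node:alpine

WORKDIR /app

# Copy package.json alone first so the npm install layer is cached
COPY package.json .
RUN npm install

# Copy the rest of the source
COPY . .

CMD ["npm", "run", "start"]

And the matching compose file, which builds from Dockerfile.dev and maps the dev-server port:

# docker-compose.yml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"           # host:container; CRA serves on 3000
    volumes:
      - /app/node_modules     # keep the container's installed node_modules
      - .:/app                # mirror local source changes into the container

With these in place, docker build -f Dockerfile.dev . builds the development image, and docker-compose up starts it with the port mapping described above.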
https://medium.com/free-code-camp/follow-these-steps-to-build-production-grade-workflow-with-docker-and-react-a860f695cf14
['Salma Elshahawy']
2019-04-10 16:57:29.500000+00:00
['JavaScript', 'Docker', 'React', 'Tech', 'Programming']
Modularity Maximization
Modularity Maximization Greedy Algorithm The current article is based on four hypotheses used to define a community. Based on the hypothesis that a random network does not have community structure, the concept of local modularity was formulated [1]. It compares the partition of a given network with an analogous degree-preserving randomization. Consider a network with N nodes and L links, and a partition of it into nc communities, each with Nc nodes and Lc links. Local modularity compares the number of links in the real wiring of a subgraph Cc with the randomly rewired subgraph:

Mc = (1/2L) Σ_(i,j ∈ Cc) (Aij - pij) (2)

where pij results from randomly rewiring the whole network while maintaining the expected degree of each node:

pij = (ki kj) / (2L) (3)

If Mc = 0, the subgraph is wired at random (thus, no community structure). If Mc < 0, Cc has fewer edges than expected and should not be considered a community. Manipulating (2), a simplified formula is obtained, where kc is the total degree of the nodes in Cc:

Mc = Lc/L - (kc/2L)^2 (4)

Generalizing local modularity to the whole network, the best partition of the graph can be identified. This way, the network’s modularity becomes the sum of (4) over all communities:

M = Σ_c [ Lc/L - (kc/2L)^2 ] (5)

The higher the modularity, the higher the quality of the partition of the network. It can take positive, null, or negative values. Whenever the whole network is defined as one community, Lc = L and kc = 2L, so:

M = L/L - (2L/2L)^2 = 0 (6)

In the opposite extreme case, where each node is a single community, Lc = 0 and the modularity becomes negative. Consequently, neither of these structures can be classified as a community partition. Several algorithms use modularity to partition a network. Greedy Algorithm The greedy algorithm maximizes modularity at each step [2]: 1. At the beginning, each node belongs to a different community; 2. The pair of nodes/communities whose merger increases modularity the most become part of the same community. Modularity is calculated for the full network; 3. Step 2 is repeated until one community remains; 4. The network partition with the highest modularity is chosen. In terms of computational complexity, since the modularity variation can be calculated in constant time, step 2 requires O(L) calculations. After merging communities, the adjacency matrix of the network is updated in a worst case of O(N). Each merging event is repeated N-1 times. Thus, the overall complexity is O((L+N)N), or O(N^2) in a sparse graph. Although modularity is computationally convenient and an accurate basis for community detection, two pitfalls should be highlighted: the resolution limit and modularity maxima. Resolution Limit First, the network’s modularity variation when communities A and B (joined by lAB links) are merged is introduced [2]:

ΔMAB = lAB/L - (kA kB)/(2L^2) (7)

This means two communities should be joined whenever ΔMAB > 0. Assuming there is one link between communities A and B (lAB = 1), merging them increases modularity whenever kA kB / (2L) < 1. In other words, when communities A and B are connected by even a single link, they will be merged if their sizes are below a threshold, making it impossible to detect communities below a certain size. Assuming kA ≈ kB = k, this threshold is k ≤ sqrt(2L). The resolution limit therefore depends on the size of the network. One way to overcome this limitation is to subdivide larger communities into smaller ones and partition them. Modularity Maxima The fourth hypothesis presented at the beginning of the article relies on the assumption that higher modularity implies a better partition of the network. However, in some graphs, significantly different partitions may have similar modularity. This becomes a relevant issue as the number of nodes in the network increases, making it harder to separate the network’s best partition from the lower-quality ones.
For algorithms that optimize modularity, this is a central issue, since they iterate until the modularity variation falls below an input threshold. Figure 1 Significantly different partitions of the same network can have similar modularity [2] Upon analysis of the network in Figure 1, the number of links inside each cluster can be approximated. Considering kA = kB = kC and applying (7) to this network, the modularity variation is calculated in terms of the number of clusters nc: merging two random (non-adjacent) clusters of the network into the same community will decrease modularity by at most 2/nc^2. In the limit of many clusters, this variation is undetectable, since 2/nc^2 tends to 0 as nc grows. Empirically, the best partition should be the one that groups each 5-node cluster into a different community. However, modularity increases by 0.003 whenever two adjacent communities are merged. Moreover, if random 5-node clusters are assigned to the same community, even if they are not directly connected, the result is a modularity variation close to zero around the value detected for the optimal partition. This is consistent with the plateau in the modularity plot in Figure 1, which may distort the choice of the best partition. This plateau explains why a large number of modularity maximization algorithms can quickly detect high-modularity partitions — they are not unique. Modularity optimization algorithms are part of a larger set of problems that are solved by optimizing a quality function. References [1] A.-L. Barabási, “Network Science Book,” [Online]. Available: http://networksciencebook.com. [Accessed 15 May 2019] [2] M. E. J. Newman, “Fast algorithm for detecting community structure in networks,” Physical Review E, vol. 69, no. 6, 2004
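As a practical illustration of the greedy algorithm described above, here is a minimal Python sketch using networkx, whose greedy_modularity_communities function implements the greedy agglomeration building on [2]; the ring-of-cliques parameters are arbitrary choices for the example, picked to make the resolution limit visible:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# A ring of 24 cliques of 5 nodes each, adjacent cliques joined by one link:
# the classic setting in which the resolution limit appears
G = nx.ring_of_cliques(24, 5)

# Greedy agglomerative modularity maximization
communities = greedy_modularity_communities(G)

print(len(communities))            # may be fewer than 24 due to the resolution limit
print(modularity(G, communities))  # modularity M of the returned partition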
https://towardsdatascience.com/modularity-maximization-5cfa6495b286
['Luís Rita']
2020-05-31 03:04:08.462000+00:00
['Modularity', 'Greedy Algorithms', 'Community', 'Clustering', 'Maximization']
Meet The “Sinclair Broadcast Group” of Local Newspapers
Meet The “Sinclair Broadcast Group” of Local Newspapers Lee Enterprises is one of the largest owners of local newspapers in America. Their apparent indifference to truth, ethics, and transparency is terrifying. Artwork by Author Before I introduce you to Lee Enterprises: The Sinclair Broadcast Group of Local Newspapers, a little background on Sinclair… Until July 2017, Sinclair Broadcast Group operated its media empire in relative obscurity. Then, John Oliver used his platform as host of HBO’s Last Week Tonight to shine a bright light on the Sinclair media machine. The thesis of Oliver’s 19-minute segment on Sinclair (which has now been viewed over 19 million times on YouTube) was that behind the mask of being an innocent and impartial broadcaster of local news, the media giant — operating in the shadows and taking advantage of the public’s overwhelming trust in “local news” — is really a giant corporation operating a thinly veiled right-wing sleaze machine with little, if any, interest in and commitment to ethical, honest, fact-first reporting. Put simply: local people thought they were getting local news they could trust, but they were getting the opposite. What made Oliver’s comedic exposé of Sinclair so gripping, beyond the content Sinclair produced and pushed to local communities, was the immense scale of what Oliver was describing: “We did some math, and we found out that when you combine the most-watched nightly newscasts on Sinclair and Tribune stations in some of their largest markets, you get an average total viewership of 2.2 million households. And that is a lot! It’s more than any current primetime show on Fox News!” Indeed, an average nightly viewership of 2.2 million households is a lot! However, “local news” is delivered in many forms — not just television. Perhaps more influential in the local news ecosystem are “local” print publications — particularly newspapers, which are often trusted by the average Joe and Jane even more than their local TV news stations. Which brings us to the subject of this article, The Sinclair Broadcast Group of Local Newspapers… Lee Enterprises.
https://medium.com/discourse/meet-the-sinclair-broadcast-group-of-local-newspapers-lee-enterprises-899ffae87ea5
['Andrew Londre']
2020-11-18 21:52:01.892000+00:00
['Journalism', 'Politics', 'News', 'Ethics', 'Leadership']
Using Pythonista and Boto3
The Apple iPhone and iPad have this wonderful Python development environment called Pythonista. If you can import a module using the Pythonista StaSh extension, then you can get working with boto3 in Pythonista. However, boto3 requires your AWS credentials to be configured either in a shared credentials file or in environment variables, or to be specified when the boto3 session, resource, or client is created. Working with Pythonista requires some degree of creativity when dealing with issues like this one. However, there is no reason why you can’t get the same shared credentials file you use on other computing devices onto your iPad for use with Pythonista. Let’s look at how to do this. Get and Launch StaSh StaSh is a bash-like implementation that can run within Pythonista. There have been many improvements since its first release. To get and install StaSh, launch Pythonista and, in the interactive console, execute the command import requests as r; exec(r.get('http://bit.ly/get-stash').text) The command downloads StaSh, allowing you to execute it like any other Python tool. Once installed, find the launch_stash.py file and run it. This results in a console window where you can interact with the shell. Install boto3 Once StaSh is running, you can expand the console view and display the StaSh window. The StaSh Console (image by author) At this point, run pip install boto3 (assuming Python 3); boto3 is then nicely installed into the correct location in the Pythonista installation. All we have to do now is set up the credentials. Create the Credential and Config Files Unfortunately, I haven’t been able to get the AWS CLI running within Pythonista, even though it is a Python application. That doesn’t mean all is lost, however. From within Pythonista, create a config file. Here is an example: [default] region = us-east-1 [preview] sdb = true These options set the default region AWS commands are executed against unless a different region is chosen. The [preview] section allows using the sdb service, which is in preview mode in the CLI. Next, we create the credentials file. [default] aws_access_key_id = YOUR_ACCESS_KEY_ID aws_secret_access_key = YOUR_SECRET_ACCESS_KEY If you want multiple profiles, you can also configure those following the AWS documentation. These files will have some sort of extension, which was possibly added when you created them with your favorite editor. We will remove the extensions when we move the files to the location they need to go, which is ~/.aws . Open the StaSh console, and execute the following commands: [~/Documents]$ cd .. [~]$ mkdir .aws [~]$ cd .aws [~/.aws]$ mv ../Documents/config.txt config [~/.aws]$ mv ../Documents/credentials.py credentials [~/.aws]$ ls -l config (50.0B ) 2020-10-10 01:31:53 credentials (116.0B ) 2020-10-10 01:31:08 [~/.aws]$ We are set. Boto3 will now have access to the credentials we have just configured. It is important to store the files in the ~/.aws directory in the StaSh environment. Otherwise, Pythonista won't be able to find them when your Python script is executed. Try it out This view illustrates a piece of sample Python code and the Pythonista console output. Output from an AWS Service (image by author) This code sample connects to the Amazon SimpleDB service and prints some information about the SimpleDB domain the code is evaluating.
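Since that sample appears only as a screenshot, here is a minimal sketch of the kind of SimpleDB calls it describes. This is an approximation, not the author's exact script, and the domain name my-domain is hypothetical:

import boto3

# boto3 reads the region and keys from the ~/.aws files configured above
sdb = boto3.client('sdb')

# List the SimpleDB domains these credentials can see
print(sdb.list_domains().get('DomainNames', []))

# Print a few details about one (hypothetical) domain
meta = sdb.domain_metadata(DomainName='my-domain')
print(meta['ItemCount'], meta['AttributeNameCount'])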
Conclusion Giving Pythonista the ability to execute the Python code you are working on to interface with AWS services makes it easier to write, test, and debug in the Pythonista interface. And since this isn’t something you are likely to do very often, you are likely to forget (which is why I wrote this article — I forgot and needed to update my secret key). References Boto3 Credentials Configuration and Credential file Settings Shell Like an Expert in Pythonista About the Author Chris is a highly skilled Information Technology, AWS Cloud, Training and Security Professional bringing cloud, security, training, and process engineering leadership to simplify and deliver high-quality products. He is the co-author of seven books and author of more than 70 articles and book chapters in technical, management, and information security publications. His extensive technology, information security, and training experience make him a key resource who can help companies through technical challenges. Chris is a member of the AWS Community Builder Program. Copyright This article is Copyright © 2020, Chris Hare.
https://labrlearning.medium.com/using-pythonista-and-boto3-536f5a1fddb4
['Chris Hare']
2020-10-12 05:34:39.357000+00:00
['Boto3', 'AWS', 'Pythonista', 'Python3']
The Top 10 Python Libraries for Data Science
The Top 10 Python Libraries for Data Science Getting started in data science? Start here Photo by Chris Liverani on Unsplash. Python has become the most widely used programming language today, especially in the world of data science, because it is a high-performance language that is easy to learn and debug and has extensive library support. Each of these libraries has a particular focus. Some manage image and textual data, while others focus on data mining, neural networks, and data visualization. Python can be used for statistical analysis and for building predictive models. When it comes to solving data science tasks and challenges, data enthusiasts, analysts, engineers, and scientists are leveraging the power of Python. In this article, I will be talking about the ten most useful Python libraries for data science and machine learning. Some of these libraries come pre-installed if you are using the Anaconda distribution. For the rest, all you have to do is import the library, or install it with pip if it is not available on your machine.
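As a quick sketch of that install-and-import workflow, using pandas as the example (the toy data is made up for illustration):

# If the library is missing, install it from a shell first:
#   pip install pandas
import pandas as pd

df = pd.DataFrame({'age': [25, 35, 45], 'income': [40_000, 60_000, 120_000]})
print(df.describe())  # quick statistical summary of the columns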
https://medium.com/better-programming/top-10-python-libraries-for-data-science-21e6cd95ca55
['Olufunmilayo Aforijiku']
2020-09-07 16:04:06.455000+00:00
['Machine Learning', 'Data Science', 'Python', 'Deep Learning', 'Data Visualization']
My Favorite Pieces of Syntax in 8 Different Programming Languages
We love to criticize programming languages. We also love to quote Bjarne Stroustrup: “There are only two kinds of languages: the ones people complain about and the ones nobody uses.” So today I decided to flip things around and talk about pieces of syntax that I appreciate in some of the various programming languages I have used. This is by no means an objective compilation and is meant to be a fun, quick read. I must also note that I’m far from proficient in most of these languages. Focusing on that many languages would probably be very counter-productive. Nevertheless, I’ve at least dabbled with all of them. And so, here’s my list:

List Comprehension

def squares(limit):
    return [num*num for num in range(0, limit)]

Python syntax has a lot of gems one could pick from, but list comprehension is just something from heaven. It’s fast, it’s concise, and it’s actually quite readable. Plus it lets you solve Leetcode problems with one-liners. Absolute beauty.

Spread Operator

let nums1 = [1,2,3]
let nums2 = [4,5,6]
let nums = [...nums1, ...nums2]

Introduced with ES6, the JavaScript spread operator is just so versatile and clean that it had to be on this list. Want to concatenate arrays? Check.

let nums = [...nums1, ...nums2]

Want to copy/unpack an array? Check.

let nums = [...nums1]

Want to append multiple items? Check.

nums = [...nums, 6, 7, 8, 9, 10]

And there are many other uses for it that I won’t mention here. In short, it’s neat and useful, so that earns my JS syntax prize.

Goroutines

go doSomething()

Goroutines are lightweight threads in Go. And to create one, all you need to do is add go in front of a function call. I feel like concurrency has never been so simple. Here’s a quick example for those not familiar with it. The following snippet:

fmt.Print("Hello")
go func() {
    doSomethingSlow()
    fmt.Print("world!")
}()
fmt.Print(" there ")

Prints: Hello there world! By adding go in front of the call to the closure (anonymous function), we make sure that it is non-blocking. Very cool stuff indeed!

Case & Underscore Indifference

proc my_func(s: string) =
  echo s

myFunc("hello")

Nim is, according to their website, a statically typed compiled systems programming language. And, according to me, a language you have probably never heard of. If you haven’t heard of Nim, I encourage you to check it out, because it’s actually a really cool language. In fact, some people even claim it could work well as a Python substitute. Either way, while the example above doesn’t show it too much, Nim’s syntax is often very similar to Python’s. As such, this example is not actually what I think is necessarily the best piece of syntax in Nim, since I would probably pick something inherited from Python, but rather something that I find quite interesting. I have very little experience with Nim, but one of the first things I learned is that it is case- and underscore-insensitive (except for the first character). Thus, HelloWorld and helloWorld are different, but helloWorld, helloworld, and hello_world are all the same. At first I thought this could be problematic, but the docs explain that this is helpful when using libraries that made use of a different style to yours, for example. Since your own code should be consistent with itself, you most likely wouldn’t use camelCase and snake_case together anyway. However, this could be useful if, for instance, you want to port a library and keep the same names for the methods while being able to make use of your own style to call them.

In-line Assembly

function getTokenAddress() external view returns(address) {
    address token;
    assembly {
        token := sload(0xffffffffffffffffffffffffffffffffffffffff)
    }
    return token;
}

Solidity is the main language for writing smart contracts on the Ethereum blockchain. A big part of writing smart contracts is optimizing the code, since every operation on the Ethereum blockchain has an associated cost. As such, I find the ability to add in-line Solidity assembly right there with your code extremely powerful, as it lets you get a little closer to the Ethereum Virtual Machine for optimizations where necessary. I also think it fits in very nicely within the assembly block. And, last but not least, it makes proxies possible, which is just awesome.

For-Each Loop

for (int num : nums) {
    doSomething(num);
}

In a language generally considered verbose, the for-each loop in Java is a breath of fresh air. I think it looks pretty clean and is quite readable (although not quite Python num in nums readable).

Macros

#define MAX_VALUE 10

I got introduced to C-style macros when building my first Arduino project and for a while had no clue exactly what they did. Nowadays, I have a better idea of what macros are and am quite happy with the way they are declared in C. Not hating on C by any means, but, like Java, there’s little about the actual syntax that stands out, so these last two are a little meh, unfortunately.

‘using namespace’

using namespace std;

Sorry :( And that’s it! So, what are your favorite pieces of syntax?
https://medium.com/swlh/my-favorite-pieces-of-syntax-in-8-different-programming-languages-ba37b64fc232
['Yakko Majuri']
2020-09-26 22:28:17.954000+00:00
['Programming', 'Software Development', 'Technology', 'Software Engineering', 'JavaScript']
10 Interview Questions Every JavaScript Developer Should Know
Good to hear: Classes: create tight coupling or hierarchies/taxonomies. Prototypes: mentions of concatenative inheritance, prototype delegation, functional inheritance, object composition. Red Flags: No preference for prototypal inheritance & composition over class inheritance. Learn More: 4. What are the pros and cons of functional programming vs object-oriented programming? OOP Pros: It’s easy to understand the basic concept of objects and easy to interpret the meaning of method calls. OOP tends to use an imperative style rather than a declarative style, which reads like a straightforward set of instructions for the computer to follow. OOP Cons: OOP typically depends on shared state. Objects and behaviors are typically tacked together on the same entity, which may be accessed at random by any number of functions with non-deterministic order, which may lead to undesirable behavior such as race conditions. FP Pros: Using the functional paradigm, programmers avoid any shared state or side-effects, which eliminates bugs caused by multiple functions competing for the same resources. With features such as the availability of point-free style (aka tacit programming), functions tend to be radically simplified and easily recomposed for more generally reusable code compared to OOP. FP also tends to favor declarative and denotational styles, which do not spell out step-by-step instructions for operations, but instead concentrate on what to do, letting the underlying functions take care of the how. This leaves tremendous latitude for refactoring and performance optimization, even allowing you to replace entire algorithms with more efficient ones with very little code change. (e.g., memoize, or use lazy evaluation in place of eager evaluation.) Computation that makes use of pure functions is also easy to scale across multiple processors, or across distributed computing clusters, without fear of threading resource conflicts, race conditions, etc… FP Cons: Over-exploitation of FP features such as point-free style and large compositions can potentially reduce readability because the resulting code is often more abstractly specified, more terse, and less concrete. More people are familiar with OO and imperative programming than functional programming, so even common idioms in functional programming can be confusing to new team members. FP has a much steeper learning curve than OOP because the broad popularity of OOP has allowed the language and learning materials of OOP to become more conversational, whereas the language of FP tends to be much more academic and formal. FP concepts are frequently written about using idioms and notations from lambda calculus, algebras, and category theory, all of which require a prior knowledge foundation in those domains to be understood. Good to hear: Mentions of trouble with shared state, different things competing for the same resources, etc… Awareness of FP’s capability to radically simplify many applications. Awareness of the differences in learning curves. Articulation of side-effects and how they impact program maintainability. Awareness that a highly functional codebase can have a steep learning curve. Awareness that a highly OOP codebase can be extremely resistant to change and very brittle compared to an equivalent FP codebase. Awareness that immutability gives rise to an extremely accessible and malleable program state history, allowing for the easy addition of features like infinite undo/redo, rewind/replay, time-travel debugging, and so on.
Immutability can be achieved in either paradigm, but a proliferation of shared stateful objects complicates the implementation in OOP. Red flags: Unable to list disadvantages of one style or another — anybody experienced with either style should have bumped up against some of the limitations. Learn More: 5. When is classical inheritance an appropriate choice? The answer is never, or almost never. Certainly never more than one level. Multi-level class hierarchies are an anti-pattern. I’ve been issuing this challenge for years, and the only answers I’ve ever heard fall into one of several common misconceptions. More frequently, the challenge is met with silence. “If a feature is sometimes useful and sometimes dangerous and if there is a better option then always use the better option.” ~ Douglas Crockford Good to hear: Rarely, almost never, or never. A single level is sometimes OK, from a framework base-class such as React.Component. “Favor object composition over class inheritance.” Learn More: 6. When is prototypal inheritance an appropriate choice? There is more than one type of prototypal inheritance: Delegation (i.e., the prototype chain). Concatenative (i.e., mixins, `Object.assign()`). Functional (not to be confused with functional programming: a function used to create a closure for private state/encapsulation). Each type of prototypal inheritance has its own set of use-cases, but all of them are equally useful in their ability to enable composition, which creates has-a or uses-a or can-do relationships, as opposed to the is-a relationship created with class inheritance. Good to hear: In situations where modules or functional programming don’t provide an obvious solution. When you need to compose objects from multiple sources. Any time you need inheritance. Red flags: No knowledge of when to use prototypes. No awareness of mixins or `Object.assign()`. Learn More: 7. What does “favor object composition over class inheritance” mean? This is a quote from “Design Patterns: Elements of Reusable Object-Oriented Software”. It means that code reuse should be achieved by assembling smaller units of functionality into new objects instead of inheriting from classes and creating object taxonomies. In other words, use can-do, has-a, or uses-a relationships instead of is-a relationships. Good to hear: Avoid class hierarchies. Avoid the brittle base class problem. Avoid tight coupling. Avoid rigid taxonomies (forced is-a relationships that are eventually wrong for new use cases). Avoid the gorilla-banana problem (“what you wanted was a banana, what you got was a gorilla holding the banana, and the entire jungle”). Make code more flexible. Red Flags: Fail to mention any of the problems above. Fail to articulate the difference between composition and class inheritance, or the advantages of composition. Learn More: 8. What are two-way data binding and one-way data flow, and how are they different? Two-way data binding means that UI fields are bound to model data dynamically, such that when a UI field changes, the model data changes with it, and vice-versa. One-way data flow means that the model is the single source of truth. Changes in the UI trigger messages that signal user intent to the model (or “store” in React). Only the model has access to change the app’s state. The effect is that data always flows in a single direction, which makes it easier to understand.
One-way data flows are deterministic, whereas two-way binding can cause side-effects which are harder to follow and understand. Good to hear: React is the new canonical example of one-way data flow, so mentions of React are a good signal. Cycle.js is another popular implementation of uni-directional data flow. Angular is a popular framework which uses two-way binding. Red flags: No understanding of what either one means. Unable to articulate the difference. Learn more: 9. What are the pros and cons of monolithic vs microservice architectures? A monolithic architecture means that your app is written as one cohesive unit of code whose components are designed to work together, sharing the same memory space and resources. A microservice architecture means that your app is made up of lots of smaller, independent applications capable of running in their own memory space and scaling independently from each other across potentially many separate machines. Monolithic pros: The major advantage of the monolithic architecture is that most apps typically have a large number of cross-cutting concerns, such as logging, rate limiting, and security features such as audit trails and DOS protection. When everything is running through the same app, it’s easy to hook up components to those cross-cutting concerns. There can also be performance advantages, since shared-memory access is faster than inter-process communication (IPC). Monolithic cons: Monolithic app services tend to get tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability. Monolithic architectures are also much harder to understand, because there may be dependencies, side-effects, and magic which are not obvious when you’re looking at a particular service or controller. Microservice pros: Microservice architectures are typically better organized, since each microservice has a very specific job, and is not concerned with the jobs of other components. Decoupled services are also easier to recompose and reconfigure to serve the purposes of different apps (for example, serving both the web clients and the public API). They can also have performance advantages depending on how they’re organized, because it’s possible to isolate hot services and scale them independently of the rest of the app. Microservice cons: As you’re building a new microservice architecture, you’re likely to discover lots of cross-cutting concerns that you did not anticipate at design time. A monolithic app could establish shared magic helpers or middleware to handle such cross-cutting concerns without much effort. In a microservice architecture, you’ll either need to incur the overhead of separate modules for each cross-cutting concern, or encapsulate cross-cutting concerns in another service layer that all traffic gets routed through. Eventually, even monolithic architectures tend to route traffic through an outer service layer for cross-cutting concerns, but with a monolithic architecture, it’s possible to delay the cost of that work until the project is much more mature. Microservices are frequently deployed on their own virtual machines or containers, causing a proliferation of VM wrangling work. These tasks are frequently automated with container fleet management tools. Good to hear: Positive attitudes toward microservices, despite the higher initial cost vs monolithic apps. Aware that microservices tend to perform and scale better in the long run.
Practical about microservices vs monolithic apps. Structure the app so that services are independent from each other at the code level, but easy to bundle together as a monolithic app in the beginning. Microservice overhead costs can be delayed until it becomes more practical to pay the price. Red flags: Unaware of the differences between monolithic and microservice architectures. Unaware or impractical about the additional overhead of microservices. Unaware of the additional performance overhead caused by IPC and network communication for microservices. Too negative about the drawbacks of microservices. Unable to articulate ways in which to decouple monolithic apps such that they’re easy to split into microservices when the time comes. Underestimates the advantage of independently scalable microservices. 10. What is asynchronous programming, and why is it important in JavaScript? Synchronous programming means that, barring conditionals and function calls, code is executed sequentially from top to bottom, blocking on long-running tasks such as network requests and disk I/O. Asynchronous programming means that the engine runs in an event loop. When a blocking operation is needed, the request is started, and the code keeps running without blocking for the result. When the response is ready, an interrupt is fired, which causes an event handler to be run, where the control flow continues. In this way, a single program thread can handle many concurrent operations. User interfaces are asynchronous by nature, and spend most of their time waiting for user input to interrupt the event loop and trigger event handlers. Node is asynchronous by default, meaning that the server works in much the same way, waiting in a loop for a network request, and accepting more incoming requests while the first one is being handled. This is important in JavaScript, because it is a very natural fit for user interface code, and very beneficial to performance on the server. Good to hear: An understanding of what blocking means, and the performance implications. An understanding of event handling, and why it’s important for UI code.
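A two-line example makes the non-blocking behavior concrete (the timing is illustrative):

console.log('request started');                          // runs first
setTimeout(() => console.log('response handled'), 1000); // handler queued
console.log('still responsive');                         // runs second

// Logged order: "request started", "still responsive", then about a second
// later "response handled". The thread never blocked while waiting.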
https://medium.com/javascript-scene/10-interview-questions-every-javascript-developer-should-know-6fa6bdf5ad95
['Eric Elliott']
2020-08-27 20:03:06.809000+00:00
['Functional Programming', 'JavaScript', 'Development', 'Technology']
How, When, and Why Should You Normalize / Standardize / Rescale Your Data?
Before diving into this topic, let’s first start with some definitions. “Rescaling” a vector means to add or subtract a constant and then multiply or divide by a constant, as you would do to change the units of measurement of the data, for example, to convert a temperature from Celsius to Fahrenheit. “Normalizing” a vector most often means dividing by a norm of the vector. It also often refers to rescaling by the minimum and range of the vector, to make all the elements lie between 0 and 1, thus bringing all the values of numeric columns in the dataset to a common scale. “Standardizing” a vector most often means subtracting a measure of location and dividing by a measure of scale. For example, if the vector contains random values with a Gaussian distribution, you might subtract the mean and divide by the standard deviation, thereby obtaining a “standard normal” random variable with mean 0 and standard deviation 1. After reading this post you will know: Why you should standardize/normalize/scale your data. How to standardize your numeric attributes to have a 0 mean and unit variance using the standard scaler. How to normalize your numeric attributes to the range of 0 to 1 using the min-max scaler. How to normalize using the robust scaler. When to choose standardization or normalization. Let’s get started. Why Should You Standardize / Normalize Variables: Standardization: Standardizing the features around a center of 0 with a standard deviation of 1 is important when we compare measurements that have different units. Variables that are measured at different scales do not contribute equally to the analysis and might end up creating a bias. For example, a variable that ranges between 0 and 1000 will outweigh a variable that ranges between 0 and 1. Using these variables without standardization effectively gives the variable with the larger range a weight of 1,000 in the analysis. Transforming the data to comparable scales can prevent this problem. Typical data standardization procedures equalize the range and/or data variability. Normalization: Similarly, the goal of normalization is to change the values of numeric columns in the dataset to a common scale, without distorting differences in the ranges of values. For machine learning, not every dataset requires normalization. It is required only when features have different ranges. For example, consider a data set containing two features, age and income, where age ranges from 0–100 while income ranges from 0–100,000 and higher. Income is about 1,000 times larger than age, so these two features are in very different ranges. When we do further analysis, like multivariate linear regression, for example, the attribute income will intrinsically influence the result more due to its larger values. But this doesn’t necessarily mean it is more important as a predictor. So we normalize the data to bring all the variables to the same range. When Should You Use Normalization And Standardization: Normalization is a good technique to use when you do not know the distribution of your data or when you know the distribution is not Gaussian (a bell curve). Normalization is useful when your data has varying scales and the algorithm you are using does not make assumptions about the distribution of your data, such as k-nearest neighbors and artificial neural networks. Standardization assumes that your data has a Gaussian (bell curve) distribution. This does not strictly have to be true, but the technique is more effective if your attribute distribution is Gaussian.
Standardization is useful when your data has varying scales and the algorithm you are using does make assumptions about your data having a Gaussian distribution, such as linear regression, logistic regression, and linear discriminant analysis. Dataset: I have used the Lending Club Loan Dataset from Kaggle to demonstrate examples in this article. Importing Libraries: import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt Importing dataset: Let’s import three columns — Loan amount, int_rate and installment — and the first 30,000 rows in the data set (to reduce the computation time): cols = ['loan_amnt', 'int_rate', 'installment'] data = pd.read_csv('loan.csv', nrows = 30000, usecols = cols) If you import the entire data, there will be missing values in some columns. You can simply drop the rows with missing values using the pandas dropna method. Basic Analysis: Let’s now analyze the basic statistical values of our dataset. data.describe() The different variables present different value ranges, and therefore different magnitudes. Not only are the minimum and maximum values different, but they also spread over ranges of different widths. Standardization (StandardScaler): As we discussed earlier, standardization (or Z-score normalization) means centering the variable at zero and standardizing the variance at 1. The procedure involves subtracting the mean from each observation and then dividing by the standard deviation: z = (x - μ) / σ. The result of standardization is that the features will be rescaled so that they’ll have the properties of a standard normal distribution with μ=0 and σ=1, where μ is the mean (average) and σ is the standard deviation from the mean. CODE: StandardScaler from scikit-learn removes the mean and scales the data to unit variance. We can import the StandardScaler method from scikit-learn and apply it to our dataset. from sklearn.preprocessing import StandardScaler scaler = StandardScaler() data_scaled = scaler.fit_transform(data) Now let's check the mean and standard deviation values: print(data_scaled.mean(axis=0)) print(data_scaled.std(axis=0)) As expected, the mean of each variable is now around zero and the standard deviation is set to 1. Thus, all the variable values lie within the same range. print('Min values (Loan Amount, Int rate and Installment): ', data_scaled.min(axis=0)) print('Max values (Loan Amount, Int rate and Installment): ', data_scaled.max(axis=0)) However, the minimum and maximum values vary according to how spread out the variable was to begin with, and are highly influenced by the presence of outliers. Normalization (MinMaxScaler): In this approach, the data is scaled to a fixed range — usually 0 to 1. In contrast to standardization, the cost of having this bounded range is that we will end up with smaller standard deviations, which can suppress the effect of outliers; at the same time, because the minimum and maximum are computed directly from the data, MinMaxScaler is sensitive to outliers. A Min-Max scaling is typically done via the following equation: X_scaled = (X - X.min) / (X.max - X.min). CODE: Let’s import MinMaxScaler from scikit-learn and apply it to our dataset. from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() data_scaled = scaler.fit_transform(data) Now let’s check the mean and standard deviation values. print('means (Loan Amount, Int rate and Installment): ', data_scaled.mean(axis=0)) print('std (Loan Amount, Int rate and Installment): ', data_scaled.std(axis=0)) After MinMaxScaling, the distributions are not centered at zero and the standard deviation is not 1.
print('Min (Loan Amount, Int rate and Installment): ', data_scaled.min(axis=0)) print('Max (Loan Amount, Int rate and Installment): ', data_scaled.max(axis=0)) But the minimum and maximum values are standardized across variables, different from what occurs with standardization. Robust Scaler (Scaling to median and quantiles): Scaling using median and quantiles consists of subtracting the median from all the observations and then dividing by the interquartile difference. It scales features using statistics that are robust to outliers. The interquartile difference is the difference between the 75th and 25th quantiles: IQR = 75th quantile - 25th quantile. The equation to calculate scaled values: X_scaled = (X - X.median) / IQR. CODE: First, import RobustScaler from scikit-learn. from sklearn.preprocessing import RobustScaler scaler = RobustScaler() data_scaled = scaler.fit_transform(data) Now check the mean and standard deviation values. print('means (Loan Amount, Int rate and Installment): ', data_scaled.mean(axis=0)) print('std (Loan Amount, Int rate and Installment): ', data_scaled.std(axis=0)) As you can see, the distributions are not centered at zero and the standard deviation is not 1. print('Min (Loan Amount, Int rate and Installment): ', data_scaled.min(axis=0)) print('Max (Loan Amount, Int rate and Installment): ', data_scaled.max(axis=0)) Nor are the minimum and maximum values set to fixed upper and lower bounds, as they are with the MinMaxScaler. I hope you found this article useful. Happy learning!
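As a quick postscript, here are the three scalers from this article side by side on a tiny example with one outlier (the values are illustrative):

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

# One feature with a single outlier (100) to show how each scaler reacts.
X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler()):
    print(scaler.__class__.__name__, scaler.fit_transform(X).ravel().round(2))

# StandardScaler: zero mean and unit variance, but the outlier stretches the scale.
# MinMaxScaler:   everything lands in [0, 1]; the inliers get squeezed near 0.
# RobustScaler:   centered on the median and scaled by the IQR, so the
#                 inliers keep a sensible spread despite the outlier.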
https://medium.com/towards-artificial-intelligence/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff
['Swetha Lakshmanan']
2020-06-11 16:42:06.956000+00:00
['Machine Learning', 'Python', 'Data Analysis', 'Data Science', 'Programming']
As Lyft Heads Toward IPO, Does Its Key Metric Justify A Unicorn Valuation?
By Clement Thibault With the announcement in late February that Lyft, the San Francisco-based ride-hailing service, was planning to start its roadshow in early March — the run-up to an initial public offering — the 2019 IPO season is finally underway. Lyft is one of a number of closely watched, privately held unicorn companies whose public listing has been highly anticipated by investors. Other firms expected to go public this year include Uber, Airbnb, and Slack. After delays caused by U.S. President Donald Trump’s partial government shutdown, Lyft is the first one out of the gate. Though its official IPO date is still unknown, it’s expected to happen in the coming weeks, possibly as early as the end of March. The company will likely trade under the ticker symbol LYFT. Lyft’s S-1 filing, the mandatory financial disclosure form that precedes an IPO, was made public last Friday. Here are the numbers and key fundamentals that can and will affect both the offering and its forward valuation. Lyft Valuation A word of warning first: the risks associated with investing in any initial public offering are higher than a similar investment in equities that have been trading for a considerable period of time. Newly publicly traded stocks tend to be more volatile, and with little historical data it’s more difficult to call which way shares might head next. As well, even with Lyft’s S-1 information, there’s much that remains unknown about the company. We’ll have to wait for its quarterly reports for answers to those questions. Still, we now have a better understanding of many of Lyft’s fundamentals. The company, which competes directly with Uber, Gett and traditional taxi services, operates a platform that matches independent drivers with customers looking for transportation, via Lyft’s app. Much like Uber, Lyft collects service fees and commissions from drivers for using the company’s platform. Those fees account for Lyft’s total revenue stream right now. Estimates for Lyft’s valuation range between $20 billion and $25 billion. There won’t be an exact number released until just days before the IPO, so for this post we’ll assume it will be in the $22 billion range. Revenue And Net Income The S-1 provides insight into the company’s financials for the past three years. Lyft’s revenue growth has been substantial. In 2016, the company brought in $343 million. That grew to $1.05 billion in 2017, then doubled to $2.15 billion in 2018. Over two years, that’s an impressive increase of more than 500%. However, like many young companies fueled by venture capital, net income has not kept pace, nor, to be honest, has it been a point of emphasis thus far. In 2016 and 2017, Lyft lost $682 million and $688 million, respectively. Losses were even higher in 2018: $911 million. Reasons given were the costs of developing the platform, plus additional budgeting for marketing, R&D and operations. One positive: the losses-to-revenue ratio is shrinking. In 2016, losses were twice the size of revenue; by 2018, losses were just 42% of the size of revenue. Nonetheless, we don’t see Lyft becoming profitable anytime soon. But then Wall Street is infamous for caring more about growth than profits. As such, Lyft is likely to get a pass on profitability if it can manage to continue its impressive growth streak. Key Make-Or-Break Metric When it comes to growth, more than anything else, Wall Street loves user growth.
Not-yet-profitable companies such as Netflix (NASDAQ:NFLX), Twitter (NYSE:TWTR) and most notably Tesla (NASDAQ:TSLA) have been boosted or hammered solely on this metric. At the same time, investors almost completely disregarded their revenues and profits. We believe Lyft will be judged similarly, with investors focusing mostly on its active rider metrics. An active rider is anyone who uses the service at least once during the quarter. Lyft: Active Riders All charts courtesy of Lyft’s S-1 Lyft’s active rider figures are trending upward, growing every quarter, rising from 3.5 million in Q1 2016 to 18.6 million during Q4 2018. Year-over-year, active riders grew 104% between 2016 and 2017, and 57% between 2017 and 2018. Of course, Lyft will need to continue to grow rapidly on this metric in order to justify a $22 billion valuation with $2 billion in revenue and almost $1 billion in losses. Lyft: Revenue per Active Rider To its credit, Lyft has done well at efficiently monetizing its active riders. In Q4 of 2018, the company’s average revenue per active rider was $36.04, compared with $27.34 in the same quarter a year prior. The strong uptick was made possible primarily by fee increases and a reorganization of the driver incentive structure. But growth in the number of trips taken per active rider has slowed considerably. Between Q4 2016 and Q4 2017, the number of rides per active rider accelerated from 7.96 rides to 9.23, +16%. However, in the past year, that metric moved from 9.23 to just 9.59, an uptick of only 4%. Unfortunately, based on the information in its S-1, neither Lyft’s financials nor its growth justifies a $22 billion valuation. And there are additional negatives. Market Positioning Lyft isn’t a market leader. According to reports, it has a 28% share of the U.S. market, while Uber dominates with an approximate 70% share. Its revenue in Q3 2018, $563 million, pales in comparison to Uber’s reported $3 billion over the same period. We believe much of Lyft’s growth is a result of Uber’s management scandals, as well as its legal and safety issues. In addition, since Uber’s S-1 hasn’t yet been made public, it’s difficult to do a true comparison. Once Uber’s filings are completed, we’ll be able to more effectively gauge which of the two companies is measurably better — and might be positioned to deal a serious blow to its competitor. Bottom Line Considering all the known factors — revenue growth, user expansion, competitive strength — it’s difficult to recommend investing in Lyft at the estimated valuation. But IPO valuations, ahead of the event, are often overly optimistic. And newly listed shares tend to rise or plummet in the days following their listing. Over a longer period, it’s possible Lyft shares will provide a better entry point for interested investors. For that reason, the company’s very first earnings report will be crucial. We’ll revisit our recommendation once it’s released.
https://medium.com/investing-com/as-lyft-heads-toward-ipo-does-its-key-metric-justify-a-unicorn-valuation-55b1f12d5e5d
[]
2019-03-07 14:00:07.916000+00:00
['Stocks', 'Business', 'Startup', 'Lyft', 'Stock Market']
Zebras Fix What Unicorns Break
WHY IS IT SO HARD TO BUILD ZEBRA COMPANIES? In the last year we’ve spoken to countless founders, investors, foundations, and thought leaders who believe zebra companies are crucial to our society’s success. Yet zebras struggle for survival because they lack the environment to encourage their birth, let alone to support them through maturity. “I wonder how many change-makers are stuck under the demands of unicorn investors,” said TJ Abood of Access Ventures, who added that he worried about “the opportunity cost to society” under this model. From our conversations with stakeholders, we distilled the most common challenges facing zebra companies: 1. The problem isn’t product, it’s process. Tech isn’t a silver bullet. Building more won’t solve the biggest challenges we face today. An app won’t address the homelessness crisis in San Francisco or unite bitterly divided partisan politicians. The obstacle is that we are not investing in the process and time it takes to help institutions adopt, deploy, and measure the success of innovation, apps or otherwise. 2. Zebra companies are often started by women and other underrepresented founders. Three percent of venture funding goes to women and less than one percent to people of color. Although women start 30 percent of businesses, they receive only 5 percent of small-business loans and 3 percent of venture capital. Yet when surveyed, women — whose founding teams perform better overall than teams composed exclusively of men — say they are in it for the long haul: to build profitable, sustainable companies. 3. You can’t be it if you can’t see it. Look hard outside of Silicon Valley and you’ll find promising zebra companies. But existing and aspiring business owners haven’t seen enough proof that they’ll have a higher chance of becoming financially successful and socially celebrated if they follow sustainable business practices. They lack heroes to emulate, so they default to the “growth at all costs” model. Imagine if every fund allocated a small percentage for zebra experiments. The investing firm Indie.vc has bravely stepped into this space, but it shouldn’t stand alone. 4. Zebras are stuck between two outdated paradigms, nonprofit and for-profit. For young companies pursuing both profit and purpose, the existing imperfect structures (hybrid for-profit/nonprofit, Public Benefit Corps, B-Corps, L3Cs) can be prohibitively expensive. The expense comes not only in legal fees, but in the consumption of a founder’s most precious commodity: time. Months are lost searching for aligned, strategic investors who are both familiar and comfortable with alternative models. This presents a chicken-and-egg problem for foundations, philanthropists, and investors alike. They are spooked by unproven alternative models, but companies can’t prove their models work without funding for those experiments in the first place. Moreover, the current tax system doesn’t reward — or even acknowledge — anything other than for-profit (tax) or nonprofit (deduction) strategies. From the IRS’s perspective, there is nothing akin to a “50 percent financial return, 50 percent social impact” investment. This leaves many potential investors in a straitjacket. 5. Impact investing’s thesis is detrimentally narrow and risk-averse. Much of the $36 billion in impact investment funding is restricted to verticals like clean technology, microfinance, or global health. This immature market limits innovation in other sectors — like journalism and education — that could desperately use it.
“So how will investors turn a profit and mitigate risks?” you may be asking. Dividends? Equity crowdfunding? We don’t have all the answers. But we’ve seen how a company’s business model and values can negatively affect the bottom line (#deleteuber). So what if the opposite is also true? What if more-enlightened dollars invested in more-enlightened companies led to stronger returns? What if companies that stood for something were in fact more profitable? Patagonia, Warby Parker, Zingerman’s, Etsy, Mailchimp, Basecamp, and Kickstarter are a start — but the world needs so much more. MAKE ZEBRAS: JOIN US If you believe technology and capital must do better, if you are building a zebra company or want to help carve out a space for them to thrive: join us. Our goal is to gather zebra founders, philanthropists, investors, thinkers, and advocates to meet in person this year for DazzleCon (November 15–17 in Portland, Oregon) — a group of zebras is called a dazzle! — to learn from one another and pool resources, ideas, and best practices, to collectively advance this set of ambitions. From this gathering, we will capture and share the unique patterns that zebra founders and funders are finding, and we’ll turn a loose network into a powerful, cohesive movement. Are you in? Go here.
https://medium.com/zebras-unite/zebrasfix-c467e55f9d96
['Jennifer', 'Mara', 'Astrid']
2017-07-13 16:11:08.367000+00:00
['Women In Tech', 'Social Enterprise', 'Startup', 'Ethics', 'Venture Capital']
Understanding the Analytic Development Lifecycle
As discussed in one of my earlier articles, it is critical to understand the analytic development lifecycle. Your analytics will not last forever; instead, they become obsolete and need to be retired. As you collect more and newer data, you will need to continue maintaining your current analytics while creating new ones. It is essential to understand what you will need to do at each stage of the analytic lifecycle. As seen below, I view the analytic lifecycle as having five critical components: R&D, Deployment, Testing & Validation, Maintenance, and Retirement. So let’s walk through each element together! Analytic development lifecycle — image created by the author using LucidChart Research and Development Research and development encompasses the first few steps in the analytic development lifecycle. After receiving your data, you will spend time looking it over, understanding its structure, and cleaning it. As you go about this process, you need to consider what analytic opportunities you see in the data, any business problems you are aware of, and how they overlap. As you work through developing your problem statement, you can begin to test out analytic concepts and develop your analytic. This process varies if you are working with existing analytics. For current analytics, you will want to understand the areas that need improvement, the methods you will utilize, and any subject matter expert (SME) input you have gathered to help you through these updates. Having follow-up conversations with an SME will help you validate whether your updates are going in the right direction. I find the research and development phase of the process to be the most interesting, as it is where you can learn the most about your data. I work alongside other data scientists, data engineers, and subject matter experts daily. These individuals help in creating data sets and developing a data dictionary to understand what the data represents. I can then understand how to combine this data with other datasets to build my analyses. I want to make sure I see the larger picture and tell a compelling story before moving further down the process.
https://towardsdatascience.com/understanding-the-analytic-development-lifecycle-2d1c9cd5692e
['Rose Day']
2020-11-08 03:50:11.464000+00:00
['Machine Learning', 'Artificial Intelligence', 'Software Development', 'Technology', 'Data Science']
Civic Data Initiatives
Needless to say, civic data initiatives also differ from governmental institutions, which are reluctant to share any more than they are legally obligated to. Many governments in the world simply dump scanned hard copies of documents on official websites instead of releasing machine-readable data, which prevents systematic auditing of government activities. Civic data initiatives, on the other hand, make it a priority to structure and release their data in formats that are both accessible and queryable. Civic data initiatives also deviate from general purpose information commons such as Wikipedia, because they consistently engage with problems, closely watch a particular societal issue, make frequent updates, and even record from the field to generate and organize highly granular data about the matter. In fact, the purpose of civic data initiatives is not necessarily to inform the public about what is happening, but to provide dependable data based on specific facts and evidence. Civic data initiatives proactively conduct research and converge data from their own field records (interviews and examinations), existing empirical research (other studies), public information (government records to media reports), and private sources (leaks and what not). They organize data into structures, connect the dots, employ data standards, form databases, and provide APIs (Application Programming Interfaces). They systematically publish data supported with analysis and stories. Once generated, civic data moves. It supports advocacy campaigns, helps focus attention on the perpetrators behind a cause, fuels investigative journalism, helps build resilient civil positions, becomes a stepping stone for another NGO’s maneuver against the status quo, and contributes to the development of solidarity among struggling communities. The work of civic data initiatives becomes a useful reference for anyone who cares about societal issues, including journalists, activists, advocates, lawyers, artists, designers, technologists, academics and other civil society organizations. As a civic database evolves, a distinctive vocabulary emerges, a vocabulary that prioritizes civil society and freedoms as opposed to the status quo. When applications and interfaces use such data, the vocabulary would circulate with protocological interventions, which would allow the public to explore the issues from the perspective of civil society instead of the government and corporations. Thus, it would help gain positions of influence that can develop counter-hegemony for the socialist movement, as Gramsci puts it in his writings on the War of Position / War of Manoeuvre. In fact, systematic abuse of power, bluntly oppressive or subtly hypocritical, has to be confronted with systematic struggle. Civic data work emerges as one particular mode of contributing to such struggle. Civic data initiatives Several civic data initiatives generate data on a variety of issues at different geographies, scopes, and scales. The non-exhaustive list below has information on founders, data sources, and financial support. It is sorted according to each initiative’s founding year. Please send your suggestions to contact at graphcommons.com. See more detailed information and updates on the spreadsheet of civic data initiatives. Open Secrets tracks data about the money flow in the US government, so it becomes more accessible for journalists, researchers, and advocates. Founded as a non-profit in 1983 by the Center for Responsive Politics, it gets support from a variety of institutions.
PolitiFact is a fact-checking website that rates the accuracy of claims by elected officials and others who speak up in American politics. Uses on-the-record interviews as its data source. Founded in 2007 as a non-profit organization by the Tampa Bay Times. Supported by the Democracy Fund, Bill & Melinda Gates Foundation, John S. and James L. Knight Foundation, Ford Foundation, Craigslist Charitable Fund, and the Collins Center for Public Policy. Littlesis is a database of who-knows-who at the heights of business and government. Their data derives from government filings, news articles, and other reputable sources. It also provides an API for retrieving the data. Founded in 2009 as a project of the Public Accountability Initiative. Financially supported by institutions including the Sunlight Foundation, Chorus Foundation, and Arca Foundation. OpenCorporates aims to have a URL for every company in the world. Founded as a for-profit company by Chrinon Ltd (registered in the UK) in 2010. Data sources include official company registers, company website scrapes, and recently the World Bank Institute and OKFN. They generate income from paid plans for using their API. OpenSpending maps the money worldwide — that is, it tracks and analyses public financial information globally. It is meant to be a resource for individuals and groups who wish to discuss and investigate public financial information, including journalists, academics, campaigners, and more. Founded as a project in 2011 by the Open Knowledge Foundation, it gets financial support from the Open Knowledge Foundation, 4iP, Open Society Foundation, Knight Foundation, Omidyar Network, and Hewlett Foundation. OpenOil is a consultancy, publishing house and training provider, specialised in open data products and services around natural resources. Provides API access to the database. Founded as a company in 2011 by a Reuters correspondent. Supported by the Shuttleworth Foundation; it also provides consulting to international organizations. EJOLT is a global research project bringing science and society together to catalogue and analyze ecological distribution conflicts and confront environmental injustice. Founded as a platform in 2011 with many constituents. Financially supported by the European Union 7th Framework Programme. Istanbul Worker’s Health and Safety Labor Watch reports on workplace homicides, accidents, occupational diseases and safety conditions across Turkey. Founded in 2011 as a platform by members from the Istanbul Medical Chamber, the Chamber of Turkish Engineers and Architects, scholars from Istanbul, and unions. Financially supported by individual donations, the Istanbul Medical Chamber, Petrol-İş Union, TEK-GIDA İş Union, and voluntary work. Open Duka provides a freely accessible database of information on Kenyan entities. Founded by the Open Institute in 2012, the project scrapes data from various sources, ranging from shareholder and procurement information to legal cases and company information. It is built in partnership with the National Council of Law Reporting and funded by A.T.T.I. InfoAmazonia provides timely news and reports of the endangered Amazon region. Founded by oEco and Internews in 2012, the project combines data from government databases, Open Street Map, and other NGO databases. Supported by ICFJ, Avina, CDKN, and the Skoll Foundation. Networks of Dispossession maps the relations of capital and power in Turkey.
Founded as a working group in June 2013 during the Gezi Park resistance, the project compiles data from government databases, media reports, and company websites. Publishes interactive maps and provides a search engine and an API via the Graph Commons platform. It is voluntary work and does not have any financial support. Poderopedia maps issues of local-regional socio-economic development, public investments, and ecology in Chile, Venezuela, and Colombia. Founded in 2013, the project is funded by the Knight Foundation and the International Center for Journalists. Open Interests is a catalogue of political and commercial actors related to the European Union. Founded by the LobbyFacts project team in 2014, the project compiles data from the European Lobby Register, Register of Expert Groups membership, Financial Transparency System, and Tenders Electronic Daily (TED), and provides a search engine, which can be used to quickly retrieve information about the activities of companies, people and institutions in a European context. The project is supported by Corporate Europe Observatory (CEO), Friends of the Earth Europe (FoEE) and LobbyControl, OKFN Labs and Knight-Mozilla OpenNews. La Fabrique de La loi (The Law Factory) maps issues of local-regional socio-economic development, public investments, and ecology in France. Started in 2014, the project builds a database by tracking bills from government sources, and provides a search engine as well as an API. The partners of the project are CEE Sciences Po, médialab Sciences Po, Regards Citoyens, and Density Design. Mapping Media Freedom identifies threats, violations and limitations faced by members of the press throughout European Union member states, candidates for entry, and neighbouring countries. Initiated by Index on Censorship and the European Commission in 2014. Regional Governance and Local Democracy maps issues of local-regional socio-economic development, public investments, and ecology in Turkey. It commissions researchers and participants from the region, maps the generated data, and publishes it along with video discussions. Founded in 2015 by the Helsinki Citizens’ Assembly and financially supported by the European Commission and Turkey’s Ministry of European Union. Open Contracting Partnership opens up public contracting through disclosure, data and engagement so that the huge sums of money involved are spent honestly, fairly, and effectively. Its data source is the participating governments in its program. Spun out of the World Bank in 2015 as a non-profit, and financially supported by the William and Flora Hewlett Foundation, Omidyar Network, Open Society Foundation, Laura and John Arnold Foundation, GIZ, and Hivos. This is a non-exhaustive list; please send your suggestions to contact at graphcommons.com. See more detailed information and updates on the spreadsheet of civic data initiatives. This article was originally published in Turkish (11.06.2016) and translated to Kurdish (15.06.2016), both published on Bianet.org. Thank you Ahmet Kizilay and Zeyno Ustun for proofreading and suggestions.
https://medium.com/graph-commons/civic-data-initiatives-c4a0f40d9a23
['Burak Arikan']
2017-11-28 16:25:09.673000+00:00
['Journalism', 'Nonprofit', 'Open Data', 'Civictech', 'Data Visualization']
My Earth Story On ClimateTube —By @OlumideIDOWU
📌What is the environmental/climate situation where you live? 📌What are the climate/environmental impacts on your community? 📌Is there anything you have done to help your community build resilience to the environmental/climate crisis facing you? You can watch below:
https://medium.com/climatewed/my-earth-story-on-climatetube-by-olumideidowu-2901d279172c
['Iccdi Africa']
2020-08-01 11:18:15.449000+00:00
['Agriculture', 'Earth', 'Wash', 'Climate Change', 'Renewable Energy']
Build a Sentence Parsing Adventure Game with JavaScript and Compromise
Build a Sentence Parsing Adventure Game with JavaScript and Compromise Building a Representational Sentence Graph in JavaScript In this article I’ll show you how to use the Compromise JavaScript library to interpret user input and translate it to a hierarchical sentence graph. I’ll be using Compromise to interpret player input in an Angular interactive fiction game, but you can use Compromise for many different things including: Analyzing text for places, names, and companies Building a context-sensitive help system Transforming sentences based on tenses and other language rules Learning Objectives In this article we’ll cover: What Compromise is How you can use Compromise to analyze sentences Making inferences about sentence structure based on Compromise Note: this article is an updated and more narrowly scoped version of an older article I wrote on Compromise. This information works with modern versions of Angular as well as modern versions of Compromise. What is Compromise? Compromise is a JavaScript library aiming to be a compromise between speed and accuracy. The aim is to have a client-side parsing library so fast that it can run as you’re typing while still providing relevant results. In this article I’ll be using Compromise to analyze the command the player typed into a text-based game and build out a Sentence object representing the overall structure of the sentence they entered. This sentence can then be used in other parts of my code to handle various verbs and make the application behave like a game. Installing and Importing Compromise To start with Compromise, you first need to install it as a dependency. In my project I run npm i --save compromise to save the dependency as a run-time dependency. Next, in a relevant Angular service I import Compromise with this line: import nlp from 'compromise'; Thankfully, Compromise includes TypeScript type definitions, so we have strong typing information available, should we choose to use it. String Parsing with Compromise Next let’s look at how Compromise can be used to parse text and manipulate it. Take a look at my parse method, sketched a little further down. Here I use nlp(text) to have Compromise load and parse the inputted text value. From there I could use any one of a number of methods Compromise offers, but the most useful thing for my specific scenario is to call .termList() on the result and see what Compromise has inferred about each word in my input. Note: the input text doesn’t have to be a single sentence; it could be several paragraphs, and Compromise is designed to function at larger scales should you need to analyze a large quantity of text. When I log the results of Compromise’s parse operation, I see a list of Term objects. Note that the Term array contains information on a few different things, including: text — the raw text that the user typed. clean — normalized lower-case versions of the user’s input, useful for string comparison. tags — an object containing various attributes that may be present on the term, based on Compromise’s internal parsing rules. This tags collection is the main benefit to Compromise that I’ll be exploring in this article (aside from its ability to take a sentence and break it down into individual terms, as we’ve just seen). Here we see that the tags property of the Open term contains {Adjective: true, Verb: true}.
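Before digging into why Open carries both tags, here is a minimal sketch of the parse step itself (Word and Sentence are the wrapper classes introduced below; their exact shapes are illustrative):

import nlp from 'compromise';

// Parse raw player input into our own Sentence wrapper.
function parse(text: string): Sentence {
  const terms = nlp(text).termList();        // Compromise's per-word analysis
  const words = terms.map(t => new Word(t)); // wrap each Term in our own Word
  return new Sentence(text, words);
}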
Now, why both tags? Because English is a complex language, open can refer to the verb of opening something or to an object’s state, such as an open door. We’ll talk a bit more about this disambiguation later on, but for now focus on Compromise’s ability to recognize English words it knows and make inferences on words it doesn’t know based on patterns in their spelling and adjacent terms. Compromise’s intelligence in this regard is its main selling point for me for this type of application. Compromise gets me most of the way there on figuring out how the user was trying to structure a sentence. This lets me filter out words I don’t care about and avoid trying to codify the entire English language in a simple game project. Adding an Abstraction Layer If you scroll back up to my parse method, you’ll note it has a : Sentence return type specified. This is because I believe in adding abstraction layers around third party code whenever possible. This has a number of benefits: If third party behavior or signatures change significantly, you only need to adapt signatures in a few places, since everything else relies on your own object’s signature If you need to change out an external dependency with another, you just need to re-implement the bits that lead up to the abstraction layer Wrapping other objects in my own makes it easier for me to define new methods and properties that make working with that code easier For Compromise, I chose to implement two main classes: a Word class and a Sentence class. I won’t stress any of the details of either of these implementations except to state that they wrap around Compromise’s Term class while allowing me to do integrated validation and structural analysis of the entire sentence. Validating Sentences Once I have a Sentence composed of a series of Word objects, I can make some inferences on word relationships based on how imperative (command-based) sentences are structured in English. Note that for the purposes of my application I treat all input as a single sentence regardless of punctuation. My validation rules catch cases with multiple sentences fairly easily, so I don’t see a need to distinguish on sentence boundaries. Specifically, I validate that the first word in a sentence is a verb. This makes sense only for imperative sentences such as Eat the Fish or Walk North, but those are the types of sentences we expect in a game like this. Next I validate that a sentence only contains a single verb (a Term with a Verb tag). Anything with two or more is too complex for the parser to be able to handle. Once these checks are done, I can start to analyze words in relation to each other. Making Inferences about Sentences I operate under the assumption that the sentence is mainly oriented around one verb and zero or more nouns. I then loop over each word in the sentence from right to left and apply the following rules: If the word is an adverb, I associate it with the verb If the word is not a noun, verb, or adverb, I associate it with the last encountered noun, if any. The full method boils down to a single right-to-left pass; a compact sketch appears at the end of this article. Once that’s done, I have a hierarchical model of the sentence, with modifiers attached to the verb or to the nouns they describe. Next Steps With parsing in place, the Sentence object contains a fairly rich picture of the structure of the player’s command. This doesn’t mean that the player’s sentence makes logical or even grammatical sense, or even refers to something present in the game world.
The sentence can, however, be passed off to a specific verb handler for the command entered, which in turn can try to make sense of it and come up with an appropriate reply, though that is beyond the scope of this article, so stay tuned for a future article on game state management.
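In the meantime, here is that compact sketch of the association pass (the isVerb/isNoun/isAdverb flags and the modifiers array are illustrative helpers derived from Compromise's tags, not Compromise APIs):

// Associate modifiers with the verb or with the nearest noun to their right.
function associateWords(sentence: Sentence): void {
  const verb = sentence.words.find(w => w.isVerb);
  let lastNoun: Word | undefined;

  for (let i = sentence.words.length - 1; i >= 0; i--) {
    const word = sentence.words[i];
    if (word.isNoun) {
      lastNoun = word;               // remember the most recently seen noun
    } else if (word.isAdverb && verb) {
      verb.modifiers.push(word);     // adverbs describe the verb
    } else if (!word.isVerb && lastNoun) {
      lastNoun.modifiers.push(word); // e.g. "the", "rusty" attach to "key"
    }
  }
}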
https://medium.com/javascript-in-plain-english/adventure-game-sentence-parsing-with-compromise-c4ead901da54
['Matt Eland']
2020-02-20 07:50:56.229000+00:00
['JavaScript', 'Web Development', 'Typescript', 'Coding', 'Programming']
Query Expansion for Snackable Content
At Media Distillery we have been working on a solution we’ve dubbed Snackable Content™. Using technologies like speech recognition, face recognition, and topic recognition, we can automatically create short clips based on a consumer’s favourite person, topic, or interest from TV content. With this solution, TV operators can present bite-sized content extracted from long-form video content. In this article, we discuss in depth one of these technologies and how it contributes to our innovative Snackable Content™ solution. For the main use-case of Snackable Content™ we look for interesting topic clips in TV content. In this process, we decide which topics to search for and how to present them. We can also allow the user themselves to provide a user query (or “topic”). This way the task becomes more closely related to a classic information retrieval problem, which we refer to as “topic search.” While investigating the topic search use-case, we have experimented with a number of technologies to improve its performance. Let’s now have a look at what we learned from our experiment with query expansion. Query Expansion In a nutshell, query expansion is a common technique used in information retrieval to increase the number of relevant search results (“recall”) by adding related terms to a user query. There are roughly two steps in this process: how to find new terms and how to add these new terms to the existing query. Step 1: How to find new terms Finding new terms can be done in a bunch of different ways. One simple way is by finding synonyms of each term in the original query. You can do this in an automated way for the English language through the readily available WordNet lexical database, which has modeled these types of word relations. The benefit of this method is that the terms are guaranteed to be relevant to the input term, which is not true of every method. One of the downsides is that lexical databases in languages other than English aren’t as readily available, which in our case could make it more difficult for us to scale our solution to other countries. Another, more scalable method is using word embeddings. Each word embedding represents a point in high-dimensional space, where the embeddings are trained in such a way that word embeddings that are closer together represent words that are semantically related. By converting each term in the user query to a word embedding we can find semantically related terms. A common method to accomplish this is called Word2Vec. Many more methods have been developed since its original creation, but Word2Vec has remained very popular. Since it’s easy to start and scale with, this is the method we’ve mostly been focusing on. Step 2: How to add these new terms to the existing query Once you have the expanded terms, the next part is to create the expanded query. This can be as trivial as appending the expanded terms to the original terms. A possible side-effect of this — and query expansion in general — is that the expanded terms may cause the user query to drift too far from what the user originally intended. A simple way to mitigate this is to introduce a form of weighting on the expanded query. The weight of the expanded terms, w.r.t. finding related clips, is reduced as compared to the original terms. If and how this would work depends greatly on how the search is done. Now we would like to demonstrate, through some code examples, our experiences and main findings using Word2Vec approaches for query expansion.
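The pieces of the demo below can be condensed into one short sketch using Gensim's pretrained vectors (the model name, threshold, and printed examples are illustrative choices, not our internal setup):

# pip install gensim
import gensim.downloader as api
from gensim.utils import simple_preprocess

# Pretrained Word2Vec vectors; downloaded and cached on first use.
model = api.load("word2vec-google-news-300")

def expand_query(query: str, threshold: float = 0.5) -> str:
    """Append related terms (above a confidence threshold) to the query."""
    terms = simple_preprocess(query)  # lowercase + tokenize: "clean energy" -> ["clean", "energy"]
    expanded = list(terms)
    for term in terms:
        try:
            similar = model.most_similar(term, topn=10)
        except KeyError:              # out-of-vocabulary term: skip it
            continue
        expanded += [word for word, score in similar if score >= threshold]
    return " ".join(expanded)

print(expand_query("cooking"))       # tends to add useful cooking-related context
print(expand_query("clean energy"))  # per-term expansion can lose the "sense"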
Demo We start by setting up a Python 3.6 virtual environment. We’ll be working with the Word2Vec models provided by the library Gensim. These are not the models we used internally, but they are adequate for this demonstration. Start by installing the library itself. The model will automatically be downloaded and stored locally the first time you try to load it in a script. Now that we have a model, let’s see what we get with a simple topic like “cooking”. We get back a list of the top 10 (by default) most similar terms to the input term and an associated confidence score for each. We could directly add these terms to the query, but we may end up adding terms that are completely unrelated to the original query. We can mitigate this by keeping only terms with a high confidence score. Now what if we want to expand user queries that contain multiple words? As expected, most Word2Vec models are trained on single words. The model doesn’t recognize “clean energy” as a single term, but it surely will recognize “clean” and “energy” separately. This means we need some extra preprocessing. This step can get much more complex, but for now we’ll keep it simple and use an existing utility from Gensim. As we see, we can still expand the query by expanding each term separately. Lastly, we create a utility function to combine the preprocessing, thresholding, and query (re-)building. This approach appears to work well enough for the query “cooking”; the model adds useful contextual information to the query. Unfortunately, for the “clean energy” example it mostly adds noise. Words like “stick” and “value” have very little to do with “clean energy”. Putting it differently, through preprocessing we have lost the meaning, or “sense”, of the query. We can address this issue using Sense2Vec. Sense2Vec is a word embedding model that was trained to preserve the sense of its words. For example, “clean” and “energy”, separately, have different meanings than “clean energy” together. For our use-case, this means the model can work directly with multi-word phrases and preserve the sense within them. We can easily access the model through a handy RESTful API without having to download it. Using that API, let’s try the same query as before. To re-use the previous expand_query function we’ll need a small wrapper class and an extra dependency. We lowered the threshold for the “cooking” query for demonstrative purposes. We see it is able to expand the query with words that are intuitively more closely related to the original query. With the “clean energy” example, the expanded terms show that it is able to preserve the original sense of the query. Concluding Thoughts During the experimentation on our platform, we found that the implementation of query expansion did not significantly improve our topic search use-case in finding more related content, but it certainly didn’t hurt our overall performance either. For some topics, query expansion introduced relevant results that we otherwise wouldn’t have found, and for others it introduced irrelevant results, resulting in a small overall improvement that we concluded wasn’t statistically significant. We only had a limited amount of annotated data available, which was most likely a big factor in this result. Regardless, we gained some useful knowledge from our experimentation: We only looked at very simple user queries.
While this was already sufficient to get decent results, there are much more advanced tokenization and preprocessing steps that can be applied to deal with more complex queries. With both approaches, we needed to set some kind of threshold. Setting it too low meant potentially too many irrelevant terms were added to the query. Conversely, setting it too high meant nothing might be added to the query. We didn’t go in depth to figure out a good setting for these thresholds, but they are still worth exploring further. You can’t expand out-of-vocabulary words. Concrete examples are “corona” or “COVID”. These terms either didn’t exist yet when the models were trained or didn’t carry the same weight they do now. This means any model we use would need to be frequently updated to keep up with current affairs. Query expansion is more focused on recall, whereas for our use-case precision is much more important. Often, not finding any results is better than finding irrelevant results. We did not explore the usefulness of lexical databases like WordNet for query expansion. (One of our issues was introducing irrelevant results.) This approach could help to mitigate that; a minimal example appears at the end of this article. While the output from our experiments with query expansion for topic search wasn’t a smashing success, we did get multiple interesting findings, and there are plenty more avenues to explore. Based on these findings and experiences, we continue to be excited to explore the possibilities of query expansion in future projects. We hope you gained some useful insights as well: whether your use-case could benefit from this approach to query expansion, how you could quickly and easily get started, and how you could deal with some of the issues you may encounter in the process.
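For readers who want to try the WordNet route mentioned above, a minimal synonym-based expansion with NLTK might look like this (the lack of any threshold is a deliberate simplification):

from nltk.corpus import wordnet  # one-time setup: nltk.download('wordnet')

def wordnet_expand(term: str) -> set:
    """Collect WordNet synonyms for a single term."""
    synonyms = set()
    for synset in wordnet.synsets(term):
        for lemma in synset.lemmas():
            synonyms.add(lemma.name().replace("_", " "))
    synonyms.discard(term)  # don't repeat the original term
    return synonyms

print(wordnet_expand("energy"))
# Synonyms are relevant by construction, though WordNet is English-only,
# which makes this harder to scale to other markets.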
https://mediadistillery.medium.com/query-expansion-for-snackable-content-62bc17853720
['Media Distillery']
2020-12-03 14:36:58.549000+00:00
['Snackable Content', 'Query Expansion', 'AI', 'Machine Learning']
The Ultimate Beginners Guide to Regression in Python
The Ultimate Beginners Guide to Regression in Python Machine Learning is making the computer learn from studying data and statistics Photo by Antoine Dautry on Unsplash Machine Learning is a step in the direction of artificial intelligence (AI). Machine Learning is a program that analyses data and learns to predict the outcome. What is Regression? The term regression is used when you try to find the relationship between variables. In Machine Learning and statistical modeling, that relationship is used to predict the outcome of future events. Linear Regression Linear regression uses the relationship between the data points to draw a straight line through all of them. This line can be used to predict future values. Python has methods for finding a relationship between data points and for drawing a line of linear regression. We will show you how to use these methods instead of going through the mathematic formula. An example: x = [5,7,8,7,2,17,2,9,4,11,12,9,6] y = [99,86,87,88,111,86,103,87,94,78,77,85,86] plt.scatter(x, y) plt.show() This displays a scatter plot: Image By Author Import ‘scipy’ and draw the line of linear regression: import matplotlib.pyplot as plt from scipy import stats Create the arrays that represent the values of the x and y-axis: x = [5,7,8,7,2,17,2,9,4,11,12,9,6] y = [99,86,87,88,111,86,103,87,94,78,77,85,86] Execute a method that returns some key values of linear regression: slope, intercept, r, p, std_err = stats.linregress(x, y) Create a function that uses the ‘slope’ and ‘intercept’ values to return a new value. This new value represents where on the y-axis the corresponding x value will be placed: def myfunc(x): return slope * x + intercept Run each value of the x array through the function. This will result in a new array with new values for the y-axis: mymodel = list(map(myfunc, x)) Draw the original scatter plot: plt.scatter(x, y) Draw the line of linear regression: plt.plot(x, mymodel) Display the diagram: plt.show() Multiple Regression Multiple regression is like linear regression, but with more than one independent value, meaning that we try to predict a value based on two or more variables. Table by W3Schools — Image by Author We can predict the CO2 emission of a car based on the engine’s size, but with multiple regression we can throw in more variables, like the car’s weight, to make the prediction more accurate. In Python, we have modules that will do the work for us. Start by importing the Pandas module. import pandas The Pandas module allows us to read CSV files and return a DataFrame object. df = pandas.read_csv("cars.csv") Then make a list of the independent values and call this variable X. Put the dependent values in a variable called y. X = df[['Weight', 'Volume']] y = df['CO2'] We will use some methods from the sklearn module, so we will have to import that module as well: from sklearn import linear_model From the sklearn module, we will use LinearRegression to create a linear regression object.
regr = linear_model.LinearRegression() regr.fit(X, y) Now we have a regression object that is ready to predict CO2 values based on a car’s weight and volume: predictedCO2 = regr.predict([[2300, 1300]]) Full Code Example import pandas from sklearn import linear_model df = pandas.read_csv("cars.csv") X = df[['Weight', 'Volume']] y = df['CO2'] regr = linear_model.LinearRegression() regr.fit(X, y) #predict the CO2 emission of a car where the weight is 2300kg, and the volume is 1300ccm: predictedCO2 = regr.predict([[2300, 1300]]) print(predictedCO2) Polynomial Regression Polynomial regression, like linear regression, uses the relationship between the variables x and y to find the best way to draw a line through the data points. Python has methods for finding a relationship between data points and for drawing a line of polynomial regression. We will show you how to use these methods instead of going through the mathematic formula. In the example below, we have registered 18 cars as they were passing a certain tollbooth. The x-axis represents the hours of the day, and the y-axis represents the speed: import matplotlib.pyplot as plt x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22] y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100] plt.scatter(x, y) plt.show() Result Image by Author Import the modules you need: import numpy import matplotlib.pyplot as plt Create the arrays that represent the values of the x and y-axis: x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22] y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100] NumPy has a method that lets us make a polynomial model: mymodel = numpy.poly1d(numpy.polyfit(x, y, 3)) Then specify how the line will display; we start at position 1 and end at position 22: myline = numpy.linspace(1, 22, 100) Draw the original scatter plot: plt.scatter(x, y) Draw the line of polynomial regression: plt.plot(myline, mymodel(myline)) Display the diagram: plt.show() It is essential to know how strong the relationship between the values of the x- and y-axis is; if there is no relationship, polynomial regression cannot be used to predict anything. The relationship is measured with a value called the r-squared. The r-squared value ranges from 0 to 1, where 0 means no relationship and 1 means 100% related. How well does my data fit in a polynomial regression? import numpy from sklearn.metrics import r2_score x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22] y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100] mymodel = numpy.poly1d(numpy.polyfit(x, y, 3)) print(r2_score(y, mymodel(x))) The result, 0.94, shows that there is a very good relationship, and we can use polynomial regression in future predictions. Predict Future Values Now we can use the information we have gathered to predict future values. Predict the speed of a car passing at 5 P.M. (hour 17): import numpy from sklearn.metrics import r2_score x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22] y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100] mymodel = numpy.poly1d(numpy.polyfit(x, y, 3)) speed = mymodel(17) print(speed) Conclusion I hope that after this article you have a basic understanding of regression and how to use it in Python, and that you will be able to run some Machine Learning scripts yourself!
https://medium.com/python-in-plain-english/the-ultimate-beginners-guide-to-regression-in-python-e85bc328c10d
['Bryan Dijkhuizen']
2020-11-22 08:31:45.836000+00:00
['Programming', 'Data Science', 'Technology', 'Software Development', 'Software Engineering']
How My Life Changed from Being Orthodox to Exotic
A family member of mine was visiting me once for the night. He was yawning, tired from the 5-hour drive, his eyes as heavy as a bag of potatoes. I suggested politely, "You can retire for the evening, you look tired, we can chat tomorrow". "It's only 11:00 pm" came the yawning reply. So all this breaking away from the norms, the urge to look and act different, created a culture. A way of life. That lifestyle is now, I believe, followed by the majority. The originality of this movement, Do whatever you want, is now lost, to say the least. Lately, it only translates to doing the opposite, as much as possible, of what everyone else is doing. Your personal will is only discounted if it leans toward the old ways. Otherwise, it's just fine.

How Tradition Became Exotic

Since custom has its place in our lives, the new custom was to be fashionably late for work, for meetups, and basically for everything. Be always occupied in activities not because you like them but because they are the talk of the town. This has become the new normal. The old ways now stand unique. Since we are so enticed by breaking away from society, we find being able to do things the right way (read: the old way, how it was done originally, before all that do-whatever-you-want thing) exotic. It seems out of the ordinary. This burning itch to look different has now again brought the spotlight onto how to wake up early in the morning. How to live your life with discipline, how to be a gentleman. A plethora of articles are written on how to be organized and live a life of purpose. There is an entire movement with articles, apps and routines to help people wake up early in the morning.

So where do I fit in?

What was once a routine has now become a fashion. More than two decades of my seemingly mundane, extremely boring, colourless, dull, and stock lifestyle have made me an icon of perfectionism. What adds to the personality is my ability to do it without much effort. Still, people don't appreciate me for what I do. They like it because it is the extreme opposite of what people are normally doing. What was once a common practice, a stock, a custom has now become unique, out of the ordinary, highly customized, and against the newly formed tradition. Going to bed at 10 o'clock is exotic (seriously). Being able to limit your life to a few things so you have free time seems like a fairy tale. Ironed clothes and being on time are like playing a character from a movie, because who does that today? Nobody. That's what makes it so exciting. Rest in peace, "Do whatever you want".

Join My Newsletter! It's free. Only one email per week with my highly popular articles.
https://medium.com/illumination-curated/how-my-life-changed-from-being-orthodox-to-exotic-82ab35402a6f
['Ahsan Chaudhry']
2020-12-29 16:22:12.013000+00:00
['Life Lessons', 'Inspiration', 'Self Improvement', 'Life', 'Creativity']
5 Things I Have Learned Using the M1 MacBook Air
4. Install Homebrew

Homebrew is handy when managing packages on Macs. As a software developer, I use Homebrew to install Ruby, Python, Git, and lots of other software. Installing Homebrew on Intel-based Macs is straightforward:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

But when running the command on the M1 chip MacBook Air, there is an error:

Homebrew is not (yet) supported on ARM processors!
Rerun the Homebrew installer under Rosetta 2.
If you really know what you are doing and are prepared for a very broken experience you can use another installation option for installing on ARM: https://docs.brew.sh/Installation

Error when installing Homebrew by Eric Yang

There are different ways to install Homebrew on ARM-based Macs.

Using the unstable, in-development ARM-based Homebrew

By following the installation documentation, we first make a separate folder to install Homebrew:

% cd /opt
% mkdir homebrew && curl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew
% sudo chown -R $(whoami) /opt/homebrew

And add these paths to the environment:

% sudo nano /etc/paths

Add the two paths:

/opt/homebrew/bin
/opt/homebrew/opt

Then restart the terminal and run brew update.

Running with the prefix arch -x86_64 in an ARM-based terminal

Use the following command to install Intel-based Homebrew (the usual installer, prefixed with arch -x86_64):

arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

And then use it by prefixing commands with arch -x86_64:

arch -x86_64 brew update

Running terminal from Rosetta 2

To run the terminal/iTerm from Rosetta 2, right-click on the app in Applications, then select Get Info and tick Open using Rosetta. When launching the terminal/iTerm, it will now automatically run in Rosetta 2. The command for Intel-based Macs works now!

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
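One extra check the article doesn't mention, but which is handy when juggling native and Rosetta 2 terminals (these are standard macOS commands, not from the original post):

# Prints arm64 in a native Apple Silicon shell, x86_64 under Rosetta 2
uname -m

# Run a single command under Rosetta 2 without switching the whole terminal
arch -x86_64 zsh -c "uname -m"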
https://medium.com/better-programming/5-things-i-have-learned-when-using-the-m1-chip-macbook-air-a77f93c50381
['Eric Yang']
2020-12-04 00:13:31.399000+00:00
['Programming', 'Apple', 'Software Development', 'Xcode', 'Macos']
Quick & Easy Alerting for Apache Airflow
A Task Doomed to Fail

As our starting point, let's define a DAG with a single task that runs a simple BashOperator.

Our eternally failing task.

We've made sure this operator always fails by simply calling exit 1, so we can test what we want to do off the back of the failure.

The Power of BaseOperator

All operators in Airflow inherit from the BaseOperator class, which means for any operator we're using, including our BashOperator, we'll have access to the parameters available in this class. The BaseOperator parameter we'll be taking advantage of specifically is:

on_failure_callback (callable) — a function to be called when a task instance of this task fails. a context dictionary is passed as a single parameter to this function. Context contains references to related objects to the task instance and is documented under the macros section of the API.

This can be configured to call a custom function in the event of our operator failing, which is exactly what we're going to do next with the BashOperator we defined earlier.

Our task now does something when it fails, thanks to on_failure_callback.

The do_something function that's triggered when our task fails currently isn't defined, but we'll get there. The context dictionary that's mentioned in the documentation is also key for us; it will allow us to get metadata about the instance of the task that has just failed. Later on we'll use this context dictionary to customise our alerts, making them informative and user friendly.

Setting up our Connection

Airflow comes with a bunch of built-in hooks; some of these are published by the Airflow team and others are community contributed. We'll be using the community-contributed OpsgenieAlertHook in our example, but you can find pretty much anything you need here. If you're using a different alerting platform with an API, you can achieve pretty much anything with HttpHook, which is also the parent class of OpsgenieAlertHook. For this hook to work we need to configure a connection to Opsgenie. Looking at the code for this hook, it's already set up to use the existing opsgenie_default connection that comes with Airflow. For simplicity, I can edit this existing connection in the Airflow UI; this will save and encrypt it in our Airflow database. However, if you're not keen on having this state lying around because you're using the default in-memory Airflow database or haven't encrypted your secrets, then you can also configure connections using environment variables or at run-time in your Airflow code. This is great if you want to grab the credentials from your secrets manager, and it's the way I'd recommend doing this in a production environment. For now we're just going to drop our Opsgenie API key into the password field and save this.

Configuring the Opsgenie connection in the Airflow UI.

Note: When you revisit this connection in the Airflow UI you won't see anything in the password field (it isn't even shown in obfuscated form), however it should be persisted!

Leveraging the Hook

Let's make our do_something function actually… do something; this will be our hook into our alerting platform. Let's rename do_something to opsgenie_hook and import this functionality from a different Python file. This allows us to build something generic and then easily re-use this code in all of our DAG definition files.

We've got a basic hook implemented.

So hook.execute() will, as configured in OpsgenieAlertHook by default, send a POST request to the default Opsgenie alerts API endpoint at https://api.opsgenie.com/v2/alerts.
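Since the embedded Gists aren't reproduced in this text, here is a minimal sketch of what we have at this point: the always-failing task wired up to the basic opsgenie_hook skeleton. The DAG id, schedule and dates are illustrative, written against the Airflow 1.10-era API used in this post:

from datetime import datetime

from airflow import DAG
from airflow.contrib.hooks.opsgenie_alert_hook import OpsgenieAlertHook
from airflow.operators.bash_operator import BashOperator


def opsgenie_hook(context):
    # Basic skeleton: authenticates via the opsgenie_default connection
    # and POSTs an (empty, for now) payload to the Opsgenie alerts API
    hook = OpsgenieAlertHook("opsgenie_default")
    hook.execute({})


dag = DAG(
    dag_id="eternal_failure",          # illustrative DAG id
    start_date=datetime(2019, 6, 21),
    schedule_interval="@daily",
)

task_that_always_fails = BashOperator(
    task_id="task_that_always_fails",
    bash_command="exit 1",             # guarantees a failure to test against
    on_failure_callback=opsgenie_hook, # called with the context dictionary
    dag=dag,
)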
It's able to authenticate against the Opsgenie API utilising the opsgenie_default connection that we just configured. We're supplying the name of this connection as the argument to the OpsgenieAlertHook constructor. As you've probably noticed, the execute function currently takes an empty object, but not for much longer; this will be our JSON payload.

Configuring & Customising our Alert

Remember we talked about the context dictionary earlier? You may have noticed that our opsgenie_hook is taking context as an argument, the contents of which we can use to build and customise our JSON payload.

Our finalised hook, complete with a custom message.

There's a little bit going on here, so let us unpack it piece by piece. Firstly, we're extracting some useful stuff from the context dictionary:

dag is the name of our DAG
task is the name of our Task
ts is the timestamp of when our Task was scheduled to run
log_url is the location of the logs for the instance of this Task on our Airflow instance

These seemed the most useful, but you can see all that's available in the context dictionary in the TaskInstance class. We're using dag, task and a slightly nicer formatted ts to construct a short informative message from which you can immediately discern what's wrong. An example would be: Airflow DAG eternal_failure, failed to run task_that_always_fails, scheduled at 2019-06-21 16:00. We can also embed the log_url as metadata within the JSON payload itself; this makes our alert actionable: anybody responding to this alert can immediately head to the first port of call when troubleshooting an issue. We're replacing localhost with the actual DNS of our Airflow instance to avoid a manual step here too. The rest of this payload is pretty Opsgenie-specific, but of course we're routing the alert to the appropriate team, setting the appropriate priority for the alert, adding useful tags and so on. These are all things you'll obviously want to tweak depending on your own implementation.

Our actionable alert arriving in the Opsgenie platform.

Opsgenie, like any alerting platform, also takes care of notifying us through various channels like email, SMS and Slack. Having a concise, informative and actionable alert is critical when you're on call. It's also great to be able to interact with these alerts via Slack directly.

Interacting with our actionable alert in Slack.

Conclusion

We can all agree it's important to know immediately when and why things aren't working in production, especially in an era where we're responsible for building, running and owning our complete stack end to end. The tasks you run as part of your Airflow DAGs are no exception to this, especially as it's often likely you're holding up some downstream workloads that can be business critical. There's also no reason why setting up this kind of alerting should be a hassle. Airflow delivers that ease with some really simple functionality that's built in and trivial to leverage. All of the Gists embedded here are available publicly, and a concise final version of this example is available here.
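The finalised Gist isn't embedded here either, so based on the description above, a sketch of the finished opsgenie_hook might look roughly like this. The payload fields, the priority, and the airflow.example.com DNS are illustrative placeholders; OpsgenieAlertHook is the Airflow 1.10 contrib hook discussed earlier:

from airflow.contrib.hooks.opsgenie_alert_hook import OpsgenieAlertHook


def opsgenie_hook(context):
    # Extract useful metadata from the context dictionary
    dag = context["dag"].dag_id
    task = context["task"].task_id
    ts = context["ts"]
    log_url = context["task_instance"].log_url

    # Replace localhost with the real Airflow DNS so the link works for responders
    log_url = log_url.replace("localhost", "airflow.example.com")  # hypothetical DNS

    payload = {
        "message": "Airflow DAG {}, failed to run {}, scheduled at {}".format(dag, task, ts),
        "details": {"log_url": log_url},
        "tags": ["airflow"],
        "priority": "P2",  # tweak to suit your own implementation
    }

    hook = OpsgenieAlertHook("opsgenie_default")
    hook.execute(payload)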
https://medium.com/unruly-engineering/quick-easy-alerting-for-apache-airflow-53c3f1ba2ca
['Raouf Aghrout']
2019-06-25 13:37:52.996000+00:00
['Airflow', 'Open Source', 'DevOps', 'Opsgenie', 'Data Engineering']
Shake your hair girl with your ponytail
Dearest darlings,

A song with three acts: an incredibly gorgeous opus to love, youth and beauty. Strap yourselves in: we are going for a musical ride with Roxy Music and their track "If There Is Something" from their first eponymously-titled album "Roxy Music". You know, the one with the girl on. Okay, they all had girls on the cover. The track starts immediately with a honky-tonk country guitar and a western twangy sound, done with all the irony of a boy born in County Durham with an art school hair-flick and his mates having a bit of a laugh.

If there is something that I might find
Look around corners
Try to find peace of mind
I say
Where would you go if you were me
Try to keep a straight course not easy

Immediately you imagine you're in a spit and sawdust bar, probably holding a beer and eyeing up a cowboy. Well, I am. He's seeking approval, validation even, for his wayward behaviour, wanting someone to tell him he's okay. But he doesn't care, almost smirking at us and laughing out the final word "E-e-easy", justifying himself to us. His words are weak: "might" and "try", which is repeated… he's not really putting his back into this, is he?

Somebody special looking at me
A certain reaction we find
What should it try to be I mean
If there are many
Meaning the same
Being specific just a game

Oh now, now we cut to the chase. He's worried what she thinks. And he can't tell if she likes him or not, or how he should behave. Bryan Ferry, hung up on a girl? So he does care, and this time urges "being specific" but then "just a game", which seem to contradict each other. He's accusing her of playing a game, or is he admitting he is? To trick her into thinking that he doesn't care either way? Ahah. No wonder he's fake laughing the last line out. He's scared. The country ditty squawks its way out into the second part of the track, with a real change of pace and sentiment, sax and guitar screeching to a crescendo with the famous three violin track behind:

I would do anything for you
I would climb mountains
I would swim all the oceans blue
I would walk a thousand miles
Reveal my secrets
More than enough for me to share
I would put roses round our door
Sit in the garden
Growing potatoes by the score

His tone has completely changed; now there is no "try" and he's "would"-ing. On the final word of each line, he's almost ululating his basest desires. Suddenly this is serious, he's smitten, and you can hear the desperation in his voice as all these promises come tumbling out. He's practically offering to do a triathlon for her. Or settle down with her in some country idyll, with roses, growing potatoes. Bryan Ferry, "sit in the garden"? Surely he's too cool? It even reads a bit desperate now; in "Love Island" speak this would give me "the ick". We are in no doubt that he is almost driven mad by this, and he exposes his vulnerability to "reveal my secrets more than enough for me to share". He wants her badly. So much so he'll settle into domesticity: the ultimate surrender and commitment. And potatoes: the most mundane and simple of produce to focus on. I just hope he doesn't eat too many himself.
We're still reeling as the track winds on, slowing, whirling with that dirty sax as we catch our breath, the maelstrom ending with just the piano as backing, and we join him much later along the line:

Shake your hair girl with your ponytail
Takes me right back (when you were young)
Throw your precious gifts into the air
Watch them fall down (when you were young)
Lift up your feet and put them on the ground
You used to walk upon (when you were young)

That first line is an absolute belter: a mix of joy and complete abandon. He's practically screaming it to us. Desire and control, obsession and voyeurism. Like Scottie in Hitchcock's "Vertigo", who says to Madeleine/Judy in that famous transformation scene about her hair: "It should be back from your face and pinned at the neck, I told you that — I told YOU that". A man driven mad by his image of a woman just as he wants it. Is Bryan the same? The line is so visual and visceral, like the secretary who unpins her hair and removes her glasses. Don't we all want to be told to "shake our hair", not pin it back.* I toyed with the idea of leaving no comment about "precious gifts"; Urban Dictionary tells me it means "virginity", which could explain it. The final verse becomes almost like a hymn to her, an adoration:

Lift up your feet and put them on the ground
The hills were higher (when we were young)
Lift up your feet and put them on the ground
The trees were taller (when you were young)
Lift up your feet and put them on the ground
The grass was greener (when you were young)
Lift up your feet and put them on the ground
You used to walk upon (when you were young)

He's in control finally, giving her instructions. He wants to relive his youth, their youth, when everything was easier. Hills, trees, grass: a visual, natural yearning for the youth/green when things were simpler. Make her dance for him, like they used to. This is repeated to fade, and his voice breaks with passion and delight. "Lift up" becoming just one word, one cried exaltation at the end. This verse is the final act in his love opera. Beautiful. I'm exhausted and exhilarated, just how you're meant to feel. Like you've fallen in love. Or just had an orgasm.
https://medium.com/a-longing-look/shake-your-hair-girl-with-your-ponytail-a9cb459aaa1
[]
2020-08-06 16:02:09.162000+00:00
['Roxy Music', 'Love Letters', 'Love', 'Music', 'Lyrics']
Valve vs The Gaming World
Valve vs The Gaming World

How CS:GO's Approach to Digital Goods Trumps the Competition

Counter-Strike: Global Offensive revolutionized the video game industry when it came to creating microtransactions that wouldn't affect gameplay but would still incentivize purchases. The CS:GO skin market is one of a kind, and while other companies have tried to emulate it, none of them managed to do so to Valve's degree. In this article, we'll look at Blizzard, Ubisoft, Riot, and Epic to see how their offerings failed to live up to the CS:GO hype, and why CS:GO's model is so hard for others to implement. Using the information from Bronwen Grimes' awesome GDC panel on economy design, we'll discuss what Valve did right and what others did wrong. Let's start with the basic differences between the two approaches and go from there.

To Each Their Own

We've outlined the CS:GO skin economy before, so we'll keep it short this time. For players, skins are a way to modify the look of their weapons. These skins have no bearing on gameplay and can be sold and traded between players using the Steam Marketplace for currency, which can be used to buy other skins, games, and even VR headsets. Meanwhile, the likes of Overwatch, Rainbow Six: Siege, Valorant, and Fortnite all also have the same basic idea: skins to customize weapons. The problem? They're not freely tradeable; they're locked behind loot boxes and season passes. While in some cases players can "grind" their way to other skins by destroying other skins for in-game currency, all skins are pretty much set in stone in terms of rarity and accessibility. Their only value? In-game. Thus we come to the first crux of the problem. If I get a really popular skin in Fortnite, there's no way to securely swap the skin with others. You can gift the skin to another player, but you have virtually no guarantee that the other player doesn't renege on your deal all of a sudden. The other games don't even have workarounds like this for trading. If you want to sell your rare skin, you can only do so by breaking the Terms of Service and selling your account. This makes skins much less of a commodity, and more of a participation trophy. You play enough and you'll probably get a cool skin, which, most of the time, you're stuck with. Don't like blue skins? Tough luck. Can't really build an identity around it? You'll need more money for that. These skins thus become incredibly impersonal; they become achievement showcases rather than expressions of your visual and thematic affinities. This is further boosted by another aspect differentiating the two approaches.

One of a Kind

When watching Grimes' video, you will notice the emphasis she puts on making every skin unique, not just in its design, but also by differentiating every skin with wear levels and pattern variations. This may seem at first like a gimmick to decrease the value of more worn skins, but it's actually a way to increase the uniqueness of each skin. Some wear and pattern combos are capable of increasing a skin's value exponentially. No one skin is the same as another, and with customization options such as stickers (scratchable to achieve a unique look), StatTrak (tracking the number of kills with a weapon), and Name Tags, you can truly make your weapon feel like a thing you own.

An example of a name tag added to the weapon skin by its owner

While the other games sometimes offer similar solutions, they offer no real variation within each weapon skin.
Take, for example, the AWP | Asiimov, one of the most popular CS:GO skins, which people enjoy using in both its least and most worn variants. Compare that with the sameyness of any single Valorant skin, where you can end up in a match with 9 other players sporting literally the same skin as you; the lack of any real differences will make you feel as if your skin was made by H&M. The patterns are an even bigger deal. They can change a lot in how a skin presents itself. Sometimes it's as subtle as the placement of a few buildings on a city landscape; other times they're varying colour patterns that will completely change the way a knife looks. This means that the hundreds of skin designs can actually mean hundreds of thousands of variants, even when discounting wear. All of this works towards Valve's goal as described by Grimes: to create a luxury good. CS:GO skins are like fine wine, collector items made with care to ensure every single one is, at least in some way, unique. The other companies instead produce repeatable, consumer-grade products. They're usable and nice, sure. But they're not unique, and that makes them feel artificial… for more than one reason.

A Labour of Love

Grimes places a lot of emphasis on the fact that Valve wanted the community to get involved with the skin creation process. While a lot of the skins in-game were created by Valve employees, more and more are being created by the community. Community skin creators reportedly earn 6 figures a year per in-game skin, making it a lucrative business opportunity for any up-and-coming artist. On top of engaging the community in a unique way, this approach makes every skin tell a story of sorts. They're more personal than the ones found in other games because they come from real people and represent their artistic influences and personalities. Artisanship over fabrication. Users appreciate that aspect of skins, and that definitely helps Valve sell the concept.

Redefining Digital Goods

The difference between Valve's approach and the others can be summed up in three words: ownership, uniqueness, and community. Valve understood that in order to sell their goods, they needed to make them tangible. Thanks to Steam, Grimes and co. could play around with these concepts more than Epic, Riot, Blizzard, or any other company. That allowed them to explore skins as luxury goods and see the factors that make them unique. Other companies never had that opportunity and decided to go for a far less intricate approach that, mind you, still works for them, but hasn't created a whole new market. With Epic looking to challenge Valve's platform with a store of their own, who knows. Perhaps we'll soon be living in a world where community-created skins will be bought and sold in exchange for video games and other digital products. But with nothing of the ilk on the horizon yet, what's left is to applaud Valve for how well their design philosophy translated into the gaming world, nearly 10 years after they first came up with their revolutionary concept.
https://medium.com/skinwallet/valve-vs-the-gaming-world-5161cb418637
[]
2020-08-20 15:09:40.567000+00:00
['Digital Goods', 'Gaming', 'Future', 'Counter Strike', 'Valve']
The Latest: Apple News+ joins the audio article trend (May 18, 2020)
The Latest: Apple News+ joins the audio article trend (May 18, 2020)

Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews.

THE NEWS

Apple is asking publishers on Apple News+ for permission to create audio versions of some of their stories. According to Digiday, publishers will be compensated with 50% of subscriber revenue based on how much time users spend with the content (the same way revenue is split for written content). Apple will also cover all production costs, and audio articles will be hosted exclusively on its platform.

SO WHAT

Apple is joining a long list of publishers and platforms that are producing audio versions of articles to reach audiences and retain readers. Audio articles are convenient and accessible, hence their appeal to readers. As The Atlantic's head of partnerships Kim Lau told News Media Alliance: "Audio articles are about convenience and delivering content in a way that works best for our readers." For the past few years, a number of publishers and platforms have been investing in producing their own audio stories. Some, like The Washington Post and Financial Times, have experimented with text-to-speech AI services like Amazon Polly. (In 2017, the Financial Times built a "subscriber-only podcast player of audio articles." The app shut down in 2018). HBR partnered with the audio news app Noa in June 2019 and has so far converted 50 articles into audio narrations. Last fall, Google rolled out a new Google Assistant feature that uses an algorithm to deliver users a custom-built audio news stream. Google's product manager of audio news told The Verge that the company hopes to foster an "audio web" ecosystem. The New York Times made a significant investment in the space two months ago when it acquired Audm, a subscription-based audio news startup that produces audio articles for publishers including The Atlantic and The New Yorker. Since launching in 2016, Audm has acquired 20,000 subscribers to its app. Currently, The Times is using Audm to host "The Sunday Read," a new segment of its podcast The Daily that features read-aloud essays from The Times' more recent archive to give listeners a brief respite from COVID-19 coverage (read our Q&A with The Times' Theo Balcomb to learn more about the company's audio efforts). Zetland, a Danish digital magazine featured in NiemanReports, has taken the audio article concept further than perhaps anyone: it offers audio playlists on its app that start with a conversational podcast and transition into audio articles. Zetland co-founder Hakon Mosbech says that these playlists function as a radio: users open the app, press play, and listen until the end of their commute. Publishers have seen positive returns on their investments so far, which suggests that listening to articles is a convenient, or even preferred, way for some audiences to consume their content. Mosbech told NiemanReports that the "average completion rate for an audio story is 90%." HBR began working with Noa upon realizing that it was missing out on "busy would-be readers." Maureen Hoch, editor of HBR.org, told NiemanReports: "If you don't have the time to sit down and read a feature article in the latest issue of the magazine, we're trying to deliver you something that makes it easier to get that."

LOOK FOR

The creation of new and innovative publisher digital audio experiences.
With more audio articles, publishers have the potential to expand their audio presences and build editorially curated audio news feeds (both on their apps and sites). Partnerships with audio journalism apps also present opportunities to capture new audiences. One reason HBR was excited to work with Noa, according to NiemanReports, "was the company's potential to work with Land Rover and Jaguar to get their audio into connected cars" and curate content based on location. Look specifically for how Audm fits into The New York Times' audio strategy. While Audm will continue publishing audio stories for its partners, HotPod's Nicholas Quah suggests that perhaps Audm will contribute to the creation of a Times-owned audio product. Bleacher Report's Noah Chestnut takes it one step further and posits, "What if the NYT was just a play button?" And finally, look for how audiences respond to the use of playlists, especially ones designed to evoke a radio listening experience. Playlists aren't entirely new in the narrative audio space, but they are still a relatively fledgling experience. Smart speaker news briefings have been around for just over five years, while Spotify just launched Podcast Playlists earlier this year.
https://medium.com/the-idea/the-latest-apple-news-joins-the-audio-article-trend-may-18-2020-939ecffa6fc8
['Tesnim Zekeria']
2020-05-19 16:09:07.350000+00:00
['Journalism', 'Radio', 'Media', 'Audio', 'The Latest']
Create Text-To-Speech with Python and gTTS
The code

To be honest, the code is pretty straightforward, as the gTTS library does all the heavy lifting, so I'm going to give you blocks of code and a brief explanation. First, create a file, import two Python libraries, and set our options:

Reading from a string

Now, we create the first of our functions, which will read the text from text_to_read, with the language voice and at normal speed, as slow_audio_speed is false. We create a gTTS object with the options we created at the start and save it to the filename (that is, my_file.mp3). Now we are done, but we want to play the file we have just created. So, we use the os library to play the file with the name filename in the current folder.

Reading from the user's input

Pretty much the same as before, with only one difference: now we ask the user to enter some text to transform into an audio file.

Reading from a file

This is the most complex function, yet still pretty easy to understand. We ask the user to enter the name of a file, we add the .txt extension, we open and read the text, and as always we create the mp3.

Running the script

We only need to declare which function we will use at the end of the code. You can easily switch the function called. Or don't set any function, run the Python interpreter, and keep asking for functions to run with "python -i NAME_FILE.py".
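The code Gists aren't embedded in this text, so below is a sketch of what the full script could look like based on the descriptions above. The variable names follow the article; the playback command assumes macOS's afplay and would differ per operating system:

import os

from gtts import gTTS

# Options referenced throughout the article
language = "en"
slow_audio_speed = False
filename = "my_file.mp3"


def play_audio(text_to_read):
    # Create a gTTS object with our options, save it, then play it
    tts = gTTS(text=text_to_read, lang=language, slow=slow_audio_speed)
    tts.save(filename)
    os.system(f"afplay {filename}")  # assumes macOS; use e.g. mpg123 on Linux


def reading_from_string():
    play_audio("This is just a test")


def reading_from_user_input():
    play_audio(input("Please type something: "))


def reading_from_file():
    file_name = input("Name of the file (without .txt): ") + ".txt"
    with open(file_name, "r") as f:
        play_audio(f.read())


if __name__ == "__main__":
    reading_from_string()  # switch to whichever function you want to run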
https://medium.com/quick-code/lets-learn-about-creating-text-to-speech-with-python-and-gtts-4f012294acd6
[]
2019-09-16 16:01:21.434000+00:00
['Python', 'Coding', 'Text To Speech', 'Tutorial', 'Programming']
🚀 Reactor 2D on it’s way to Unity
A new way to develop applications is coming to Unity

But first, what is an Application? A program or piece of software designed to fulfil a particular purpose. In Unity there is usually the concept of a single application, but for me an application can be a portion of the screen that contains logic and can work autonomously once the information it needs is injected. But in order to achieve that we need a system that helps us. On the web there are systems such as React or Angular that work as a development platform; Unity does not provide anything similar, so I ventured to experiment. In my case I want to be able to have multiple applications that make up a context. Let's see an example!

In the image above we can see how the User Home context is made up of a TabBarApp (parent application), UserFeedApp (child application) and StoryApp (UserFeedApp's children applications). Each of these applications has a unique and particular task, so if we separate them, each one should continue working in the same way.

The Reactor Trident

This framework was designed to be Unity's React or Angular: a platform that makes things easier for developers and accelerates the development of our products. As I said before, Reactor handles application creation, navigation and communication. In this way Reactor2D offers a form of total decoupling between modules.

Reactor Applications

An application for Reactor2D is a module, for example ShopApp or UserFeedApp. The framework gives these applications independence and decoupling from other modules. Within a Reactor Application we will find everything it needs to work: scripts, art, animations, etc. To create an Application, just go to Windows -> Reactor 2D -> New Application. This option will generate the following folder structure.
https://medium.com/dev-genius/reactor-2d-on-its-way-to-unity-ab54140a6d1a
['Martin Gonzalez']
2020-06-30 15:11:56.638000+00:00
['Micro Frontends', 'Unity', 'Software Development', 'Software Architecture', 'Software Engineering']
How Google Is Trying to Preserve Privacy Without Killing Ad Business
Google's 'Privacy Sandbox' is a series of proposals that seek to balance the online ad industry's need to track user behavior with people's right to privacy. Google's solution is to enable some tracking, but in aggregate form.

By Michael Kan

How do you serve a targeted online ad without learning too much about the user's personal information? Some might say you can't. But Google is attempting it with a "Privacy Sandbox," a new series of proposals that seek to balance the online ad industry's need to track user behavior while preserving people's right to privacy. Google's goal is to stop the most invasive forms of web tracking from identifying your internet presence in the Chrome browser. At the same time, it wants to push the web industry and consumers to accept an online advertising model that still engages in some user tracking, but in a bulk, aggregate manner that's fully transparent. "We're exploring how to deliver ads to large groups of similar people without letting individually identifying data…leave your browser," Chrome engineering director Justin Schuh wrote in a blog post today. Google has a big interest in preserving today's online advertising model; the company's main business is all about serving targeted ads to users by cataloging their activities, which can occur on Chrome. With your web history, the tech giant can figure out all your interests and come up with personalized ads you'll view on Google Search and YouTube. On the back end, marketers can then see whether you've clicked on the ads. Third-party websites and ad networks can also serve you tailored ads as you browse the internet by tracking your activities with internet cookies. The only problem? The same technologies can technically map out your web browsing history, which some critics say is tantamount to surveillance. The privacy concerns are why other browsers, such as Mozilla's Firefox and Apple's Safari, have been trying to block invasive web trackers and third-party cookies. Google is trying to push back on the need to go nuclear on today's web trackers. "Recently, some other browsers have attempted to address this problem, but without an agreed-upon set of standards, attempts to improve user privacy are having unintended consequences," Schuh wrote in a separate blog post. According to Schuh, the cookie blocking will force the web industry to resort to opaque forms of web tracking with no way for users to opt out. This includes "fingerprinting," a tracking technique that involves collecting information about your computer, including the settings and browser version, to identify your internet presence and track which websites you've been visiting. "Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong," Schuh added. He also argued the cookie blocking will prevent website owners, such as media publishers, from funding themselves with targeted ads. "Recent studies have shown that when advertising is made less relevant by removing cookies, funding for publishers falls by 52 percent on average," he said, citing the company's own advertising data. To fix these problems, Google's approach is to block fingerprinting on Chrome, a pledge the company made at its developer conference in May. But the company is refraining from scuttling today's online advertising model. Instead, the goal is to remove individually identifying information from the process.
Specifically, one proposal in the Privacy Sandbox calls for marketers to observe and serve ads to large groups of people who exhibit similar browsing habits. “It’s possible for your browser to avoid revealing that you are a member of a group that likes Beyoncé and sweater vests until it can be sure that group contains thousands of other people,” Schuh said as an example. Other proposals revolve around letting marketers continue to measure ad click-through rates, and to detect fraud, but without identifying individual users. However, Google’s proposals have one major blind spot: What about people who don’t want to be tracked at all? Google didn’t say. For that, you’ll need to tinker with the privacy settings on Chrome, like choosing to block third-party cookies, and also clearing the web activity history on your Google account. The Privacy Sandbox is still in the early stages and is looking for feedback and support from across the tech industry. You can expect Google to talk about the project for years to come.
https://medium.com/pcmag-access/how-google-is-trying-to-preserve-privacy-without-killing-ad-business-e725644e5291
[]
2019-08-23 13:12:44.037000+00:00
['Technology', 'Google', 'Advertising', 'Digital Media', 'Privacy']
Deploying a newer version of the application on Google Cloud with zero downtime.
This is Avanish Chauhan; I have 8+ years of experience in backend technologies like Java, GoLang and RubyOnRails. For the last one and a half years, I have been working with Luxoft as a Senior Software Developer cum Scrum Master. While working with Luxoft, I got the opportunity to work on different tools and technologies, and working on Google Cloud is one of them. As I promised in my last article, I will be sharing details on how to deploy two versions of an application on Google Cloud without bringing down your services, i.e. with zero downtime. In this post I will be sharing the steps to achieve this, the challenges that I faced, and how I fixed them.

So, my first requirement was to deploy application version V1 on Google Cloud and make it available to everyone. Making it available to everyone means providing public access to your application so that anyone with the URL can access it. I had two options to deploy my application on Google Cloud. First, I could take the source code from GitHub, or I could use a tar.gz archive of my source code. Let's go one by one:

Option 1: Taking the source code from a tar.gz archive.

1. Go to Storage buckets in the Google Cloud console and create a bucket (give it any name).
2. Upload your source code to the bucket using the upload button in your bucket.
3. Click on the upload button to upload the tar.gz file (let's name it shopping-app.tar.gz).
4. Click on the uploaded resource and copy the bucket resource path. It will be something like gs://<project-id>/shopping-app.tar.gz.
5. Open the Cloud Shell from the Google Cloud console.
6. Copy the source code from the bucket to your working project:
gsutil cp <bucket resource path> <target directory>
gsutil cp gs://<projectID>/shopping-app.tar.gz .
7. Since this is in tar.gz format, we have to extract the resources:
tar xzvf shopping-app.tar.gz
8. Build the image from the extracted source code:
docker build -t gcr.io/<project-id>/shopping-app:V1 .
9. Push the built image to the Google Container Registry:
docker push gcr.io/<project-id>/shopping-app:V1
10. Configure the compute zone, if not done already:
gcloud config set compute/zone us-central1-b
11. Create the Kubernetes cluster:
gcloud container clusters create shopping-cluster
12. Get credentials for the created cluster:
gcloud container clusters get-credentials shopping-cluster
13. Create a deployment for the shopping-app Docker image, but before that install "kubectl", the command used to communicate with Kubernetes components:
gcloud components install kubectl
kubectl create deployment shopping-app --image=gcr.io/<Project_id>/shopping-app:V1
14. To check the running pods at any instant of time:
kubectl get pods
15. Since our application was supposed to be accessible to everyone with internet access, we had to expose our application over HTTP:
kubectl expose deployment shopping-app --name=shopping-app-service --type=LoadBalancer --port 80 --target-port 8080
16. Get the service details:
kubectl get service
It will return output in this format:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
The EXTERNAL-IP is used to access the application from the internet.

Option 2: To deploy the source code from BitBucket/GitHub or any other source code repository, use the following command to check out/clone your source code into your working project:
git clone <code repository url>
Once you have your code in your working project in Google Cloud Shell, the remaining steps are the same.

Challenges that I faced:

1. Issues while copying, as I hadn't set the project correctly.
Solution: Set the project and zone by using gcloud config commands.
export PROJECT_ID=<your project id>
gcloud config set project <Your_Project_id>
Set the zone:
gcloud config set compute/zone us-central1-b

2. Proper permissions were not set on the bucket that I created.
Solution: Make sure to set the required permissions on your bucket and uploaded resources; this can be done very easily using the Google Cloud console.

Now my application was running successfully on the internet, and many users were accessing it without any issue. It was generating a good amount of traffic, as a huge number of users were using my application. After a few months, we got a new request from the client to make some changes in our currently running application. We made those changes and our application was ready to deploy, but we had a challenge: deploy the newer version of the application without bringing down the currently running application and without impacting the current users. To address this problem, Google Cloud's Kubernetes Engine and traffic-splitting functionality proved to be a boon for us. Here is exactly what I did:

1. I built the Docker image of my new source code by following the commands that I mentioned earlier:
docker build -t gcr.io/<project-id>/shopping-app:V2 .
2. I pushed the built Docker image to the Google Container Registry:
docker push gcr.io/<project-id>/shopping-app:V2
3. I updated the existing deployment so that it would point to the new application:
kubectl set image deployment/shopping-app shopping-app=gcr.io/<project-id>/shopping-app:V2
4. Get the details of the running services:
kubectl get service
If you try to access the external IP, you will see that it now points to the new application.

So, to conclude, Google Cloud provides the functionality to deploy your application from code repositories or even from your zipped source code. Also, using Google Kubernetes Engine, it is very easy to deploy a newer version of the application with zero downtime and no maintenance window. In my next article, I will be focusing on scaling up applications using replicas and virtual machine instances. Happy Learning :)
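P.S. One handy pair of commands not covered above: kubectl's built-in rollout tooling lets you watch the zero-downtime update progress and revert it if V2 misbehaves (these are standard kubectl commands, not specific to this setup):

# Watch the rolling update until every replica runs the new image
kubectl rollout status deployment/shopping-app

# If V2 misbehaves, roll back to the previous version with the same zero-downtime mechanics
kubectl rollout undo deployment/shopping-app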
https://medium.com/cloud-migration/deploying-a-newer-version-of-the-application-on-google-cloud-with-zero-downtime-de29f25e11b9
['Avanish Chauhan', 'Senior Java Developer']
2020-06-14 13:05:45.267000+00:00
['Luxoft', 'Software Development', 'Google', 'Google Cloud Platform', 'Cloud']
How Probes Partition the Debug Space
How Probes Partition the Debug Space

Probing is a technique to perform regular checks on a service using a short interval. Probes provide signals that can significantly cut down debug time. This post describes probes and how they can be used to drill down into errors and make debugging more focused; how they can partition the debug space.

Probes

Probes are targeted checks, performed as request/response actions, on a short (~1 minute or less) interval. Some common applications of probes are:

Uptime Probes: Internal Debugging: The focus of this post. Probes make a binary yes/no determination of whether a service is functioning as expected.
Load balancer health checks: Amazon ELB defaults to 30-second health checks.
Uptime Probes: Status Pages: Feed uptime data for presentation to end users.
Uptime Probes: SRE: Provides SLI/SLO data, related to status pages above.

Probes are a critical component of status pages and customer-facing health metrics. Companies often use probes to provide yes|no, up|down determinations as the data for their status pages. Think of Pingdom or a load balancer which requests a website every minute to determine if the site is up or not.

(Pingdom status page. Source: https://www.pingdom.com/)

Common solutions to implement probes are:

SAAS (Pingdom, DataDog)
Open source (Google CloudProber)
Built in to load balancers/cloud components
Homegrown probing on an interval

Output

Probes are most effective when they return a binary yes|no result and the latency to achieve that result. Since probes are synthetic, the expected results are known beforehand. The following describes a prober to check backend health from a load balancer:

Protocol: HTTP
Target endpoint: /health
Success: 200 status code

The above probe defines success as 200 and failure as any other status code. The status code in this case isn't important, only the binary success|failure determination. Graphing the probe above would look like a timeseries of invocations with their successes and failures. If you're wondering why the graph above isn't uniform, it's because the metrics are being reported by the system at delayed intervals. The chart below shows that the actual prober executions are uniform. It's also common to see probes aggregated and expressed as a ratio between success and failure. Probe results can be rolled up over larger intervals to calculate aggregate availability. The following example shows the aggregate availability over 7, 30 and 90 days.

Probes are very simple; they contain the following properties:

Protocol
Target
Expected response
Interval

Google CloudProber contains an example of the config required for a production-ready, self-hosted solution. These simple operations are the building blocks for determining if services are up or down, for client reporting, debugging, reliability, and load balancer targets, among others.

Debugging Using Probes

The binary success|failure output probes generate makes them well suited to debugging. Probes can be used to partition the debug space along critical and common inflection points. Imagine a service with external customers; the customers complain that something is wrong.
An effective debugging workflow takes a huge or unbounded debug space and narrows it down. The goal of debugging, and of the flowchart above, is to take a large problem space, "Service is not working", and drill down into the causes. With each question the debug space gets smaller and more targeted. This can be visualized as a bounded space, where each question partitions the space in half. Good debug questions will binary search the space along critical inflection points. Probes can be used to test these inflection points and make it trivial to dig into the problem space. To partition the debug space using the questions above, only a single probe is needed. This probe needs to measure success and latency. A single probe per service significantly partitions the debug space using just availability and latency. A prober per service may seem like a lot of moving parts, but I have found probes to be extremely easy to maintain. The requirements to execute them are pretty small. Their focused nature makes it trivial to write probe logic, and the fact that they execute on an interval makes them easy to operate.
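To make the idea concrete, a minimal HTTP probe with the properties listed above (protocol, target, expected response, interval) can be a few lines of Python; the target URL and interval here are placeholders, not from the original post:

import time

import requests  # third-party HTTP client

TARGET = "https://example.com/health"  # placeholder target endpoint
INTERVAL_SECONDS = 60                  # short probing interval


def probe():
    start = time.monotonic()
    try:
        response = requests.get(TARGET, timeout=5)
        success = response.status_code == 200  # binary yes|no determination
    except requests.RequestException:
        success = False
    latency = time.monotonic() - start
    # In a real prober these would be exported as metrics, not printed
    print(f"success={success} latency={latency:.3f}s")


while True:
    probe()
    time.sleep(INTERVAL_SECONDS)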
https://medium.com/dm03514-tech-blog/how-probes-partition-the-debug-space-823c57a1009c
[]
2020-08-30 17:47:41.737000+00:00
['Software', 'DevOps', 'Software Management Tools', 'Software Development', 'Software Engineering']
Albums of 2019: Without Fear // Dermot Kennedy
Without Fear is the debut album by Irish singer-songwriter Dermot Kennedy, which was released in October of this year. The album was met with notable success; success which is very much deserved. The opening track 'An Evening I Will Not Forget' sets the mood for this raw and emotional musical journey. Dermot's rich and soulful vocals do not need dressing up in any way, and at the beginning of the track he is accompanied simply by a grand piano, making it feel stripped back and exposed, while steadily building up to the powerful beat of the chorus and continuing to the end. Dermot Kennedy manages to convey a fierce passion through his music. The meaningful lyrics scattered throughout each and every song make the listener feel as though they have stumbled upon something deeply personal, therefore making us feel all the more connected through the music. 'Power Over Me' and 'Outnumbered' are likely to be the most well-known tracks from the album, and these were released as singles prior to the album launch. 'Power Over Me' bursts through with irresistible energy, boasting those hard-hitting lyrics which are apparent in all of Dermot's tracks. Tracks like 'Outnumbered' and 'Lost' share something which is raw and vulnerable. Dermot's strong and distinctive voice is both comforting and stirring at the same time, and the listener is able to get completely absorbed in the music. Dermot has a remarkable ability to tell a story through his lyrics, and as a result his honesty, passion and realism shine through. Each track takes the listener on a personal and expressive journey, and the album concludes with its namesake track. Without Fear is 50 minutes of pure empowerment, and the success of the album is mirrored and reinforced by Dermot's breathtaking live performances with faultless vocals. Words by Sarah Turner
https://medium.com/the-indiependent/albums-of-2019-without-fear-dermot-kennedy-3cfeb2b2e25
['Sarah Turner']
2020-05-18 08:55:53.138000+00:00
['Review', 'Album', '2019', 'Dermot Kennedy', 'Music']
Making a Lightweight, Low-Cost Rasa Chatbot with NGINX
There's something about monotonous Monday morning scheduling, emailing and planning that just screams "there must be a more efficient way to do this". Well, I'm happy to introduce you to TTT's Coordination Lookup and Analysis Utility, CLAU! CLAU is a conversational AI chatbot for Slack made using the Rasa Open Source framework that we use to automate those simple, repetitive tasks that normally take up a lot of valuable work time. The current implementation is for our Project Management team. All they need to do is message the bot in Slack to quickly find documentation and retrieve information from internal management tools, saving time. That doesn't sound so complicated, right? Well… One way of externally deploying your chatbot is with Rasa X, Rasa's own toolset. Rasa X is a tool for conversation-driven development that gives developers a UI to collect, review, and annotate data from users. We opted to deploy our chatbot without Rasa X. The main reason for this was that the functionality we would get from Rasa X wasn't worth the added complexity and cost of setting it up for our particular use case at this point in the project. There may be a time in the future where Rasa X suits our use case, but for now, we wanted a cleaner, lightweight setup. It's up to you whether Rasa X suits your needs. If you decide not to go with Rasa X, the challenge is finding appropriate documentation. Most of the Rasa documentation that exists assumes you're going to install Rasa X with your initial setup. Without Rasa X, there's less documentation out there to support you. We developed CLAU with limited documentation and ran into three key issues that I'm going to share with you. Whether you're working on a low-cost, lightweight chatbot for a personal project or for work purposes, I hope these insights save you some time. After all, that's the whole point of the project! For a deeper dive into chatbots in general, take a look at this previous blog of ours.

1. Deploying your Rasa chatbot with Docker

The first question I had when we decided not to use Rasa X was: is it even possible? The answer is yes. But how? When facing a challenge that can quickly become complex, the best strategy is always to abide by best practices. The relevant best practice in this case is that you should build and test out your bot locally before you move on to the remote environment. If you ever add additional services or change the scope of your project, it's always a great idea to test everything out locally. Once you can deploy locally without a hitch, moving on to your preferred hosting service should be smooth. The following steps will guide you through the process of local deployment. If you're using Docker for the first time: Docker containers provide a developer with scalability, isolation, and consistency across different environments, among other benefits. Before we continue, make sure you have the following installed and ready to go. To see if you have Docker and Docker Compose properly installed, try running:

docker --version
docker-compose --version

This is what your directory structure should look like after initializing a simple Rasa chatbot. You can quickly make one following this Rasa tutorial. We want our project directory to look like the following before attempting the next step:

Note that I placed actions.py into a folder called actions alongside a requirements-actions.txt and a Dockerfile. This helps modularize our project, as everything related to our custom code is now inside this one folder (a minimal example of what actions.py can contain follows below).
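The article doesn't show the contents of actions.py itself; if you need a starting point, a minimal rasa-sdk custom action, compatible with the rasa/rasa-sdk image used below, looks like this. The action name and message are placeholders:

from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionHelloWorld(Action):
    def name(self) -> Text:
        # Must match the action name listed in your domain.yml
        return "action_hello_world"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Send a message back to the user
        dispatcher.utter_message(text="Hello from the action server!")
        return []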
Additionally, I moved the ngrok.exe application and added a docker-compose.yml file in the project directory for easy access from the command line. Please refer to the following code blocks to understand what to place into each of the new files. If you are using custom actions that you specified in actions.py, make sure to change the endpoints.yml file to the specified URL as well.

The Dockerfile provides Docker with instructions on how to build your actions code. Here's a template:

# whatever version suits you
FROM rasa/rasa-sdk:latest
# define the working directory of the Docker container
WORKDIR /app
# copy the requirements txt file with your dependencies in the actions directory
COPY ./requirements-actions.txt ./
# copy everything in the ./actions directory (your custom actions code) to /app/actions in the container
COPY ./ /app/actions
# install dependencies inside the Docker container
USER root
RUN pip install -r requirements-actions.txt
USER 1001

requirements-actions.txt specifies the packages you need for your actions code. Here's a template:

# <package_name1>==<version of package you want>
# e.g.:
# examplepackage==5.0.3

The docker-compose.yml provides instructions to Docker Compose on how to run your containers. Here's a template:

version: '3.0'
services:
  rasa:
    container_name: rasa
    # go to Docker Hub / the Rasa changelog to see which version and flavour of Rasa you want
    # Make sure that the version you specify is the same as the version that you pip installed
    image: rasa/rasa:1.10.5
    # Map port 5005 of the local machine to 5005 of the container
    ports:
      - 5005:5005
    # This will copy everything in the current directory to the /app directory in the container
    volumes:
      - ./:/app
    command:
      - run
  app:
    image: <name of image>
    expose:
      - 5055

Here is a template endpoints.yml:

action_endpoint:
  url: "http://app:5055/webhook"

Here are the command line prompts to build and run your code:

# Inside your project directory
docker build ./actions -t <name of image>
docker-compose up -d
./ngrok http 5005

This is the hello message you should see if your deployment worked. Do this curl request to check if your chatbot can talk:

curl --request POST 'https://<your url>.ngrok.io/webhooks/rest/webhook' \
--data '{"sender":"Test","message":"Hi"}'

This is what you should get back:

[{"recipient_id":"Test","text":"Hey! How are you?"}]

2. Installing transport layer security

Now that you have a Dockerized version of Rasa working locally, you probably want to connect your chatbot to an external messaging service like Slack or Messenger and deploy it onto a hosting service like Google Cloud or AWS. But before you do so, your chatbot messages need to be encrypted. One way you can install SSL certificates to do so is to use a service that you can easily incorporate into your application called NGINX. NGINX (pronounced Engine X) is open-source software that acts as a server sitting in front of your application. It handles traffic and performs tasks such as reverse proxying, caching, and load balancing. This provides an extra layer of security between your chatbot and the outside world. You can configure NGINX as a Docker container to receive all HTTP traffic, which will then be sent upstream to the Rasa container (or an authentication server). This will keep the Rasa container isolated from direct contact with the external world. Additionally, by storing SSL certificates in the NGINX container, you ensure that you can provide a secure connection between your server and the servers of the messaging provider you are using (like Slack).
In this setup, NGINX is placed in front of the bot user to add that layer of security. Here’s how you can configure it.

Step 1: obtain an SSL certificate

To begin, you will need to get a domain name and set up a remote environment with Docker, Docker Compose, and your code. You can register a domain name with any domain registrar, and if you are not sure where to deploy remotely, Google Cloud is a good place to start thanks to its large amount of free credits. For this demo, a simple development environment was created on GCP Compute Engine using an n1-standard-1 VM (1 vCPU, 3.75 GB memory) running Ubuntu 16.04 LTS with a 20 GB disk. Try testing different setups and services to suit your use case.

The first step in obtaining an SSL certificate is to find a certificate authority. Let’s Encrypt is a popular non-profit authority that provides free certificates, and it is the option we will go with. Run the following commands to install certbot (Let’s Encrypt’s software for installing free SSL certificates):

```
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
```

To get a certificate (you will need a domain name and email ready in this step):

```
sudo certbot certonly --standalone
```

Next, move your freshly obtained certificates into a directory that Docker has access to. I placed them in my project directory; wherever you put them, do not commit them to version control:

```
# Inside your project directory
mkdir certs
sudo cp /etc/letsencrypt/live/<domain>/fullchain.pem ./certs
sudo cp /etc/letsencrypt/live/<domain>/privkey.pem ./certs
```

Before moving on, make sure that you have Docker and Docker Compose installed in your VM. Now that we have SSL certificates, we need to put them into NGINX.

Step 2: configure NGINX as a reverse proxy

When you run your application, you need to make sure that NGINX is configured to act as a reverse proxy for your Rasa container and pointed at where the new SSL certificates are stored. If it is not configured correctly, your chatbot will never see user input, because NGINX will be unable to forward the messages to the Rasa container.

First, make a configuration file inside an nginx folder in the project directory (note that this is an environment-specific configuration file, so it will not work on localhost without a few changes):

```
mkdir nginx
nano nginx/default.conf
```

Inside default.conf:

```nginx
# sends events to the rasa container on port 5005
upstream rasa {
    server rasa:5005;
}

# listen on port 80 (default port for non-encrypted messages)
# if testing locally, <your_domain_name> is localhost
server {
    listen 80;
    server_name <your_domain_name>;

    # reverse proxy to the rasa container
    location / {
        proxy_pass http://rasa;
    }
}

# comment out this block if you are testing locally
# listen on port 443 (default port for encrypted messages)
server {
    listen 443 ssl;
    server_name <your_domain_name>;

    # points to the SSL certificates that we will mount into the NGINX container in Docker Compose
    ssl_certificate /etc/letsencrypt/live/<your_domain_name>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your_domain_name>/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/<your_domain_name>/fullchain.pem;

    # reverse proxy to the rasa container
    location / {
        proxy_pass http://rasa;
    }
}
```

The last step is to make a new NGINX container that includes the SSL certificates and configuration file we created, and add it to the deployment.
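Before that, one quick operational note: Let’s Encrypt certificates are only valid for 90 days, so you will need to renew them periodically. As a small sketch for keeping an eye on expiry (Python standard library only; HOST is a placeholder for your own domain), you can inspect the certificate your server presents like this:

```python
import socket
import ssl
from datetime import datetime

# Placeholder; replace with your own domain name.
HOST = "example.com"

# Open a TLS connection and pull the certificate the server presents.
context = ssl.create_default_context()
with context.wrap_socket(socket.socket(), server_hostname=HOST) as sock:
    sock.connect((HOST, 443))
    cert = sock.getpeercert()

# 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'.
expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
days_left = (expires - datetime.utcnow()).days
print(f"Certificate for {HOST} expires {expires} ({days_left} days left)")
```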
Step 3: add NGINX into the Docker deployment

Add the source code below to your Docker Compose file and you’re good to go!

```yaml
version: '3.0'
services:
  rasa:
    container_name: rasa
    # Go to Docker Hub / the Rasa changelog to see what version and flavour of Rasa you want.
    # If unsure, rasa/rasa:latest-full is a good default option.
    image: rasa/rasa:1.10.5-full
    # This is the port on the container that is being exposed
    expose:
      - 5005
    # Mount the current directory to the /app directory in the container
    volumes:
      - ./:/app
    command:
      - run
  app:
    container_name: actions
    image: app-server
    expose:
      - 5055
  nginx:
    container_name: nginx
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx:/etc/nginx/conf.d
      # I kept my SSL certs in a certs folder in the project directory (make sure to include it in .gitignore)
      - ./certs:/etc/letsencrypt/live/<domain>
# You can specify your own network if you do not like the default Docker network naming
```

If you don’t have your custom actions image already, you can pull it from an image repository like Docker Hub or simply build it again using the docker build command. Now, see if it works! Inside the project directory run:

```
docker-compose up -d
```

Go to your domain over the https:// connection, give it a few seconds to load, and you should see the hello message from Rasa.

3. Avoiding having to curate unnecessary training data

There’s always a bit of uncertainty on Rasa’s part when it tries to recognize entities. Rasa X can help with this issue by collecting lots of user input and training new models to recognize previously unseen entities. Without Rasa X, however, we wanted a better way for our chatbot to recognize entities it hasn’t seen before.

Since CLAU’s users are members of TTT’s Project Management team, the chatbot needed to be trained to recognize a wide variety of project names. At TTT, we have a steady stream of new, unstructured project names, so we devised a system that avoids curating new training data every time we start a project. Sometimes you don’t need a large quantity of data when you can create high-quality data.

We developed a simple parentheses notation that gives our chatbot 100% certainty that a word (or a group of words) is a specific entity. Instead of trying to train the chatbot to recognize new words as entities, we simply trained it to recognize the notation instead.

Now, imagine a restaurant website that has a chatbot, and the restaurant has a menu that changes every day. You could push menu updates through the chatbot simply by telling it: “add (Bruschetta) as a daily special”. This way you wouldn’t have to worry about training the chatbot to recognize Bruschetta as a dish, because it’s already trained to recognize everything inside the parentheses as a dish.

Here’s some example use of the notation (see the custom action sketch after this block for how the extracted values can be consumed):

```
# example training data
## intent:update_menu
- put [(Sandwich)](food) as the daily special
- Can [(Clam chowder)](food) have its price updated by a dollar
- remove [(steak)](food) from the menu

# a second example, involving names this time
## intent:message_user
- remind [(Amanda)](person) of our meeting today
- send [(Darth Vader)](person) the bill for customizing his mask
- give [(Carly)](person) in [legal](department) the documents for our new project
```

This could be effective for you if you have to handle bite-sized entities or categories that change or need to be updated regularly.
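Note that, as annotated above, the extracted entity value includes the parentheses themselves, so a custom action will typically want to strip them off. Here is a minimal, hypothetical sketch using the rasa-sdk for the restaurant example; the action name, entity name, and replies are all invented for illustration:

```python
import re
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


def strip_notation(value: Text) -> Text:
    """Remove one pair of surrounding parentheses, e.g. '(Bruschetta)' -> 'Bruschetta'."""
    return re.sub(r"^\((.*)\)$", r"\1", value.strip())


class ActionUpdateMenu(Action):
    """Hypothetical handler for the update_menu intent shown above."""

    def name(self) -> Text:
        return "action_update_menu"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Grab the most recent 'food' entity, still wrapped in parentheses.
        raw = next(tracker.get_latest_entity_values("food"), None)
        if raw is None:
            dispatcher.utter_message(text="Which dish did you mean?")
            return []
        dish = strip_notation(raw)
        dispatcher.utter_message(text=f"Got it, updating the menu with {dish}.")
        return []
```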
This special notation wouldn’t be effective in situations where you have a stream of new users who would each need to be taught the notation. Our users are our project managers, and we can quickly show them how to use it; after all, it’s a pretty small application, and its use is internal for us for now. If, for example, you run a high-volume website for making appointments, it would be unrealistic to expect users to learn and understand a special notation just to make an appointment.

At this point you should have a solid foundation set up for your Rasa chatbot. You will have deployed it locally using Docker and Rasa Open Source, added transport layer security with NGINX, and possibly incorporated a parentheses notation as a workaround for certain kinds of training data. I hope you found the instructions and source code useful, and good luck with your Rasa chatbot!