Why We Can’t Stop Fooling Ourselves
An Apolitical Essay for a Political Time.
A Meditation on Quantum Knowledge, Information Castles, & Love
Physicist Richard Feynman once said, “The first principle is that you must not fool yourself and you are the easiest person to fool.”
It occurs to me, during this post-election dysphoria in which whatever news you want to perceive about the results is readily available for your consumption, that one of the bizarre and unsettling truths of the 21st century and its great technological data boom is this: information didn’t actually make us smarter. It just made it easier to fool ourselves.
I’ve been thinking a lot lately about these graphs (DIKW hierarchy) and Feynman’s quote.
Thinking about how, in an information-abundant landscape, we can conjure up nearly any idea about our own personal realities and backtrack to find some “data” to support it. I’m not talking about motivated reasoning or selection bias here. Rather, I’m talking about something much bigger. Much sneakier.
I'm talking about the idea that in an infinitely expanding information landscape, all propositions become both true and untrue simultaneously. We get stuck in a quantum fluctuating world where merely observing the “information electron” forces it left or right or up or down. Merely observing it makes it come to be.
We have a thought that we perceive to be some approximation of reality, and simply by searching for its validity on the internet, we will it further and further into existence. This is Wittgenstein’s worst nightmare come to life. A Monocultural Hive Mind Mirror Machine that reflects back onto the user exactly what he or she wishes to see. That user then shares the reflection and, by sharing it, affirms its existence.
In an ideal world where a culture or civilization tries to make sense of their surroundings in order to ensure their continued survival while simultaneously innovating their way out of any evolutionary pressure points, they might try to start with data and hopefully somehow shape that data into coherency. Something resembling information. And hopefully, from there, take that information and shape it into something like knowledge or insight that is scalable and practical. Something that matches neatly on to reality. And then use that knowledge to somehow forge a notion of wisdom. Timeless rules that have both practical and evolutionary value at the individual and group level.
The problem is that in a hyper-connected world, the R₀ of information is 10,000x that of a virus, and we simply don’t have mental software clever enough or powerful enough to compress that information into knowledge fast enough.
We don’t have the time required to take psychic breaths. To sit in psychic silence. The time that knowledge and wisdom require of us in order to fully bake in our conceptual ovens. It’s all just a barrage of ingredients and smells that resemble what a fully prepared intellectual meal might taste and look like.
What we are left with in its wake is just inflationary memetic noise masked as self-conviction. We tell ourselves a thing must be true because the alternative — living in that quantum state indefinitely — is worse. If a “thing” can break up or down or left or right equally and continually and endlessly depending simply on how I “observe it”… why wouldn’t I “observe it” in a way that makes more sense to me in the first place?
I don’t just mean this metaphorically. In the same way that the brain and ears are constantly vigilant, scanning the environment, evaluating external noises as potential threats, always a moment’s notice away from going into “high-alert mode,” so too do they behave with informational noise. “Does this ‘noise’ have implications for my survival? Implications for my map of reality? If my map of reality is wrong, how at risk am I? How vulnerable am I?”
This barrage of untethered information keeps us chained to our monkey brains. Chained to a sympathetic nervous system response. The Prefrontal Cortex never gets to come into the game and make sense of the noise. Never gets a chance to put it in its proper place. To derive knowledge and context and wisdom out of it. It quite literally prevents us from evolving.
I was recently reminded of a story I once read about the origins of Taoist philosophy, as told by the parable of the Dexterous Butcher.
Cook Ding was cutting up an ox for Lord Wenhui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee — zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Jingshou music.

“Ah, this is marvelous!” said Lord Wenhui. “Imagine skill reaching such heights!”

Cook Ding laid down his knife and replied, “What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now — now I go at it by spirit and don’t look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.

“A good cook changes his knife once a year — because he cuts. A mediocre cook changes his knife once a month — because he hacks. I’ve had this knife of mine for nineteen years and I’ve cut up thousands of oxen with it, and yet the blade is as good as though it had just come from the grindstone. There are spaces between the joints, and the blade of the knife has really no thickness. If you insert what has no thickness into such spaces, then there’s plenty of room — more than enough for the blade to play about in. That’s why after nineteen years the blade of my knife is still as good as when it first came from the grindstone.

“However, whenever I come to a complicated place, I size up the difficulties, tell myself to watch out and be careful, keep my eyes on what I’m doing, work very slowly, and move the knife with the greatest subtlety, until — flop! the whole thing comes apart like a clod of earth crumbling to the ground. I stand there holding the knife and look all around me, completely satisfied and reluctant to move on, and then I wipe off the knife and put it away.”

“Excellent!” said Lord Wenhui. “I have heard the words of Cook Ding and learned how to care for life!”
https://thedewdrop.org/2020/05/18/the-dexterous-butcher-zhuangzi/
The point of this story is deep learning through simplicity. A man who unlocked prophetic, timeless wisdom through butchering oxen. Through years and years of fidelity to a single act, a single endeavor, profound insights and wisdom about all of life begin to emerge. The brain begins to make metaphoric connections to ancient memetic truths that transcend the “external noise.” Turning the lizard brain off. Turning the non-essential volume down. Living comfortably in the quantum space.
Compare this with the alternative: trying to hold all the information on the internet in one breath. Trying to map it all at once. Trying to make sense of it as a coherent whole and then lying to ourselves about having done so. Like trying to visualize a billion-sided polyhedron or imagining all the stars of the universe in a single glance of your mind’s eye.
Milky Way galaxy from Earth
I’m guilty of this as much as the next person. Too much information. Too many stars in the sky. Not enough time to count them all. It takes too much effort to live in the quantum indeterminate space. Better off sticking with the constellations I know and backtracking from there.
And so I contend that this is actually the real struggle of our generation. Not climate change, not politics, not economics. Rather: what do you do with hyperinflationary information in a recessionary knowledge economy? The struggle, simply stated: How honest can we really be expected to keep ourselves when the ability to fool ourselves is just a mouse click away?
There are solid reasons to be deeply pessimistic about the outlook here. But then again, that might just be me “observing the thing” a little too much.
It does help in these moments to remind myself of another Feynman quote.
“Physics and science isn’t the most important thing. Love is.”
It may sound cheesy, but in an information cascade, best to stick with the wisdom that has stood the test of time. Best to stick with those like Feynman and Gandhi and Ding and Aurelius. Those who have traveled deep into the information universe and have returned with only things like “love”, “gratitude”, “simplicity”, “forgiveness” in their return suitcases. Things that the cynic perceives as banal platitudes and the sage approaches with irreverent respect.
And so I try to keep that as the plaque hanging over my informational front door most days. That the important thing is Love. It’s the gargoyle watching over my knowledge castle. Making sure I don’t fool myself. Making sure I don’t convince myself too thoroughly that I have all the sky’s stars accounted for.
Admittedly he’s not the best gargoyle. He needs to be woken up from his slumber frequently. But you’d be foolish to count him out. Especially in times like this.
12 Easy Tricks To Massively Improve Your Sleep Quality
I tested these tricks for better sleep and productivity. They worked fabulously.
Photo by Rafal Jedrzejek on Unsplash
Personally, I think sleep is underrated. I’m definitely no sleep expert, but I can’t help noticing the positive effects of getting a decent night’s sleep night after night.
Over the last few months, I’ve been slowly tweaking my sleep habits to reach the optimal conditions so that I can get the best quality of sleep as often as possible.
My decision to improve my sleep quality wasn’t because I’m an insomniac or anything — in fact, I’ve always considered myself a good sleeper — but because I started thinking about the whole notion of sleep at a deeper level. It’s believed we spend approximately one-third of our lives sleeping. Assuming we were to live to the ripe old age of 80, that means we would have slept for roughly 26.7 years during our lifetime.
Given that we spend so much time sleeping, I think it’s fair to say that we should be doing it right. When we can get a quality night’s sleep time and time again, the benefits will be clear to see in our everyday life. We’ll be in a better mood every day. We’ll be happier and have more energy.
Here are some top tips on how to improve the quality of your sleep:
1. Go to bed at the same time every evening
It’s becoming increasingly uncommon to find someone with a regular bedtime these days. With our careers taking up so much of our time, on weekdays we only have the window between the end of our workday and when we go to bed to fit in as many personal chores as we can.
This can lead to setting a bedtime that’s either too late or too irregular for us, which often results in us not getting enough sleep. Most of us can’t get up later because we have jobs to go to, so we must go to bed earlier every night.
I used to consider 6 hours of sleep to be enough for me. I’ve since tried to increase it to at least 7, with the optimal number of hours between 7 and 8.
It’s been a bit more challenging for the past year or so to get 8 hours of sleep a night because of the 6 am morning routine I have in place. Since I don’t really want to shift my morning routine back any later, I decided to move my bedtime earlier.
I used to go to sleep around midnight, which only left me with 6 hours, and that’s not enough. I now go to sleep at 10.30 pm and get enough sleep every night.
2. Nap if you have to
There will be occasions when you cannot stick to your regular bedtime. For example, sometimes we have parties or are out late with friends, and it is these occasions that can screw up our regular bedtimes and put us right back where we started.
When possible, try to take a nap to make up for the hours lost rather than sleeping in and getting up later. However, approach this with caution, as it’s easy to nap for too long and then be too awake at bedtime.
3. Make sure you are comfortable
It may seem very obvious at this point, but I neglected this aspect for years because I ignored my body’s signs. Besides, I knew it would take too much effort to sort the problem out.
For the last decade, I had been sleeping on a firm mattress, which I thought was good for me even though I always woke with aching shoulders. I also slept on cheap pillows, which didn’t help matters. Since then, I have ordered a new Tempur Cloud mattress and a proper pillow. Yes, they are expensive, but I am now sleeping better than I ever have before, and you can’t put a price on a good night’s sleep! I couldn’t recommend the mattress or the pillow more, and they both go down as the best investments I have made in my life to this day.
4. Wind-down
It’s a good idea to have an activity that you do just before bed to signal to your brain that it’s almost time to sleep. I used to halt whatever I was doing whenever I felt tired. Sometimes I’d be watching a movie, playing video games, or working on my business, and my brain wouldn’t be ready to suddenly switch off and rest.
Reading is a good wind-down activity, especially since there isn’t any bright glare from an electronic device. Every night I spend about 10–20 minutes reading a book before feeling too sleepy to continue.
5. Don’t exercise too late in the evening
This may not affect everyone, but it’s probably best not to exercise too close to your bedtime. After exercise, your blood is pumping through your body at an increased rate, and your breathing is up, which aren’t optimal conditions to relax and fall asleep. This may not be of concern to you, or perhaps you do not have any other time to exercise, in which case you can ignore this point.
However, if you are having problems sleeping and you exercise late, consider switching your workout session to the morning.
6. Cut the caffeine early
I never let myself drink any kind of tea or coffee or anything that contains caffeine after 6 pm. Caffeine can keep you awake and delay the time it takes you to fall asleep. Make a caffeine cut-off point that works for you.
7. Don’t drink too many fluids at night
This tip is not for everyone, but I have a bladder like a two-hundred-year-old man, and even small amounts of fluid before bed can have me getting up in the middle of the night, heading for the bathroom.
Some people like to have a drink before bed because the body can become dehydrated during the night. This is true, but it doesn’t work for me. Instead, I aim to have a glass of water next to me to drink first thing when I get up.
8. Keep it dim before bedtime
To get the body ready for sleep, you need more melatonin to be produced before you sleep. Melatonin is produced most effectively when it’s dark. Therefore, it’s a good idea to slowly dim your lights as you get nearer to your bedtime.
By 10 pm, I’ll already have my main lights off and use table lamps to light the room. This also applies to electronic devices with a backlit screen. Computer screens, iPods, iPads, mobile phones, and television screens can be bright enough to slow down melatonin production. It may be a good idea to turn off such devices at least 30 minutes before bed.
9. Keep it completely dark
It doesn’t take a rocket scientist to know that the best conditions for sleep are complete darkness. However, with so many electronic devices and power outlets that emit light during the night, the likelihood of negatively impacting our sleep has become greater. If you have a clock that emits light, either get a new one or cover up the lighted display.
Turn off all devices from the power source to stop any other lights shining in the night. If light manages to creep past your curtains, get thicker curtains, or buy a comfortable eye-mask to block out any light.
10. If you do get up at night — keep it dark
If you have to get up in the middle of the night, try not to use too bright a light. A flashlight is the best way to move around the house for a bathroom trip, although you could be mistaken for a burglar. Would you rather get back to sleep easily or be considered a potential burglar by another member of your household? The choice is yours.
11. Make it the right temperature
This is obvious, but you don’t want to be too cold or too warm. When it’s too hot, you’ll toss and turn, and if it’s too cold, you won’t be comfortable enough to drop off to sleep. Open windows, close windows, get a thicker duvet, use air-conditioning, turn up the heating, use a fan, use a hot water bottle: use whatever you have available to make sure you are at an optimal temperature to be comfortable and sleep through the night.
12. Plan for tomorrow
This was a big one for me. To sleep comfortably all the way through the night, you need to put your mind at rest and tie up any loose ends. Haven’t you experienced a time when you had a problem and couldn’t get to sleep because you kept thinking about it? Or couldn’t sleep because you were thinking about all the things you have to do in the morning?
Put your mind at rest by planning tomorrow’s activities the night before. When you go to bed knowing firmly in your mind what you must do when you get up, you’ll notice your mind is in a more relaxed place and can come to rest.
You don’t have to geek out on all this sleep stuff as I did, but hopefully you can see that a few small, regular changes can have a really positive effect on your energy levels, productivity, and well-being.
Inspiring action with data and design.
Forging emotional connections to global data sets.
Our planet is in danger. Mass extinction is threatening the stability of our planet and our own personal futures. Humans are the cause of this mass extinction, but we also hold the power to stop it. What matters most is what we choose to do next.
Securing our planet’s long-term health will require a massive change in the way we think about our planet and approach our problems. The Half-Earth Project marks the turning point of that change. Conceived by E.O. Wilson, Half-Earth is a call to conserve half the Earth’s land and sea in order to manage sufficient habitat to reverse the species extinction crisis and ensure the long-term health of our planet. But which half? The Half-Earth Map will answer this important question.
Designed and developed by Vizzuality in collaboration with the E.O. Wilson Biodiversity Foundation, Map of Life, and Esri, the Half-Earth Map presents global biodiversity data in a way that compels us to take a different look at what’s happening to our planet’s species. Guided by user research and an understanding of human psychology, the Map is an example of how to combine scientifically rigorous data with emotionally stimulating design.
I Was a Prostitute for Halloween
Fast forward to Halloween day. I put on my pink bob wig, a leopard print bra, fishnet tights, a pair of skimpy underwear, and black thigh-high leather boots. Not quite Julia Roberts in Pretty Woman, more Natalie Portman in Closer, but you get the idea.
It was going to be a chilly night and the party was mostly going to be outside so I added a purple fur jacket for good measure and stuffed my pockets with 20s and condoms just in case anyone questioned my credentials.
When we arrive at the house party, my friends and I make a beeline for the bar. As we weave through the crowds, I spot scantily clad women everywhere… a sexy Ninja Turtle, a sexy bunny, a sexy vampire, a sexy cop, and a sexy Harry Potter.
Ah, I love Halloween, I think to myself as I quickly realize that for adults, jello shots have replaced candy and masks have been replaced by skin. As I squeeze out the last remnants of the jiggly cherry gelatin in my mouth, a group of guys approaches us.
“Nice costume,” Luigi remarks. “What are you?”
“Guess”, I respond.
He looks over at his friend, Mario, who gives him a shoulder shrug, then looks back at me. “I’m not sure. Uh…Lady Gaga?”
“Nah, wrong wig,” I respond with a smile. “Try again,” I say as I run my hand slowly up my leg with a coy glance.
“An 80s go-go dancer?” He says.
I take out a few 20s, and a condom wrapper falls out of my pocket. He blushes.
At that moment, my friend, dressed as a sexy Where’s Waldo (yes, trust me, it’s possible), drags me off to meet the new guy she’s sleeping with.
“Found you!” he says to my friend as he squeezes her ass.
“This is my girlfriend, Pixie,” my friend says with a laugh. I’m not sure how I feel about being called Pixie, but hey, for one night I guess you can call me whatever you want.
“Nice to meet you, Pixie.”
“Can you tell what I am?” I ask again.
There’s a long pause. “I think I know, but I’m trying to come up with a proper way to say it.”
“Say it anyway,” I say.
“A lady of the night?”
“Yes!” I exclaim. “That’s a nice way to put it. I’m a hooker.”
The look on his face tells me he’s surprised by my enthusiasm to be a whore. At that point, his cute friend approaches. “This is Lindsay, and this is Pixie. Can you guess what she is?”
“Pink?”
It goes on like this all night. After a dozen or so conversations, not a single guy was comfortable calling me a hooker, an escort, a prostitute, or even a stripper. But no one had any problem telling the sexy nun that they’d like to take her at the altar. I realized that while we have all become comfortable with, or at least immune to, women dressing “slutty” for Halloween, we are still very uncomfortable with a woman owning her sexuality as a career choice.
A sexy nurse is just a cliche. A sexy prostitute is too close for comfort. But does a sex worker define a person any more than, say, a tech worker? The real reason I went as a prostitute was captured in a quote by poet and writer Fernando Pessoa: “To be understood is to prostitute oneself.”
Prevent 3 Nasty Habits by Wearing a Mask
Resist the urge to pick your nose.
Photo by Kobby Mendez on Unsplash
“Put your mask back on!” I’ve been telling my friend incessantly for the last few months. It’s not that he doesn’t understand the importance, and he’s not too proud to wear the mask, but it’s been giving him more acne than he’d like. I’m not sure if that’s actually the mask or the copious amounts of fried foods he eats, but hey, whatever explanation he’d like to provide for himself works for me.
Being one of the few people whose whereabouts I am completely aware of for the entirety of quarantine, I see this friend a lot. When we’re together we don’t wear masks, and I’ve started poking fun at some habits he has — that we all may sometimes have — that wearing a mask would prevent. The following is a fun little list of some of these habits and an upbeat, light-hearted way to convince your friends that wearing a mask — aside from all of the health benefits — is actually a really great thing.
1. It Prevents Nail Biting
My friend and I were sitting in my apartment watching football, and I knew he had some money on the games. When it came down to the last few plays and the scores were close, I could see him getting nervous. One of his nervous tics is nail biting, and he immediately went for his nails every time the team he didn’t bet on looked like they were going to win.
“Wouldn’t be biting your nails if you wore your mask!” I said to him with a smile on my face. Wearing a mask keeps you from putting your hands in your mouth. It may help a child (or my 26-year-old friend) stop biting their nails.
2. It Prevents Nose Picking
We all may go digging for gold every now and then, and my friend is no exception. I caught him with his finger near his nose during an outdoor workout, and I immediately yelled, “Wouldn’t be picking your nose if you wore a mask!”
He immediately slid his mask up and continued his workout. Because the mask should always cover your nose (that is kind of the whole purpose), it’s way more difficult to get caught with any fingers up there.
3. It Prevents Pimple Popping
My friend’s main reason for disliking his mask is that it’s causing him to get pimples. Yet, for some reason, whenever he gets pimples, he immediately touches, scratches, and pops them, which makes them even worse. I saw him popping a pimple during football that same day. Not only was it nasty as is, but it also started bleeding.
“Wouldn’t be bleeding if you wore your mask!” I said to him. He cleaned his face and put his shirt up over his nose.
The Low Down
Wearing a mask is important right now no matter which way you slice it. But if people don’t want to wear it to save lives, then maybe these fun reasons will help. If people don’t want to wear them because they’re too proud, then maybe these nasty habits the mask can prevent will make them realize wearing a mask can actually be a positive.
Does a List Define a Life?
A while ago, CR Mandler MAT, a lovely writer friend of mine, invited me to write a list.
I don’t think she knows how much I love lists. Usually, they are in a To-Do format, a document that details the endless aspirations, wishes, and tasks that I invent in the middle of the night.
I am frequently forced to triage my lists.
Otherwise, I may be overwhelmed to the point where I get nothing done at all. When I triage, I ask myself, “What is the one thing that absolutely has to get done today?”
This one is a challenge. A random list of personal traits, my life, trivial facts, and whatever lights my fancy is a bit of a different list for me to write.
It reminds me of a game we used to play in high school.
Truth or Dare.
If you’re inspired by this idea, you’re invited to join as well. Please tag me if you do because I would love to read about you.
So here goes.
Random, with no filter.
The Cult of Work You Never Meant to Join
Are our most valuable qualities being exploited at work? How our strengths get twisted into forming bad habits that — if we don’t change fast — just might kill us.
You didn’t mean to end up here. You didn’t even see it coming.
It all started with a chance to earn a living doing something you loved. Your dream job. Creating things instead of rotting in a cubicle. You weren’t just going to make a living — you were going to leave your mark on the world.
At first, you loved the work; it was challenging and fast-paced. Everyone around you was crazy smart.
You brainstormed in your off time. Took projects home with you. Put in extra hours on weekends. It never felt like overworking because it never felt like work.
You put in way more than 40 hours a week, but who was counting? This was fun.
But weeks passed into months and somehow you ended up here: Working 60 hours a week minimum, usually more. You greet your coworkers, bleary-eyed, half-joking about needing coffee to survive.
The work is still fun, but you don’t feel the same passion anymore. Whole days slip by sometimes and you have no idea what happened; you certainly don’t have much to show for it.
Your goals outside of work are on hold. You’d love to find out if the Belgians have anything to be cocky about waffle-wise, but you don’t have time for a big trip right now. You know you need to get into an exercise routine, but something always comes up and you skip the gym.
“Later,” you promise yourself, “I’ll get around to it soon.”
You’re not exactly unhappy, but something’s off. You can’t put your finger on it. You’ve just always felt that there would be . . . more.
You’re being force-fed an “ideal” work ethic that’s actually toxic for everyone involved.
You’ve Been Absorbed
You’re no longer a free member of society. You’ve been lured into the Overkill Cult.
The Overkill Cult is a cultural delusion that working 60+ hours each week — at the expense of everything else in our lives — is not only a necessary part of success, but that doing so is somehow honorable.
The insidious thing about the Overkill Cult is that it masquerades as all the things we like most about ourselves: dedication, ambition, follow-through, responsibility.
It tells us to push harder, stay later, sleep when we’re dead. It tells us we’re never going to get ahead if we don’t show up first and go home last.
Cleverly, wickedly, the Overkill Cult persuades us to hang ourselves with our own strengths.
And if we don’t break free, we’re all going to die.
The Overkill Cult Will Kill You (Like It Tried to Kill Me)
Balance is the first thing to go once the Overkill Cult has us in its grasp.
For me, it started with my health. I skipped the gym — too busy, I thought. I didn’t have time to cook — too busy — so I ordered delivery.
My hobbies went next. Everything that wasn’t work fell away — too busy, too busy — until I was on the computer constantly, working. In 2012, I was working 70–90 hours a week.
After that, I lost my social life. Friends knew I wouldn’t show up — can’t; too busy — so they stopped calling. Some days my only human interaction was ordering coffee.
Then — and this, sadly, is where I finally realized there was a problem — I lost my beard.
The Canary in the Coal Mine, or How I Killed My Beard
At the end of 2012, I landed the biggest project of my career at that point: a Black Friday sales site for a Fortune 100 company.
I was thrilled and terrified. A project like this had the potential to move my company to the next level, and I decided to do whatever it took to make this project the best I’d ever built.
The designers had great ideas, and I sat with them to make sure they were possible on our timeline. We came up with a slick, modern idea built on cutting-edge technology. The client loved it.
Then bureaucracy came into play. The legal department made changes. Brand adherence contradicted legal. Design went over schedule. Way over schedule.
By the time the design was approved, I had a third of the time we’d scheduled. And — since this was a Black Friday site — we couldn’t push back the release date. It either launched on time or I was a failure. Period.
Not to be defeated, I powered through four straight days leading up to Black Friday, sleeping maybe six hours total. On Thanksgiving Day I skipped family get-togethers in favor of making the final push.
I was exhausted. Delirious. But, goddammit, I finished the project.
The client was thrilled. The site won a couple Addy Awards. I assume they made a metric fuckton in holiday sales.
May, 2013 — about six months after my Black Friday fiasco.
Over the next few months, patches of my beard started to turn white. The whiskers became ultra-fine. Then they fell out altogether.
Shortly afterward, I lost my ability to grow a beard entirely — I was left with the unsavory choice between a clean-shaven “giant fat baby” look and a creepy mustache.
I had stressed myself out so badly that my body had forgotten how to grow a beard. And for what? So I could work 19-hour days and skip family holidays to meet crazy deadlines?
I was exhausted. My body was failing. I was overwhelmed and unhappy and isolated. I had a mustache, for chrissakes.
I had been guzzling the Overkill Cult’s Kool-Aid.
Something had to change.
How to Tell If You’re in a Cult
The telltale signs we’ve fallen prey to the Overkill Cult’s influence are subtle:
Frequently working more than 40 hours a week
Frequently sleeping less than 6 hours a night
Feeling guilty about any time away from work — even if that time is with family and friends
We don’t join overnight — this is death by a thousand cuts — and once we’ve joined, we’ll probably deny it.
But we’ve joined. By the thousands, we’ve joined.
The Lies of the Overkill Cult
The Overkill Cult’s siren song seems like a healthy sense of ambition. “We have to work hard to get ahead.” It’s something we’ve been told our entire lives.
We’re doing what we think is best for the future.
But the Overkill Cult doesn’t plan for survivors.
Though the symptoms of the Overkill Cult grow from good intentions, they’re short-sighted habits that ultimately do more harm than good.
Let’s look at each of the Overkill Cult’s telltale signs, and how each of them is a long-term detriment disguised as a healthy work ethic.
Frequently Working More than 40 Hours a Week
Long hours often feel mandatory — it’s just part of the culture. We think, “My boss/coworkers/cat will judge me if I’m not working the same long hours as everyone else. I’ll never get ahead if I don’t go above and beyond.”
This is just what it takes to make it, right?
Wrong. Incredibly, terribly, spectacularly wrong.
Research has proven over and over again that it’s not possible to be productive for more than 40 hours a week. At least not for sustained periods of time. Henry Ford standardized the 40-hour work week at his company in 1926 because he saw — through research — that workers on five eight-hour shifts kept up the highest sustained levels of productivity.
Despite over 100 years of research supporting shorter work weeks, many companies still push for long hours, under the claims of a “sprint” or “crunch time” period.
This diagram is loosely based on one included in Sidney J. Chapman’s Hours of Labour.
The irony comes in when we look at productivity over time. After just two months of 60-hour weeks, total output falls below what a 40-hour week would have produced.
Did you catch that?
By working 150% of the hours, you accomplish less in the long run.
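To make that concrete, here is a minimal toy model of the dynamic in that diagram, written in Python. The exact decay rate is an illustrative assumption on my part (real fatigue curves vary), not measured data, but it shows how a 60-hour crunch pulls ahead early and still loses on total output within roughly two months:

```python
# Toy model of the Chapman-style curve: hourly productivity erodes as
# fatigue accumulates, so a 60-hour crunch eventually produces less
# total work than a steady 40-hour week. All rates here are assumptions.

def weekly_output(hours: float, fatigue: float) -> float:
    """One week's output: hours worked times remaining productivity."""
    return hours * max(0.0, 1.0 - fatigue)

def cumulative_output(hours: float, weeks: int, fatigue_per_week: float) -> float:
    """Total output over `weeks`, with fatigue building week over week."""
    total, fatigue = 0.0, 0.0
    for _ in range(weeks):
        total += weekly_output(hours, fatigue)
        fatigue += fatigue_per_week
    return total

# Assumed: 40-hour weeks are sustainable (no fatigue build-up), while
# 60-hour weeks cost 10% of hourly productivity per week.
for weeks in (2, 4, 8):
    steady = cumulative_output(40, weeks, fatigue_per_week=0.0)
    crunch = cumulative_output(60, weeks, fatigue_per_week=0.10)
    print(f"{weeks} weeks: steady 40h = {steady:.0f} units, crunch 60h = {crunch:.0f} units")
# Output: crunch leads at 2 weeks (114 vs 80) and 4 weeks (204 vs 160),
# but by 8 weeks it has fallen behind (312 vs 320).
```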
Frequently Sleeping Less than 6 Hours a Night
Somehow, sleeplessness has become a strange badge of honor. We swap “war stories” of sleeping two hours a night with an odd, martyred pride shining dimly in our bloodshot eyes.
I never sleep because sleep is the cousin of death, we murmur drowsily. So many projects, so little time.
But this belief that burning the midnight oil somehow gets us ahead is utterly, tragically wrong.
You’re the cognitive equivalent of a drunk driver after being awake for 18 hours. But the problem compounds: if you don’t get enough sleep, that level of impairment comes faster the next day. After a few days of too little sleep, you’re a drunken zombie.
We wouldn’t go to work drunk, so why the hell do we go to work on four hours’ sleep, when we’re more impaired than if we were hammered?
To make matters worse, sleeping less than six hours a night may lead to an early death. The Overkill Cult is literally killing you.
Feeling Guilty About Any Time Away from Work — Even Time with Family and Friends
When we’re in the clutches of the Overkill Cult, we feel a stab of guilt when we’re not working.
“I’d love to go to this holiday party, but I really shouldn’t; this project won’t finish itself.”
We fear that any time not spent working is wasted.
The irony is — yet again — science tells us exactly the opposite is true.
Overworking leads to higher stress levels and burnout, which have been linked to increased health risks.
Conversely, time away from work is proven to relieve stress and boost creativity, among numerous other benefits.
Besides, if we accept that the ideal is to sleep 8 hours a night and work 8 hours a day, that leaves us with 8 hours for non-work activities.
Taking time away from work gives us time to recharge. It puts distance between us and our projects, giving us time to remember why we like doing what we do.
Making Our Escape
We may have been duped into joining the Overkill Cult, but it’s not too late to escape.
We’ve been conned using our own best qualities to develop habits that — even though it seems like they’d make us better — make us worse at our jobs, less satisfied with our work, and less happy in our day-to-day lives.
Leveraging the same strengths the Overkill Cult exploits, we can break free of its clutches and take back our happiness and passion.
After my beard died, I felt the full weight of burnout. I was burnt to a fucking crisp. I realized I could either leave my career altogether, or make some fundamental changes to my lifestyle.
For what it’s worth, here are the promises I made to myself that helped me break away from the Overkill Cult.
I Work as Much as I Can — But Not More
Before anything else, I had to accept that it’s only possible to do 6–8 hours of quality work each day.
Trying to work longer hours will not make me more productive. In fact, working longer hours actually results in me getting less done as time drags on.
I chose to embrace that fact and implemented some radical (to me) strategies for controlling my time. I cut from an average of 70–90 hours a week in 2013 to an average of 38 hours per week over the last year.
I expected to see less professional success in favor of better overall balance in my life — a sacrifice I was willing to make. Instead, I saw better productivity at work: my turnaround times went down and I was more consistently hitting my deadlines.
I was floored at the time, but in retrospect I’m not surprised at all.
I Make Sleep a Top Priority
Getting enough sleep is beneficial on every level. Yet it was always the first thing I’d sacrifice when life got busy.
Too little sleep wreaks havoc on my ability to think clearly, and that hurts me at work in a big, bad way.
After I cut my hours down, I started sleeping without an alarm. Since I’m not working crazy hours, I close my computer by six or seven in the evening, and by eleven I’m usually in bed, where I read for a bit before falling asleep. I wake up naturally between seven and eight-thirty.
This has changed my life. No bullshit.
Waking up to an alarm before I’m fully rested starts the day in a groggy, stressful way. Waking up naturally after getting as much sleep as my body needs leaves me much happier to be awake, and far more ready to start my day.
I Dedicate a Reasonable Amount of Time to NOT Working
This was — and still is — the biggest challenge I faced in breaking away from the Overkill Cult. I love what I do, and I want to get my projects finished. It’s easy to rationalize working more hours and skipping activities that keep me from working.
But now I know that taking breaks makes me more productive: time away from work lets my passion and excitement for the work renew itself; taking my mind off of a project allows my subconscious to roll around abstract ideas that result in better solutions; breaks from the job lower my stress levels and boost my creativity.
So I make sure to take time off, even if my gut (incorrectly) tells me it’s a bad idea.
I take walks. I leave my phone in my pocket when I’m out with friends or eating my meals. I spend a fair amount of time on my hobbies, like writing and hunting for the world’s best cheeseburger.
I’m happier today than I can ever remember being in my life. I feel excited to work on my projects, to pursue my hobbies, and to spend time with people I love.
I’m excited to be alive.
Leaving the Overkill Cult Saved My Life
When my beard died in 2013, I feared it was only the first sign of an impending decline in my health that would ultimately kill me. It was a glimpse into my future, and I was terrified that if I didn’t change, I was in for a life of isolation, ulcers, alopecia, and an eventual heart attack or stress-induced brain tumor.
By changing my lifestyle, I was able to turn things around. After just a year of balancing my work with the rest of my life, my beard grew back. I lost 30 pounds because I was actually going outside and making it to the gym. I felt more awake, and I became more positive.
When I left the Overkill Cult, everything in my life improved. Not one single thing got worse.
Lessons from Brands Setting Up and Driving Adoption of Design Systems — Part 4
The people and politics aspects of design systems are hard. How do you get buy-in from your boss and team to spend time working on the design system project? How do you ensure people will use the design system? Without widespread adoption and use, design systems are meaningless. This is especially challenging at scale in very large and complex organizations, where design and development functions are spread out among many teams and locations.
It can also be a very intimidating and daunting task to get started on a design system. You might even experience ‘design system imposter syndrome,’ feeling like your project isn’t a ‘real’ design system because it doesn’t include code yet, or because you don’t have a dedicated design systems team.
There are many ways to start design systems efforts, and no one right or wrong way to go about it. We talked to people about their experiences getting design systems off the ground in government, a public sector corporation, and a large technology company. Here are the approaches they took, and the lessons we can learn from their experiences.
Lesson 1: Context is king
When you’re getting started with your design system, consider the context you are operating in. Is it highly bureaucratic and hierarchical? Or more consensus based? Who holds the decision making power, and do you need to get them on board straight away? Who are the advocates that can unlock support for the design systems initiative?
When Edo Plantinga started dreaming of a Netherlands Government Design System, he knew a grassroots, bottom-up approach would be crucial. There were several design systems across government, but the individual government organizations were not consistently applying them. “We actually tried to get a design system initiative off the ground a few years ago, but I think we were a bit too early at the time, and it didn’t really stick.”
Plantinga has been working with the Dutch government for several years, and has a strong network of UX and IT connections. Someone in his network pitched the idea of a government-wide design system. The Dutch government already had a style guide, but it wasn’t working consistently in practice. “We started with meetups, inviting people from all over government. Last year we did two informal sessions on design systems, with about 60 people showing up to each one.”
An interface inventory showing the wide variety of buttons across Dutch government websites. Image by Victor Zuydweg.
At these sessions, people worked through structured processes to identify which features would be most important, and what challenges people wanted to solve with the design system. “Meetups can be chaotic, so we took all the inputs from the meetups, and put everything in one central place, weighed each feature, and used that as analytics. We also took a look at the existing design systems to see what we could recycle.”
Another thing the team did to drive buy-in was to get commitments regarding adoption of the design system from various relevant stakeholders. Collecting quotes from senior leaders showing their support of the initiative helped to build social proof that people had bought into the design system.
Infographic by Andreea Mica.
This open, bottom-up, consensus building process has been time consuming, but it has also ensured that everyone is on board with the design system initiative. Even better, the team now has public-facing, written commitments stating that employees are going to use the system when it is available. Plantinga reflects that “The Netherlands is not a very hierarchical country, and so things cannot be done top-down. The consensus model is very Dutch, and considering this cultural context has been crucial to the success of this work.”
Lesson 2: Start where you are
Don’t worry too much about what makes a ‘real design system.’ If you don’t have a big team to work with or all of the resources to include development and code in your design system from the get go, you can still start with things like research and audits, UI kits, and creating documentation. What can you start with today? Who might your allies be? What is going to be most useful for the team? What do you have control over?
There are a lot of arguments and rhetoric about how a design system is not really a design system unless it contains coded components and code repositories. Be that as it may, it can be very deflating for teams who are struggling to get the resources and support to build coded components.
Designers at Canada Post, Canada’s mail carrier, didn’t let a lack of engineering and development resources at the time stop them from pushing their design system forward. Daria Shepelenko was part of the team at the outset. “As a design team we were facing huge demand. We wanted to systematize commonly used patterns in order to support teams to deliver their work more efficiently. When we started doing a comparison across our products, we realized that we lacked the desired level of consistency and cohesion among some of our public and internal-facing products.”
It was clear to product leads and stakeholders that the team faced challenges in securing resources and team time to dedicate to such a large project as a design system. So they did something really smart: they started where they were. “We started by unifying things for designers, since that’s what we had direct control over. We created a robust design file where we outlined all of the common design elements that designers could start using within their projects.”
The Canada Post design kit includes templates and examples of the components that are used in each template. For example, this landing page template includes top navigation, a marketing banner, and a blog component among others. Image by Canada Post.
It wasn’t long before this design kit started gaining momentum. “A designer from one team started using it, and then another one started using it. It was a snowball effect.” The benefits of the design kit gained notice too: “It helped us to bring consistency across our products, and it helped us to ship things faster. This got interest from product owners, which then helped us to build traction with senior leadership.” Shepelenko also shared stories of the design kit getting noticed by stakeholders in design sprints, as curiosity was piqued as to how the team was able to build prototypes so quickly.
From there, the team got funding and resources to build a digital resource where the design system would live. While it started out as a design- and content-based system, building out development and coded components was next on the roadmap.
The Canada Post Mercury design system now has a public facing website which acts as a centralized resource for the system. Development and coded components are on the roadmap for the system. Image by Canada Post.
Shepelenko has since moved on from her role at Canada Post and is now a UX manager at Shopify. While she acknowledges the importance of code repositories as a crucial aspect of design systems, she also has taken a really vital lesson from her time working on the Canada Post design system: “Even if you can’t have development from the start, there are still a lot of things that you can get started with as just a design team. You can still kick start the process of documentation, you can start building reusable design or UI kits to share with product designers, you can do research like UI audits and have conversations with people about the challenges they are facing.”
Lesson 3: Use what’s working
Be intentional about the starting point for the design patterns that you document. Keep in mind the tradeoffs of starting from a very specific product or user experience. Weigh the need for speed with the variety of use cases you want to cover from the get go. Be open to the need for your design system to evolve in the future. Ask yourself what products or applications are working really well? What can we abstract from? Where are there known successful patterns in our UX?
Shopify mobile, web and point of sale pre-Polaris. Image by Kyle Peatt.
The Shopify UX team had 10 weeks to build the first version of their Polaris design system from start to finish. This was an aggressive goal, as the company set out to document and align Shopify user experiences across all of their products. Moreover, the team was focused on ensuring that their multidisciplinary approach to building products shone through the design system. This meant including information on the function of each component, and the UX rationale behind them.
In order to undertake this mammoth task, the team started with the admin panel. The admin panel is the interface that merchants use to run their business on Shopify. As Yesenia Perez Cruz puts it, “There are lots of ways to get started with a design system. One of the ways is to take a product that is working well, and abstract that into a design system. So we took what was working really well about the core Shopify admin and went from there.”
Shopify Polaris is a robust and often-cited example of a design system. Now on version 4, the first version of Polaris started with the core Shopify admin.
The advantage of this approach was that it enabled the team to create an incredibly detailed and well-documented design system in a very short timeframe. When Polaris launched in May 2017, it immediately made lots of Shopify development partners’ lives much easier.
Shopify mobile, web and point of sale post-Polaris, showing a more unified and consistent look and feel. Image by Kyle Peatt.
However, the downside to this approach was that the design system was very specific to the user experience of the admin panel which it had been based on. “Other teams realized they could benefit from using Polaris, but their audience, context, and user flow might be different to the ‘back office’ experience of the admin panel,” said Yesenia. “For example, if I’m in a retail space under bright lighting the color palette doesn’t work, or I might need bigger buttons in order to be able to maintain eye contact with my customers.”
In order to evolve Polaris to match the evolving use cases and needs, the team has taken a meta approach. “What we’re going to be evolving Polaris into is a system of systems,” shared Perez Cruz. “So this means we will have an umbrella design language for Shopify. It will include experience principles, design language, and foundational components such as grids and layouts.”
This will allow teams to build their own tailored satellite systems, for example Polaris for retail or Polaris for marketing. “The goal is to allow teams to deviate meaningfully if they need something new or different than what is provided in the overarching Polaris system.” From starting with what’s working, the team has grown and continues to evolve a highly sophisticated and successful design system.
The first step is the hardest
As we can see from these stories, each design system has a unique origin story. There really is no right or wrong way to get started. Many successful, enduring, and mature design systems started somewhere, and you can come at a design systems project from many different angles.
The hardest part can be knowing where to start, or feeling that there are certain requirements that make something a ‘real’ design system. In this case, it’s useful to consider what is actually within your control and start there, and if it’s a UI library, so be it. It’s also worthwhile to consider your organizational culture and decision making structures as you figure out your first steps. Finally, you can start with documenting the UX and patterns that you already know are working to get the design system started.
In the end, necessity is often the mother of invention when it comes to design systems!
Continue to part five: Managing, Maintaining & Governing Design Systems for Success.
Thank you to all of the people who participated in research, interviews and surveys, and shared their deep knowledge and expertise. For more on design systems, download the Adobe and Idean e-book ‘Hack the Design System.’
Let’s Usher in 2021 With a Twerk
Okay, I’m putting you on. I don’t really mean “dirty,” as that would be untoward and we all know I don’t go there, right?
With that said, I couldn’t help but think, when I was letting my fractured brain do its own thing the other day, how much fun it would be to throw a virtual dance fest with our Medium buddies.
Maybe it’s the quarantine and the fact that I love to dance but rarely do it, except for when my hubby vacates the premises, or maybe I’m just in a riled-up mood and need to let off some steam, but think about it. Imagine cutting loose with the likes of Helen Cassidy Page, James Knight, Suzanne V. Tanner, Rasheed Hooda, Sterling Page, Kristi Keller, Robin Klammer, Stephen Sovie, P.G. Barnett, Terry L. Cooper, Gayle Kurtzer-Meyers, Don Feazelle, Kim McKinney, Jezebel Feast, Estacious(Charles White), Dawn Bevier, Tina L. Smith, Greg Prince, or any number of Medium “influencers.” You all know who you are.
We might have to actually dress, you know, put on some decent clothes, or whatever still fits, anyway. But that’s a measly price to pay for getting down with our bad selves with some of the baddest asses on Medium.
Imagine the nostalgia of combing the knots out of our hair, shaving (you guys), slapping on a bit of blush, or a swipe of lipstick (we women). And no masks necessary!
Also, everyone would be invited, even the half-assed tipsters, who probably wouldn’t attend anyway. Too busy conjuring up more useless advice for how we losers can “make it” on Medium. Or dreaming up yet another workshop. Still, they’d be welcome. Or not.
Naturally, this would have to be a BYOB affair. Definitely wine for me as I prefer to remain upright when I dance, yet, I wouldn’t turn down the occasional shot with like-minded imbibers.
Snacks? Totally your call but keep in mind what happens when we drink on an empty stomach. I would hate for Helen to see me purging into my fake ficus.
As for music, I propose that we nominate one lucky person to be our DJ. Of course, your input is not only welcome but needed here. Maybe we all submit ten of our favorite ass-shaking tunes to whoever is going to do the spinning and let him or her work it out. Unless we turn this into a rage and keep it going for days. But just know that whoever falls out first gets “unfollowed.”
Kidding. I’m kidding, here.
I’m just feeling so, boxed-up. Aren’t you? Along with mentally constipated and physically…soft. Think of the calories we could burn off from countless bags of Cheetos and loaves of sourdough bread.
Having a “plus one” is up to you, but for me, I can’t imagine having my mate watching CNN upstairs while I gyrate in front of my iMac for hours. We could just respectfully ask our partners to “get the fuck out, for a while.” But again, that’s your call.
Now, here’s where it gets tough: The technical stuff. How many dancers would we be able to “fit in?” Do some of us take breaks? Drop-in and out? We need to figure out the logistics before we get down.
Also, what’s the vehicle? Do we Zoom? Skype? Or what? Would any of you want to take this essential element on? Perhaps Ev Williams could lend a hand to his loyal Medium minions.
“Ev, what do you think, buddy? Most of us aren’t making squat here so can you help throw us a party?”
I’m getting excited just thinking about a Medium dance party. I’m sure many of you have some pretty impressive moves. And I for one want to see them. Yes, I’m a bit of a perv, but it’s all good. I just want to get sweaty with my friends.
So what do you think? Can we do this?
By the way, twerk and/or floss at your own risk. If you throw your back or hip out, I can’t be held responsible. | https://medium.com/rogues-gallery/lets-usher-in-2021-with-a-twerk-9d2e1a3b3528 | ['Sherry Mcguinn'] | 2020-12-29 22:43:24.571000+00:00 | ['Humor', 'New Year Party', 'Dance', 'Writers On Medium', 'Music'] |
Why the Sixers Need Joel Embiid to Keep Shooting Threes | JOEL EMBIID SWUNG his arm into the air, yelling in triumph towards the TD Garden crowd.
Moments earlier, he had hit a decisive three-pointer to put the Sixers up 100–92 with just over 4 minutes to play in the fourth quarter. After setting a screen for Tobias Harris, he stepped into his shot confidently, despite the audible yell of a fan exclaiming, “Rebound, rebound, rebound!” as soon as he caught the ball. Embiid held his follow-through high, the ball softly splashed through the net, and he celebrated, knowing he had silenced his critics.
This game was a response to the intense criticism Embiid had faced earlier. With a line of 38/13/6 (and 2 three-pointers too), he fully looked like the dominant interior force that O’Neal had been urging him to be.
But watch this game again (Celtics vs. Sixers, December 12th), and it will quickly become apparent that Embiid is not the same player as O’Neal, Barkley, and other forces of nature who dominated at times with sheer size.
No, Embiid is playing in a different era, where his unique combination of footwork, grace, and technique paired with his size makes him unstoppable — and not just one of those factors exclusively.
For as many times as Embiid ran straight to the block for a post-up or dominated on the offensive glass, he hit face-up jump-shots, set strong screens, ran the floor in transition, or even hit threes.
He is a unique offensive specimen, one who shouldn’t be exclusively deployed in the post. This isn’t the NBA of old, and there are so many defenses, rules, and reasons why the isolation post-up isn’t the best offensive system — it should only be an offensive option.
He’s [Joel Embiid] the toughest player in the league to match up with, but we don’t talk about him the way we talk about Luka, Giannis, Anthony Davis, James — we don’t ever say that about him. -Charles Barkley
And with every game that passes, Embiid continues to show why he is a unique offensive force and the NBA’s best center.
On a nationally-televised Christmas Day game vs. the Bucks, matched up against Giannis Antetokounmpo, whom O’Neal previously compared himself to due to Antetokounmpo’s relentless attacking of the paint, Embiid dominated. Though the Bucks had the best record in the NBA coming into the game, the Sixers fully looked like the more dominant team — spearheaded by Embiid.
Being guarded by the stout Brook Lopez, and with Antetokounmpo and a variety of long-limbed defenders nearby, Embiid didn’t attack through straight isolation post-ups, however.
Instead, he ran the floor in transition, took defenders off the dribble, hit face-up jump-shots, set screens, and took 6 threes, making 3 of his attempts. He did all of this damage on offense while also guarding Antetokounmpo, teaming up with Al Horford to hold him to 18 points on a putrid 29.6% field goal percentage.
Said color commentator Doris Burke of ESPN, who was at the game, “You can deploy him [Embiid] all over the floor,” adding, “this guy is a monster talent.”
And indeed, at the end of the game, Embiid finished with an O’Neal-esque 31 points and 11 rebounds, while barely playing in the fourth quarter because the Sixers led by as much as 29.
Antetokounmpo, meanwhile, took 7 threes while being dared to shoot by the 76ers, and he missed all of his attempts.
Embiid responded to O’Neal and Barkley’s criticism in person, saying, “I like being criticized. For them to say I have the potential to be the best player in the world and I haven’t shown that. They’ve been there, they’ve done it, they’re Hall of Famers so it just shows me that I’ve got to play harder and I’ve got to be dominant like I can.”
He concluded, “I think it was great for me.”
Will the growth of Embiid as a shooter outpace that of Giannis Antetokounmpo?
FOR AS INEFFECTIVE as he has seemed when shooting threes, Embiid is a very effective shooter from a broader perspective.
As a rookie, he was an above-average three-point shooter at 36.7%, though this was on the lowest volume of his career (3.2 attempts per game). His shooting influenced how teams guarded him; many centers defended him on the perimeter, as his shooting appeared to be on the level of Karl-Anthony Towns — though, with hindsight, his shooting ability isn’t even close to Towns.
But what is the value, the reason behind the shooting craze in the NBA? Sure, some players are terrific shooters, but the reason teams encourage average shooters to station themselves behind the arc is simple: it unlocks creativity and space on offense.
For Embiid, three-point shooting has unlocked his most effective, yet simple move: the pump fake and the drive.
Almost every center seems to fall for Embiid’s pump fake — a slow, over-extended raising of the ball that exaggerates his shooting motion. And after faking out his defender, Embiid can show off his terrific speed as he forays to the rim. With a full head of steam, defenders can only foul him, hoping he doesn’t convert his shot. This phenomenon has led to him being the most prolific foul-drawing player outside of James Harden since entering the NBA.
Even if defenders don’t bite for Embiid’s fake (teams are adopting a “no-jumping policy” on his pump fakes), he can still waltz into mid-range jump-shots, where he shoots an above-average 40% for his career.
Still, the Sixers see even more value in Embiid shooting threes besides adding to his offensive arsenal. The Sixers have invested in Ben Simmons, who essentially refuses to shoot threes, and Al Horford, a lower-volume three-point shooter. All of this clogs the lane for Embiid, who has to find ways to exist on offense when Simmons, Horford, or even Tobias Harris need room in the paint.
The simplest solution? Have Embiid spot-up behind the three-point line.
Also, in the era of load management, three-point shooting is a way for Embiid to alleviate the stress on his body — while still playing in games. It is so important that Embiid, who is fragile at worst and injury-prone at best, develops a three-point shot because doing so will mean he doesn’t have to put his body on the line until the NBA Playoffs, when every game matters.
This season, his shooting numbers have improved: 33.0% from three and 83.8% from the free-throw line, both up from last year on slightly decreased volume.
Even if the Sixers wanted to post-up Embiid 20–25 times a game like O’Neal suggests (he is still by far the leader in post-ups this season), they couldn’t do so for 2 reasons that feed into each other.
First, doing so would alienate the other 3 stars on the team, whom the Sixers have invested over $447 million in over the next 5 years. The Sixers have built a core of athletic, two-way stars, but they sacrificed three-point shooting as a result.
Second, teams run a zone defense against the Sixers more than against any other team in the league. And with the Sixers’ iffy outside shooting, they are justified in using this strategy. Zone defense has seen a revival in today’s NBA — and it is something that O’Neal and Barkley didn’t have to deal with.
Without the ability to shoot threes, Embiid will never be able to fully take advantage of zone defenses, as zone defenses eliminate post-up chances.
Back in O’Neal and Barkley’s heyday, there was an “illegal defense” rule that banned zone, and the three-second violation on offense wasn’t initiated until O’Neal became too overwhelming. Compare that to today, where rules favor playing a perimeter-oriented style.
If one of Embiid’s former teammates, like J.J. Redick or Marco Belinelli, gets the slightest amount of contact behind the arc, they are going to the line for three free-throws. If Embiid is grabbed, mangled, and pushed in the lane, there is only a 50/50 chance he will get the call.
How is Embiid supposed to post-up when surrounded by three long-armed, athletic defenders in a zone? That is a question that Barkley and O’Neal, even when at the peak of their abilities, would not be able to answer. | https://medium.com/basketball-university/why-the-sixers-want-joel-embiid-to-keep-shooting-threes-4ad29909cdce | ['Spencer Young'] | 2020-02-22 01:37:43.211000+00:00 | ['Sports', 'Basketball', 'Philadelphia 76ers', 'NBA', 'Future'] |
Exploring Soviet Isotypes: Digitizing “The Struggle for Five Years in Four” | But first some backstory and context
There’s not a lot of information about the IZOSTAT. What I know comes from the definitive article, “Picturing Soviet Progress: Izostat 1931–4,” written by Emma Minns of the Isotype archives at the University of Reading, which explains much of the background. It is the story of a creative entrepreneur under the influence of opportunity: Otto Neurath was entranced by expansive budgets, vast Soviet resources, open political agreements, and a personal link with the famous Russian artist and designer, El Lissitzky.
The concept for the IZOSTAT began simply as a Russian counterpart to Neurath’s Gesellschafts- und Wirtschaftsmuseum (the Social and Economic Museum) in Vienna. Building on a friendly relationship between Austria and the USSR as well as the international acclaim of the Vienna Method of Pictorial Statistics, Otto Neurath, Gerd Arntz, and Marie Reidemeister (who later married Otto) traveled to Moscow in 1931 to establish an organization for pictorial statistics. Otto was initially named the director of the institution, and his staff was to remain on hand to advise, despite being under the jurisdiction of the Soviet government.
According to a German/Russian newspaper, the institute was created to “devise pictorial statistics for newspapers, schools, business operations, and many other purposes. Special games, teaching aids and other tools of enlightenment will be developed. Exemplary museums and touring exhibitions are planned according to the Vienna Method and the construction of a large museum in Moscow is already being considered, along with the establishment of an Institute building with all the necessary test facilities.” It’s easy to see how the ambitious Neurath would see this as a great opportunity to spread his methods and ideas on an epic scale.
Original format of the document as individual cards
Credits from the inside front of The Struggle for Five Years In Four
Izostat mural on a Moscow apartment building from the magazine Prozhektor, 1934
Only a few months later, “The Struggle for Five Years In Four” was published as a folder of 64 statistical charts produced by the Izostat Institute and the State Publishing House of Fine Arts. It was a project designed to celebrate Stalin’s first Five Year Plan, which was accomplished in only four years — hence the title. The data was provided by Soviet officials, so it’s presumed to be more of a propaganda project than scientifically accurate.
This project was not credited to Neurath’s international team, but to Ivan Petrovich Ivanitskii, who was a practitioner of pictorial statistics before Neurath’s team arrived in Moscow. After admitting the Vienna method was superior to his own “film strip” method (which can be seen in charts 6, 14–16), he then learned Transformation from Marie and became her main “scientific collaborator.” Nevertheless, because this project was rushed to production it is a mix of the two approaches.
Ivanitskii was clearly cut from the same cloth as Otto Neurath, and was a passionate champion of pictorial statistics in the everyday lives of Russians. He went on to be credited for most of the IZOSTAT works through 1940 and his passion led to other projects outside of print media, including this IZOSTAT mural on a Moscow Apartment building and large posters in shop windows.
“The Struggle for Five Years in Four” also radically departs from the Vienna Method in a few substantial ways, mainly in its illustrative qualities. In the Russian version, Ivanitskii explains in a preface that the guide-pictures are to help the viewer understand the subject matter better, but it is clear that the illustrations also editorialize the overall objectives of the Five Year Plan. Factories, tractors, and microscopes all help to sell the presentation rather than embody or enhance the nuance of the data.
This is precisely what interests me about this work. These charts are attractive as well as persuasive. The stories they present are immediate, like most Isotypes, but they are also more entertaining. We can see Stalin’s Soviet realism encroach on the modernist aesthetics to amplify their message, even if it comes at the cost of statistical clarity. The imagery of the ‘story’ becomes the primary focus, not the data, not the design — but what it all represents to the Soviet people.
“The symbols used … suggest what they are meant to represent”
While this might strike some as an ethical compromise, Ivanitskii focuses on the representation of the statistics rather than the graphic formatting. The use of abstract icons would have been as unusual in Soviet Russia as they would have been in many parts of the world. A year earlier, Ivanitskii explained his design “not in the form of columns and tables of dry and boring numbers, but in the form of images or pictograms capable of exciting the interest of every worker in the Soviet Union.” The icons in these charts have personality, the charts have backgrounds that project ideals more than data.
To me, “The Struggle for Five Years in Four” stands as another path for what the Isotype could have become. Decades later, this work remains vibrant, interesting, and exciting! We see a hybrid of the Vienna Method with Russian flourishes that creates perhaps more personality than many of the Neurath’s own charts.
Ivanitskii says it best in his forward: | https://medium.com/nightingale/exploring-soviet-isotypes-digitizing-the-struggle-for-five-years-in-four-50df7417a766 | ['Jason Forrest'] | 2020-04-10 14:01:01.145000+00:00 | ['Data Visualization', 'Infographics', 'History', 'Books', 'Dvhistory'] |
When There’s Too Much of A Good Thing | When There’s Too Much of A Good Thing
You know this thing has more legs… just one more day, one more drink, one more bet, keep your foot on the gas…
It’s interesting. The old saying “too much of a good thing” is prominent in my mind at the moment. From the extra condition that sits around my waistline, to the over-enthusiastic societal tendency towards political correctness, and the cultural upheaval in the US, collapse from the peak of excess seems to be an unavoidable condition of the human experience.
The devil on your shoulder says, just squeeze a little more… you know this thing has more legs… just one more day, keep your foot on the gas…
Then, bang!
It’s too late.
You should have got out sooner. “Why didn’t I listen to my gut?” you say.
More recently though, the accuracy of this idea came to me from my seven year-old daughter.
I was busy painting the landing ceiling late last night. Up a ladder staring squinty-eyed through my hand to try block the light from the recessed downlighter burning a hole in my retina.
Cara watched me as she got ready for bed and said;
It’s funny, isn’t it Dad? When you don’t have enough light you can’t see, and when you have too much light you can’t see either.
Profundity!
I was taken aback by the level of insight I just received from this little blonde girl. Kids are not supposed to know this stuff, but the truth is they do, and sometimes they bowl us over with it.
Nobody Lost Taking A Profit
“Too much of a good thing” recognises that the coin has two sides, that the stick has two ends, and that the other side of both positive and negative experience exists.
The understanding of this idea makes us aware that we’re applying too much gas, that we’re flogging the horse too hard, like the folks that got caught when the markets collapsed in 2008.
A client of mine who managed to make a call before the bust comes to mind. It was 2006 or so, when he told me how he sold all his properties a year or two earlier. He decided to take his profit; “I earned enough” he said, “time for someone else to make a buck”.
He landed on the right side of that coin toss.
Losses aside, this idea also gives us the comfort that things will get better, but usually in their own time. Forcing them to improve tends to work against us.
Best to feel our way in and out. Rationality rarely works. | https://medium.com/the-reflectionist/when-theres-too-much-of-a-good-thing-11b9b4d1b813 | ['Larry G. Maguire'] | 2020-09-01 23:17:00.590000+00:00 | ['Self', 'Life Lessons', 'Success', 'Culture', 'Society'] |
François Goube of OnCrawl Talks Data Science, SEO, & Other Non-Intuitive Marketing Strategies | With data science, marketing becomes more reliable and decisions can be made based on solid predictions.
As a part of my Marketing Strategy Series, I’m talking with my fellow marketing pros at the top of their game to give entrepreneurs and marketers an inside look at proven strategies you might also be able to leverage to grow your business. Today I had the pleasure of talking with François Goube.
François is the Founder and CEO at OnCrawl, an award-winning SEO platform. A serial entrepreneur, he has founded several companies and is actively involved in the startup ecosystem. He loves to analyze scientific Google publications and is a real enthusiast of semantic analysis and search engines.
Thank you so much for doing this! Can you tell us a story about what brought you to this specific career path?
Thank you for the opportunity. It’s a real pleasure to contribute to Authority Magazine! When I try to connect the dots, I think about two things.
First, as far as I can remember, there were computers in my home. So when I had the choice between working for a Fortune 500 company or joining a bunch of geeks developing things, the call for freedom and creativity was too strong and my appetite for technology too great, so I started my career in an IT company.
Next, my own parents were freelancers so I didn’t grow up with the idea of a “boss” in mind. Starting my own business was something very natural.
I have always wanted to build things on top of complex technology to ease the daily routine of people. This was one of the main goals when I started my first company, which was a job search engine now available in more than 15 countries.
Creating a search engine propelled me into the world of SEO. Naturally, I decided to create a second company, Propulseo, an SEO consulting agency. Since then, I have never left this industry.
In 2013, I decided to partner with one of my friends, Tanguy Moal, to create what was supposed to be a one-time data project. We developed a crawler technology specifically for the needs of the largest French e-commerce website, called Cdiscount. When we got the first results out of our own platform, we realized that we wanted to make these advanced search technologies accessible to the greatest number of SEO professionals. A few months later, OnCrawl was officially born! Again, when we built Oncrawl, we wanted to democratize Big Data technologies to all web players.
What industry are you in and what makes the company you are marketing different than others in your space?
OnCrawl is situated at the crossroads of SEO and Data Science. Our core business is to provide reliable and comprehensive data to help marketers open Google’s black box. We make SEO accessible and easy to use as well as making a technology formerly reserved for big players available to the greatest number.
But what differentiates us radically from other players in our industry is the technical nature of the platform and the implementation of artificial intelligence in our technology.
A year ago, we officially decided to orient our SEO platform towards data science. We are convinced that data science can prove to be a major advantage for SEO professionals. Not only in terms of saving time thanks to the automation provided by machine learning algorithms, but also thanks to the reliability of the reports and predictions produced.
Can you dive into why Search Engine Optimization (SEO) is an important part of any long-term marketing plan?
Yes, of course, I do believe that SEO is an important part of any long-term marketing strategy. Why? There is a simple reason: if your business aims to sell products/services online or needs visibility to make its business work, how can it succeed if it is not visible to its potential customers? In the long term, a good SEO strategy allows you to position your website organically on strategic keywords. It is always more reassuring for a user to click on a result that appears in search results thanks to its relevance rather than on an ad that may not meet their needs.
SEO is becoming more and more important in the marketing field. According to a recent Conductor survey, 66% of digital marketers say that organic search was their top-performing online channel in 2019. I believe that this channel will continue to grow in the next few years.
What “3 Non-Intuitive Marketing Strategies” have been most effective for you in that industry?
An often forgotten but extremely effective SEO strategy is the optimization of your crawl budget. Google has a budget allocated to analyze your website. Once this budget is exhausted, whether or not it has finished analyzing your pages, it will leave your site and move on to another one. It is therefore very important that Google spends its time on your important pages. For this, I advise you to spot your important pages and to optimize them to the maximum, both in their content and their technical health, and to direct a maximum of link juice to them. For example, one of our users managed to increase organic traffic by 70% in one year by improving Google crawl rate on important pages. Before SEO actions were taken on this website, it was wasting a large portion of its crawl budget on useless pages. After SEO revamping, conversions increased by 68%.
Then, as I mentioned earlier, I think that machine learning is still too little used in SEO. These algorithms can impress but with a few hours of training, it’s easy to master them and get reliable predictions like your SEO traffic a few months from now, or which keywords will bring you the most traffic in the next few weeks. It is also possible to automatically generate content already optimized for SEO.
Finally, this advice is not specifically directed at SEO but rather at SaaS entrepreneurs. We have found a real added value in the support we provide to our clients, whether it is an accompaniment by qualified experts or a conversational marketing strategy via the implementation of a chatbot. I think this factor is even more important with the period we are going through.
If you were only allowed to run paid ads on 1 platform (in your industry) over the next 12 months, what would it be and why?
Although our core business is SEO, we also have a multi-channel paid acquisition strategy. One of the channels that works best for us is LinkedIn. Our audience is very active and converses a lot on this social network. It is therefore one of the channels that allows us to generate not the greatest number of leads, but definitely the most qualitative leads.
How do you think data science will impact the future of marketing? Why should marketers use data science to build their strategy?
I think that data science is going to help solve many of the problems that marketers are facing today. How can you predict your online performances? How can you save time on low-value and repetitive tasks? How can you manipulate and understand big data volumes and share them with all departments within the company? Thanks to AI, we now have the answers to those questions.
Data science addresses these issues by removing multiple roadblocks. First of all, there is the problem of the mass of data that we have to deal with. We are receiving more and more data from more and more channels, so much that it is impossible to process it manually. Machine learning algorithms make it possible to process this data automatically and obtain reliable reports.
The second major aspect of data science coupled with marketing is prediction. We have already mentioned this element several times in this interview, but getting solid predictions regarding your traffic, sales or revenue is an undeniable competitive advantage.
I think that marketing teams will have to develop profiles such as Data Scientist, IT Director or Growth Analyst who can work together to cross-analyze a big volume of data and use them to find new ways to optimize their strategy and capture low-hanging fruits.
With data science, marketing becomes more reliable and decisions can be made based on solid predictions.
What quote would you say has inspired you the most in your life or career?
I can think of many examples but I find this quote from Mark Twain very inspiring: “Twenty years from now you will be more disappointed by the things you didn’t do than by the ones you did do. So throw off the bowlines! Sail away from safe harbor. Catch the trade winds in your sails.”
I like the idea of being able to take risks and think outside the box to build something you’re particularly proud of. In life in general and in entrepreneurship in particular, the fear of failure and its consequences too often take precedence over our ambitions and dreams. It is important to believe in yourself and in your projects and to always give your best so as to never have regrets.
One final question: As a professional marketer, you are a person of great influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be?
One of the causes that I consider to be crucial is access to education for all. First of all, it’s an inherent value in our business as our primary objective was to democratize SEO. But beyond that, today there are more than 61 million children in the world who do not have access to primary education. As a father of two young children, this figure is particularly touching to me. We are currently preparing a marketing campaign to donate funds to Plan International. Education is extremely important to give all children equal opportunities and to enable them to fulfill their dreams.
Thank you so much for sharing these fantastic insights! | https://medium.com/authority-magazine/top-marketing-minds-with-fran%C3%A7ois-goube-of-oncrawl-and-kage-spatz-57dc4189941b | ['Kage Spatz'] | 2020-09-02 22:10:57.126000+00:00 | ['Marketers', 'Business', 'Marketing', 'Data Science', 'SEO'] |
Rogue: A Type-Safe Scala DSL for querying MongoDB | Rogue is our newly open-sourced, badass (read: type-safe) library for querying MongoDB from Scala. It is implemented as an internal DSL and leverages the type information in your Lift records and fields to keep your queries in line. Here’s a quick look:
Checkin where (_.venueid eqs id)
        and (_.userid eqs mayor.id)
        and (_.cheat eqs false)
        and (_._id after sixtyDaysAgo)
        select(_._id) fetch()

VenueRole where (_.venueid eqs this.id)
          and (_.userid eqs userid)
          modify (_.role_type setTo RoleType.manager)
          upsertOne()
We’ve been developing and using it internally at foursquare for the last several months. You can now get the sources on github, and the packaged JAR is available on scala-tools.org under foursquare.com/rogue (current version is 1.0.2).
In this post, we’re going to dive in to some of the motivations and implementation details of Rogue, and hopefully show you why we think Scala (and MongoDB and Lift) are so awesome.
Background
At foursquare we use the Lift web framework for our ORM layer. Lift’s Record class represents a database record, and the MetaRecord trait provides “static” methods for querying and updating records in a fully expressive way.
Unfortunately, we found the querying support a bit too expressive — you can pass in a query object that doesn’t represent a valid query, or query against fields that aren’t part of the record. And in addition it isn’t very type-safe. You can ask for, say, all Venue records where mayor = “Bob”, and it happily executes that query for you, returning nothing, never informing you that the mayor field is not a String but a Long representing the ID of the user. Well, we thought we could use the Scala type system to prevent this from ever happening, and that’s what we set out to do.
For reference, here’s a simplified version of our Venue model class:
class Venue extends MongoRecord[Venue] {
  object id extends Field[Venue, ObjectId](this)
  object venuename extends Field[Venue, String](this)
  object categories extends Field[Venue, List[String]](this)
  object mayor extends Field[Venue, Long](this)
  object popularity extends Field[Venue, Long](this)
  object closed extends Field[Venue, Boolean](this)
}

object Venue extends Venue with MongoMetaRecord[Venue] {
  // some configuration pointing to the mongo
  // instance and collection to use
}
Lift’s MongoMetaRecord trait provides a findAll() method that lets you pass in a query as a JSON object (MongoDB queries are in fact JSON objects), returning a list of records. For example, using Lift’s JsonDSL, we can do:

Venue.findAll((Venue.mayor.name -> 1234) ~
              (Venue.popularity.name -> ("$gt" -> 5)))
which is equivalent to
Venue.findAll("{ mayor : 1234, popularity : { $gt : 5 } }")
which will return a List[Venue] containing all venues where the mayor is user 1234 and the popularity is greater than 5. And this all works fine until the day you do
Venue.findAll(Venue.mayor.name -> "Bob")
Venue.findAll(Venue.categories.name -> ("$gt" -> "Steve"))
which don’t really make sense and should be detectable by the compiler.
Scala to the rescue!
We would like to write an internal Scala DSL that lets you write queries like this:
Venue where (_.mayor eqs 1234)
Venue where (_.mayor eqs 1234) and (_.popularity eqs 5)
while enforcing some kind of type safety among records, fields, conditions and operands. To start off, we need to pimp the MongoMetaRecord class to support the where and and methods.
implicit def metaRecordToQueryBuilder[M <: MongoRecord[M]]
    (rec: M with MongoMetaRecord[M]) =
  new QueryBuilder(rec, Nil)

class QueryBuilder[M <: MongoRecord[M]](
    rec: M with MongoMetaRecord[M],
    clauses: List[QueryClause[_]]) {

  def where[F](clause: M => QueryClause[F]): QueryBuilder[M] =
    new QueryBuilder(rec, clause(rec) :: clauses)

  def and[F](clause: M => QueryClause[F]): QueryBuilder[M] =
    new QueryBuilder(rec, clause(rec) :: clauses)
}
Notice that the where method applies the clause function argument to the MetaRecord rec in use. So in a query like Venue where (_.mayor …), the where method applies _.mayor to Venue, yielding Venue.mayor. So what about the eqs 1234 part? We have something like Venue.mayor, which is a Field, and we need to return a QueryClause[F] (F represents the field type, Boolean or String or whatever). So all we need to do is pimp the Field class and add the method eqs, which will take an operand (e.g., 1234) and return a QueryClause[F].
implicit def fieldToQueryField[M <: MongoRecord[M], F]
    (field: Field[M, F]) =
  new QueryField[M, F](field)

class QueryField[M <: MongoRecord[M], F](field: Field[M, F]) {
  def eqs(v: F) = new QueryClause(field.name, Op.Eq, v)
}
(Op is just an enumeration that defines all the comparison operators that MongoDB supports: Eq, Gt, Lt, In, NotIn, Size, etc. The .name method is provided by Lift’s Field class, which through the magic of reflection is a String that matches the name of the field object as it is declared in the Record.)
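We haven’t shown Op and QueryClause themselves; here’s a minimal sketch of what they might look like, inferred purely from the usage above (the real classes carry a bit more machinery):

object Op extends Enumeration {
  val Eq, Neq, Gt, Lt, In, NotIn, All, Size = Value
}

// One (field name, operator, operand) triple; a full query is just a list of these.
class QueryClause[V](val fieldName: String, val op: Op.Value, val value: V)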
So an expression like Venue where (_.mayor eqs 1234) gets expanded by the compiler to:
metaRecordToQueryBuilder(Venue).where(rec =>
  fieldToQueryField(rec.mayor).eqs(1234))
This allows the compiler to enforce two things: that the field specified (mayor) is a valid field on the record (Venue), and that the value specified (1234) is of the same type as the field (Long) — notice that the eqs method takes an argument of type F, the same type as the Field.
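To make that concrete, here are a couple of lines that now get rejected at compile time (error messages paraphrased; the exact wording depends on your Scala version):

Venue where (_.mayor eqs "Bob")    // won't compile: type mismatch, expected Long, found String
Venue where (_.closed eqs 1234)    // won't compile: type mismatch, expected Boolean, found Int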
More operators
We can extend this to support other conditions besides equality. The Scala type system helps us once again in ensuring that the condition used is appropriate for the field type.
import java.util.regex.Pattern

implicit def fieldToQueryField[M <: MongoRecord[M], F](field: Field[M, F]) =
  new QueryField[M, F](field)
implicit def longFieldToQueryField[M <: MongoRecord[M]](field: Field[M, Long]) =
  new NumericQueryField[M, Long](field)
implicit def listFieldToQueryField[M <: MongoRecord[M], F](field: Field[M, List[F]]) =
  new ListQueryField[M, F](field)
implicit def stringFieldToQueryField[M <: MongoRecord[M]](field: Field[M, String]) =
  new StringQueryField[M](field)

class QueryField[M <: MongoRecord[M], F](val field: Field[M, F]) {
  def eqs(v: F) = new QueryClause(field.name, Op.Eq, v)
  def neqs(v: F) = new QueryClause(field.name, Op.Neq, v)
  def in(vs: List[F]) = new QueryClause(field.name, Op.In, vs)
  def nin(vs: List[F]) = new QueryClause(field.name, Op.NotIn, vs)
}

class NumericQueryField[M <: MongoRecord[M], F](val field: Field[M, F]) {
  def lt(v: F) = new QueryClause(field.name, Op.Lt, v)
  def gt(v: F) = new QueryClause(field.name, Op.Gt, v)
}

class ListQueryField[M <: MongoRecord[M], F](val field: Field[M, List[F]]) {
  def contains(v: F) = new QueryClause(field.name, Op.Eq, v)
  def all(vs: List[F]) = new QueryClause(field.name, Op.All, vs)
  def size(s: Int) = new QueryClause(field.name, Op.Size, s)
}

class StringQueryField[M <: MongoRecord[M]](val field: Field[M, String]) {
  def startsWith(s: String) = new QueryClause(field.name, Op.Eq, Pattern.compile("^" + s))
}
You can see that only certain field types support certain operators. No startsWith on a Field[Long], no contains on a Field[String], etc. So now we can build queries like
Venue where (_.venuename startsWith "Starbucks")
      and (_.mayor in List(1234, 5678))
without having to worry about the stray (and admittedly contrived)
Venue where (_.mayor startsWith "Steve")
      and (_.venuename contains List(1234))
Executing queries
Now once we have a QueryBuilder object, it is a straightforward exercise to translate it into a JSON object and send it to Lift to execute. This is done by the fetch() method:
Venue where (_.mayor eqs 1234)
      and (_.categories contains "Thai") fetch()
It’s also a simple matter to support .skip(n), .limit(n), and .fetch(n) methods on QueryBuilder.
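To give a flavor of that translation, here is a rough sketch of what fetch() might do internally. It is illustrative rather than the actual Rogue internals: it assumes the QueryClause sketch above and the MongoDB Java driver's BasicDBObjectBuilder, and opName is a hypothetical helper that maps each Op to its Mongo operator string (e.g. Op.Gt to "$gt", Op.Neq to "$ne").

def fetch(): List[M] = {
  val builder = BasicDBObjectBuilder.start
  clauses.reverse.foreach { c =>
    c.op match {
      // equality clauses serialize flat: { field : value }
      case Op.Eq => builder.add(c.fieldName, c.value)
      // everything else nests: { field : { $op : value } }
      case op    => builder.push(c.fieldName).add(opName(op), c.value).pop
    }
  }
  rec.findAll(builder.get)
}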
Summary
To recap, Rogue enforces the following, at compile time:
1. the field specified in a query clause is a valid field on the record in question
2. the comparison operator specified in the query clause makes sense for the field type
3. the value specified in the query clause is the same type as the field type (or is appropriate for the operator)
In the next post, we’ll show you how we added sort ordering to the DSL and how we used the phantom type pattern in Scala to prevent, again at compile time, constructions like this:
Venue where (_.mayor eqs 1234) skip(3) skip(5) fetch()
Venue where (_.mayor eqs 1234) limit(10) fetch(100)
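Very roughly, the phantom type trick gives the builder an extra type parameter that exists only at compile time and records which modifiers have already been applied. A minimal sketch of the idea (names and details are illustrative, not the actual Rogue code):

sealed trait MaybeLimited
sealed trait Limited extends MaybeLimited
sealed trait Unlimited extends MaybeLimited

class LimitableQuery[M, Lim <: MaybeLimited](val limitOpt: Option[Int]) {
  // callable only while Lim is still Unlimited; the result is branded Limited,
  // so a second limit() call no longer typechecks
  def limit(n: Int)(implicit ev: Lim =:= Unlimited): LimitableQuery[M, Limited] =
    new LimitableQuery[M, Limited](Some(n))
}

new LimitableQuery[Venue, Unlimited](None).limit(10)              // fine
// new LimitableQuery[Venue, Unlimited](None).limit(10).limit(20) // won't compile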
In the meantime, go check out the code — contributions and feedback welcome!
- Jason Liszka and Jorge Ortiz, foursquare engineers | https://medium.com/foursquare-direct/rogue-a-type-safe-scala-dsl-for-querying-mongodb-46c3364d9717 | [] | 2018-07-31 18:58:42.261000+00:00 | ['Engineering'] |
Build Ask-A-Gif: An app that can answer your questions using VueJS and Giphy | In this tutorial we are going to build a fun project that answers any yes or no question with a gif. Here’s what the finished product looks like!
Getting started
With this tutorial we will be using webpack to build and run single file components. To get started make sure that you have vue-cli installed.
npm i -g vue-cli
Once vue-cli is installed we create a new project using the webpack build template shipped with the Vue CLI. In the command line, run:
vue init webpack ask-a-gif
Answer all the prompted questions. We won’t need vue-router for this project.
Setting up our component
To keep it simple we will be building everything in one component. Navigate to /src/App.vue file. Currently the boilerplate code is contained here.
The file is broken down into three parts: the template, the script, and the style section. This way we have everything we need all in one file.
Let’s first update the template section. We just need a simple form for our question and then a section for our gif to appear. You can replace the template code with the code below:
<template>
  <div>
    <form>
      <input placeholder="What is your question?" />
    </form>
    <div class="gif-container">
    </div>
  </div>
</template>
Next we can update our style section. This will include all the css we need for this tutorial.
<style>
body {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  background: #18181E;
  color: white;
  padding: 60px;
}
input {
  font-size: 60px;
  font-family: inherit;
  background: none;
  border: none;
  outline: none;
  color: white;
  width: 100%;
}
.gif-container img {
  margin-top: 30px;
  min-width: 600px;
}
</style>
Here’s what our app looks like so far.
Adding our data
The last part of our setup is to add our gif data. When a user submits a question we want to choose a random gif and display it to the user. The gifs will answer a yes or no question so we will need:
5 “yes” gifs like this one:
5 “no” gifs like this one:
To store the gifs we will create an array and store it in the component’s state. To do this, let’s create an array in the script tag.
<script>
let gifs = [
  "https://media1.giphy.com/media/GCvktC0KFy9l6/giphy.gif",
  "https://media2.giphy.com/media/3ohhwAaoGzLRGM6jqo/giphy.gif",
  "https://media0.giphy.com/media/xT9IgrStO9Vdk3odgY/giphy.gif",
  "https://media3.giphy.com/media/HFHovXXltzS7u/giphy.gif",
  "https://media2.giphy.com/media/r1fDuPIcs18d2/giphy.gif",
  "https://media3.giphy.com/media/12XMGIWtrHBl5e/giphy.gif",
  "https://media0.giphy.com/media/1zSz5MVw4zKg0/giphy.gif",
  "https://media2.giphy.com/media/5xtDarC0XyqmUhD5eDK/giphy.gif",
  "https://media0.giphy.com/media/T5QOxf0IRjzYQ/giphy.gif",
  "https://media3.giphy.com/media/tCYMesnacJ6cE/giphy.gif"
]

export default {
  name: 'App',
}
</script>
We need to bind our gif array to the component’s state. To do that, we will use the data() function in our component.
export default {
  name: 'App',
  data() {
    return {
      gifs,
      activeGif: null
    }
  }
}
Now that we have our gifs we can start submitting questions!
Hooking up the question form
Let’s hook up our question form. The question really won’t matter. We just want to show a random gif when the form is submitted. To do that we can add a @submit handler to the form.
<form @submit="sendQuestion">
  <input placeholder="What is your question?" />
</form>
Next we can create our function sendQuestion in our script tag to grab our random gif. We will get a random number between 0 and 9 to act as our index. Once we have the index we will set our activeGif variable to the gif at that location.
methods: {
  sendQuestion(e) {
    e.preventDefault();
    // pick a random index from 0 to 9
    let index = Math.floor(Math.random() * 10);
    this.activeGif = this.gifs[index];
  }
}
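As a side note, Vue’s .prevent event modifier can handle the preventDefault call for us in the template, so the method stays focused on picking a gif:

<form @submit.prevent="sendQuestion">

If you go that route, sendQuestion no longer needs the e parameter or the e.preventDefault() line.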
Displaying our Gif
Let’s display our activeGif variable. To do that we can create an image tag inside of our gif-container div. We will use the :src property to bind the image tag to our state variable.

<div class="gif-container">
  <img :src="activeGif" />
</div>
Now let’s refresh the page and ask a yes or no question!
Thanks Gordon
Resetting our app
Awesome! Now you can ask a question and your app answers you. But the gif stays on the screen as you type your next question, so we want to reset our active gif when the question changes. To do that we can use the @input property. Let’s add it to our input tag.
<input @input="resetActiveGif" placeholder="What is your question?" />
And then let’s add our method below. This will just set activeGif to null.
resetActiveGif() {
  this.activeGif = null
}
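Putting it all together, the script section of App.vue now looks roughly like this (the ten gif URLs from earlier are elided for space):

<script>
let gifs = [
  // ...the ten Giphy URLs from the array above...
]

export default {
  name: 'App',
  data() {
    return {
      gifs,
      activeGif: null
    }
  },
  methods: {
    sendQuestion(e) {
      e.preventDefault();
      // this.gifs.length (10 here) keeps the index valid if you change the list
      let index = Math.floor(Math.random() * this.gifs.length);
      this.activeGif = this.gifs[index];
    },
    resetActiveGif() {
      this.activeGif = null
    }
  }
}
</script>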
Let’s refresh the page and take a look at what we just did!
Thanks for reading!
If you got this far, thanks for reading this article! Feel free to mess around with the code and make it your own. Add your own gifs, change the styling and add more functionality.
If you have any questions or a tutorial you would like to see feel free to reach out to me here on Medium or my Instagram and Twitter accounts! | https://medium.com/techtrument/build-ask-a-gif-an-app-that-can-answer-your-questions-using-vuejs-and-giphy-a37986b22010 | ['Alex Brown'] | 2018-04-23 19:16:53.924000+00:00 | ['JavaScript', 'Web Design', 'Front End Development', 'Web Development', 'Vuejs'] |
Working With Visual Studio Code | Set Up Visual Studio Code
Visual Studio Code is a modular, flexible editor — it’s not designed to work with only one programming language. Users can install extensions to make it easier to work with specific languages.
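If you prefer the terminal, extensions can also be installed with the code CLI. A one-line alternative, assuming ms-dotnettools.csharp (the marketplace ID that Microsoft's C# extension uses at the time of writing):

code --install-extension ms-dotnettools.csharp

Otherwise, the UI route works just as well: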
Open the Extensions window in Visual Studio Code and search for C#. | https://medium.com/swlh/working-with-visual-studio-code-a78cbb3cb5a7 | ['Ken Bourke'] | 2020-12-22 12:37:48.444000+00:00 | ['Software Development', 'Csharp', 'Dotnet', 'Learn To Code', 'Programming'] |
Some things I learned about data-driven storytelling in Schloss Dagstuhl | Trick question: What do social media, business, dance, marketing, design, photography, life, film, hollywood, branding, journalism, speaking, travel, leadership, pr, folk, advertising, instagram, fundraising, country music, pitching, employer branding, and marketing your skills for a job have in common?
Well, they are all about storytelling, according to the first 5 pages of results for the respective Google search. | https://medium.com/data-driven-storytelling/some-things-i-learned-about-data-driven-story-telling-in-schlo%C3%9F-dagstuhl-b5ecfaef0910 | ['Moritz Stefaner'] | 2016-02-15 19:51:22.647000+00:00 | ['Storytelling', 'Data Visualization'] |
Here We Go Again. Americans Are Done With This Mess. | Here We Go Again. Americans Are Done With This Mess.
We’re post-outrage.
Photo by Brian Wertheim on Unsplash
“Here we go again,” I thought.
A few years ago, some of my coworkers raised a little money for me and my spouse. We were expecting a baby. They tried to give it to me at a meeting, in a little bag with some gift cards and newborn clothes.
My boss didn’t like this.
He snatched the bag from my hands and then stood up in front of the room. He gave a speech about how much I meant to the department, and how everyone — especially him — had chipped in.
Then he presented it to me, as if he’d sewn the clothes himself. He made me give a little acceptance speech.
The whole thing was awkward.
Later, I found out that he hadn’t actually contributed anything. Not one dime. If nothing else, it was a valuable lesson in human behavior. He did this kind of thing all the time, and I finally learned to expect it.
Remind you of anyone?
We have nothing left to say.
As Donald Trump stands in front of a camera days before Christmas, threatening to veto a stimulus package that took nine months to hammer out, we’ve got nothing left to say.
Anyone with a brain knows what’s going on. This is another game. It’s a version of “here we go again.”
And it’s not just Trump anymore.
It’s all of them.
Our faith in our government was already at an all-time low. This takes it down yet another rung, but it really couldn’t get much worse. We’re not sure if Democrats could do more, but they sure do seem to be getting the short end of the stick these days. They always seem to be letting the sharks eat our lunch. Ask one of the few remaining Americans with any sanity, and they’ll shrug and tell you they just don’t care anymore.
It’s always something now.
We’re sick of it.
We live in the land of maybe.
Our government doesn’t function anymore.
Everyone knows it.
We’re on our own. We have been for about a year now. No, we’re not whittling sticks into spears and hunting squirrels just yet, but we know the limits of human kindness. It’s not extinct, but definitely endangered. For those who haven’t lost their empathy, they’re simply stretched too thin to take care of anyone but their closest kin right now.
Can you blame them?
Everything else is a coin toss. Sure, there’s still some institutional oomph left, but not much. Maybe you’ll be able to get some unemployment assistance. Maybe you’ll get food stamps. Maybe your mail will arrive on time, delivered to you by an exhausted postal worker. Maybe you’ll get a job with health insurance. The truth is that all the things that felt like entitlements and guarantees ten or fifteen years ago are gone.
Anyone who has anything today is extremely lucky.
Some of us don’t even know it.
Our politicians don’t care about us.
Does this really need to be said, again?
It’s about time we stopped falling for the circus act. Our politicians take turns grandstanding now. It’s all they do.
Everything is about taking credit and blaming the other side. They don’t care who gets hurt in the process. They spend more time tweeting than they do actually negotiating anything.
We seem to encourage it.
Lots of Americans are on auto-response.
A few days ago, I wrote an essay about how millennials are struggling to live a decent life, even while they work full-time jobs and string together hustles on the side. Millennials aren’t the only ones dealing with an impossible economy, but we’re a disproportionate majority.
Amid the responses, stuff like this stood out:
Stop whining and get out there and do something about your situation yourself.
That happens all the time now. It’s normal to think that way. Americans like this are on auto-response.
They don’t read. They don’t listen.
When they hear about someone else’s struggles, they assume it’s their fault. Like our leaders, they just don’t care.
They want to grandstand.
We’re used to dysfunction.
We’re no longer surprised by anything.
We don’t expect things like stimulus checks anymore. We expect more bad news and hardship. When we hear about the new Super-Covid variant on its way, we shrug. We say, “It figures.”
This is life now, waiting for the next burden. Meanwhile, we enjoy the little moments while we can.
We’ve given up on recruiting anyone else to wear a mask. We’re not trying to use logic or reason on our opponents anymore. We’re protecting ourselves, and the handful of people we care about.
That’s all we can do.
The time for stimulus is over.
This stimulus was always a joke.
Our government is like the cavalry showing up after we’ve lost the battle. See, we lost the fight with Covid. It won. It’s pillaging every town in America. We’re sick. We’re scared. We’re out of work.
Our daycares have already shut down. Our teachers and healthcare workers are already burned out. Many of them are planning to leave. We’re just waiting for the smoke to clear.
We’re not giving up on life, just help.
We’re just giving up on the government. We know what’s up. We’ll be expected to raise our kids and hold down jobs without any kind of help that would’ve made a difference.
We’ve been doing it.
We’re going to be in charge of our own health, mental and otherwise. We’ll just have to make sure we don’t get sick enough to need medical attention anytime in the next year, or the year after that.
That’s not news either.
We’re done with outrage.
There’s always going to be a majority of people who enjoy shouting and grandstanding. It’s no secret that a wide range of Americans prefer to ensure each other’s destruction.
Then there’s the us, the select few who watch it all with a rising sense of apathy. We see what good comes from outrage. It doesn’t change anything. It just saps up our precious energy.
We’re done with it.
Help us, or don’t help us. We stopped waiting a long time ago. We know the only thing we can do is help ourselves. We can help a handful of other people, but we can’t save everyone. It makes us a little bitter, but not angry. That said, we can still express our disappointment.
Our leaders are just like my boss.
They’ll swipe something right out of your hands, only to deliver a speech and present it back to you like a precious gift. Take your “gift,” but you don’t have to say thank you. They haven’t earned it.
Not even close. | https://medium.com/datadriveninvestor/here-we-go-again-americans-are-done-with-this-mess-b6509bae9584 | ['Jessica Wildfire'] | 2020-12-27 16:04:39.240000+00:00 | ['Politics', 'Government', 'Society', 'News', 'Economics'] |
7 Lessons I Learned as a Non-Profit Founder | I spent much of my 20s confused about a suitable career choice. I graduated college with a degree in English and a minor in Psychology and hoped my subsequent stint in the Peace Corps in Mali, West Africa would clarify a career path.
Prior to that adventure, I’d lived abroad as an exchange student twice in high school and once in college, so the international realm held interest for me.
While I felt hopeful about a career in international development, my years in Africa, which illuminated the complexity of international travel and global aid, disabused me of that notion.
As I settled down in the U.S., I knew one thing: world travel had changed me so positively and so irrevocably that I wished everyone could immerse deeply into other cultures. I understood the power of travel to create tolerance, nurture empathy, open minds, and broaden worldviews, and I knew too many Americans who needed their ethnocentric tendencies challenged and their stereotypes shattered.
So when I met a woman a couple of years after moving to Portland who asked me what I wanted to do after graduate school, I replied,
“I want to start an organization that takes young people out of the country who would never otherwise have the chance to travel abroad.”
She paused, smiled, and said, “I’ve always had the same dream.”
Shortly thereafter, The Pangaea Project was born. I was 29 years old.
My co-founder and I rode the wave of leadership for almost 10 years, and the trajectory, challenges, successes, and surprises that emerged during my time with The Pangaea Project are rich fodder for another story on another day. This article is about leadership, entrepreneurship, and the lessons that come with starting something of your own.
You’ll get a lot of support from the wrong people.
If you start anything — be it a restaurant, a tech start-up, a non-profit organization, a social business — you’ll naturally invite involvement from people you know in the nascent phase. You ask friends and family to join your Facebook page or newsletter list. You sell them your earliest products. You invite them to become focus group members or donors. And those individuals show up just as they should: with love, words of encouragement, completed surveys, online sharing, and dollars.
But those kind-hearted souls are almost never the target audience. Nor can they sustain your vision.
Continue to understand, communicate with, cultivate, and woo your target audience (customers, clients, donors, what have you) from the first day.
If you don’t know your strengths and weaknesses yet, you will soon.
As a founder, almost all challenges cross your desk in the beginning, simply because you don’t have colleagues to whom you can delegate. That means that one day you may be negotiating for a lease on office space while the next day you may be appeasing a disgruntled vendor. You may lay people off, write an annual report, meet with city officials, win a substantial contract or grant, and call a plumber, all in the same week.
Very quickly you grasp your professional strengths as well as your weaknesses. If you’re lucky, your weaknesses reveal themselves early in your tenure, as only then you can hire people who complement you.
Few people will tell you the truth.
Many people have great ideas, and most people never execute them. Those who do are lauded. As an entrepreneur, you’ll receive compliments and kudos, be asked to share your story, and generally feel very good about yourself. People focus on what you’re building, and may not initially share thoughts on any duplicative, unneeded, or harmful elements of your work.
Later, you’ll be everyone’s boss, and few people tell their boss the whole truth.
Surround yourself with truth-tellers. While you may have them on your Board of Directors or among your staff, invite an outside advisor or mentor, with little personal investment in your project, to tell you the truth regularly. Due to their impartiality, these individuals make strong allies.
The hardest task involves establishing an impeccable team.
As your idea germinates, you’ll share it with colleagues, friends, family, or individuals you like, know, and trust. Perhaps friends help you develop your website or tour office spaces for you, and it may be tempting to invite those people on as IT or operations staff. Fight the urge, and squelch any burgeoning sense of duty.
Only welcome your friends into important positions if they’re the most suitable candidate for those positions. Top-notch employees — those who can propel your business forward, produce genius ideas, solve monumental problems, and persist loyally during the company’s growing pains — are probably not your closest friends. And a friendship doesn’t usually survive a layoff.
As Maureen Dowd wrote in her New York Times piece on the CEO of Netflix, Reed Hastings,
“Mr. Hastings writes of his managers: ‘To feel good about cutting someone they like and respect requires them to desire to help the organization and to recognize that everyone at Netflix is happier and more successful when there is a star in every position.’”
You don’t build the product or provide the service.
Failed founders are those who create a business so that they can provide the service or make the product: the apparel CEO who wants to do the sewing, the design/build president who hopes to swing the hammer, the non-profit executive who wishes to work with the children.
Leaders tend to be enmeshed in the same sorts of tasks across industries. They anticipate the future, set course, understand audiences or customers, manage budgets, digest data, change direction, and take the punches. They don’t sew, build, or work with children, at least not for very long.
The Pangaea Project inspired teenagers from low-income families to become changemakers. We trained and supported groups of young people locally, accompanied them for several weeks each summer to South America or Asia to learn about social and environmental challenges, and assisted them in creating change upon return.
And what did I do? I served as a chief strategist, fundraiser, marketer, and operations head for most of my tenure.
Knowing the right people is more important than knowing the right information.
No one can know everything, and the most successful leaders surround themselves with individuals who know more about specific topics where the leader lacks wisdom. It might be obvious to hire heads of finance and human resources, but the most successful entrepreneur notices the small but vital gaps in their knowledge.
Finally, you’re not exceptional.
Founders come in all shapes and sizes. Some are confident and self-assured, others are socially awkward. Some can vision but not execute; others can execute but not vision. Some can network with charisma; others function more effectively behind their screens. Some seem to be able to do it all; others are quietly drowning.
We’re all merely people. Messy, complicated people.
With all the praise and with few people speaking hard truths, you may begin to feel the ego swell. In truth, founders are expendable, just like any staff person anywhere. We think that the baby we birthed must be raised by us, but often, when the founder moves on, the organization benefits from a fresh vision.
Anybody can start something. We all have seeds of entrepreneurship within us. Those who water those seeds are more willing to take risks, to try, to fail, and to try again. We’re not exceptional; we’re just persistent. | https://medium.com/swlh/7-lessons-i-learned-as-a-non-profit-founder-d32009137d4d | ['Stephanie Tolk'] | 2020-11-29 14:02:19.099000+00:00 | ['Work', 'Leadership', 'Entrepreneurship', 'Business', 'Self'] |
The Variety of Ways You Can Meditate | There’s something for everyone
The Variety of Ways You Can Meditate
There are many ways to tap into divine guidance and reach a heightened state of awareness without needing a mat
Allow yourself to become fully aware and present in any moment. (Photo: Krismas)
Dealing With Life
My daughter was wreaking havoc in the living room; there were toys and popcorn crumbs everywhere, and it was starting to get to me, so I went into the bathroom to get some alone time.
I stood in front of the bathroom mirror in the superhero pose, spine straight, staring deep into my own eyes before closing them to be fully present with myself.
After about three minutes she rushed in, asking what I was doing and why my eyes were closed. I told her I was meditating.
She said, “That’s not how you meditate, mommy. You need to sit down.”
“Nope, I can meditate standing up too!” I said to her while demonstrating how I was just observing my mind and focusing on breathing.
Most mornings, when I’m able to sit down in meditation, she joins me. She’s used to sitting down with legs crossed.
I told her there are so many other ways mommy can meditate including standing quietly with my eyes closed — or open as long as I can observe myself, my surroundings, and my breath.
Meditation Is an Act of Self-Love
Meditation is a great thing for the body, mind, and soul, and that’s why many people know and talk about it.
It is an act of self-care and self-love in which we experience a heightened state of awareness so we can tap into mindfulness and have ease accessing our inner guidance.
It helps us manifest our deepest desires by focusing the mind on them. Meditation is good for you, but it's still one of those things you know is good yet don't do enough of — like exercising regularly, stretching daily, drinking lots of water, and eating fruits and veggies.
I find it hard to meditate sometimes because of how difficult it is to control my thoughts, and when I have to sit down and cross my legs to do it, it starts to feel like a chore. Nobody likes chores!
When I sit still with my spine straight for extended periods of time, it’s sometimes uncomfortable on my back. But that’s no excuse not to meditate!
There are several ways to tap into divine guidance and reach that same state of awareness without needing to sit.
I can meditate while I’m on the move or waiting in line at the grocery store. All I have to do is to be present and observe my thoughts.
Here are some meditation techniques I use to raise my vibration without needing to sit still for long periods of time:
I can meditate while appreciating the blue skies, the golden sun, the shimmering moon, or the passing clouds. (Photo: Stefan Keller)
Observation Meditation
In this type of meditation, all I do is become an observer of myself, my surroundings, my thoughts, and my emotions without judgment of the things I’m observing. I just have to become aware without attaching myself to people, situations, or thoughts. I observe to perceive my inner truth.
It is the easiest way to meditate every day. No matter where I am or what I’m doing, I can simply observe myself and my surroundings as a way to be mindful and slow my thoughts.
I can do this right before falling asleep when I’m struggling to drift off and I can do it in the morning when I don't feel like getting out of bed just yet.
I can sit down in a chair under a shade or stand on my balcony and observe my surroundings while enjoying the view, listening to the sound of the rain, or listening to what the birds are trying to tell me.
I can do it while appreciating the blue skies, the golden sun, the shimmering moon, and the passing clouds from afar. I do it as I walk in the park while observing a child play with a ball or watching the birds fight over crumbs.
I can meditate while I'm in the shower by blessing the water cleansing my body and feeling its warmth on my skin. If I'm feeling creative, I can visualize the water washing the stress off of me, and then I watch it all go down the drain. Bye, bad vibes!
I can even do it while doing chores like washing the dishes or cooking! Most things can become a meditative practice.
Or I can grab the mat and sit with my legs crossed to meditate for an hour when I feel like it!
Observation meditation can be done anywhere and at any time. All you need to do is to be aware, observe, and be mindful.
Allow yourself to become fully aware and present in the moment.
By questioning negative thoughts, thinking, or saying positive affirmations or mantras, you are treating yourself with compassion like you would treat your best friend. (Photo: Viktor Forgacs)
Affirming Meditation
This is a very effective way for me to calm my anxious mind.
It becomes an act of self-love that comforts and soothes me every time I'm feeling ungrounded. By affirming and repeating positive and encouraging words and sentences to myself, I’m showing myself love and compassion.
All I do is repeat a few empowering words and sentences that make me feel awesome. It always calms my mind and allows me to feel peaceful. Here are some affirmations I’m constantly repeating to myself:
Everything is going to be okay. I’m doing the best I can. I’m not alone.
Have you ever heard of Ho'oponopono?
Ho’oponopono is an ancient Hawaiian healing practice to let go of toxic and dense energies within you to allow the impact of divine words to change and heal the heart of past pain and trauma.
All it requires is repeating these four phrases directed to me, others, or God:
I’m sorry. Please forgive me. Thank you. I love you.
That's it!
I say sorry to myself for the things I did and didn't do in the past, I ask for my own forgiveness, I show gratitude for that forgiveness, and I express my love to my "self."
The thing with affirmations is that you have to believe and really feel the words coming out of your mouth, or the thoughts you are thinking, for them to work.
It doesn't work if you can't bring yourself to believe.
No matter where I am or what I’m doing, I can use ho’oponopono and other affirmations or mantras to reprogram my subconscious mind to forgive the past and let it all go.
Guided Meditation
In guided meditation, I'm guided by another person's voice, usually a kind, gentle, and nurturing voice in a recording or podcast.
My mind has a tendency to wander where it pleases!
It’s easier to focus on one thing when my mind has a voice to focus on.
I listen to high-frequency guided meditations with binaural beats — there are plenty on YouTube — that guide me through the meditation process using vivid imagery and visualization.
When you move the body, you allow the energy that lies in your spine to flow effortlessly promoting balance and harmony. (Photo: Gianluca Carenza)
Movement Meditation
This is a fun way for me to meditate without needing to sit down and focus. I use movement meditation with dancing, yoga, stretching, and sometimes even while running or exercising.
Dancing is a fun way to strengthen my root chakra and ground myself into my body.
When I dance, I move my body in slow circles and I feel the divine feminine — which has nothing to do with being a man or a woman because we all have masculine and feminine aspects within us — and source energy move through my body.
I move my waist and my hips to allow energy to flow in my sacral chakra so I can access my creative and sexual energy — which has nothing to do with sex although it can.
You can also meditate while having sex (ahem!) Just put your back and waist into it while being fully present and enjoying the experience.
Creative and sexual energy in our sacral chakra can be harnessed to create the life we want — it can create magic and even babies!
The abundant energy can be tapped into to manifest our wildest dreams!
The same can be done with yoga, stretching, and running. It’s all about being fully present, observing yourself, and not forgetting to breathe as you move the body slowly or vigorously!
When I run, I observe the breath flowing in and out of me, and I notice how the ground and the rocks feel as my feet strike the earth. I can also walk barefoot in the grass to ground myself and connect to the earth while sungazing to strengthen my third eye chakra.
I observe my surroundings by noticing the color of the trees and the smells in the air. I engage every sense in my body, including my sixth sense.
Music and water together work like a charm in the manifestation process. (Photo: Taisiia Stupak)
Manifestation Meditation
This is one of the best ways!
The intention is to manifest something by focusing the mind on a specific outcome. What I do personally is run myself a nice luxurious bath with some epsom salt and some essential oils — because I'm worthy — and I turn on my favorite music that makes me feel really good inside.
I lay in the tub with my eyes closed and I visualize and manifest the life of my dreams! I direct all of my thoughts and energy into a specific outcome, and I manifest the shit out of it! Music and water together work like a charm!
Manifestation meditation can be done during creative projects or sex when you are filled with good feelings and emotions.
If you have something you want to happen for you, like have an application approved, get called for a job interview, or even manifest some money, give this method a try. I’ve even used this method to manifest top writer statuses and viral articles (wink.) All I needed to do was meditate on the outcome I wanted — and do the work to put myself in alignment with my desires.
You don't need water, music, or sex to do this type of meditation. You can manifest things right before falling asleep or right after waking up, when you are moving between states of consciousness and unconsciousness. Those are powerful states where you've released resistance and can therefore manifest the things you want easily.
The only thing is that this method doesn’t work if you don’t believe it.
Before you achieve, you must believe.
Final Thoughts
Meditation is a must, especially if you are interested in spiritual growth and personal development, but it isn't always easy, and life can get so busy that we forget about it.
Calming the mind is hard work, and it takes practice and dedication. Sitting straight for extended periods of time can be a struggle for people with back issues, but none of that should stop you from reaping meditation's benefits.
Meditation is good for you and there is a technique for everyone!
What's your excuse? | https://medium.com/mystic-minds/the-variety-of-ways-you-can-meditate-625a83f13982 | ['Kimberly Fosu'] | 2020-12-15 14:53:53.227000+00:00 | ['Creativity', 'Ideas', 'Mindfulness', 'Leadership', 'Inspiration'] |
Foursquare Native Auth on iOS and Android: Developers, connect your users more quickly than ever | A few weeks ago we were excited to announce one of our most-wished-for features from our developer community, native authentication for iOS, and today we’re happy to announce we’ve also shipped support for native auth on Android in our latest release of Foursquare on Google Play! In a nutshell, this means that your users can connect their Foursquare accounts to your app without wrangling with messy WebViews and log-ins. Native authentication simply pops your users into the Foursquare app on their phone and lets them use their existing credentials there.
And even though this has only been out for a few short weeks, we love what our developers have been doing with it so far. If you want to see what native auth looks and feels like in the wild, install the latest version of quick check-in app Checkie: after using Foursquare to find a place for you and your friends to go, Checkie lets you check in with incredible speed.
Since Checkie uses our checkins/add endpoint, users need a way to log in. Below is what the app used to look like upon opening. Users are taken directly to a WebView where the user had to type in — and more importantly, remember, without the aid of Facebook Connect — their Foursquare credentials before continuing to use Checkie.
For this old flow to succeed, at least four taps are necessary, along with who knows how many keystrokes. Below is how the new Checkie flow works after integrating native auth: there’s a more informational screen when the app opens, and only two taps are necessary to begin actually using Checkie: “Sign in,” which bumps users to the Foursquare app where they can hit “Allow.”
How You Can Use Native Auth Today
You too can get started using this flow right away. We have libraries and sample code for iOS and Android available on GitHub that you can dive straight into. The details vary depending on OS, but the overall conceptual process is similar for both and outlined below — it should be familiar for those who have worked with 3-legged OAuth before.
1. Update your app's settings. You need to modify your app's redirect URIs (iOS) or add a key hash (Android).
2. Include our new libraries in your project. OS-specific instructions are found on their GitHub pages.
3. Unless you want to use it as a backup mechanism, get rid of that (UI)WebView! Chances are, if you expect your users to have Foursquare accounts, they'll have the app on their phones.
4. Call our new native authorize methods. On iOS, it's authorizeUserUsingClientId; on Android, it's FoursquareOAuth.getConnectIntent then startActivityForResult with the returned intent. These methods bounce your users to the Foursquare app's authorize screen or return appropriate fallback responses allowing them to download the app. If your user authorizes your app, they will land back in your app.
5. Follow OS-specific instructions to obtain an access code. This should involve calling either accessCodeForFSOAuthURL (iOS) or FoursquareOAuth.getAuthCodeFromResult (Android).
6. Trade this access code for an access token. The access token (not the access code) is what is eventually used to make calls on behalf of a particular user. There are two ways to do this:
(Preferred) Pass the access code to your server, and then make a server-side call to https://foursquare.com/oauth2/access_token — see step 3 under our code flow docs for details on the exact parameters needed. The response from Foursquare will be an access token, which can be saved and should be used to make auth'd requests. This method is preferable because it avoids including your client secret in your app. For more details, see our page on connecting.
Alternatively, call our new native methods to get an access token. On iOS it's requestAccessTokenForCode. On Android it's FSOauth.getTokenExchangeIntent followed by startActivityForResult (make sure you also make the requisite changes to AndroidManifest.xml).
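To make the preferred server-side exchange in step 6 concrete, here is a minimal sketch in Python. The parameter names follow standard OAuth2 conventions, and the CLIENT_ID, CLIENT_SECRET, and REDIRECT_URI values are placeholders for your own app's settings, so treat this as an illustration rather than official client code; check the code flow docs for the exact parameters.

# Hedged sketch: trade the native-auth access code for an access token
# on your server, keeping the client secret out of the mobile app.
import requests

CLIENT_ID = "YOUR_CLIENT_ID"          # placeholder
CLIENT_SECRET = "YOUR_CLIENT_SECRET"  # placeholder; keep server-side only
REDIRECT_URI = "YOUR_REDIRECT_URI"    # placeholder

def exchange_code_for_token(code):
    resp = requests.get(
        "https://foursquare.com/oauth2/access_token",
        params={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "authorization_code",
            "redirect_uri": REDIRECT_URI,
            "code": code,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]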
If you have any comments or questions about this new native auth flow — or anything API-related in general! — please reach out to api@foursquare.com.
- David Hu, Developer Advocate | https://medium.com/foursquare-direct/foursquare-native-auth-on-ios-and-android-developers-connect-your-users-more-quickly-than-ever-7f1f2129e283 | [] | 2018-07-31 19:21:44.591000+00:00 | ['Engineering', 'Foursquare'] |
Real-Time Analytics in the World of Virtual Reality and Live Streaming | Authored by Sebastian Zangaro
“A fast-moving technology field where new tools, technologies and platforms are introduced very frequently and where it is very hard to keep up with new trends.” I could be describing either the VR space or Data Engineering, but in fact this post is about the intersection of both.
Virtual Reality — The Next Frontier in Media
I work as a Data Engineer at a leading company in the VR space, with a mission to capture and transmit reality in perfect fidelity. Our content varies from on-demand experiences to live events like NBA games, comedy shows and music concerts. The content is distributed through both our app, for most of the VR headsets in the market, and also via Oculus Venues.
From a content streaming perspective, our use case is not very different from any other streaming platform. We deliver video content through the Internet; users can open our app and browse through different channels and select which content they want to watch. But that is where the similarities end; from the moment users put their headsets on, we get their full attention. In a traditional streaming application, the content can be streaming in the device but there is no way to know if the user is actually paying attention or even looking at the device. In VR, we know exactly when a user is actively consuming content.
Streams of VR Event Data
One integral part of our immersive experience offering is live events. The main difference with traditional video-on-demand content is that these experiences are streamed live only for the duration of the event. For example, we stream live NBA games to most VR headsets in the market. Live events bring a different set of challenges in both the technical aspects (cameras, video compression, encoding) and the data they generate from user behavior.
Every user interaction in our app generates a user event that is sent to our servers: app opening, scrolling through the content, selecting a specific content to check the description and title, opening content and starting to watch, stopping content, fast-forwarding, exiting the app. Even while watching content, the app generates a “beacon” event every few seconds. This raw data from the devices needs to be enriched with content metadata and geolocation information before it can be processed and analyzed.
VR is an immersive platform so users cannot just look away when a specific piece of content is not interesting to them; they can either keep watching, switch to different content or-in the worst-case scenario-even remove their headsets. Knowing what content generates the most engaging behavior from the users is critical for content generation and marketing purposes. For example, when a user enters our application, we want to know what drives their attention. Are they interested in a specific type of content, or just browsing the different experiences? Once they decide what they want to watch, do they stay in the content for the entire duration or do they just watch a few seconds? After watching a specific type of content (sports or comedy), do they keep watching the same kind of content? Are users from a specific geographic location more interested in a specific type of content? What about the market penetration of the different VR platforms?
From a data engineering perspective, this is a classic scenario of clickstream data, with a VR headset instead of a mouse. Large amounts of data from user behavior are generated from the VR device, serialized in JSON format and routed to our backend systems where data is enriched, pre-processed and analyzed in both real time and batch. We want to know what is going on in our platform at this very moment and we also want to know the different trends and statistics from this week, last month or the current year for example.
The Need for Operational Analytics
The clickstream data scenario has some well-defined patterns with proven options for data ingestion: streaming and messaging systems like Kafka and Pulsar, data routing and transformation with Apache NiFi, data processing with Spark, Flink or Kafka Streams. For the data analysis part, things are quite different.
There are several different options for storing and analyzing data, but our use case has very specific requirements: real-time, low-latency analytics with fast queries on data without a fixed schema, using SQL as the query language. Our traditional data warehouse solution gives us good results for our reporting analytics, but does not scale very well for real-time analytics. We need to get information and make decisions in real time: what is the content our users find more engaging, from what parts of the world are they watching, how long do they stay in a specific piece of content, how do they react to advertisements, A/B testing and more. All this information can help us drive an even more engaging platform for VR users.
A better explanation of our use case is given by Dhruba Borthakur in his six propositions of Operational Analytics:
Complex queries
Low data latency
Low query latency
High query volume
Live sync with data sources
Mixed types
Our queries for live dashboards and real-time analytics are very complex, involving joins, subqueries, and aggregations. Since we need the information in real time, low data latency and low query latency are critical. We refer to this as operational analytics, and such a system must support all these requirements.
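As an illustration, a dashboard query might look something like the following SQL; every table and column name here is hypothetical, invented just for this sketch:

SELECT c.content_type,
       e.country,
       COUNT(DISTINCT e.device_id) AS active_viewers,
       AVG(e.session_seconds) AS avg_session_seconds
FROM enriched_events e
JOIN content_metadata c ON c.content_id = e.content_id
WHERE e.event_time > NOW() - INTERVAL '5' MINUTE
GROUP BY c.content_type, e.country
ORDER BY active_viewers DESC;

A dashboard tile would re-run a query like this every few seconds, which is why low query latency and high query volume matter so much.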
Design for Human Efficiency
An additional challenge that probably most other small companies face is the way data engineering and data analysis teams spend their time and resources. There are a lot of awesome open-source projects in the data management market — especially databases and analytics engines — but as data engineers we want to work with data, not spend our time doing DevOps, installing clusters, setting up Zookeeper and monitoring tens of VMs and Kubernetes clusters. The right balance between in-house development and managed services helps companies focus on revenue-generating tasks instead of maintaining infrastructure.
For small data engineering teams, there are several considerations when choosing the right platform for operational analytics:
SQL support is a key factor for rapid development and democratization of the data. We don’t have time to spend learning new APIs and building tools to extract data, and by exposing our data through SQL we enable our Data Analysts to build and run queries on live data.
Most analytics engines require the data to be formatted and structured in a specific schema. Our data is unstructured and sometimes incomplete and messy. Introducing another layer of data cleansing, structuring and ingestion will also add more complexity to our pipelines.
Our Ideal Architecture for Operational Analytics on VR Event Data
Data and Query Latency
How are our users reacting to specific content? Is this advertisement too invasive that users stop watching the content? Are users from a specific geography consuming more content today? What platforms are leading the content consumption now? All these questions can be answered by operational analytics. Good operational analytics would allow us to analyze the current trends in our platform and act accordingly, as in the following instances:
Is this content getting less traction in specific geographies? We can add a promotional banner on our app targeted to that specific geography.
Is this advertisement so invasive that is causing users to stop watching our content? We can limit the appearance rate or change the size of the advertisement on the fly.
Is there a significant number of old devices accessing our platform for a specific content? We can add content with lower definition to give those users a better experience.
These use cases have something in common: the need for a low-latency operational analytics engine. All those questions must be answered in a range from milliseconds to a few seconds.
Concurrency
In addition to this, our use model requires multiple concurrent queries. Different strategic and operational areas need different answers. Marketing departments would be more interested in numbers of users per platform or region; engineering would want to know how a specific encoding affects the video quality for live events. Executives would want to see how many users are in our platform at a specific point in time during a live event, and content partners would be interested in the share of users consuming their content through our platform. All these queries must run concurrently, querying the data in different formats, creating different aggregations and supporting multiple different real-time dashboards. Each role-based dashboard will present a different perspective on the same set of data: operational, strategic, marketing.
Real-Time Decision-Making and Live Dashboards
In order to get the data to the operational analytics system quickly, our ideal architecture would spend as little time as possible munging and cleaning data. The data come from the devices in JSON format, with a few IDs identifying the device brand and model, the content being watched, the event timestamp, the event type (beacon event, scroll, clicks, app exit), and the originating IP. All data is anonymous and only identifies devices, not the person using it. The event stream is ingested into our system in a publish/subscribe system (Kafka, Pulsar) in a specific topic for raw incoming data. The data comes with an IP address but with no location data. We run a quick data enrichment process that attaches geolocation data to our event and publishes to another topic for enriched data. The fast enrichment-only stage does not clean any data since we want this data to be ingested fast into the operational analytics engine. This enrichment can be performed using specialized tools like Apache NiFi or even stream processing frameworks like Spark, Flink or Kafka Streams. In this stage it is also possible to sessionize the event data using windowing with timeouts, establishing whether a specific user is still in the platform based on the frequency (or absence) of the beacon events.
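As a rough sketch of that enrichment step, here is what a minimal Python version could look like with the kafka-python client. The topic names and the lookup_geo helper are placeholders; a real implementation would query a GeoIP database or service, and the same logic could just as well live in NiFi, Spark, Flink, or Kafka Streams.

import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-events",  # placeholder topic for raw device events
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

def lookup_geo(ip):
    # placeholder: a real version would hit a GeoIP database or service
    return {"country": "unknown", "city": "unknown"}

for record in consumer:
    event = record.value
    event["geo"] = lookup_geo(event.get("ip", ""))
    producer.send("enriched-events", event)  # placeholder enriched topic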
A second ingestion path comes from the content metadata database. The event data must be joined with the content metadata to convert IDs into meaningful information: content type, title, and duration. The decision to join the metadata in the operational analytics engine instead of during the data enrichment process comes from two factors: the need to process the events as fast as possible, and to offload the metadata database from the constant point queries needed for getting the metadata for a specific content. By using the change data capture from the original content metadata database and replicating the data in the operational analytics engine we achieve two goals: maintain a separation between the operational and analytical operations in our system, and also use the operational analytics engine as a query endpoint for our APIs.
Once the data is loaded in the operational analytics engine, we use visualization tools like Tableau, Superset or Redash to create interactive, real-time dashboards. These dashboards are updated by querying the operational analytics engine using SQL and refreshed every few seconds to help visualize the changes and trends from our live event stream data.
The insights obtained from the real-time analytics help make decisions on how to make the viewing experience better for our users. We can decide what content to promote at a specific point in time, directed to specific users in specific regions using a specific headset model. We can determine what content is more engaging by inspecting the average session time for that content. We can include different visualizations in our app, perform A/B testing and get results in real time.
Conclusion
Operational analytics allows business to make decisions in real time, based on a current stream of events. This kind of continuous analytics is key to understanding user behavior in platforms like VR content streaming at a global scale, where decisions can be made in real time on information like user geolocation, headset maker and model, connection speed, and content engagement. An operational analytics engine offering low-latency writes and queries on raw JSON data, with a SQL interface and the ability to interact with our end-user API, presents an infinite number of possibilities for helping make our VR content even more awesome! | https://medium.com/rocksetcloud/real-time-analytics-in-the-world-of-virtual-reality-and-live-streaming-c5cdf93512ba | ['Shawn Adams'] | 2019-09-23 19:19:45.500000+00:00 | ['Real Time Analytics', 'Data Streaming', 'Kafka', 'Data Pipeline', 'Data Engineering'] |
Fish vs. Zsh vs. Bash and Why You Should Switch to Fish | What’s Special About fish?
Fish Shell logo
Easy to understand and use
Unlike other shells, which need a lot of setting up to work the way you want them to, fish works perfectly right out of the box.
It ships with the most widely used features already included, so you can start using it without installing any additional plugins or tweaking any configuration files unless you want to. Its syntax is simple, clean, and consistent.
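For a quick, illustrative taste of that syntax, here is a tiny fish function, defined and called straight from the prompt:

function greet
    echo "Hello, $argv!"
end

greet world # prints "Hello, world!"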
Syntax highlighting
Syntax highlighting is a feature we all wish our CLI had. It saves a lot of time and frustration. Well, fish does it, and it does it pretty well.
It shows you whether your command or the directory you're referencing exists before you even hit enter, so you know when you're typing something wrong. This makes it easier to parse commands and to spot errors.
It highlights (most) errors in red, such as misspelled commands, misspelled options, reading from non-existing files, mismatched parentheses and quotes, and many other common mistakes.

It also features highlighting of matching quotes and parentheses. Oh, and it's pretty and colorful.
Configuration for fish shells
The fish community maintains Oh My Fish, which is a shell framework inspired by Oh My Zsh. It offers a lot of beautiful prompt themes and awesome plugins, is lightweight, awesome, and easy to use.
It also offers a web-based configuration feature. Just type:
fish_config
You will land on a local web page where you can customize the look and color scheme of your shell.
web_config page for fish
Inline searchable history
This is an interactive feature of the shell. You begin typing a command and press the up key to see all the times in the shell history where you used that command before.

To search the history, simply type in the search query and press the up key. By using the up and down arrows, you can move between older and newer matches. The fish history automatically removes duplicate matches, and the matching substring is highlighted.
These features make searching and reusing previous commands much faster.
Inline auto-suggestion
Fish suggests commands as you type and shows the suggestion on the right of the cursor, in grey. If you mistype a command, it will show in red to indicate that it’s an invalid command.
It also suggests the most frequently used commands and auto-completes while you type, based on your history and valid files available.
Demonstration for inline auto-suggestion
Tab completion using man page data
Fish can parse CLI tool man pages in various formats. Type in a command and “tab” through all the suggested auto-completions. | https://medium.com/better-programming/fish-vs-zsh-vs-bash-reasons-why-you-need-to-switch-to-fish-4e63a66687eb | ['Sid Mohanty', 'Captain Techie'] | 2020-05-17 23:39:55.452000+00:00 | ['Mac', 'Productivity', 'Linux', 'Bash', 'Programming'] |
Web Scraping Instagram to build your own profile’s dashboard — With Instaloader and Google Data Studio | Photo by NordWood Themes on Unsplash
Looking for an excuse to learn a bit more about web scraping and Google Data Studio, I decided to begin a project based on my wife’s commercial Instagram profile. The goal was to build an online updatable dashboard with some useful metrics, like top hashtags, frequently used words, and posts distribution per weekday:
Requirements:
Fast and easy to create/update
Usable for any Instagram account, as long as it’s public
In this article, I want to share with you my approach to do that.
The whole project can be found at the project’s GitHub repository, and if you are only interested in usage rather than knowing how it works, you might consider going straight to the project’s README.
So, in order to achieve my goal, I would need to do the following:
1. Extract information from Instagram
2. Transform this information into useful metrics
3. Upload these metrics and information to a data source accessible to Google Data Studio
4. Connect the data source and build the dashboard at Data Studio
It should be noted that I won’t cover number 4 here. In this article, I’ll limit myself to steps 1 to 3, and leave the explanation on how to actually build the dashboard for some other time.
To summarize, I used the Instaloader package for extracting information and then processed it in a python script using Pandas.
As a data source, I decided to use a Google Sheet at my personal Drive account. To manipulate this spreadsheet, I used a python library called gspread .
The dashboard also uses some images for the logo and to generate the word cloud (which I will discuss later). At the time I was building the dashboard, Data Studio didn’t recognize image URLs from my Drive account, so I created an Imgur profile and used the python API imgurpython .
Let’s detail the whole process a bit more.
The Pipeline
I want to do all these tasks sequentially, so I wrote a shell script in order to generate/update the report in a single command:
./insta_pipe.sh <your-profile> <language>
Here, <your-profile> is your Instagram profile and <language> is the language you want the report to be generated in (currently en and pt ).
The shell script looks something like this:
I will talk about each command of this script in the following sections.
What we end up with
After running the shell script, the goal is to have a google spreadsheet updated with the information we need, much like the one below:
I decided to divide groups of information into different worksheets:
ProfileInfo — profile name and imgur URL for the profile pic
WordCloud — Imgur URL for the word cloud image
Top_Hash — top 10 hashtags, according to the average number of likes
Data — table of one Instagram post per row, with info about media type (image, video, or sidecar), plus the number of likes and comments
MediaMetrics — the average number of likes per media type
DayMetrics — the average number of likes and number of posts per weekday
MainMetrics — the overall average number of likes and comments
Extracting Information from your Insta Profile
For the first task, I decided to use this awesome library called Instaloader. As it is said at the library’s website, Instaloader is a tool to download pictures or videos from Instagram, along with other metadata. For this project, I am mainly interested in info such as the number of likes, comments, captions, and hashtags.
Once you pip install instaloader and read the documentation, it turns out that for this project, a single command is all it takes:
instaloader --no-pictures --no-videos --no-metadata-json --post-metadata-txt="date:
{date_local}
typename:
{typename}
likes:
{likes}
comments:
{comments}
caption:
{caption}" $1;
That will create a folder named with your profile, and inside there will be a bunch of .txt files, one for each post. Inside each file, there will be information about:
date
media type (image, video ou sidecar)
number of likes
number of comments
caption
If you run this command, you'll see that my attempt at breaking the lines with \n was not successful. The command automatically escapes the backslash, and I end up just with a literal "\n" written in the file.
I am certain there is a smarter way to do that, but my workaround was to replace the escaped \n with an actual newline in every text file, which is what fix_lines.py does.
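A minimal sketch of what fix_lines.py does, assuming the folder layout Instaloader produces (the real version is in the repository):

import glob
import sys

profile = sys.argv[1]
for path in glob.glob(f"{profile}/*.txt"):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # turn the literal backslash-n left by Instaloader into real newlines
    with open(path, "w", encoding="utf-8") as f:
        f.write(text.replace("\\n", "\n"))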
Oh, and Instaloader will also download your profile pic, which I will use as a logo for the dashboard.
Transform and Upload — Preliminary Steps
For this step, I had to make sure I had some things beforehand:
a Google account to use Drive
an Imgur Account
I also had to follow some instructions to authenticate and authorize the application for both gspread and imgur .
For gspread , I followed the instructions in the library's documentation to, in the end, have a credentials.json to put at ~/.config/gspread/credentials.json .
As for Imgur, I followed the instructions in the imgurpython documentation, working through the registration quickstart section just up until I had the following: client_id , client_secret , access_token and refresh_token . These tokens replace the placeholders in the imgur_credentials.json file, along with the username of your Imgur account.
The last thing is that I had to create a blank Google Sheet beforehand and get its key. If you open a google sheet, the link will be something like this:
https://docs.google.com/spreadsheets/d/1h093LCbdJtDCNcDUnln4Lco-RANtl6-_XVi49InZCBw/edit#gid=0
The key would be that sequence of letters and numbers in the middle:
1h093LCbdJtDCNcDUnln4Lco-RANtl6-_XVi49InZCBw
I will use it later to tell gspread which spreadsheet to update.
Transform and Upload — Assemble, Generate and Upload
The script at transform_and_upload.py reads the .txt files created with Instaloader, assembles all the information, creates metrics and dataframes, and then updates the worksheets.
Creating the clients
First, we begin by setting the Google Sheet key we wish to update and creating a sheet object so we can later update its content:
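In code, that first step is roughly this (using the example key from above; gSheet is the wrapper class described next):

SHEET_KEY = "1h093LCbdJtDCNcDUnln4Lco-RANtl6-_XVi49InZCBw"
g_sheet = gSheet(SHEET_KEY)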
g_sheet is an instance of the gSheet class, which contains the methods to authenticate and update the worksheets. Just to show you the beginning of it (you can check the rest of it at the repository):
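A condensed sketch of how the class starts; gspread.oauth() picks up the credentials.json placed at ~/.config/gspread/, and the helper method shown here is illustrative:

import gspread

class gSheet:
    def __init__(self, sheet_key):
        client = gspread.oauth()  # reads ~/.config/gspread/credentials.json
        self.sheet = client.open_by_key(sheet_key)

    def _write_dataframe(self, worksheet_name, df):
        # overwrite one worksheet with a header row plus the dataframe rows
        ws = self.sheet.worksheet(worksheet_name)
        ws.clear()
        ws.update([df.columns.tolist()] + df.values.tolist())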
An Imgur client is also created, using the credentials to perform operations on your account, like removing images from past runs of your application and uploading the new images you want to be displayed at your dashboard:
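Building that client is straightforward with imgurpython, reading the tokens from the imgur_credentials.json file mentioned earlier:

import json
from imgurpython import ImgurClient

with open("imgur_credentials.json") as f:
    creds = json.load(f)

imgur_client = ImgurClient(
    creds["client_id"],
    creds["client_secret"],
    creds["access_token"],
    creds["refresh_token"],
)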
Generating dataframes and metrics
The assemble_info function is the one that actually reads the text files line by line and assembles the information into an initial dataframe called df_posts :
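A simplified sketch of that parsing loop; the real function also handles multi-line captions and extracts hashtags, so the field handling below is illustrative:

import glob
import pandas as pd

rows = []
for path in glob.glob(f"{profile}/*.txt"):
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    fields = {}
    # the template writes a "name:" line followed by its value
    for name, value in zip(lines[::2], lines[1::2]):
        fields[name.rstrip(":")] = value
    rows.append(fields)

df_posts = pd.DataFrame(rows)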
Each row of df_posts refers to a single post, with the following columns:
media_type: Video, Image or Sidecar
media_code: Just encoding the above into integers (1,2 or 3)
likes: Number of likes the post currently has
comments: Number of comments the post currently has
date: Date of creation
hashed: List of hashtags used
In addition, assemble_info also uploads the profile image to Imgur, returning its URL, and concatenates the text of every caption (except hashtags), which is used later to generate the word cloud.
df_posts is then used to generate further specific dataframes and metrics:
df_hash contains the hashtags in descending order, according to the average number of likes (hashtags that appear less than 5 times are ignored).
Information according to weekday is stored in df_day , while df_data is just a subset of df_posts that will be used by Data Studio to create the pie graph according to media type.
As for metrics , it contains some overall metrics of average likes and comments, besides info according to media type.
Uploading to Drive
With everything at hand, it is time to update our Google Sheet:
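In outline, it is just a series of calls like these; the method names are illustrative, and the real ones are in the repository:

g_sheet.update_profile_info(profile_name, profile_pic_url)
g_sheet.update_wordcloud(wordcloud_url)
g_sheet.update_top_hash(df_hash)
g_sheet.update_data(df_data)
g_sheet.update_media_metrics(metrics)
g_sheet.update_day_metrics(df_day)
g_sheet.update_main_metrics(metrics)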
All these methods basically update each worksheet with the appropriate information. The only exception is update_wordcloud , which also generates the word cloud, using nltk to tokenize the text and remove stopwords, and the wordcloud package:
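Roughly along these lines, where all_captions is the concatenated caption text produced by assemble_info and language is the pipeline argument (both names are assumptions of this sketch; mapping en/pt to NLTK's stopword language names is omitted):

import nltk
from nltk.corpus import stopwords
from wordcloud import WordCloud

# assumes the nltk punkt and stopwords data have been downloaded
tokens = nltk.word_tokenize(all_captions.lower())
words = [t for t in tokens
         if t.isalpha() and t not in stopwords.words(language)]

wc = WordCloud(width=800, height=400, background_color="white")
wc.generate(" ".join(words))
wc.to_file("wordcloud.png")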
and then uploads it to Imgur before sending the URL to the worksheet:
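The upload itself is short; imgurpython's upload_from_path returns a dict whose link field is the hosted image URL:

uploaded = imgur_client.upload_from_path("wordcloud.png", anon=False)
wordcloud_url = uploaded["link"]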
And that’s it!
For the sake of brevity, I didn’t show here every line of code used throughout the script, but if you are interested, you can check it all out at the Project’s Repository.
Next Steps
Now that we have an updated Google Sheet, we can use it as a data source to plug into Data Studio. The next step would be, of course, building the dashboard. As I said before, I feel that explaining the process here would make the story somewhat extensive, so I will leave it for some other time!
Thank you for letting me share the experience! | https://medium.com/analytics-vidhya/web-scraping-instagram-to-build-your-own-profiles-interactive-dashboard-with-instaloader-and-42141575e009 | ['Felipe De Pontes Adachi'] | 2020-09-22 13:45:15.829000+00:00 | ['Python', 'Google Sheets', 'Data Science', 'Web Scraping', 'Data Visualization'] |
Having Trouble Sticking to an Exercise Program? | Having Trouble Sticking to an Exercise Program?
Time to shift your thinking
Photo by John Arano on Unsplash
I’m sitting at my desk right now with a giant ice pack on my shin. Under the ice pack is a goose egg, with a bright-red, oozing cut going down the middle.
Just thirty minutes ago, I was riding downhill on my mountain bike and I took a spill into the dry Southern California brush. The pedal hit my leg as I landed in the bushes. When I looked down, I could see the bump rising on my shin.
While riding home, goose egg still growing, I suddenly remembered an incident that happened about twenty years ago.
I was on my first mountain biking outing with my husband. It was a regular trail, nothing too difficult, but at that point, I had not experienced trail riding with rocks, crevices, and the like.
Turning a corner, I hit a rock and fell off my bike. The pedal hit my leg and I ended up with a bump the size of a golf ball on my shin, much like today. That day, all those years back, I rode home, terrified to fall again and in pain from my injury.
From that day forward, I despised mountain biking. I couldn’t understand why anyone would participate in such a dangerous sport. That disdain lasted almost two decades.
Twenty-something years later, with two kids in tow, my husband suggested we go mountain biking. Finding activities for the kids is always a challenge, so I couldn’t say no. Plus, I knew we would be going on trails easy enough for kids, so I reluctantly agreed. I didn’t even have a mountain bike — it was a hybrid.
That first day back on the trail was a nail-biting experience. However, the one great thing about having kids is they force you to pretend everything is okay, even when it isn’t. That is what I did while I peddled in fear.
Of course, to start with we went on beginning trails. Listening to my husband explain to the kids how to ride over rocks and cracks in the dirt helped me. I learned alongside them without any of them knowing.
Watching my kids enjoying themselves mountain biking gave me a new perspective — that it could be fun to ride off-road with all the varied terrain and interesting landscapes. After five or six rides with them, my fear subsided, and I started enjoying myself.
As my daughters became more competent in the sport, I did as well.
Several years have gone by. I now have a proper mountain bike. We still ride as a family, but I also go out on my own, trying out new challenges, and achieving new skill levels. Sure, I’m here now with an ice pack on my leg. Will it stop me from going out again — no way.
So what is the difference between my experience today and twenty years ago? And what does this have to do with sticking to an exercise program?
This story is an example of one of the reasons it can be difficult to maintain an activity. To stick with it, the experience needs to be enjoyable — something you look forward to doing several times per week for years.
In my experience with mountain biking, it wasn’t fun because when I started, I was in over my head. I didn’t have the skills or knowledge to ride over rocks and down steep, scary hills. This was further complicated by having a negative experience (injuring myself).
Exercise adherence is a fickle thing. Making an activity a regular part of our lifestyle takes a strong desire to start, positive experiences, and enough consistency to turn it into a long-term habit.
Create Positive Experiences
Photo by Nik Shuliahin on Unsplash
Sheltering in place has given us a rare opportunity to pursue new hobbies and activities. If you always hated going for walks or even running, it might be a great time to try it out again. I was never a fan of cooking, but since my life outside the home has been restricted, cooking has become the highlight of my day.
As I’ve become better at cooking more complicated meals, I get more satisfaction from it. The same applies to exercise. Now is a perfect time to jump off screens for a half-hour a day and pursue a new activity.
We know exercise enjoyment helps with long term adherence; however, developing a healthy relationship with exercise isn’t always romantic. It isn’t always that image we have in our heads of frolicking in the waves or hitting that zone in running that gives you a runner’s high. I like to think of it more like a happy marriage. There is romance at times, but also toothpaste blobs in the sink and toilet rolls that don’t get replaced.
Some forms of exercise are not always fun right away. Running is a good example of this. I hear many people claim to hate it. Personally, I didn’t like running until I stuck with it long enough to gain enough fitness that it felt good. I’ve been doing it now for 28 years and I can’t imagine living without it.
I didn’t particularly enjoy weightlifting when I started either. I had to give it a chance over several months until the fitness and competency kicked in before I really started to find it satisfying.
Get Rid of Negative Thoughts
Photo by Francisco Moreno on Unsplash
Sometimes our history and relationship with exercise can be influenced by holding on to negative thoughts. These thoughts are often about our ability, or even from having a bad experience that lingers in the back of our minds. That is what happened to me with mountain biking. I initially decided the sport was too scary and I hated it.
As with many hurdles in life, it is helpful to reflect on our negative belief systems — things like, “it’s too hard,” or “I can’t do this.” These beliefs prevent us from fully embracing and enjoying what we are doing. Negative thoughts also impact our competency. Those belief systems will need to be replaced with, “I can do this,” or “I am getting stronger.”
When you notice negative thoughts arise, it is also helpful to remind yourself of the positive aspects of what you are doing, like listening to the sounds of birds, the fresh air, and the feeling of movement. Over time, by addressing and changing these thoughts, your relationship with activity will take on a more positive spin.
The great thing about movement is it naturally produces happy hormones (endorphins) that can assist you in feeling more positive about what you are doing. These endorphins will also help with rough times in our overall lives — like the many difficulties we are currently facing with COVID-19.
Create a Habit
Photo by Drew Beamer on Unsplash
The only way to achieve results from an exercise program is to be consistent. Our bodies operate on the principle of adaptation. When we stress parts of the body, like muscles or the heart, they are temporarily weakened. When they repair, they rebuild in a way that makes them stronger.
Building muscle is expensive from an adaptation perspective. Your body doesn’t want to shift resources (energy) to build a muscle that needs to be maintained with more energy if it only happens once in a while. This is why it is important to exercise regularly.
Making exercise a habit is important for long term success.
If I go about my day without a plan to exercise, it is easy for the day to get away from me. The best way to prevent that from happening is to make a ritual for the activities I want to do daily, like doing it at the same time every day.
Since I know I am going to work out at 2:00, I try to get the largest chunk of work done before that time. I’ve noticed having that daily break makes me more productive before 2:00. When I finish exercising, more blood has been moving around my body and it helps me feel more alert and more productive when I get back to work.
Doing it at the same time every day, or even creating a ritual that includes exercise, means our bodies and minds have to think less about making a decision; instead, they work on autopilot.
When exercise becomes a habit, and it becomes a long-term commitment, it leads to better results. Seeing results leads to more motivation. And so starts the positive feedback loop.
Gauge Intensity
Photo by Anne Nygård on Unsplash
My first mountain biking experience was on a trail above my level. This led to frustration and discouragement, and ultimately me quitting the sport.
When I took several steps back, doing trails easy enough for kids, I was confident about my ability, which made the experience positive. Over time I stuck with it.
It is common to start an exercise program that is too advanced. When starting a program, enthusiasm is at a high peak. It is easy to allow that enthusiasm to dive into a program above your current fitness level. We often don’t want to accept where we are in our fitness journeys, believing we are more advanced than we really are. Starting off slow, even when excited to jump in, is imperative. Patience is an important exercise virtue.
Avoiding making this mistake is crucial to long term success. It is better to walk every day for 10 years than start a running program that only lasts a few months.
Get Social Support
Photo by Felix Rostig on Unsplash
Having other friends or family members participate in an activity can make it easier to stick to a program. Being accountable to another person means you are more likely to schedule exercise sessions and actually do them. It helps on the days where you don’t feel motivated, but don’t want to let another person down.
Our current environment is making this challenging for many people since meeting up with others isn’t advised. If you don’t have anyone in your inner circle who can join you in an activity, there are other options. Sending texts or calling friends can have the same accountability. One of my close friends and I plan walks at the same time, and talk on the phone while we hit the trails. There are also apps, like Strava in the running world, that tap into the social side of running.
If you do have a friend who can meet with you in person, beware of being pushed too hard. If your friend is more advanced than you, outline your expectations. My husband, who is more advanced in mountain biking, only goes with me when he wants to ride easy, or just wants to go out and have fun with me, but not for an intense workout.
Closing Thoughts
Mountain biking is a new sport for me. I love other sports as well and have been doing many of them for decades. It may sound like I’m one of those workout-freaks, but I don’t think I am different than most people. I’ve merely been able to tap into something I think many of us have inside of us — the love of movement. We are born to move. Somehow, we train ourselves out of it. The key is to push past the inertia of inactivity and get that ball rolling again.
When enjoyment becomes your main reason for exercising, your focus changes from making it a “have-to” to a “want-to.” Learning to love it comes with many positive side effects — like a strong heart, lean muscles, lower cholesterol, better mental health, improved coping skills, and many others.
We all know exercise is something we should do. Starting a program can be a challenge, but sticking with it can be even more challenging, especially if it isn’t enjoyable.
Making it fun and easy to achieve is one of the best ways to keep motivation strong over a long time period. | https://medium.com/in-fitness-and-in-health/having-trouble-sticking-to-an-exercise-program-710a5655ba7a | ['Amy J. Wall'] | 2020-09-16 15:53:35.936000+00:00 | ['Sports', 'Exercise', 'Health', 'Fitness', 'Lifestyle'] |
Exploratory Data Analysis(EDA) For Predicting Hotel Booking Cancellations Using Machine Learning | Booking cancellations have a substantial impact on demand management decisions in the hospitality industry.
Every year, more than 140 million bookings are made on the internet, and many hotel bookings come through top-visited travel websites like Booking.com, Expedia.com, and Hotels.com.
Hotel Booking Cancellations, A Growing Problem…
Analyzing the past 5 years of data, D-Edge Hospitality Solutions found that the global average cancellation rate on bookings has reached almost 40%, and this trend has a very negative impact on hotel revenue and distribution management strategies.
To overcome the problems caused by booking cancellations, hotels implement rigid cancellation policies, inventory management, and overbooking strategies, which can also have a negative influence on revenue and reputation.
Once a reservation has been canceled, there is almost nothing to be done, and this creates discomfort for many hotels and hotel technology companies. Therefore, predicting which reservations might get canceled and preventing those cancellations would create surplus revenue for both hotels and hotel technology companies.
Motivation
Have you ever wondered what it would be like if you could predict which guests are likely to cancel and adjust the overbooking rate accordingly? That would be great, right?
Luckily, by using machine learning with Python, we can predict which guests are likely to cancel their reservations, and this could help produce better forecasts and reduce uncertainty in business decisions.
In this article, I will apply Exploratory Data Analysis (EDA) to draw insights from the dataset and see which features contribute most to predicting cancellations, using data visualization with Matplotlib and Seaborn. It is always a good practice to understand the data first and try to gather as many insights from it as possible.
Exploratory Data Analysis (EDA) with Data Visualization
To better understand the dataset, we have to come up with a list of questions.
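All of the analysis that follows starts from a pandas DataFrame. Assuming the dataset is saved locally as hotel_bookings.csv and uses the column names of the public hotel booking demand dataset (both assumptions of this sketch), the setup looks like:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("hotel_bookings.csv")
print(df.shape)                                        # rows, columns
print(df["is_canceled"].value_counts(normalize=True))  # share of cancellations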
1. What are the Top 10 Countries of Origin of Hotel Visitors (Guests)?
Around 40% of all bookings were made from Portugal, followed by Great Britain (10%) and France (8%).
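A sketch of how a chart like this can be produced, again assuming the public dataset's column names:

top_countries = df.loc[df["is_canceled"] == 0, "country"].value_counts()[:10]
sns.barplot(x=top_countries.index, y=top_countries.values)
plt.title("Top 10 Countries of Origin of Hotel Guests")
plt.ylabel("Number of guests")
plt.show()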
2. Which Month is the Most Occupied with Bookings at the Hotel?
According to the graph, August is the most occupied (busiest) month with 11.62% of bookings, and January is the most unoccupied month with 4.96% of bookings.
3. How many Bookings were Cancelled at the Hotel?
According to the pie chart, 63% of bookings were not canceled and 37% of the bookings were canceled at the Hotel.
4. Which Month Has Highest Number of Cancellations By Hotel Type?
For the City Hotel, the number of cancellations per month hovers around 40% throughout the year; for the Resort Hotel, cancellations are highest in June, July, and August and lowest during November, December, and January.
5. How many Bookings were Cancelled by Hotel Type?
Of all canceled bookings, 25.14% were at the Resort Hotel and 74.85% were at the City Hotel.
6. Relationship between Average Daily Rate(ADR) and Arrival Month by Booking Cancellation Status
The highest Average Daily Rate (ADR) occurred in August, and that may be one of the reasons cancellations peaked in August.
7. Total Number of Bookings by Market Segment
Around 47% of bookings are made via Online Travel Agents, almost 20% via Offline Travel Agents, and less than 20% are Direct bookings without any other agents.
8. Arrival Date Year vs Lead Time By Booking Cancellation Status
For all three years, bookings with a lead time of less than 100 days have a lower chance of being canceled, while bookings with a lead time of more than 100 days have a higher chance of being canceled.
9. Relationship between Special Requests and Cancellations
Around 28% of bookings were canceled when the guests made no special requests, followed by 6% canceled with one special request from the guests.
10. How does the ADR Vary Over the Year by Hotel Type
For the Resort Hotel, ADR is more expensive during July, August, and September, while for the City Hotel, ADR is slightly higher during March, April, and May.
11. What is the Effect of Deposit Type on Cancellations
Around 28% of bookings were canceled by guests who paid no deposit, followed by 22% of bookings canceled with a refundable deposit.

So it's obvious that guests who do not pay any deposit while booking are more likely to cancel their reservations.
Business Summary
From our EDA, we observed that the top 5 most important features in the dataset for predicting guest cancellations are:
1. Lead Time
2. ADR
3. Deposit Type
4. Arrival Day of the Month
5. Total Number of Special Requests
Strategies to Counter High Cancellations at the Hotel
Set Non-refundable Rates, Collect deposits, and implement more rigid cancellation policies.
Using Advanced Purchase Rates with varying Lead Time windows
Encourage Direct bookings by offering special discounts
Hotels should pay attention to the total number of special requests from guests and improve customer service to reduce the possibility of cancellations.
Monitor where the cancellations are coming from, such as market segments and distribution channels.
As always, I welcome feedback and constructive criticism. Python Code for Data Visualization is available on GitHub and I can also be reached on LinkedIn. | https://medium.com/analytics-vidhya/exploratory-data-analysis-eda-for-predicting-hotel-booking-cancellations-using-machine-learning-3990be4af2ff | ['Chaithanya Vamshi'] | 2020-07-19 16:30:03.745000+00:00 | ['Eda', 'Python', 'Hospitality', 'Data Visualization', 'Machine Learning'] |
Creating a Confirm Dialog in React and Material UI | There comes a time in every application when you want to delete something. So, like every developer, you add a button that, when clicked, deletes the resource.
Whether it's a blog post, a shopping cart item, or disabling an account, you want to protect against unwanted button clicks.
Enter the Confirm Dialog.
Sometimes you do want to execute an action without prompting the user with a confirm dialog every time; being asked to confirm constantly can get annoying.
Hey, do you want to do this? No, really, do you really want to do this?
Sometimes I get annoyed and tell myself, and the application: Yes! Of course I want to do the action. Why else would I have clicked on it?
However, when it comes to deleting sensitive data, such as a blog post, I would suggest adding a confirm dialog, so the user can be alerted and can back out if they accidentally clicked on it by mistake.
Before we begin, let's look at how to achieve this in native JavaScript.
var shouldDelete = confirm('Do you really want to delete this awesome article?');
if (shouldDelete) {
  deleteArticle();
}
This will open the browser's default confirm box with the text, "Do you really want to delete this awesome article?"
If the user clicks OK, confirm returns true, so shouldDelete is set to true and the deleteArticle function runs. If they click Cancel, it returns false, the dialog closes, and nothing is deleted.
But the native browser implementation of the confirm dialog is kind of boring, so let’s make a version, that looks good, with React and Material UI.
Let’s begin by creating a reusable component. You can use this in any application that uses React and Material UI.
import React from 'react';
import Button from '@material-ui/core/Button';
import Dialog from '@material-ui/core/Dialog';
import DialogActions from '@material-ui/core/DialogActions';
import DialogContent from '@material-ui/core/DialogContent';
import DialogTitle from '@material-ui/core/DialogTitle';

const ConfirmDialog = (props) => {
const { title, children, open, setOpen, onConfirm } = props;
return (
<Dialog
open={open}
onClose={() => setOpen(false)}
aria-labelledby="confirm-dialog"
>
<DialogTitle id="confirm-dialog">{title}</DialogTitle>
<DialogContent>{children}</DialogContent>
<DialogActions>
<Button
variant="contained"
onClick={() => setOpen(false)}
color="secondary"
>
No
</Button>
<Button
variant="contained"
onClick={() => {
setOpen(false);
onConfirm();
}}
color="default"
>
Yes
</Button>
</DialogActions>
</Dialog>
);
};

export default ConfirmDialog;
This component will take in these props:
title — This is what will show as the dialog title.
children — This is what will show in the dialog content. This can be a string, or it can be another, more complex component.
open — This is what tells the dialog to show.
setOpen — This is a state function that will set the state of the dialog to show or close.
onConfirm — This is a callback function for when the user clicks Yes.
This is just a basic confirm dialog, you can modify it to meet your needs, such as changing the Yes or No buttons.
Now let’s see how we can use this component in our application.
As an example, let’s say we have a table that lists blog posts. We want a function to run when we click a delete icon, that will show this confirm dialog, and when we click Yes, it will run a deletePost function.
<div>
<IconButton aria-label="delete" onClick={() => setConfirmOpen(true)}>
<DeleteIcon />
</IconButton>
<ConfirmDialog
title="Delete Post?"
open={confirmOpen}
setOpen={setConfirmOpen}
onConfirm={deletePost}
>
Are you sure you want to delete this post?
</ConfirmDialog>
</div>
In this component, we need to implement the ConfirmDialog with the props open, setOpen, and onConfirm. open and setOpen are controlled using React state, and onConfirm takes in a function called deletePost, which calls an API to delete this particular post. Implementing these is beyond the scope of this article; I will leave it up to you to implement what those functions actually do.
There you have it! Pretty easy to create a reusable confirm dialog, and it looks a million times better than the default native browser dialog. | https://medium.com/javascript-in-plain-english/creating-a-confirm-dialog-in-react-and-material-ui-3d7aaea1d799 | ['Andrew Bliss'] | 2020-04-29 17:14:05.755000+00:00 | ['JavaScript', 'Web Development', 'React', 'Material Ui', 'Programming'] |
3 Reasons Other People’s Opinions Do Matter | Aspiring creatives are often told to produce work purely for themselves — at least at the beginning of their journey.
The idea is that enjoying the process of creation is more important than worrying about the recognition our work receives.
I myself have repeated that advice on this blog, so my intention with this piece isn’t to refute it.
I do think enjoying the process of whatever it is we decide to do is more important than any recognition we might receive.
We can’t control recognition, but we can control the attitude we adopt toward the work we take up.
There are no guarantees that we will become financially independent as creatives, and so the real question we need to ask ourselves is, “Can I do what I do even if the money never comes?”
Learning to have fun while creating, setting one’s own goals and working diligently toward them, and not being too concerned with how others view our journey are all critical elements of getting back in touch with our creative nature.
That being said, I think there is an equally valid perspective regarding why we should make an effort to care about someone’s opinion when it stems from the genuine desire of wanting to give good feedback.
What I’m not suggesting is that we ought to let anyone who has a negative opinion of something we create completely obliterate our self-esteem, or even that we have to admit that they are one-hundred percent right in their assessment.
I’m merely suggesting that other people’s reactions to our work can teach us a lot about our strengths and weaknesses, provided we cultivate an attitude of being able to learn from feedback.
So I thought I’d take the time to run through three reasons why we should make that effort.
Reason #1 — We Need Feedback to Improve & Grow
While it’s possible for each of us to act as our own critic, there’s simply no substitute for having outsiders engage with and respond to our work.
We all need people who will give us feedback. That’s how we improve. — Bill Gates
For example, sometimes we think we are being clear in our message only to realize that it’s clunky in ways we didn’t catch.
If you’ve ever written something in a flash, took a day or more break from it, and then come back to realize that it was a hurried mess, then you’ll know what I’m talking about.
The act of taking a break distanced you from your work, and you were able to see it a bit more objectively.
Well, other people are like that, times a hundred.
Their reaction is a clue as to whether our train of thought was expressed clearly, or if it jumped all over the place and missed out on providing its intended value.
Anyone interested in improvement should be open to hearing well-reasoned critiques of their work (unfortunately, so many critiques are not well-reasoned, but more on that later).
Put another way, we grow in proportion to the degree that we are capable of admitting that we still have faults that need to be worked on.
Other people can help with pointing out those faults — as painful as it may be.
We just need to take care to discern when someone’s critique is valid, versus when someone is trolling or attacking us with unnecessary negativity.
Reason #2 — No Feedback Is the Worst Kind of Feedback
If our goal is to provide value to our audience and maybe even get paid for what we do someday, then we need to care about what sort of reactions our work is triggering in others.
Comments and feedback — aka opinions — are the only way to know how our work is affecting the people it comes into contact with.
But here’s the thing: people mostly comment on what provokes either a really positive or really negative reaction out of them.
A popular example is the recent Star Wars sequel trilogy.
Most people loved The Force Awakens, but The Last Jedi is really where people drew battle lines.
It’s no surprise then that The Last Jedi is probably still the most talked-about installment of the trilogy.
It provoked strong reactions out of people, which I think many creatives would readily admit as something they want their own work to do.
It’s when people have no reaction to our work that we find ourselves in a bit of a pickle.
No reaction means our work is still dwelling somewhere in the land of mediocrity, either by playing it safe, failing to entertain/provide value, etc.
But no one should be devastated if this is the case.
After all, out of the 311 (and counting) blog posts I’ve published thus far, none have gone viral, and very few of them have even one response.
What this tells me is that I still have a lot of room left to improve my writing game, even if I can “blame” some of it on the luck of the internet draw.
At this point, however, I prefer to believe that the onus is on me to provide more value for my readers and to not make excuses for why I haven't yet seemed capable of doing that.
The point I’m trying to make, however, is this: the only alternative to hearing others’ opinions of our work (whether they be positive or negative) is a silent audience, and who really wants that?
Reason #3 — If We Eventually Want to Make a Larger Impact, Some Degree of Social Validation Is Necessary
While in the beginning stages of any creative journey it’s always best to learn how to enjoy the process for its own sake, many of us also want to make a larger impact on the world — if we’re permitted.
To do that, however, some degree of social validation is necessary.
… creativity cannot be understood by looking only at the people who appear to make it happen. Just as the sound of a tree crashing in the forest is unheard if nobody is there to hear it, so creative ideas vanish unless there is a receptive audience to record and implement them. And without the assessment of competent outsiders, there is no reliable way to decide whether the claims of a self-styled creative person are valid. — Mihaly Csikszentmihalyi
I learned this the hard way when I was forced to question my earlier I-only-want-to-rebel self.
I guess you could say that I sort of saw through the absurdity of non-conforming just to be a non-conformist. As other people have pointed out, that’s basically just a different way of conforming.
The truth is, of course, that every human being wants to feel accepted by at least one other human being, and most of us want to feel like we have a place in society at large as well.
So while it’s fine to create work for ourselves in the beginning, I think that any aspiring creative should feel unashamed in admitting that they want their work to find a broader audience eventually.
We can debate all day what a “competent outsider” is from Mihaly’s quote above, but for the sake of keeping things simple, I’ll stick to just the blogging world.
When I think about what a blog is, I don’t try to inflate it into something grand.
I see a blog as a place to share the knowledge I’m exploring with others and to help them learn and grow along with me.
It’s a place for self-reflection, sharing knowledge and wisdom, and being unafraid of updating my worldview when and if I find it is in need of updating.
From this perspective, anyone who comments is a “competent outsider,” as I’m not trying to revolutionize, say, theoretical physics.
I’m just trying to make an impact on whoever stumbles in here.
That’s why feedback is so critical, and why anyone in the same boat as me shouldn’t run away from hearing others’ opinions of their work.
Feedback is eventually what we want — we just have to embrace hearing critiques along with praise, and not let either type of comment go to our heads.
This doesn’t mean we have to be enslaved to the desire to have our work socially validated.
Honesty and authenticity are still more important than performative displays for the crowd.
But, interestingly enough, being honest enough to risk upsetting people is also the same way that we attract an audience that’s genuinely interested in what we have to say.
To Sum It All Up
Should other people’s opinions matter to the lone creative trying to produce work that impacts the world?
The answer is, of course, yes — to a certain degree.
Other people’s opinions shouldn’t determine how we view our work in the long run, but we should learn enough from them so that we can keep improving.
Silence, in many ways, is the worst form of feedback, as it reveals to us that our work isn’t yet making an impact or provoking strong reactions out of people.
We should also feel comfortable admitting that making an impact and receiving social validation is, in many ways, the same thing.
Rather than run away from people praising or critiquing our work (and sometimes they can do both), we should embrace it as a necessary element of the process.
Other people are indeed the mirror that is held up to us and our work.
Sometimes the image we see makes us uncomfortable.
But that’s okay, so long as we are willing to learn something and keep on chugging along. | https://medium.com/the-innovation/3-reasons-other-peoples-opinions-do-matter-47e84108ecef | ['Colton Tanner Casados-Medve'] | 2020-12-14 21:38:28.528000+00:00 | ['Self Improvement', 'Blogging', 'Creativity', 'Opinion', 'Growth'] |
The 10 Statistical Techniques Data Scientists Need to Master | Regardless of where you stand on the matter of Data Science sexiness, it’s simply impossible to ignore the continuing importance of data, and our ability to analyze, organize, and contextualize it. Drawing on their vast stores of employment data and employee feedback, Glassdoor ranked Data Scientist #1 in their 25 Best Jobs in America list. So the role is here to stay, but unquestionably, the specifics of what a Data Scientist does will evolve. With technologies like Machine Learning becoming ever-more common place, and emerging fields like Deep Learning gaining significant traction amongst researchers and engineers — and the companies that hire them — Data Scientists continue to ride the crest of an incredible wave of innovation and technological progress.
While having strong coding ability is important, data science isn't all about software engineering (in fact, good familiarity with Python will get you far). Data scientists live at the intersection of coding, statistics, and critical thinking. As Josh Wills put it, "data scientist is a person who is better at statistics than any programmer and better at programming than any statistician." I personally know too many software engineers looking to transition into data science who blindly apply machine learning frameworks such as TensorFlow or Apache Spark to their data without a thorough understanding of the statistical theories behind them. Hence the study of statistical learning, a theoretical framework for machine learning drawing from the fields of statistics and functional analysis.
Why study Statistical Learning? It is important to understand the ideas behind the various techniques, in order to know how and when to use them. One has to understand the simpler methods first, in order to grasp the more sophisticated ones. It is important to accurately assess the performance of a method, to know how well or how badly it is working. Additionally, this is an exciting research area, having important applications in science, industry, and finance. Ultimately, statistical learning is a fundamental ingredient in the training of a modern data scientist. Examples of Statistical Learning problems include:
Identify the risk factors for prostate cancer.
Classify a recorded phoneme based on a log-periodogram.
Predict whether someone will have a heart attack on the basis of demographic, diet and clinical measurements.
Customize an email spam detection system.
Identify the numbers in a handwritten zip code.
Classify a tissue sample into one of several cancer classes.
Establish the relationship between salary and demographic variables in population survey data.
In my last semester of college, I did an Independent Study on Data Mining. The class covered extensive material from 3 books: Intro to Statistical Learning (Hastie, Tibshirani, Witten, James), Doing Bayesian Data Analysis (Kruschke), and Time Series Analysis and Applications (Shumway, Stoffer). We did a lot of exercises on Bayesian Analysis, Markov Chain Monte Carlo, Hierarchical Modeling, and Supervised and Unsupervised Learning. This experience deepened my interest in the Data Mining academic field and convinced me to specialize further in it. Recently, I completed the Statistical Learning online course on Stanford Lagunita, which covers all the material in the Intro to Statistical Learning book I read in my Independent Study. Now, having been exposed to the content twice, I want to share the 10 statistical techniques from the book that I believe any data scientist should learn to be more effective in handling big datasets.
Before moving on with these 10 techniques, I want to differentiate between statistical learning and machine learning. I wrote one of the most popular Medium posts on machine learning before, so I am confident I have the expertise to justify these differences:
Machine learning arose as a subfield of Artificial Intelligence.
Statistical learning arose as a subfield of Statistics.
Machine learning has a greater emphasis on large scale applications and prediction accuracy.
Statistical learning emphasizes models and their interpretability, and precision and uncertainty.
But the distinction has become more and more blurred, and there is a great deal of "cross-fertilization."
Machine learning has the upper hand in Marketing!
1 — Linear Regression:
In statistics, linear regression is a method to predict a target variable by fitting the best linear relationship between the dependent and independent variables. The best fit is found by making sure that the sum of the squared distances between the fitted line and the actual observations is as small as possible: no other position of the line would produce less error. The 2 major types of linear regression are Simple Linear Regression and Multiple Linear Regression. Simple Linear Regression uses a single independent variable to predict a dependent variable by fitting the best linear relationship. Multiple Linear Regression uses more than one independent variable to do the same.
Pick any 2 related things that you deal with in your daily life. For example, I have data on my monthly spending, monthly income and the number of trips per month for the last 3 years. Now I need to answer the following questions (a small code sketch follows the list):
What will be my monthly spending for next year?
Which factor (monthly income or number of trips per month) is more important in deciding my monthly spending?
How monthly income and trips per month are correlated with monthly spending?
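To make this concrete, here is a minimal, hedged sketch of a multiple linear regression in scikit-learn; the synthetic numbers (income, trips, and the coefficients generating spending) are made up for illustration and stand in for the real data described above:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
income = rng.normal(4000, 500, size=36)   # 36 months of (fake) income
trips = rng.poisson(3, size=36)           # (fake) trips per month
spending = 0.5 * income + 80 * trips + rng.normal(0, 100, size=36)

X = np.column_stack([income, trips])
model = LinearRegression().fit(X, spending)

print(model.coef_, model.intercept_)    # which factor matters more
print(model.predict([[4500, 4]]))       # predicted spending for a new month

The fitted coefficients answer the second and third questions (relative importance and direction of the relationships), while predict handles the first.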
2 — Classification:
Classification is a data mining technique that assigns categories to a collection of data in order to aid more accurate predictions and analysis. (Decision trees, covered later, are one popular family of classifiers, but they are not synonymous with classification itself.) Here, 2 major classification techniques stand out: Logistic Regression and Discriminant Analysis.
Logistic Regression is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables. Types of questions that a logistic regression can examine:
How does the probability of getting lung cancer (Yes vs No) change for every additional pound of overweight and for every pack of cigarettes smoked per day?
Do body weight, calorie intake, fat intake, and participant age have an influence on heart attacks (Yes vs No)?
In Discriminant Analysis, 2 or more groups or clusters or populations are known a priori and 1 or more new observations are classified into 1 of the known populations based on the measured characteristics. Discriminant analysis models the distribution of the predictors X separately in each of the response classes, and then uses Bayes’ theorem to flip these around into estimates for the probability of the response category given the value of X. Such models can either be linear or quadratic.
Linear Discriminant Analysis computes "discriminant scores" for each observation to classify which response-variable class it is in. These scores are obtained by finding linear combinations of the independent variables. It assumes that the observations within each class are drawn from a multivariate Gaussian distribution and that the covariance of the predictor variables is common across all k levels of the response variable Y.
Quadratic Discriminant Analysis provides an alternative approach. Like LDA, QDA assumes that the observations from each class of Y are drawn from a Gaussian distribution. However, unlike LDA, QDA assumes that each class has its own covariance matrix. In other words, the predictor variables are not assumed to have common variance across each of the k levels in Y. (A combined sketch of all three classifiers follows below.)
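As a hedged illustration, scikit-learn ships implementations of all three; the toy dataset below is generated on the fly and is purely for demonstration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

for clf in (LogisticRegression(),
            LinearDiscriminantAnalysis(),
            QuadraticDiscriminantAnalysis()):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))  # in-sample accuracy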
3 — Resampling Methods:
Resampling is a method that consists of drawing repeated samples from the original data sample. It is a non-parametric method of statistical inference: it does not rely on generic distribution tables to compute approximate probability (p) values.
Resampling generates a unique sampling distribution on the basis of the actual data. It uses experimental methods, rather than analytical methods, to generate the unique sampling distribution. It yields unbiased estimates as it is based on the unbiased samples of all the possible results of the data studied by the researcher. In order to understand the concept of resampling, you should understand the terms Bootstrapping and Cross-Validation:
Bootstrapping is a technique that helps in many situations, such as validating predictive model performance, ensemble methods, and estimating the bias and variance of a model. It works by sampling with replacement from the original data and taking the "not chosen" data points as test cases. We can repeat this several times and use the average score as an estimate of our model's performance (see the sketch after this list).
On the other hand, cross-validation is a technique for validating model performance, done by splitting the training data into k parts. We take k − 1 parts as our training set and use the "held out" part as our test set. We repeat this k times, each time with a different held-out part. Finally, we take the average of the k scores as our performance estimate.
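Both ideas take only a few lines. Below is a sketch using scikit-learn utilities (resample for the bootstrap, cross_val_score for k-fold CV) on throwaway synthetic data; scoring on the full data inside the bootstrap loop is a simplification kept for brevity:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Bootstrap: refit on with-replacement copies and look at the score spread.
boot_scores = []
for i in range(100):
    Xb, yb = resample(X, y, random_state=i)
    boot_scores.append(LogisticRegression().fit(Xb, yb).score(X, y))
print("bootstrap mean/std:", np.mean(boot_scores), np.std(boot_scores))

# 5-fold cross-validation: average of the five held-out scores.
cv_scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("CV mean accuracy:", cv_scores.mean())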
Usually for linear models, ordinary least squares is the main criterion used to fit them to the data. The next 3 methods are alternative approaches that can provide better prediction accuracy and model interpretability when fitting linear models.
4 — Subset Selection:
This approach identifies a subset of the p predictors that we believe to be related to the response. We then fit a model using least squares on the reduced set of features.
Best-Subset Selection: Here we fit a separate OLS regression for each possible combination of the p predictors and then look at the resulting model fits. The algorithm is broken up into 2 stages: (1) Fit all models that contain k predictors, where k is the max length of the models, (2) Select a single model using cross-validated prediction error. It is important to use testing or validation error, and not training error to assess model fit because RSS and R² monotonically increase with more variables. The best approach is to cross-validate and choose the model with the highest R² and lowest RSS on testing error estimates.
Here we fit a separate OLS regression for each possible combination of the p predictors and then look at the resulting model fits. The algorithm is broken up into 2 stages: (1) Fit all models that contain k predictors, where k is the max length of the models, (2) Select a single model using cross-validated prediction error. It is important to use testing or validation error, and not training error to assess model fit because RSS and R² monotonically increase with more variables. The best approach is to cross-validate and choose the model with the highest R² and lowest RSS on testing error estimates. Forward Stepwise Selection considers a much smaller subset of p predictors. It begins with a model containing no predictors, then adds predictors to the model, one at a time until all of the predictors are in the model. The order of the variables being added is the variable, which gives the greatest addition improvement to the fit, until no more variables improve model fit using cross-validated prediction error.
considers a much smaller subset of p predictors. It begins with a model containing no predictors, then adds predictors to the model, one at a time until all of the predictors are in the model. The order of the variables being added is the variable, which gives the greatest addition improvement to the fit, until no more variables improve model fit using cross-validated prediction error. Backward Stepwise Selection begins will all p predictors in the model, then iteratively removes the least useful predictor one at a time.
begins will all p predictors in the model, then iteratively removes the least useful predictor one at a time. Hybrid Methods follows the forward stepwise approach, however, after adding each new variable, the method may also remove variables that do not contribute to the model fit.
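Newer versions of scikit-learn expose greedy stepwise selection directly; a minimal sketch (toy regression data, an arbitrary choice of 3 features to keep) might look like this:

from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       random_state=0)

# Greedy forward selection, scored by 5-fold cross-validation.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(X, y)
print(sfs.get_support())   # boolean mask of the selected predictors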
5 — Shrinkage:
This approach fits a model involving all p predictors; however, the estimated coefficients are shrunken towards zero relative to the least squares estimates. This shrinkage, aka regularization, has the effect of reducing variance. Depending on what type of shrinkage is performed, some of the coefficients may be estimated to be exactly zero, so this method can also perform variable selection. The two best-known techniques for shrinking the coefficient estimates towards zero are ridge regression and the lasso.
Ridge regression is similar to least squares except that the coefficients are estimated by minimizing a slightly different quantity. Like OLS, ridge regression seeks coefficient estimates that reduce the RSS, but it adds a shrinkage penalty that grows as the coefficients move away from zero. This penalty has the effect of shrinking the coefficient estimates towards zero. Without going into the math, it is useful to know that ridge regression shrinks the features with the smallest column-space variance the most. As in principal component analysis, ridge regression projects the data into a d-dimensional space and then shrinks the coefficients of the low-variance components more than the high-variance components, which correspond to the smallest and largest principal components respectively.
Ridge regression has at least one disadvantage: it includes all p predictors in the final model. The penalty term will push many of them close to zero, but never exactly to zero. This isn't generally a problem for prediction accuracy, but it can make the results harder to interpret. Lasso overcomes this disadvantage and is capable of forcing some of the coefficients exactly to zero, provided the budget s is small enough. Since s = 1 results in regular OLS regression, as s approaches 0 the coefficients shrink towards zero. Thus, lasso regression also performs variable selection (a short ridge/lasso sketch follows below).
6 — Dimension Reduction:
Dimension reduction turns the problem of estimating p + 1 coefficients into the simpler problem of estimating M + 1 coefficients, where M < p. This is attained by computing M different linear combinations, or projections, of the variables. These M projections are then used as predictors to fit a linear regression model by least squares. 2 approaches for this task are principal components regression and partial least squares.
One can describe Principal Components Regression as an approach for deriving a low-dimensional set of features from a large set of variables. The first principal component direction of the data is the one along which the observations vary the most. In other words, the first PC is a line that fits as close as possible to the data. One can fit p distinct principal components. The second PC is a linear combination of the variables that is uncorrelated with the first PC and has the largest variance subject to this constraint. The idea is that the principal components capture the most variance in the data using linear combinations of the data in subsequently orthogonal directions. In this way, we can also combine the effects of correlated variables to get more information out of the available data, whereas in regular least squares we would have to discard one of the correlated variables.
The PCR method described above involves identifying linear combinations of X that best represent the predictors. These combinations (directions) are identified in an unsupervised way, since the response Y is not used to help determine the principal component directions. That is, the response Y does not supervise the identification of the principal components, so there is no guarantee that the directions that best explain the predictors are also the best for predicting the response (even though that is often assumed). Partial least squares (PLS) is a supervised alternative to PCR. Like PCR, PLS is a dimension reduction method, which first identifies a new, smaller set of features that are linear combinations of the original features, then fits a linear model via least squares to the new M features. Yet, unlike PCR, PLS makes use of the response variable in order to identify the new features (both are sketched below).
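A hedged sketch of both, using a PCA-then-OLS pipeline for PCR and scikit-learn's PLSRegression; the number of components (3) is arbitrary here and would normally be chosen by cross-validation:

from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.cross_decomposition import PLSRegression

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# PCR: unsupervised PCA first, then least squares on the M components.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)

# PLS: chooses its directions using y as well as X.
pls = PLSRegression(n_components=3).fit(X, y)

print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))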
7 — Nonlinear Models:
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations. Below are a couple of important techniques to deal with nonlinear models:
A function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
A piecewise function is a function which is defined by multiple sub-functions, each sub-function applying to a certain interval of the main function's domain. Piecewise is really a way of expressing the function, rather than a characteristic of the function itself, but with additional qualification it can describe the nature of the function. For example, a piecewise polynomial function is a function that is a polynomial on each of its sub-domains, but possibly a different one on each.
A spline is a special function defined piecewise by polynomials. In computer graphics, spline refers to a piecewise polynomial parametric curve. Splines are popular curves because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design (a short smoothing-spline sketch follows below).
A generalized additive model is a generalized linear model in which the linear predictor depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.
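As one concrete, hedged example of a nonlinear fit, SciPy's cubic smoothing spline fits noisy data in a couple of lines; the data and the smoothing parameter s are invented for illustration:

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)

# Cubic smoothing spline; s trades off smoothness against fit.
spl = UnivariateSpline(x, y, k=3, s=2.0)
print(spl(np.array([2.5, 5.0, 7.5])))   # predictions at new points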
8 — Tree-Based Methods:
Tree-based methods can be used for both regression and classification problems. These involve stratifying or segmenting the predictor space into a number of simple regions. Since the set of splitting rules used to segment the predictor space can be summarized in a tree, these types of approaches are known as decision-tree methods. The methods below grow multiple trees which are then combined to yield a single consensus prediction.
Bagging is a way to decrease the variance of your prediction by generating additional data for training from your original dataset, using combinations with repetitions to produce multisets of the same cardinality/size as your original data. By increasing the size of your training set you can't improve the model's predictive force, but you can decrease the variance, narrowly tuning the prediction to the expected outcome.
Boosting is an approach to calculate the output using several different models and then average the result using a weighted-average approach. By combining the advantages and pitfalls of these approaches while varying your weighting formula, you can come up with good predictive force for a wider range of input data, using different narrowly tuned models.
The random forest algorithm is actually very similar to bagging. Also here, you draw random bootstrap samples of your training set. However, in addition to the bootstrap samples, you also draw a random subset of features for training the individual trees; in bagging, you give each tree the full set of features. Due to the random feature selection, you make the trees more independent of each other compared to regular bagging, which often results in better predictive performance (due to better variance-bias trade-offs) and it’s also faster, because each tree learns only from a subset of features.
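All three ensembles are available off the shelf in scikit-learn; a minimal side-by-side sketch on synthetic data (the hyperparameters are rough guesses, not tuned values):

from sklearn.datasets import make_classification
from sklearn.ensemble import (
    BaggingClassifier,
    RandomForestClassifier,
    GradientBoostingClassifier,
)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for clf in (BaggingClassifier(n_estimators=100),
            RandomForestClassifier(n_estimators=100, max_features="sqrt"),
            GradientBoostingClassifier(n_estimators=100)):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))  # in-sample accuracy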
9 — Support Vector Machines:
SVM is a classification technique that is listed under supervised learning models in machine learning. In layman's terms, it involves finding the hyperplane (a line in 2D, a plane in 3D, and a hyperplane in higher dimensions; more formally, a hyperplane is an (n − 1)-dimensional subspace of an n-dimensional space) that best separates two classes of points with the maximum margin. Essentially, it is a constrained optimization problem where the margin is maximized subject to the constraint that the hyperplane perfectly classifies the data (hard margin).
The data points that "support" this hyperplane on either side are called the "support vectors" (in the original article's illustration, a filled blue circle and two filled squares). For cases where the two classes of data are not linearly separable, the points are projected to an exploded (higher-dimensional) space where linear separation may be possible. A problem involving multiple classes can be broken down into multiple one-versus-one or one-versus-rest binary classification problems.
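A brief, hedged sketch with scikit-learn's SVC: the linear kernel corresponds to the maximum-margin hyperplane described above (with C softening the hard margin), while the RBF kernel performs the implicit projection to a higher-dimensional space:

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0).fit(X, y)

print(linear_svm.score(X, y), rbf_svm.score(X, y))
print(linear_svm.support_vectors_.shape)   # the support vectors themselves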
10 — Unsupervised Learning:
So far, we only have discussed supervised learning techniques, in which the groups are known and the experience provided to the algorithm is the relationship between actual entities and the group they belong to. Another set of techniques can be used when the groups (categories) of data are not known. They are called unsupervised as it is left on the learning algorithm to figure out patterns in the data provided. Clustering is an example of unsupervised learning in which different data sets are clustered into groups of closely related items. Below is the list of most widely used unsupervised learning algorithms:
Principal Component Analysis helps in producing a low-dimensional representation of the dataset by identifying a set of linear combinations of features which have maximum variance and are mutually uncorrelated. This linear dimensionality technique can be helpful in understanding latent interactions between the variables in an unsupervised setting.
k-Means clustering: partitions data into k distinct clusters based on distance to the centroid of a cluster.
Hierarchical clustering: builds a multilevel hierarchy of clusters by creating a cluster tree. (All three are sketched below.)
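A compact, hedged sketch of all three on synthetic blob data; with no labels involved, the algorithms only see X:

from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

X, _ = make_blobs(n_samples=300, centers=3, n_features=6, random_state=0)

X2 = PCA(n_components=2).fit_transform(X)                  # low-dimensional view
km_labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
hc_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print(X2.shape)                  # (300, 2)
print(km_labels[:10], hc_labels[:10])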
This was a basic run-down of some basic statistical techniques that can help a data science program manager or executive better understand what is running underneath the hood of their data science teams. Truthfully, some data science teams purely run algorithms through Python and R libraries, and most of them don't even have to think about the underlying math. However, being able to understand the basics of statistical analysis gives your teams a better approach. Having insight into the smallest parts allows for easier manipulation and abstraction. I hope this basic data science statistical guide gives you a decent understanding!
P.S: You can get all the lecture slides and RStudio sessions from my GitHub source code here. Thanks for the overwhelming response!
— —
If you would like to follow my work on Recommendation Systems, Deep Learning, MLOps, and Data Science Journalism, you can check out my Medium and GitHub, as well as other projects at https://jameskle.com/. You can also tweet at me on Twitter, email me directly, or find me on LinkedIn. Or join my mailing list to receive my latest thoughts right at your inbox! | https://medium.com/cracking-the-data-science-interview/the-10-statistical-techniques-data-scientists-need-to-master-1ef6dbd531f7 | ['James Le'] | 2020-06-07 13:55:25.945000+00:00 | ['Analytics', 'Statistics', 'Artificial Intelligence', 'Mathematics', 'Data Science'] |
‘Bohemian Rhapsody’: The Story Behind Queen’s Rule-Breaking Classic Song | Martin Chilton
Photo: Queen Productions Ltd
Queen’s epic rock hit ‘Bohemian Rhapsody’ began life sometime in the late 60s, when Freddie Mercury was a student at Ealing Art College, starting out as a few ideas for a song scribbled on scraps of paper.
Queen guitarist Brian May remembers the brilliant singer and songwriter giving them the first glimpse in the early 70s of the masterpiece he had at one time called ‘The Cowboy Song’, perhaps because of the line “Mama… just killed a man.”
“He’d worked out the harmonies in his head”
“I remember Freddie coming in with loads of bits of paper from his dad’s work, like Post-it notes, and pounding on the piano,” May said in 2008. “He played the piano like most people play the drums. And this song he had was full of gaps where he explained that something operatic would happen here and so on. He’d worked out the harmonies in his head.”
Mercury told bandmates that he believed he had enough material for about three songs but was thinking about blending all the lyrics into one long extravaganza. The final six-minute iconic mini rock opera became the band’s defining song, and eventually provided the title of the hit 2019 biopic starring Rami Malek as Mercury.
“Just the biggest thrill”
Queen first properly rehearsed ‘Bohemian Rhapsody’ at Ridge Farm Studio, in Surrey, in mid-1975, and then spent three weeks honing the song at Penrhos Court in Herefordshire. By the summer they were ready to record it; taping began on 24 August 1975 at the famous Rockfield Studios in Monmouth, Wales. It was a moment that May described as “just the biggest thrill”.
The innovative song began with the famous a cappella intro (“Is this the real life?/Is this just fantasy?”) before embracing everything from glam-metal rock to opera. A week was devoted to the operatic interlude, for which Mercury had methodically written out all the harmony parts. For the grand chorale, the group layered 160 tracks of vocal overdubs (using 24-track analogue recording), with Mercury singing the middle register, May the low register and drummer Roger Taylor the high register (John Deacon was on bass guitar but did not sing). Mercury performed with real verve, overdubbing his voice until it sounded like a choir, with the words “mamma mia”, “Galileo” and “Figaro” bouncing up and down the octaves.
“He put a lot of himself into that song”
“We ran the tape through so many times it kept wearing out,” May said. “Once we held the tape up to the light and we could see straight through it, the music had practically vanished. Every time Fred decided to add a few more ‘Galileo’s we lost something, too.” Mercury had supposedly written “Galileo” into the lyrics in honour of May, who had a passionate interest in astronomy and would later go on to earn a PhD in astrophysics.
‘Bohemian Rhapsody’ brims with imaginative language and is a testament to Mercury’s talents as a songwriter. Scaramouche was a buffoonish character in 16th-century commedia dell’arte comedy shows; “Bismillah”, which is taken from the Quran, means “in the name of Allah”; Beelzebub is an archaic name for Satan.
“Freddie was a very complex person; flippant and funny on the surface, but he concealed insecurities and problems in squaring up his life with his childhood,” said May. “He never explained the lyrics, but I think he put a lot of himself into that song.”
“You knew that you were listening to history”
After the final version was completed — following some refinements at Roundhouse, Sarm East Studios, Scorpio Sound and Wessex Sound Studios — there was a feeling that Queen had created something special. “Nobody really knew how it was going to sound as a whole six-minute song until it was put together,” producer Roy Thomas Baker told Performing Songwriter magazine. “I was standing at the back of the control room, and you just knew that you were listening for the first time to a big page in history. Something inside me told me that this was a red-letter day, and it really was.”
The song, which appears on the album A Night At The Opera, was finally released on 31 October 1975, and the impact was instantaneous. “I was green with envy when I heard ‘Bohemian Rhapsody’. It was a piece of sheer originality that took rock and pop away from the normal path,” said Björn Ulvaeus of ABBA.
Though the group’s record company were initially reluctant to issue ‘Bohemian Rhapsody’ as a single, Queen were united in insisting that it was the right choice, despite exceeding the three-minute running time expected of most single releases. The band were told the song had no hope of getting airplay, but they were helped by Capital Radio DJ Kenny Everett, a friend of Mercury’s, who played it 14 times in one weekend and started the buzz that eventually ended with the single going to №1.
‘Bohemian Rhapsody’’s groundbreaking video
Queen also hired director Bruce Gowers to shoot a groundbreaking video, which features the band recreating their iconic pose from the cover of their Queen II album. The promo, which cost £3,500 to make in just three hours at Elstree Studios, was a superb piece of rock marketing, celebrated for its eye-catching multi-angle shots capturing Mercury in his favourite Marlene Dietrich pose. The band had fun making the video, and Gowers recalled: “We started at seven-thirty, finished at ten-thirty and were in the pub 15 minutes later.”
On 20 November 1975, the new video was premiered on Top Of The Pops to huge media and public interest. Queen watched the programme in their Taunton hotel room. ‘Bohemian Rhapsody’ became the band’s first US Top 10 hit. In the UK, it went to №1 for nine consecutive weeks, a record at the time, even holding off the surprise Laurel And Hardy novelty hit ‘The Trail Of The Lonesome Pine’, which had to settle for the №2 spot. ‘Bohemian Rhapsody’ is still the only song to have topped the UK charts twice at Christmas. It was also the first Queen single to be released with a picture sleeve in the UK. The B-side, incidentally, was Taylor’s ‘I’m In Love With My Car’.
“People should make up their own minds as to what it says”
Mercury’s ambitious song, which earned him an Ivor Novello Award for songwriting, quickly became a highlight of Queen’s live show after being unveiled on the A Night At The Opera Tour of 1975 (the closing night of which is captured on their A Night At The Odeon DVD, the deluxe box set of which features the band’s very first live performance of the song, recorded during the soundcheck).
‘Bohemian Rhapsody’ opened their celebrated Live Aid set in July 1985 and it has remained remarkably popular. In 2004, the song was inducted into the Grammy Hall Of Fame, and Mercury’s vocal performance was named by the readers of Rolling Stone magazine as the best in rock history. ‘Bohemian Rhapsody’ is the third best-selling single of all-time in the UK and, in December 2018, ‘Bo Rhap’ — as it is affectionately known among Queen fans — was officially proclaimed the world’s most-streamed song from the 20th Century, passing 1.6 billion listens globally across all major streaming services. A mere seven months later, on 21 July 2019, the video surpassed one billion streams on YouTube.
“It is one of those songs which has such a fantasy feel about it,” Mercury said. “I think people should just listen to it, think about it, and then make up their own minds as to what it says to them.”
Listen to the best of Queen on Apple Music and Spotify.
Join us on Facebook and follow us on Twitter: @uDiscoverMusic | https://medium.com/udiscover-music/bohemian-rhapsody-the-story-behind-queen-s-rule-breaking-classic-song-26e532e3f6dc | ['Udiscover Music'] | 2019-07-25 08:08:21.171000+00:00 | ['Rock', 'Features', 'Pop Culture', 'Culture', 'Music'] |
Some minor but life-saving tips in Python | Hi, I hope everyone is still ok from this tedious 2020! I was gone for a few months, and I came back with some tips that I found life saving while preparing for my master’s degree paper. (can’t believe how time flies)
So, this post will be neither systematic nor fully explanatory; rather, it is a list of problems I faced while coding, together with their solutions, to help those of you who might run into similar issues. I was able to find the solutions after search upon search, and it is my turn to repay that help. Without the people answering on stackoverflow.com, and the anonymous bloggers who did such work without any 'economic' interest, I think I would have given up at the very first stage.
One more thing: in my case, regardless of how much I knew about the algorithms and their mechanisms, the very first steps of coding were, ironically, the most challenging.
CAUTION! Unlike my previous posts that were mostly very detailed with step by step explanations, this one would be less like that.
1. arma_generate_sample
First, what I wanted to do was to generate, for example, AR(1) processes with distinct coefficients.
Before moving on, let's do a simple practice run with an AR(1) process (which is highly recommended for catching minor problems).
from statsmodels.tsa.arima_process import arma_generate_sample

rho = 0.5
nsample = 100
ar_coefs = [1, -rho]
ma_coefs = [1, 0]

ar1 = arma_generate_sample(ar_coefs, ma_coefs, nsample=nsample, sigma=1)
So in this code I generated a sample from an AR(1) process with coefficient 0.5 and sample size 100, which would look like the following;
array([-0.43327517, 0.3061977 , 0.80159066, 1.49798894, 2.58501507, 1.67462174, 1.25875958, -1.42044711, -0.68007658, 0.37558947, …
…-0.26342701, -0.53688217, 1.93173622, 1.10047058, 0.6058739 ])
ar1.shape #(100,)
Tip: notice the minus sign in ar_coefs = [1, -rho]; the AR coefficient enters with its sign flipped.
Then, we would like to check if the rho=0.5 is well estimated by Python.
import statsmodels.api as sm

modela = sm.tsa.ARMA(ar1, (1, 0)).fit(trend='nc', disp=0)
print('estimated rho=%f, actual rho=%.3f' % (modela.params, rho))
This will result in:
estimated rho=0.559307, actual rho= 0.500
Fair enough! Now let's try generating, say, ten AR(1) samples with 100 time steps each.
Here my strategy was to define a function based on the previous practice and stack the 10 samples for later use.
import numpy as np

np.random.seed(12345)

def AR1(timestep):
    rho = np.random.uniform(-1, 1, 1) + 1e-6
    ar_coefs = [1, -rho]
    ma_coefs = [1, 0]
    ar1 = arma_generate_sample(ar_coefs, ma_coefs, nsample=timestep, sigma=1)
    ar1 = np.array(ar1)
    return ar1
Then, from my expectation, AR1(100) should have the same shape as in our previous exercise; however, things were a bit messy.
tmp=AR1(100)
tmp.shape #(100,)
The shape was the same, but under the hood they were completely different.
array([-0.9678381828411187, array([-1.39385533]), array([-0.88148675]), …, array([1.82240015]), array([3.56664748]), array([3.20947843]), array([3.55423975]), array([3.03438549]), array([1.81995384]), array([1.59298593])], dtype=object)
This is not how we want it.
Furthermore, I had to notice that the dtypes were also different.
ar1.dtype #dtype('float64')
tmp.dtype #dtype('O')
This had to be solved before being stacked!
I could not notice this problem until I actually ran AR1(100), that is, until I saw what the output looked like. The plot worked fine and the shape was the same, so it took me a while to even notice the problem. My tip is to at least inspect the output with your own eyes to check that it really works!
The solution was very simple: rho = float(rho)
def AR1(timestep):
    rho = np.random.uniform(-1, 1, 1) + 1e-6
    rho = float(rho)
    ar_coefs = [1, -rho]
    ma_coefs = [1, 0]
    ar1 = arma_generate_sample(ar_coefs, ma_coefs, nsample=timestep, sigma=1)
    ar1 = np.array(ar1)
    return ar1
At last, it looked like this;
ar1=AR1(100)
ar1.shape #(100,)
ar1.dtype #dtype('float64')
array([ 0.10050124, 0.64788036, -0.9043851 , 1.62197309, -1.11857796, 0.7803612 , -0.0176377 , 0.24278273, -0.74016969, 1.49048946, …,1.54750935, 2.40578187, -0.15412767])
2. Generate 10 AR(1) processes with T=100
Then, what I wanted to do was to stack 10 AR(1) processes which would look like following;
NumPy's stack (or other approaches) would also work, but I found the following way more intuitive: make an empty NumPy array of the intended shape, and then fill out the empty array (a one-line np.stack version is sketched after the shape check below).
rs = 10
timestep = 100

ar1 = np.empty((rs, timestep), dtype=float)

for i in range(rs):
    ar1[i] = AR1(timestep)
Let’s check the shape of the ar1.
ar1.shape #(10,100)
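For completeness, the np.stack route mentioned above is a one-liner that produces the same (10, 100) array, just without the explicit preallocation:

# Equivalent: build the list of AR(1) paths and stack them along a new axis.
ar1 = np.stack([AR1(timestep) for _ in range(rs)])
ar1.shape  # (10, 100)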
3. Save numpy array into npz file and then load
This is simple and easy to search, but anyway.
from numpy import load
from numpy import savez_compressed

# ar1 and ma1 are the (10, 100) arrays built above.
savez_compressed('file name here', ar1=ar1, ma1=ma1)
So, let’s say we made two arrays; an array with ten AR(1) processes with timesteps 100 (of shape (10,100)) and then same for MA(1) processes, with names ‘ar1’ and ‘ma1’ respectively.
Then the arrays will be saved in npz file for later use.
Now load the data
data = load('drive/My Drive/file name here.npz')
So, I mounted Google Drive in my Colab notebook, which is very convenient; just take care with the file path and name!
Then from the data you will be able to retrieve the ar1 and ma1 like following.
ar1 = data['ar1']
ma1 = data['ma1']
4. Data preprocess for LSTM
In this simple example, I prepared two (10,100) datasets, meaning that the data is a 2D tensor. However, an LSTM takes a 3D tensor as input, with shape (samples, time steps, features). Questions like "how do I reshape the data?" or "what is a feature?" might arise.
First, in our example, samples=10, time steps=100 and features=1. The feature dimension is the trickiest part, but it can be viewed as the number of 'features' that characterize a sequence at each time step. So, in my example, a single AR(1) process is defined by a single value per time step, hence feature=1.
However, take the example of human activity classification problems. Your smartphone/Apple Watch collects your health data; say heart rate, blood pressure and body movement. (I don't know how, nor exactly what kind of data, it collects in reality.) I mean, detailed explanations about such datasets exist, but the medical terminology complicates the problem.
Then, by analyzing the three signals at each time step, the application will decide whether you're sleeping, walking, doing sports, etc. So, in this case, feature=3.
ar1.shape  # (10, 100)

rs = 10
timestep = 100
feature = 1

ar1 = ar1.reshape(rs, timestep, feature)
ma1 = ma1.reshape(rs, timestep, feature)
Here I did the reshape for ar1 and ma1 respectively and then concatenated them to make a train and test dataset; the order of these steps can also be reversed. The motive for my procedure was later use: what I actually did was build a so-called 'prediction dataset' (in my own words) to check the accuracy for ar1 and ma1 separately.
Anyway,
X = np.concatenate((ar1, ma1), axis=0)
X = np.asarray(X).astype(np.float32)
5. Binarize and categorize
So, my problem was to classify AR process vs MA process, and labeling can be done in 2 ways.
A. Binary labeling.
ar_label = np.zeros((rs, 1))  # (rs, 1) is our desired shape
ma_label = np.ones((rs, 1))
ar_label and ma_label would look like this with shape (10,1) respectively;
array([[0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.]]) and array([[1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.]])
Then label for X can be made by concatenating the ar_label and ma_label.
label=np.concatenate([ar_label,ma_label],axis=0)
Here, label would look like this with shape (20,1);
array([[0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.], [1.]])
B. Categorical labeling
One-hot encoding is often preferred, and we can do that as follows.
import tensorflow as tf

y = tf.keras.utils.to_categorical(label, num_classes=2)
Then, y will look like with shape (20,2);
array([[1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.], [0., 1.]], dtype=float32)
C. Multi-class labeling
My final goal was multi-class classification, so the labels were made using the following lines of code.
Similar to np.zeros and np.ones, integers 2, 3, … can also fill a NumPy array via np.full.
ar1_label=np.zeros((rs,1))
ar2_label=np.ones((rs,1))
ma1_label=np.full((rs,1),2)
ma2_label=np.full((rs,1),3)
For example, ma1_label would look like this:
array([[2], [2], [2], [2], [2], [2], [2], [2], [2], [2]])
Then, we want to one-hot encode these labels. to_categorical is especially useful in this situation; it would be time consuming to type out every vector from [1,0,0,0] to [0,0,0,1] by hand. Imagine doing a classification with 10 categories!
label = np.concatenate([ar1_label, ar2_label, ma1_label, ma2_label], axis=0)
y = tf.keras.utils.to_categorical(label, num_classes=4)
Here y will have a shape of (40,4):
array([[1., 0., 0., 0.], …, [1., 0., 0., 0.], [0., 1., 0., 0.], …, [0., 1., 0., 0.], [0., 0., 1., 0.], … , [0., 0., 1., 0.], [0., 0., 0., 1.], …, [0., 0., 0., 1.]], dtype=float32)
D. Categorical back to numerical?
In some cases, you want to go back from the one-hot encoded labels to the numerical labels 0 to 3. np.argmax can be used to do such a job.
i) np.argmax
It returns the index of the maximum value.
From the numpy.org example, let x look like this, with shape (2,3):
array([[10, 11, 12], [13, 14, 15]])
np.argmax(x) #5
When axis is not specified, the array is flattened first, like [10,11,12,13,14,15]. Also, since indexing starts from 0, the result 5 means the 6th element holds the maximum value.
np.argmax(x, axis=0) #array([1, 1, 1])
When axis=0 is given, it explores column-wise and returns, for each column, the row in which the largest value sits. More concretely, for the first column the largest value is in the 2nd row (index 1), and so on.
np.argmax(x, axis=1) #array([2, 2])
When axis=1 is given, it looks through each row and tells in which column the largest value sits.
ii) Back to our example
So, what we want to do is retrieve the numerical labels from the one-hot encoded labels.
From our practice with np.argmax, we can see that we need to look row-wise and get the index of the column holding the maximum value (i.e., which column has the 1): axis=1!
y_non_cat = np.argmax(y, axis=1)
y_non_cat.shape  # (40,)
So, the numerical label can be returned like this; array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]) with shape (40,).
6. Confusion Matrix
The numerical label was necessary to make the confusion matrix.
y_classes = np.argmax(ar12_ma12_model.predict(X),axis=1)
Here, y_classes will hold the predicted labels in numerical form, i.e.,
array([0, 0, 0, …, 3, 3, 1])
By comparing it with the true labels, we can make a confusion matrix.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_non_cat, y_classes)  # sklearn expects (y_true, y_pred), so truth comes first
The confusion matrix can later be used to calculate precision and recall, to draw the ROC (receiver operating characteristic) curve, to compute the AUC (area under the ROC curve), or to plot the PRC (precision-recall) curve.
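As a quick reference, in the binary case (and per class in the multi-class case), with TP, FP and FN denoting the true positive, false positive and false negative counts read off the confusion matrix:
precision = TP / (TP + FP)
recall = TP / (TP + FN)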
That's all for this post; I hope this list of troubleshooting experiences helps with your own struggles.
Thank you for reading and have a nice day!
Cecilia Kim. | https://medium.com/python-in-plain-english/python-some-minor-but-life-saving-tips-b69443c8da3a | ['A Ydobon'] | 2020-09-17 12:07:16.077000+00:00 | ['Numpy', 'Python', 'Classification', 'Confusion Matrix', 'Lstm'] |
Social Share Images in Nuxt Content | Intro
When sharing blog content or articles on social media, it's important to stand out. In a sea of Twitter posts, users might simply scroll past an article you've worked hard on if the blog preview isn't eye-catching enough!
In this post, we'll teach you how to generate beautiful sharing cards for your Nuxt Content blog posts! This post uses concepts laid out in Jason Lengstorf's amazing article, where he details how to generate images for posts using Cloudinary's API and a custom template; however, we'll focus on getting this working with Nuxt Content!
I would recommend reading his post before continuing, as you will need to set up your own template within Cloudinary, as well as upload any custom fonts you want to use for your template.
Setup
This post won't go into too much detail about setting up a Nuxt Content blog from scratch, but it goes without saying: make sure you have the @nuxt/content package installed and added to your nuxt.config.js modules like so:
modules: [
'@nuxt/content',
],
In order to begin generating dynamic social media cards, we will also need to install Jason Lengstorf’s package @jlengstorf/get-share-image .
# install using npm
npm install --save @jlengstorf/get-share-image

# install using yarn
yarn add @jlengstorf/get-share-image
Once you’ve gotten everything installed and your template uploaded to Cloudinary, it’s time to begin generating your images!
Fetch Blog & Generate Image
From within a dynamic page component (my blog pages are in /blog/_slug.vue), we'll need to use the asyncData Nuxt hook, because it is called before the head method, where we'll set our Open Graph and Twitter metadata for the post.
We’re going to start by importing getShareImage from '@jlengstorf/get-share-image' and then calling this function from within asyncData after fetching the article for our specific page.
<script>
import getShareImage from '@jlengstorf/get-share-image';
export default {
async asyncData({ $content, params }) {
const article = await $content('blogs', params.slug).fetch()
const socialImage = getShareImage({
title: article.title,
tagline: article.subtitle,
cloudName: 'YOUR_CLOUDINARY_NAME',
imagePublicID: 'YOUR_TEMPLATE_NAME.png',
titleFont: 'unienueueitalic.otf',
titleExtraConfig: '_line_spacing_-10',
taglineFont: 'unienueueitalic.otf',
titleFontSize: '72',
taglineFontSize: '48',
titleColor: 'fff',
taglineColor: '6CE3D4',
textLeftOffset: '100',
titleBottomOffset: '350',
taglineTopOffset: '380'
});
return { article, socialImage }
}
}
</script>
The getShareImage function generates an image URL via Cloudinary using the specified text, transformations, colors and fonts; the URL it produced for this post encodes all of those settings as Cloudinary transformation parameters.
Since I've created my own template and included my own font, my settings may differ from yours when setting the textLeftOffset or any other offsets, for example. A full list of properties you can set is available on the GitHub page for the package.
Feel free to check out Jason Lengstorf's Figma template available here and customize it to your liking.
Setting meta tags
Great, we are generating our image via dynamic Nuxt Content article attributes!
Now how do we inject these variables into our blog page's head so that social media users will see our image and metadata?
To do this, we'll leverage Nuxt.js's built-in head method, which allows us to set Open Graph and Twitter meta tags. We'll also include some useful information, like the time the article was published and the last time it was modified, using the createdAt and updatedAt properties that Nuxt Content automatically injects for us.
<script>
import getShareImage from '@jlengstorf/get-share-image';
import getSiteMeta from "~/utils/getSiteMeta.js";
export default {
async asyncData({ $content, params }) {
const article = await $content('blogs', params.slug).fetch()
const socialImage = getShareImage({
title: article.title,
tagline: article.subtitle,
cloudName: 'YOUR_CLOUDINARY_NAME',
imagePublicID: 'YOUR_TEMPLATE_NAME.png',
titleFont: 'unienueueitalic.otf',
titleExtraConfig: '_line_spacing_-10',
taglineFont: 'unienueueitalic.otf',
titleFontSize: '72',
taglineFontSize: '48',
titleColor: 'fff',
taglineColor: '6CE3D4',
textLeftOffset: '100',
titleBottomOffset: '350',
taglineTopOffset: '380'
});
return { article, socialImage }
},
computed: {
meta() {
const metaData = {
type: "article",
title: this.article.title,
description: this.article.description,
url: `https://davidparks.dev/blog/${this.$route.params.slug}`,
mainImage: this.socialImage,
};
return getSiteMeta(metaData);
}
},
head() {
return {
title: this.article.title,
meta: [
...this.meta,
{
property: "article:published_time",
content: this.article.createdAt,
},
{
property: "article:modified_time",
content: this.article.updatedAt,
},
{
property: "article:tag",
content: this.article.tags ? this.article.tags.toString() : "",
},
{ name: "twitter:label1", content: "Written by" },
{ name: "twitter:data1", content: "David Parks" },
{ name: "twitter:label2", content: "Filed under" },
{
name: "twitter:data2",
content: this.article.tags ? this.article.tags.toString() : "",
},
],
link: [
{
hid: "canonical",
rel: "canonical",
href: `https://davidparks.dev/blog/${this.$route.params.slug}`,
},
],
};
}
}
</script>
You may have noticed above that I am importing getSiteMeta from "~/utils/getSiteMeta.js". This is a utility function I use to manage default meta tags: a computed property passes page-specific values to it, and any value that is explicitly provided overrides the defaults set up in the file. This ensures we inject the proper variables from our Nuxt Content Markdown file into our head. That file looks like this:
const type = "website";
const url = "https://davidparks.dev";
const title = "David Parks";
const description = "David Parks is a Front-end Developer from Milwaukee, Wisconsin. This blog will focus on Nuxt.js, Vue.js, CSS, Animation and more!";
const mainImage = "https://davidparksdev.s3.us-east-2.amazonaws.com/template.png";
const twitterSite = "@dparksdev";
const twitterCard = "summary_large_image"
export default (meta) => {
return [
{
hid: "description",
name: "description",
content: (meta && meta.description) || description,
},
{
hid: "og:type",
property: "og:type",
content: (meta && meta.type) || type,
},
{
hid: "og:url",
property: "og:url",
content: (meta && meta.url) || url,
},
{
hid: "og:title",
property: "og:title",
content: (meta && meta.title) || title,
},
{
hid: "og:description",
property: "og:description",
content: (meta && meta.description) || description,
},
{
hid: "og:image",
property: "og:image",
content: (meta && meta.mainImage) || mainImage,
},
{
hid: "twitter:url",
name: "twitter:url",
content: (meta && meta.url) || url,
},
{
hid: "twitter:title",
name: "twitter:title",
content: (meta && meta.title) || title,
},
{
hid: "twitter:description",
name: "twitter:description",
content: (meta && meta.description) || description,
},
{
hid: "twitter:image",
name: "twitter:image",
content: (meta && meta.mainImage) || mainImage,
},
{
hid: "twitter:site",
name: "twitter:site",
content: (meta && meta.twitterSite) || twitterSite,
},
{
hid: "twitter:card",
name: "twitter:card",
content: (meta && meta.twitterCard) || twitterCard,
}
];
};
Unless there are overrides explicitly provided, it will use the fallback values I’ve defined at the top of this file. This is great if you want to avoid those cases where you forget to set meta tags!
The computed property meta is then merged into the head method via the spread operator ...this.meta,. This ensures that any default values are overridden and that your article title, description and images are properly injected inside your document's head.
Testing with Facebook & Twitter Tools
If all goes well, you should now see these meta tags in your DOM!
The next time your site deploys, you should now see an awesome looking share image when sharing your blog to Twitter, Facebook, Linkedin or anywhere else! Using tools like Twitter’s Card Debugger and Facebook’s Open Graph Debugger will be essential to tweaking them to your liking and debugging any potentially missing tags.
Wrapping Up
What’s great about this approach is that if you decide at some point in the future to update or change your template for your blogs, it will update the preview image for all of them. It also saves you the time and headaches of creating unique preview images for each individual blog in Figma or a design tool of your choosing. Just set it, and forget it!
If you’ve made it this far, good job. I look forward to seeing some awesome Nuxt Content blogs with beautiful sharing cards on my feeds in the near future. Thanks for reading! | https://medium.com/javascript-in-plain-english/social-share-images-in-nuxt-content-be5f2ae36755 | ['David Parks'] | 2020-10-29 21:27:38.054000+00:00 | ['JavaScript', 'Nuxt', 'Web Development', 'Cloudinary', 'Vuejs'] |
Introduction to Video Object Detection | Limitations of Still-image Object Detector
Before we dive deep into these papers, we need to first understand what limitations still-image object detectors have and how we can leverage our knowledge to extend current frameworks to videos.
First, simply applying image detectors to videos would introduce unaffordable computational cost, since a model has to run on every frame of the video.
Accuracy also suffers from motion blur, video defocus, and rare poses that appear in videos but are seldom observed in still images.
So, we want to design a new framework, or extend current frameworks, for the video object detection task. | https://gl2675.medium.com/introduction-to-video-object-detection-7181cdf95aed | ['Guandong Liu'] | 2020-11-29 02:05:18.197000+00:00 | ['Computer Vision', 'Deep Learning', 'Video Object Detection'] |
Great Christmas Songs from the past, What's your favorite song? | New research has revealed which Christmas songs have been listened to, and played, the most over the past years.
With Christmas time upon us once again, seasonal songs can be heard on every corner, and your shopping trips will be soundtracked by the likes of Wizzard, Band Aid, Elton John, Slade, George Michael, all the usual suspects.
But which of these festive favourites will you hear the most this Christmas 2020?
The Performing Rights Society (PRS), which tracks plays of songs on TV and radio, issued research revealing Britain's top 10 favourite Christmas songs:
Britain’s Top 10 Favourite Christmas Songs…
1- The Pogues featuring Kirsty MacColl — Fairytale Of New York
This well-loved but occasionally controversial festive song was first released in 1987.
2- Wham! — Last Christmas
The late George Michael and his musical partner Andrew Ridgeley released this Christmas classic in 1984.
3- Slade — Merry Xmas Everybody
Noddy Holder's evergreen Christmas tune was originally issued in 1973 and has endured ever since.
4- Bing Crosby — White Christmas
Made famous by the film Holiday Inn, this eternal song first topped the US charts in 1942!
5- Wizzard — I Wish It Could Be Christmas Every Day
Roy Wood's sleigh-bell-strewn song went head to head in the charts with Slade's Merry Xmas Everybody in 1973. | https://medium.com/the-entertainment-engine/great-christmas-songs-from-the-past-whats-your-favorite-song-a1361bb7a41 | ['Peter Moore'] | 2020-12-17 12:01:48.639000+00:00 | ['Songs', 'Artist', 'Family', 'Christmas', 'Music'] |
Official Statement from Genaro Foundation on Future Development | The Genaro Network project was established in early 2016. The non-profit Genaro Ltd. foundation (the “Foundation”) was officially registered in Singapore in July 2017. Since its inception, the project has released a number of blockchain applications, such as the Genaro Eden storage network, Genaro Sharer storage sharing solution, Genaro Network Alpha One — a public chain integrated with the storage layer, Genaro Email privacy protection mail system (verifiable encrypted data interaction), and obtained patents in a wide range of core blockchain technology areas: consensus mechanism, cross-chain protocol, blockchain virtual machine, distributed storage and other blockchain-related spheres, and open-sourced these technologies for free use by practitioners.
The Foundation's strategy has adhered to the principle of internationalization from the very beginning. Our product users and community members are located in many places around the world, including Russia, Ukraine, South Korea, Japan and elsewhere. We express our deepest gratitude for your attention and continuous support.
In March 2019, the Genaro project implementation team successfully released its primary research and development products. Our achievements far exceeded the initial project conception, went beyond the scope of the whitepaper and the technical yellow paper, and completed every scheduled milestone of our initial roadmap according to plan. At the same time, we also realize that blockchain infrastructure still needs continuous improvement, that many shortcomings remain in the underlying technology, and that more like-minded people must join their efforts and work together.
In order to make our accumulated open source technology more widely adopted and to achieve more decentralized development of the community, the majority of original project initiators, advisors and supporting team members, regardless of their former titles, will continue to contribute to the Foundation from various angles, including as evangelists, consultants, and in other roles as needed. The core technology R&D and product operation teams are already geographically diverse, and distributed collaborative teams in different parts of the world will continue to promote the development of the Genaro project and of the blockchain industry as a whole. The Foundation aims to create a good collaborative environment and to mobilize global community members to participate actively and advance together.
After much hard work, the Foundation’s distributed office approach has proved its effectiveness and enabled the project to make gradual and steady progress. With the stable operation of the mainnet and the continuous improvement of various product features, the rapid development of blockchain in the Asia-Pacific region as well as growing interest in blockchain technology in different provinces and cities of China, we are staying committed to a challenging path, formulating new development strategy plans, and constantly deploying blockchain applications to actively promote the development of projects within the ecosystem.
Firstly, we will continue to strengthen the development of the mainnet, and continuously improve its functions and perform maintenance tests. The main points of upcoming work include:
1. GSIOP (Genaro Streaming IO Protocol) cross-chain protocol
GSIOP is the only storage-based cross-chain protocol that allows Genaro storage to be easily used by DApps that are built on other public chains. The first version of GSIOP was launched and open-sourced on February 20, 2019. In the future, the team will release an updated version of GSIOP, add new interfaces and handshake methods, and start cooperation with other public chains to create a so-called ultimate point-of-storage for major public chains.
2. Encrypted file sharing system based on blockchain technology
Genaro is designing and developing an encrypted file-sharing system with encrypted file lookup, breakpoint retransmission, offline download and other features found in traditional file systems, all without sacrificing user data privacy.
3. X-pool
As a collection of Genaro's PoS nodes, X-pool helps storage space sharers lower their entry threshold and operate more flexibly, creating a totally new storage space sharing network.
4. The new release of X-Mail mailbox domain name system
A distributed, open, and scalable naming system based on the Genaro Network public chain. A private key can manage multiple address aliases, and can also be used for transfer transactions, file sharing, and sending emails through aliases. This made Genaro the first independent developer of this type of domain name system, providing the application potential for decentralized storage on a public blockchain. The first edition was released via official channels on January 15, 2019, and is available in G-Box v1.0.6. The current version will be updated in the near future and we encourage everyone to promote it among the community members and blockchain enthusiasts to gain more active users.
In addition, the Genaro ecosystem project involves many areas such as secure management of encrypted assets, privacy protection, data sharing, and so on. We strongly believe that public-chain-led exploration can continuously strengthen the development of underlying technologies and significantly benefit the whole blockchain infrastructure.
At the same time, we also hope that the accumulated blockchain technology can empower the real economy and contribute to the creation of the digital economy. Based on the research and development results we have achieved and the future technological development expectations, we will focus on the following three development directions:
1. A more versatile enterprise chain solution
The current Genaro Eden storage network architecture is based on the Genaro mainnet and, after a series of tests on a global open network, it has been able to operate stably. In the future, we will provide a storage network that can be used with enterprise or private blockchains, and build enterprise versions in collaboration with our enterprise partners. We also plan to use existing Genaro solutions to complement the currently available versions of Quorum and Hyperledger.
2. Trusted Encrypted Data Calculation Tool
We plan to launch the development of a smart cryptographic computing tool, applying our earlier proof-of-concept for homomorphic processing of encrypted data. This approach allows applications to work with data protected by homomorphic encryption. In the future, the tool will be used in conjunction with storage and provided to users of enterprise chains integrated with a storage layer. Genaro's smart data products will help companies tap the value of big data to serve decision-makers.
3. Establishment of BaaS solution
Encapsulating Genaro’s storage solutions into a version that can be used by enterprise chains such as Quorum and Hyperledger, the project is designing an integrated BaaS (Blockchain as a Service) solution to provide complete service content for companies interested in deploying blockchain technology.
In addition, we will explore the establishment of a two-tiered regulatory blockchain solution.
We will combine study of the Chinese central bank's two-tier digital currency operating system with our existing dual-layer blockchain engineering design. We will endeavor to research and implement a double-layered blockchain solution in which the regulatory layer can set up and control payment channels while the distributed ledger is displayed publicly and transparently. To a certain extent, this can reduce financial opacity and uncontrolled payments. We hope that blockchain products can be widely used without risk and under sufficient protection.
The Foundation will continue to experiment with and improve the abovementioned R&D activities, with open-sourcing to follow. Any company can deploy the Genaro open source code for free, and we are happy to see the significant research and development work of recent years being applied in a growing number of organizations and creating value. However, we strictly oppose the involvement of any entity in malicious speculation with encrypted assets or in illegal finance-related activities. Genaro's core principle since its inception has been to focus on the infinite possibilities of blockchain and the development of global decentralized technology, and to respect and consistently comply with laws and regulations in the various regions where we operate.
The future is here with all its challenges, but together we travel on the road to the digital economy.
Genaro Foundation
November 2, 2019 | https://medium.com/genaro-network/official-statement-from-genaro-foundation-on-future-development-5719664464cf | ['Genaro Network', 'Gnx'] | 2019-11-03 13:33:05.425000+00:00 | ['Storage', 'Innovation', 'Blockchain', 'Technology', 'Future'] |
A one year PWA retrospective | Zack Argyle | Engineering manager, Core Experience
The idea of building a “Progressive Web App” (PWA) is not new, but its definition has changed with the emergence of key technologies like service workers. Now it’s finally possible to build great experiences in a mobile browser. Being an early adopter can be scary, so we’d like to share a brief overview of our experience building one of the world’s largest progressive web apps.
Three years ago we looked at the state of our website on mobile browsers and groaned at the obvious deficiencies. Metrics pointed to an 80 percent higher engagement rate in our native apps, so the decision was made to go all-in on our apps for iOS and Android. Despite increasing our app downloads substantially, there were some obvious downsides.
Emily. Owen. We would like to take this moment to offer an apology. You were right. It was terrible.
One year ago (July 2017), we brought a team together to rewrite our mobile website from scratch as a PWA. This was the culmination of several years of conversation, months of metrics investigation and one large hypothesis: mobile web can be as good as a native app. The results are quite…pinteresting.
Why did we do it?
There were two main reasons why we reinvested so heavily in our mobile web. The first was our users. Our mobile web experience for people in low-bandwidth environments and limited data plans was not good. With more than half of all Pinners based outside the U.S., building a first-class mobile website was an opportunity to make Pinterest more accessible globally, and ultimately improve the experience for everyone.
The second reason was data-driven. Because the experience wasn’t great, a very small percentage of the unauthenticated users that landed on our mobile web site either installed the app, signed up or logged in. It was not a good funnel. Even if we weigh the native app users more heavily for higher engagement than mobile web users, it’s not the type of conversion rate anyone strives for. We thought we could do better.
How did we do it?
In July 2017, we formed a team that combined engineers from our web platform and growth teams. Internally, we called it “Project Duplo,” inspired by simplicity and accessibility. At the time, the mobile website accounted for less than 10 percent of our total signups (for context, the desktop website drove 5x that).
Timeline
July 2017: Begin “Project Duplo”
Aug. 2017: Launch new mobile site for a percentage of logged-in users
Sept. 2017: Ship new mobile site for logged-in users
Jan. 2018: Launch new mobile site for a percentage of logged-out users
Feb. 2018: Ship new mobile site for logged-out users
Part of the reason we were able to create and ship a full-featured rewrite in three months was thanks to our open-source UI library, Gestalt. At Pinterest, we use React 16 for all web development. Gestalt’s suite of components are built to encompass our design language, which makes it very easy to create consistently beautiful pages without worrying about CSS. We created a suite of mobile web-specific layout components for creating consistently spaced pages throughout the site. FullWidth breaks out of the default boundaries of PageContainer, which breaks out of the boundaries of a FixedHeader. This kind of compositional layout led to fast, bug-free UI development.
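To make that composition concrete, here is a rough, purely illustrative sketch of how such a page might nest. The component names come from the paragraph above, but the import path, props and exact structure are assumptions, not Pinterest's actual implementation:

// Hypothetical internal module exporting the mobile-web layout components described above.
import { FixedHeader, PageContainer, FullWidth } from './layout';
import { PinGrid, Text } from './components';

// PageContainer applies the default gutters inside the FixedHeader region,
// and FullWidth opts a child (like an edge-to-edge Pin grid) out of those gutters.
function ExplorePage() {
  return (
    <FixedHeader title="Explore">
      <PageContainer>
        <Text>Trending on Pinterest</Text>
        <FullWidth>
          <PinGrid />
        </FullWidth>
      </PageContainer>
    </FixedHeader>
  );
}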
In addition to Gestalt, we also used react, react-router v4, redux, redux-thunk, react-redux, normalizr, reselect, flow and prettier.
How we made it fast!
Performance was baked into the goals and process because of how tightly correlated it is to engagement, and how sensitive it is on a mobile connection. In fact, our home page Javascript payload went from ~490kb to ~190kb. This was achieved through code-splitting at the route level by default, encouraging use of a <Loader> component for component-level code-splitting. An easy-to-use route preloading system was built into our client-side router, which creates a fast experience for initial page load as well as client-side route changes. For more details on how we made it fast, check out a case study we did with Addy Osmani.
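As a rough illustration of that pattern (not Pinterest's actual code: the Loader name comes from the paragraph above, and React.lazy/Suspense are used here as a modern stand-in for whatever mechanism was used at the time):

import React, { Suspense, useMemo } from 'react';

// Route-level split point: the profile bundle is only fetched when this route renders.
const ProfileRoute = React.lazy(() => import('./routes/Profile'));

// A small Loader wrapper for component-level code-splitting.
function Loader({ load, fallback = null, ...props }) {
  // Memoize so the lazy component isn't recreated on every render.
  const Lazy = useMemo(() => React.lazy(load), [load]);
  return (
    <Suspense fallback={fallback}>
      <Lazy {...props} />
    </Suspense>
  );
}

// Usage: defer a heavy component until it is actually rendered, e.g.
// <Loader load={() => import('./components/PinGrid')} fallback={<Spinner />} />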
After one year, there are ~600 Javascript files in our mobile web codebase, and all it takes is one ill-chosen import to bloat your bundle. It’s really hard to maintain performance! We share code extensively across subsites for *.pinterest.com, and so we have certain measures set up to ensure that mobile web’s dependencies stay clean. First is a set of graphs reporting build sizes with alerts for when bundles exceed permitted growth rates. Second is a custom eslint rule that disallows importing from files and directories we know are dependency-heavy and will bloat the bundle. For example, mobile web cannot import from the desktop web codebase, but we have a directory of “safe” packages that can be shared across both. There’s still work to do, but we’re proud of where we are.
While the case study deals mostly with page load, we also cared deeply about a fast, native-like experience while browsing. The biggest driver of client-side performance was our normalized redux store which allows for near-instant route changes. By having a single source of truth for models, like a Pin or user, it makes it trivial to show the information you have while waiting for more to load. For example, if you browse a feed of Pins, we have information about each Pin. When you tap on one, it takes you to a detailed view. Because the Pin data is normalized, we can easily show the limited details we have from the feed view until the full details finish being fetched from the server. When you click on a user avatar, we show that user’s profile with the information we have while we fetch the full user details. If you’re interested in the structure of our state or the flow of our actions, the redux devtools extension is enabled in production for our mobile web site.
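A minimal sketch of that normalization idea (illustrative only; the entity shapes and names are assumptions, not Pinterest's schema):

import { schema, normalize } from 'normalizr';

// Every pin and user lives in exactly one place in the store.
const user = new schema.Entity('users');
const pin = new schema.Entity('pins', { pinner: user });

const feedResponse = [
  { id: '1', title: 'Tiny cabins', pinner: { id: 'u1', name: 'Ana' } },
];

const { entities } = normalize(feedResponse, [pin]);
// entities.pins['1'] and entities.users['u1'] are single sources of truth;
// a later, richer pin-detail response merges into the same records, so the
// detail view can instantly render whatever fields the feed already loaded.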
At the heart of the new site was our attempt at building a truly progressive web app (PWA). We support an app shell, add to homescreen, push notifications and asset caching. The service worker caches a server-rendered, user-specific app shell that’s used for subsequent page loads and creates near instant page refreshes. We’re excited that Apple is building support for service workers in Safari so that all users can have the best “native-like” experience.
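For flavor, a bare-bones service worker that caches a server-rendered app shell could look like the sketch below (an assumption-laden illustration, not Pinterest's actual worker; the /app-shell URL is made up):

const CACHE = 'shell-v1';

// Cache the user-specific app shell at install time.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.add('/app-shell'))
  );
});

// Serve navigations from the cached shell for near-instant page loads.
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/app-shell').then((cached) => cached || fetch(event.request))
    );
  }
});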
And what kind of “native” experience would it be without a “night mode”? Head over to your user settings and try it out!
The verdict
Now for the part you’ve all been waiting for: the numbers. Weekly active users on mobile web have increased 103 percent year-over-year overall, with a 156 percent increase in Brazil and 312 percent increase in India. On the engagement side, session length increased by 296 percent, the number of Pins seen increased by 401 percent and people were 295 percent more likely to save a Pin to a board. Those are amazing in and of themselves, but the growth front is where things really shined. Logins increased by 370 percent and new signups increased by 843 percent year-over-year. Since we shipped the new experience, mobile web has become the top platform for new signups. And for fun, in less than 6 months since fully shipping, we already have 800 thousand weekly users using our PWA like a native app (from their homescreen).
Looking back over one full year since we started rebuilding our mobile web, we’re so proud of the experience we’ve created for our users. Not only is it significantly faster, it’s also our first platform to support right-to-left languages and “night mode.” Investing in a full-featured PWA has exceeded our expectations. And we’re just getting started.
* Huge shout out to the engineers and product managers who worked on the rewrite: Becky Stoneman, Ben Finkle, Yen-Wei Liu, Langtian Lang, Victoria Kwong, Luna Ruan, Iris Wang and Imad Elyafi. | https://medium.com/pinterest-engineering/a-one-year-pwa-retrospective-f4a2f4129e05 | ['Pinterest Engineering'] | 2018-07-20 17:00:30.162000+00:00 | ['JavaScript', 'React', 'Progressive Web App', 'Mobile', 'Web'] |
Don’t believe the hype | Where we try to understand what’s behind past, current and future tech trends.
Photo by Michael Afonso on Unsplash
The tech industry may not sound quite as stylish as the fashion industry, but it is certainly also prone to fashion effects. For example, tech often makes big promises and thus raises hopes, leading to trends and hype cycles. So how can we know if a new technology is really a "hot topic" and not just a passing fad?
Let's look at two examples from the past digital decades to understand this better. One is the concept of peer-to-peer (P2P) networking, which started about fifteen years ago. The idea was that we would all connect directly with each other for Internet services, instead of going through centralized entities. A number of research initiatives and products were developed based on this concept, such as Skype, but today very little remains (in fact, even Skype switched back to a traditional centralized model).
Another concept from the past decade is cloud computing, which we explained in a previous column. The idea was to provide IT as a service, allowing companies and people to rent computation in a dynamic, scalable way rather than having to buy and install their own servers. Here, the impact has been dramatic and the entire IT industry has been transformed.
However, it is not just a case of working out versus fizzling out. Sometimes major transformations happen only on the second or third wave. Think for example of smartphones. Before the iPhone arrived, there was a lot of skepticism among specialists about mobile computing (after all, its arrival had been heralded multiple times, to no avail). But then finally the right combination of technology, design and product innovation appeared — and mobiles took off.
So what is the lesson? In the moment itself it is very hard to tell the real impact that a new technology might have on society. However, while we can’t predict the future, there are signs to look for. For this we need to take the rational view and ask factual questions like: are the applications addressing actual issues? Is there a true groundswell of adoption outside of the technologies’ originators? Are there symptoms of a bubble associated with the domain?
Take for example two of the current hot topics: blockchain and deep learning (artificial intelligence). Both are clearly extremely interesting from a technical and scientific viewpoint, and both have a lot of ongoing activity and media exposure. But where will they be in 10 years? While not even experts know, trying to address the above questions can give a first hint.
But the best indicator remains time. After all, time is very good at separating the wheat from the chaff (or as statisticians might say, everything "regresses to the mean"). And in more prosaic terms, I would just follow the credo "don't believe the hype" when it comes to judging current tech trends. This famous sentence is often attributed to Noam Chomsky, and it was brought to the masses via a song by the American hip hop band Public Enemy. Or was that just another hype? | https://medium.com/martinvetterli/dont-believe-the-hype-d0da4ca7e89e | ['Martin Vetterli'] | 2018-12-03 17:24:52.044000+00:00 | ['Market', 'Artificial Intelligence', 'Business', 'Computer Science', 'Technology'] |
How Jimi Hendrix’s Obsession with Bob Dylan Led Him to Woodstock | Hendrix was staying in the Wiley Lane apartment when he drove down Rock City Road one afternoon and heard an old New York acquaintance jamming on the village green. Juma Sultan had been commuting between Woodstock and the city since 1966, when he became involved with the Saugerties-based arts collective Group 212. He and percussionist Ali Abuwi had formed a loose Afrocentric entity known as the Aboriginal Music Society, playing around Woodstock and bringing together jazz and R&B musicians united by a commitment to black consciousness.
“Their scene was 24/7, and they had connections in New York with great players who came up,” says drummer Daoud Shaw. “But I never saw a real master plan.” When Sultan told Hendrix what he was up to, the guitarist invited him back to Wiley Lane for a jam. “He was telling me about how he wanted to start a new band,” Sultan remembered. “He was thinking about getting a house in the area, and that’s when his quest for a house started, looking for a place that could become a band house up in Woodstock.”
Though Hendrix kept a low profile in town — since it was rather harder for him to blend in than it was for, say, Rick Danko — he was occasionally seen bombing along Tinker Street in a red Corvette. “Nobody in Woodstock had a red Corvette,” remembered Leslie Aday, who worked for Albert Grossman and befriended Hendrix that summer. “They were all into organic vegetables and making their own clothes.” On at least one occasion, Hendrix went to the Elephant to see a gig.
“The vibe was that he wanted to hang and connect the dots and not be Jimi Hendrix,” says musician Jon Gershen, who observed him at the club. “He was really hungry to understand how Dylan had undergone his transformation. He was feeling the Woodstock thing and wanting to make it work for him, even though it was too much of a stretch to go from where he had been to being, you know, a farmer.”
Two months after Hendrix returned from his last European tour with the Experience — and with a heroin bust from Toronto hanging ominously over him — Mike Jeffery asked Jerry Morrison to find the guitarist a house in the Woodstock area. With Juma Sultan in tow, Morrison took Hendrix to see at least four properties around Woodstock, including a large place that Johnny Winter had rented across the Hudson in Rhinecliff. Eventually they settled on an eight-bedroom stone manor house at the end of Traver Hollow Road in Boiceville, four miles southwest of Woodstock. For a city boy it was quite the retreat, complete with horses in stables and a gatehouse where Sultan and his Chinese-American girlfriend would soon install themselves. A half hour’s drive away was the beautiful Peekamoose Road Waterfall, where Hendrix and his guests took acid trips.
Hendrix with Larry Lee at “the Shokan House” in Traver Hollow, early August 1969
When writer Sheila Weller visited the Traver Hollow house, she wrote that all the talk with Hendrix was about “puppies, daybreak, other innocentia,” and that she’d climbed down some rocks to an “icy brook” with the guitarist. He told her he wanted to “write songs about tranquility, about beautiful things.” He put John Wesley Harding on the turntable and played along to “The Ballad of Frankie Lee and Judas Priest,” “riding the rest of the song home with a near-religious intensity.” At the same time, he was leaning toward the kind of avant-garde Afro-jazz-rock espoused by Sultan, who dressed in robes and pushed his friend to explore the fusion of jazz with tribal percussion.
If Hendrix was undergoing an identity crisis — did he want to be Bob Dylan or did he want to be Miles Davis? — he was keen to jettison the Experience, complaining to Sheila Weller that he didn’t want to be a “clown” anymore. The sprawlingly eclectic double album Electric Ladyland of 1968 had made his ambitions clear, though the sessions had so frustrated Chas Chandler that he quit and left Hendrix to the mercy of Mike Jeffery.
Jeffery himself was perturbed by any threat to the cash cow that was the Experience, which played its last show on June 1. Used to regular income from the group’s festival appearances, Jeffery watched with unease as a motley crew of musicians rolled up at the Traver Hollow house.
“Because he had the resources,” says singer Martha Velez, “he just brought everybody up to woodshed in that house and ride horses and try that Woodstock meditative approach to writing and creating. He was a very intelligent person who was getting trapped in his own creation. It was important to break out of that, but on his own terms. And to do that he had to have a space where there wasn’t a lot of observation.” | https://medium.com/cuepoint/how-jimi-hendrix-s-obsession-with-bob-dylan-led-him-to-woodstock-a2ce99dca0d6 | ['Barney Hoskyns'] | 2016-10-13 20:04:26.052000+00:00 | ['Rock', 'The Bookshelf', 'Music'] |
How to Develop a REST API on an 8base Workspace | By default, the 8base platform auto-generates an extremely powerful GraphQL API that gives you immediate API access to your data. Sometimes, though, developers are using a 3rd party service or another tool that doesn't easily support the authoring and execution of GraphQL queries. Instead, they require that a REST API (or discrete endpoints) be available.
Developing a REST API in 8base can easily be accomplished using the Webhook custom function type. Using Webhooks, a developer can quickly code and deploy serverless functions that become available using traditional HTTP verbs (GET, POST, PUT, DELETE, etc.) and a unique path.
Getting Started Building a REST API on 8base
To get started building a REST API on top of an 8base workspace, you're going to need the following resources created/installed:
1. An 8base Workspace (free tier or paid plan)
2. The 8base CLI installed (instructions available here)
3. A text editor or IDE (VS Code, Atom, Sublime, or any other way to write code)
Once all those things are ready, go ahead and open the command line and use the following commands to generate a new 8base server-side project.
# If not authenticated, login via the CLI
$ 8base login

# Generate a new 8base project and select your desired workspace
$ 8base init rest-api-tutorial

# Move into the new directory
$ cd rest-api-tutorial
That’s it!
Generating the Serverless Functions for your REST API
Let’s go ahead and generate all of our serverless functions. We can do this pretty quickly using the 8base CLI’s generator commands. Using the generators, we’ll be able to create each one of our functions for each endpoint of our REST API.
# Our index records endpoint
8base generate webhook getUsers --method=GET --path='/users' --syntax=js

# Our create record endpoint
8base generate webhook newUser --method=POST --path='/users' --syntax=js

# Our get record endpoint
8base generate webhook getUser --method=GET --path='/users/{id}' --syntax=js

# Our edit record endpoint
8base generate webhook editUser --method=PUT --path='/users/{id}' --syntax=js

# Our delete record endpoint
8base generate webhook deleteUser --method=DELETE --path='/users/{id}' --syntax=js
As you've probably noticed at this point, we're building a REST API that gives access to our 8base workspace's Users table. This is because the Users table is created with the workspace by default, so you don't need to create any new tables for this tutorial. That said, the same pattern would work for any other database table you choose to use (or for several at once).
Additionally, since we are only building this API for the Users table, it's okay that all these functions are grouped at the top level of the src/webhooks directory. If you are building a REST API that deals with many more tables or more custom endpoints, this directory structure might quickly feel busy and disorganized.
There's nothing stopping you from restructuring your directory to better suit your organizational needs! All you need to do is make sure that the function declaration in the 8base.yml file has a valid path to the function's handler file/script. For example, take a look at the following directory structure and 8base.yml file:
Directory Structure
Example directory structure for 8base project REST API functions
8base.yml file
functions:
listUsers:
type: webhook
handler:
code: src/webhooks/users/list/handler.js
path: /users
method: GET
getUser:
type: webhook
handler:
code: src/webhooks/users/get/handler.js
path: '/users/{id}'
method: GET
createUser:
type: webhook
handler:
code: src/webhooks/users/create/handler.js
path: /users
method: POST
editUser:
type: webhook
handler:
code: src/webhooks/users/edit/handler.js
path: '/users/{id}'
method: PUT
deleteUser:
type: webhook
handler:
code: src/webhooks/users/delete/handler.js
path: '/users/{id}'
method: DELETE
For the sake of simplicity, let's stick with the directory structure that was generated for us by the CLI! It's only important to know that such reconfiguration is possible.
Writing our REST API's Serverless Functions
Now that we have our serverless functions generated, let's go ahead and start adding some code to them. What's important to know about a webhook function is that two important objects get passed to the function via the event argument: data and pathParameters.
The data object is where any data sent via a POST or PUT request can be accessed. Meanwhile, any query params or URL params sent with the request become accessible in the pathParameters object. Therefore, if a GET request is made to the endpoint /users/{id}?local=en, the values for id and local are both available via event.pathParameters[KEY].
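For instance, the event for that request would look roughly like the sketch below (illustrative only; the real event envelope may carry additional fields):

// Hypothetical event passed to the webhook for GET /users/abc123?local=en
const event = {
  data: {}, // empty for a GET request
  pathParameters: {
    id: 'abc123',
    local: 'en',
  },
};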
GET User endpoint
Knowing this, let's set up the GET user (/users/{id}) endpoint! To help with the GraphQL queries inside our functions, add the graphql-tag NPM package using npm install --save graphql-tag. Then, go ahead and copy the code below into your getUser function's handler file.
/* Bring in any required imports for our function */
import gql from "graphql-tag";
import { responseBuilder } from "../utils";
/* Declare the Query that gets used for the data fetching */
const QUERY = gql`
query($id: ID!) {
user(id: $id) {
id
firstName
lastName
email
createdAt
updatedAt
avatar {
downloadUrl
}
roles {
items {
name
}
}
}
}
`;

module.exports = async (event, ctx) => {
/* Get the user ID from the path parameters */
let { id } = event.pathParameters;
let { user } = await ctx.api.gqlRequest(QUERY, { id });
if (!user) {
return responseBuilder(404, { message: `No record found.`, errors: [] });
}
return responseBuilder(200, { result: user });
};
You'll likely spot an unrecognized import: responseBuilder. Webhooks require that the following keys get declared in returned objects: statusCode, body, and (optionally) headers. Instead of writing out every single response object explicitly, we can start generating them with a handy responseBuilder function.
So let’s go ahead and create a new directory and file using the following commands and then place our responseBuilder function in there.
$ mkdir src/webhooks/utils
$ touch src/webhooks/utils/index.js
Copy in the following script.
/**
* Webhook response objects require a statusCode attribute to be specified.
* A response body can get specified as a stringified JSON object and any
* custom headers set.
*/
export const responseBuilder = (code = 200, data = {}, headers = {}) => {
/* If the status code is 400 or greater, error! */
if (code >= 400) {
/* Build the error response */
const err = {
headers,
statusCode: code,
body: JSON.stringify({
errors: data.errors,
timestamp: new Date().toJSON(),
}),
}; /* Console out the detailed error message */
console.log(err); /* Return the err */
return err;
}
return {
headers,
statusCode: code,
body: JSON.stringify(data),
};
};
Awesome! Almost as if we were building a controller method that runs a SQL query and then returns some serialized data, here we're exercising the same pattern, but with a serverless function that uses the GraphQL API.
As you can imagine, the other functions are going to be similar. Let's go ahead and set them all up before we move on to testing.
GET Users endpoint
Let’s now set up a way to list all Users via our REST API. Go ahead and copy the code below into your getUsers function’s handler file.
import gql from 'graphql-tag'
import { responseBuilder } from '../utils'

const QUERY = gql`
query {
usersList {
count
items {
id
firstName
lastName
email
createdAt
updatedAt
}
}
}
`
module.exports = async (event, ctx) => {
/* Fetch the full list of users */
let { usersList } = await ctx.api.gqlRequest(QUERY)
return responseBuilder(200, { result: usersList })
}
POST User endpoint
Let’s now set up a way to add new Users via our REST API. Go ahead and copy the code below into your newUser function’s handler file.
import gql from 'graphql-tag'
import { responseBuilder } from '../utils'
const MUTATION = gql`
mutation($data: UserCreateInput!) {
userCreate(data: $data) {
id
email
firstName
lastName
updatedAt
createdAt
}
}
`
module.exports = async (event, ctx) => {
/**
* Here we're pulling data out of the request to
* pass it as the mutation input
*/
const { data } = event
try {
/* Run mutation with supplied data */
const { userCreate } = await ctx.api.gqlRequest(MUTATION, { data })
/* Success response */
return responseBuilder(200, { result: userCreate })
} catch ({ response: { errors } }) {
/* Failure response */
return responseBuilder(400, { errors })
}
}
PUT User endpoint
Let’s now set up a way to edit Users via our REST API. Go ahead and copy the code below into your editUser function’s handler file.
import gql from 'graphql-tag'
import { responseBuilder } from '../utils'

const MUTATION = gql`
mutation($data: UserUpdateInput!) {
userUpdate(data: $data) {
id
email
firstName
lastName
updatedAt
createdAt
}
}
`
module.exports = async (event, ctx) => {
const { id } = event.pathParameters

/* Combine the pathParameters with the event data */
const data = Object.assign(event.data, { id })

try {
/* Run mutation with supplied data */
const { userUpdate } = await ctx.api.gqlRequest(MUTATION, { data })

/* Success response */
return responseBuilder(200, { result: userUpdate })
} catch ({ response: { errors } }) {
/* Failure response */
return responseBuilder(400, { errors })
}
}
DELETE User endpoint
Let's now set up a way to delete Users via our REST API. Go ahead and copy the code below into your deleteUser function's handler file.
import gql from 'graphql-tag'
import { responseBuilder } from '../utils'

const MUTATION = gql`
mutation($id: ID!) {
userDelete(data: { id: $id }) {
success
}
}
`
module.exports = async (event, ctx) => {
const { id } = event.pathParameters

try {
/* Run mutation with supplied data */
const { userDelete } = await ctx.api.gqlRequest(MUTATION, { id })

/* Success response */
return responseBuilder(200, { result: userDelete })
} catch ({ response: { errors } }) {
/* Failure response */
return responseBuilder(400, { errors })
}
}
Testing our REST API locally
Nice work so far! Pretty straightforward, right? What's next is an extremely important step: testing. That is, how do we run these functions locally to make sure they behave as expected?
You may have noticed a directory called mocks inside each function's directory. Essentially, mocks allow us to structure a JSON payload that gets passed as the event argument to our function when testing locally. The JSON object declared in a mock file will be the exact argument passed to the function when testing — nothing more, nothing less.
That said, let's go ahead and run our getUsers function, since it ignores the event argument. We can do this using the invoke-local CLI command, and we can expect a response that looks like the following:
$ 8base invoke-local getUsers

=> Result:
{
"headers": {},
"statusCode": 200,
"body": "{\"result\":{\"count\":1,\"items\":[{\"id\":\"SOME_USER_ID\",\"firstName\":\"Fred\",\"lastName\":\"Scholl\",\"email\":\"freijd@iud.com\",\"createdAt\":\"2020-11-19T19:26:53.922Z\",\"updatedAt\":\"2020-11-19T19:46:59.775Z\"}]}}"
}
Copy the id of the first returned user in the response. We’re going to use it to create a mock for our getUser function. So, now add the following JSON in the src/webhooks/getUser/mocks/request.json file.
{
“pathParameters”: {
“id”: “[SOME_USER_ID]”
}
}
With this mock set up, let’s go ahead and see if we can successfully use our REST API to get a user by their ID set in the URL params.
$ 8base invoke-local getUser -m request

=> Result:
{
"headers": {},
"statusCode": 200,
"body": "{\"result\":{\"id\":\"SOME_USER_ID\",\"firstName\":\"Fred\",\"lastName\":\"Scholl\",\"email\":\"freijd@iud.com\",\"createdAt\":\"2020-11-19T19:26:53.922Z\",\"updatedAt\":\"2020-11-19T19:46:59.775Z\",\"avatar\":null,\"roles\":{\"items\":[]}}}"
}
Now, what if you want to specify data, like when you want to test an update? The exact same principle applies: we add a data key to our mock with the data we expect to be sent to our endpoint. Try it yourself by adding the following JSON to the src/webhooks/editUser/mocks/request.json file.
{
“data”: {
“firstName”: “Freddy”,
“lastName”: “Scholl”,
“email”: “my_new_email@123mail.com”
},
“pathParameters”: {
“id”: “SOME_USER_ID”
}
}
Lastly, not all API requests are successful… We added error handling to our functions because of this! Additionally, it would be a real pain to continuously edit your mock file to first test a success, then a failure, etc.
To help with this, you’re able to create as many different mock files as you want and reference them by name! The CLI generator will help you and place the mock in the appropriate directory. For example:
# Mock for a valid input for the editUser function
8base generate mock editUser --mockName success

# Mock for an invalid input for the editUser function
8base generate mock editUser --mockName failure
When running your tests now, you can use the different mocks to ensure that both your error handling and your successful responses are being properly returned. All you have to do is reference the mock file you wish to use by name via the -m flag.
# Test an unsuccessful response
8base invoke-local editUser -m failure

=> Result:
{
headers: {},
statusCode: 400,
body: "{\"errors\": [\r
{\r
\"message\": \"Record for current filter not found.\",\r
\"locations\": [],\r
\"path\": [\r
\"userUpdate\"\r
],\r
\"code\": \"EntityNotFoundError\",\r
\"details\": {\r
\"id\": \"Record for current filter not found.\"\r
}\r
}\r
],\r
\"timestamp\": \"2020-11-20T01:33:38.468Z\"\r
}"
}
Deploying our REST API to 8base
Deployment is going to be the easiest part here. Run 8base deploy … that’s it. | https://sebscholl.medium.com/how-to-develop-a-rest-api-on-an-8base-workspace-b555b67f169e | ['Sebastian Scholl'] | 2020-11-25 17:53:05.748000+00:00 | ['JavaScript', 'Serverless', 'App Development', 'Software Engineering', 'Programming'] |
Anger and Love | Anger and Love
There is a choice
Photo by Azrul Aziz on Unsplash
If I could live inside the walls that have no divide, I would not have love and anger inside. There is a fine line that exists between love and anger. Some say it is passion others view it as human. To me, they are two words that stand side by side. Which one do I choose to multiply?
Humanity
I see love and anger with the human race, over color and faith. It saddens me because the skin is so thin, and the race consciousness focuses on the difference. I am a white woman with a child and grandchildren that are mixed. I love them and kiss them on their beautiful brown skin.
It does not matter what faith you are; there is only one God. Get past the prejudice of religion and join me in spirituality where we are all one in all there is.
I do not see much love expressed. But anger is present in young and old. And our government has no respect, and rudeness is prevalent. They are leaders that represent scars of indifference. They do not know we exist because of their prejudice.
Mother and child
Photo by Bruno Nascimento on Unsplash
Give your children hugs and kisses so they may thrive. There is hatred where they play, and it has taught them to discriminate. Some declare vulnerability as being weak, but that is how they become strong.
I never saw my mother and father kiss or hug. Although I knew they were in love. But what grace my eyes could’ve beheld if they did.
Thank you, Ruchi, for the prompt, "Anger and Love". | https://medium.com/soultouch/anger-and-love-231bb715c11b | ['Bernadette Decarlo'] | 2020-11-27 05:38:40.873000+00:00 | ['Poetry', 'Society', 'Love', 'Anger'] |
Evie Wyld’s “The Bass Rock”: The Angels in the House Write Back | “They were angry. They are men and I am a girl.”[2]
A modern gothic, The Bass Rock is part ghost story, part mystery, part family drama. It touches on religion and the hypocrisy of the church, on infidelity, mental illness, loss, and war. Its main subject, however, is one that weighs on women above all. Even the somewhat marginalised ghost — a murdered girl — does not represent an individual tragedy; rather, she is a collective ghost of all the women killed and tortured by their men.
“There was a feeling in the drawing room there was more than just the girl, like the hammering of rain on the windows had summoned a host of others.”[3]
The narrative focuses on three female protagonists: Sarah in the eighteenth century, Ruth in the aftermath of World War II, and Viviane in the present. What connects them is not only an uncanny house standing alone somewhere on the Scottish shoreline — the epitome of a domestic sphere as an isolated shelter amidst the raging winds of the masculine world — but also the various ways in which women have been abused by men across centuries. And not just by any men — the abusers often come from the inside, living with their victims and killing them between those four walls that were paradoxically supposed to be the female domestic space, a safe space. It is an illusion sustained by patriarchy that the domestic sphere belongs to a woman, when even there, the master of the house is a man.
“I imagine a wolfman running alongside the car. I used to do this on the drive to Scotland when I was a kid. When you look into the scrub at the side of the road, when you peer into the dark, it’s easy to see them, waiting to race you, willing you to break down.”[4]
In Western cultures, “wolfman”, “werewolf”, “wolf”, or “dog” are often associated with the masculine. The dog is the man’s best friend, not woman’s. Dogs accompany the king’s party during a forest hunt, while the queen embroiders inside the palace walls. The dog mounts a bitch, not a “female dog”.
That’s why the recurring metaphor of the wolfman lurking outside of a protagonist’s “safe space” is so important in The Bass Rock. Wyld doesn’t just play with the threat of the wolf entering the domestic space — the wolf is already inside, in the heroines’ heads, in their beds. He finds a way to control them, intimidate them: emotionally, physically, sexually.
“These things happened in every girl’s life at some point, of course — with Ruth it has been the curate, and he had only wrestled a fondle and a wet mouth out of her.”[5]
Wyld’s characters have been handed all the bad cards — from verbal abuse to gaslighting to domestic violence to rape — but they are not playing victims. What makes the story so striking is that, many times, the women seem to accept the injustices inflicted upon them. No, that’s not right, I hope you are screaming now, and you are right, it is not right, but it is what makes the novel so chillingly believable. After all, this is what society has long been telling women to do: be patient, unselfish, always smiling. Don’t ever complain. Men don’t like shrews; they want to tame them.
“She wanted to do more than slap herself, she wished she had fallen harder on the tiles, and it ran through her suddenly that she would feel a great satisfaction if she slammed her head into the fence post.”[6]
As the protagonists become increasingly repressed by men, the means of control turning more violent, their façade as the angels in the house threatens to crumble. They rebel, often by self-destruction — through alcohol, sleep deprivation, neglect of both body and mind. This does prevent any man from claiming them — after all, you cannot control someone who’s beyond self-control — but the cost is high. The Bass Rock, in other words, illustrates how the patriarchal order can destroy a woman.
“And if they kiss us afterwards on the mouth it is only to see how they taste.”[7]
While some extreme domestic violence, including gory murders, takes place in the narrative, the most chilling moments of abuse are — perhaps paradoxically — the “everyday” ones.
In particular, the two scenes where female protagonists are being tickled against their will by men whom they trust will stay with you. The simplicity of that “everyday” act is what makes it so uncanny — you can practically feel the character’s terror and helplessness because it’s more than probable that at some point it used to be your own.
“‘Get off me,’ but he keeps going, digging at my ribs, that awful feeling coming on me, the loss of breath, the loss of control, and I hit him hard in a panic, make contact with his ear, and he grabs my wrists and holds me down and the panic worsens.”[8]
Using something as everyday as tickling to dominate women? Now that really makes you think about current power dynamics, sexism, and personal boundaries.
“Men do these things and then they tick on with their lives as though it’s all part and parcel.”[9]
Despite the heavy themes of rape, murder, and abuse, Wyld isn’t a defeatist. She believes that there is a progression towards gender equality, and this belief is mirrored in the story development: while the protagonist Sarah of the eighteenth century ends up murdered by a jealous man, and Ruth slowly kills herself with alcohol, the contemporary heroine Viviane walks out of the narrative triumphant. She manages to save both herself and her sister from abusive relationships and, in the very last scene, we do not say goodbye to Viviane inside the gothic house that oppressed us throughout the novel with its claustrophobic atmosphere. Instead, Viviane is released from the domestic sphere, outside in the “masculine” wilderness, facing the wind and the sea.
“I feel I am looking up into space or into a deep high-ceilinged crevasse.”[10]
Overall, The Bass Rock has an optimistic message: the story progression over the centuries until the present day gives us hope that things might change for the better. Not how men treat women, perhaps, but what we as women will accept. And if there are more novels like “The Bass Rock”, our voices will be heard. | https://medium.com/curious/evie-wylds-the-bass-rock-the-angels-in-the-house-write-back-87f8e7b1dfdf | ['Denisa Vitova'] | 2020-11-12 19:44:55.149000+00:00 | ['Feminism', 'Books', 'Women', 'Abuse', 'Gender Equality'] |
Building A Sustainable Trust in Our Government — By Olumide Idowu | I was invited by the United Nations to join other young people to foster collaboration and mutual understanding with young people, as key stakeholders in the development of this country.
We (young people) are the critical segment of the population that cannot be left behind in any effort towards nation-building. One of the specific outcomes we envision through this dialogue is a better understanding of the focus areas of youth, how to address and raise awareness around concerns, and to collectively create strategies that can tackle key issues in the country as we build forward better.
The dialogue’s theme, “From Protest to Constructive Engagement”, comes at a very critical time for a country like Nigeria.
My takeaway from this dialogue centered on TRUST, and on how young people need to start looking at how we can help make this trust a reality, making sure we are part of the system that we want to see in the coming years.
Trust in government represents the confidence of citizens in the actions of a “government to do what is right and perceived fair”. Citizen expectations are key to their trust in government. As citizens become more educated, their expectations of government performance rise. If citizens’ expectations rise faster than the actual performance of governments, trust and satisfaction could decline. Citizens’ trust in government is influenced differently depending on whether they have a positive or negative experience with service delivery. A negative experience has a much stronger impact on trust in government than a positive one. Targeting public policies towards dissatisfied citizens will therefore have a stronger impact on trust in government.

We also looked at how we can gather data that will help the government understand the kind of young people they are trying to carry along in building a nation that is sustainable for all. We need data to inform the work of governments so that youth can better participate in policy-making.
I want to thank the United Nations and UNDP in Nigeria for the invitation to join this conversation physically with UN Deputy Secretary-General Amina J. Mohammed and selected influential youth in Nigeria, together with Hon. Minister Dame Pauline Kedem Tallen (Federal Minister of Women Affairs and Social Development) and Hon. Sunday Dare (Federal Minister of Youth and Sports Development).
I look forward to a more sustainable way to build more trust in our government and the system we are hoping for, for the future generation.
By Olumide Idowu | https://medium.com/climatewed/building-a-sustainable-trust-in-our-government-by-olumide-idowu-ea89d5c98228 | ['Iccdi Africa'] | 2020-11-18 07:34:07.070000+00:00 | ['Environment', 'Youth Development', 'Climate Change', 'Women', 'Government'] |
The Fluid Nature of Individuality | Photo by Paweł Czerwiński on Unsplash
The concept of individuality or the self is a very fluid concept that we often don’t extend enough effort in exploring.
https://hmcbee.blogspot.com/2020/12/what-is-individual-and-how-do-we-find.html
All biological intelligences are forms of collective and cooperative intelligence that maintain their identity across time in an attractor of non-linear dynamics. Collectives are formed by individuals, yet a collective itself has its own individuality. In biology, it is not the individual that survives, but rather the entire collective (i.e., its species).
It is indeed insightful to study roles provided by different genders that lead to the robustness of a species. It is intriguing that the strategies at the genetic level appear to be conserved across many scales. We see robust algorithms in protein folding and in sexual reproduction mechanisms. Just as there are new behaviors that arise as the universe creates new kinds of atoms, there are analogously new kinds of behaviors that biological evolution uncovers that are conserved across species and scale.
In Physics, we seek out symmetries so as to find invariances in nature and, ultimately, laws of nature. In Biology, these invariances are expressed in the notion of individuality or self. Invariance in biology is not just a consequence of causational invariance as found in Physics, but rather a consequence of intentionality enabled by actions forged by causality. Intentions, however, are a compositional thing; they only scale if there is shared intentionality across the individuals of the collective.
Cryptocurrencies such as Bitcoin and Ethereum continue to survive as a consequence of the shared consensus among its individual holders. Mining and proof of stake mechanisms are just decentralized consensus making algorithms. Civilizations and societies scale and eventually take over the world as a consequence of consensus mechanisms that coordinate the many individuals in their respective collectives. The same robustness that private property renders in a free economy is represented in the individual subject stance of members of a species. Decision-making is local but there is an emergent behavior of the collective that preserves its identity.
Individuals intentionally sacrifice themselves in wars for the survival of the collective. The downward causation delivered by culture incentivizes the individual to de-prioritize their own individuality. In the grand scheme of biology, it is the males of a species that are sacrificed so that the females of the species continue to pro-create. Males are effectively the red shirt crewmen in Star Trek. The story doesn’t begin when a red-shirt crewman gets vaporized. It always happens at the beginning of every story.
But none of the viewers actually care. They only care about the crew surviving their journey. Thus individuality actually belongs to the crew (i.e. the collective). | https://medium.com/intuitionmachine/the-fluid-nature-of-individuality-b7c5e19de8f4 | ['Carlos E. Perez'] | 2020-12-06 10:51:46.601000+00:00 | ['AI'] |
Deep Learning Part 1 — Basic Terminology | Ever wondered what is deep learning and how its changing the way we do things. In this series of tutorials, I will dig into the terminology used in the space of deep learning. A complete look at the mathematics behind activation functions, loss, functions, optimizers and much more. Along the way, I will also share links which I felt are useful rather than mentioning it to the end of the article.
Deep Learning: To put it in simple words, “Deep learning is all about making the machine think and learn like human brains do”. In order to do this, scientists came up with the concept of Neural Networks. The term “deep” refers to the depth of the network.
Andrew Ng, who co-founded and led Google Brain, nicely explains the concept in his 2013 talk on Deep Learning, Self-Taught Learning and Unsupervised Feature Learning. The picture below explains the need for deep learning.
Figure 1 — Need of Deep Learning
Neural Networks: Try searching for neural network on Google and you will end up with the definition “A computer system modeled on the human brain and nervous system”. I found this definition to be apt and simple enough to communicate to a layman. A neural network is a series of interconnected neurons which work together to produce the output.
Figure 2 — A Simple Neural Network
The first layer is the input layer. Each node in this layer takes an input OR a feature and then passes its output to each node in the next layer. Nodes within the same layer are not connected. The last layer produces the output. The hidden layer contains the neurons which have no direct connection to the input OR output. They are activated by the inputs from nodes in the previous layer.
Let us now understand some of the minute details related to the functioning of the neural network. I have put a simplified diagram below in order to explain the concepts and make the diagram less clumsy.
Figure 3 — Neural Network Details
There are quite a few terms in the above diagram. Let us go through them one by one.
Inputs (x1, x2, x3): Inputs are the input values OR features based on which the output is predicted. An example would be “Pass/Fail output (y) will be decided based on inputs Study Hours (x1), Play Hours (x2) and Sleep Hours (x3)”
Weights (w1, w2, w3): Weights indicate the strength of an input. In other words, a weight decides how much influence the input will have on the output. Considering the above example, Study Hours might have a higher weight compared to the other two.
Bias (b1): Bias is added to a neural network to take care of zero input. The bias unit ensures that a neuron can be activated even in the case of all-zero inputs. It’s important to note that this value is not influenced by previous layers. If you are aware of the linear function y = mx + c, you can relate bias to the constant ‘c’. A bias value allows you to shift the activation function to the left OR right. It’s explained very nicely in this stackoverflow post.
Operations within a Neuron
The operations done by each neuron always consist of 2 steps.
Adder OR a Pre-activation Function: Not sure if there is a definite name for it, but I am going to call the first operation the adder. In this step, the summation of the products of inputs and weights is calculated. We also add the bias during this step. It’s defined by the function below:

a = Σ (wi · xi) + b
Considering our example above, it can be written as:

a = w1·x1 + w2·x2 + w3·x3 + b1
Activation Function: The activation function takes the value calculated by the adder function and turns it into a number between 0 and 1 (deactivated (0), activated (1)). The function determines whether a neuron should be activated (fired) OR not, based on whether the neuron’s input is relevant for the model’s prediction. There are many different activation functions, and I am going to cover them in follow-up posts. Below is the depiction of a Sigmoid activation function.
Figure 4 — Sigmoid Activation Function
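To make this concrete, here is a minimal sketch of the sigmoid function in TypeScript (purely illustrative; the original article itself doesn't include code):

// Sigmoid squashes any real-valued input into the range (0, 1)
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

console.log(sigmoid(0));  // 0.5, right on the boundary
console.log(sigmoid(4));  // ~0.98, strongly activated
console.log(sigmoid(-4)); // ~0.02, effectively deactivated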
Forward Propagation
The complete process of calculating the values in the hidden layers and the output layer is called forward propagation. This type of network is also called a feed-forward network. Note that I have just shown the calculations in one hidden layer with one neuron in it. In a real neural network, there can be multiple hidden layers with multiple neurons in each layer. The same process is repeated in all layers before producing the final output.
Loss Function, Back Propagation and Optimizers
Before I introduce new concepts, it’s important to understand why we need them. For that, let’s take an example. I am taking a single record with an output (y) value of 1. We will build a simple neural network to predict this output based on input features x1, x2, x3.
Figure 5 — Neural Network Example
As discussed above, the first operation is the adder function (a11) followed by the activation function (h11) in the hidden layer, which in this case translates to:

a11 = w1·x1 + w2·x2 + w3·x3 + b1
h11 = sigmoid(a11)
The same calculation is repeated in the output layer, as below. Note that h11 is the input from the hidden layer:

a21 = w4·h11 + b2
h21 = sigmoid(a21)
h21 is the output, which is nothing but our Y-hat; our true value of Y is 1.
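As a sanity check, this is how that single forward pass might look in TypeScript, reusing the sigmoid function sketched earlier. The weight and bias values are made up purely for illustration, since the article doesn't specify any:

// Illustrative numbers only; real networks initialize these randomly
const x = [0.5, 0.2, 0.1];       // inputs x1, x2, x3
const wHidden = [0.4, 0.3, 0.6]; // hidden-layer weights w1, w2, w3
const bHidden = 0.1;             // hidden-layer bias b1
const wOut = 0.8;                // output-layer weight w4
const bOut = 0.05;               // output-layer bias b2

// Hidden layer: adder, then activation
const a11 = wHidden[0] * x[0] + wHidden[1] * x[1] + wHidden[2] * x[2] + bHidden;
const h11 = sigmoid(a11);

// Output layer: the same two steps, with h11 as the input
const a21 = wOut * h11 + bOut;
const h21 = sigmoid(a21); // this is our Y-hat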
The aim should be to bring Y and Y-hat closer so that we are accurate in predicting the output given a set of input parameters. Now it’s time to understand some new concepts: the Loss Function, Back Propagation, and Optimizers.
Loss Function: Simply put, Loss is the prediction error of the neural network, and the method used to calculate the Loss is called the Loss Function. There are many different loss functions, which I will cover in follow-up topics. For now, let us use the mean squared error (MSE) loss function. Considering our single-record example above, the MSE is:

MSE = (Y − Y-hat)²
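Continuing the sketch above, the single-record loss is just the squared difference:

const y = 1;                 // the true output for our record
const loss = (y - h21) ** 2; // mean squared error for a single record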
Optimizers: Optimizers are the algorithms OR methods which are responsible for reducing the loss (Y − Y-hat). The way to reduce the loss is by updating the weights and bias parameters. There is one more parameter called “Learning Rate” which I will introduce when I am writing about Optimizers. There are different optimizers like Gradient Descent (GD), Stochastic Gradient Descent (SGD), Adagrad, Adam etc.
Back Propagation
The objective of back propagation is to update the weights so as to reduce the loss (Y − Y-hat). It does this by taking the loss function into account and using optimizers to update the weights. Once we complete one set of forward propagation and back propagation, we call it an iteration. We keep repeating this until we reduce (Y − Y-hat) enough that we get accurate results. The new terminology can be visualized as below.
Figure 6 — Back Propagation
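To make one weight update concrete, here is a single gradient-descent step for the output-layer weight from the earlier sketch. This is a simplified illustration; full back propagation applies the chain rule through every layer and every weight:

// Chain rule: dLoss/dwOut = dLoss/dh21 * dh21/da21 * da21/dwOut
const dLoss_dh21 = -2 * (y - h21); // derivative of (y - h21)^2
const dh21_da21 = h21 * (1 - h21); // derivative of the sigmoid
const da21_dwOut = h11;            // derivative of wOut * h11 + bOut
const gradient = dLoss_dh21 * dh21_da21 * da21_dwOut;

const learningRate = 0.1;          // step size for the update
const wOutUpdated = wOut - learningRate * gradient;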
So here you go. This is the gist of Artificial Neural Networks (ANN). The diagram below, depicting the sequence of actions, may help digest things a bit.
Figure 7 — Learning Cycle
In my next article, I will explore the different activation functions and also discuss when to use what. I hope you got a quick overview of this vast and exciting domain. Keep learning.
Move on to next article in this series:
https://srinivas-kulkarni.medium.com/deep-learning-a-to-z-part-2-mnist-the-hello-world-of-neural-networks-2429c4367086 | https://srinivas-kulkarni.medium.com/deep-learning-a-to-z-part-1-1d5bd4e9944c | ['Srinivas Kulkarni'] | 2020-12-03 20:10:10.402000+00:00 | ['Deep Learning', 'Backpropagation', 'Forward Propagation', 'AI', 'Activation Functions'] |
How to Store and Fetch From DynamoDB With AWS Lambda | Create Handler Functions
In our handler.ts file, I’m going to create a function named respond that wraps the response that gets sent back to the client. This function takes two parameters:
1. The data we’re sending as a response.
2. The HTTP response status code.
export const respond = (fulfillmentText: any, statusCode: number): any => {
  return {
    statusCode,
    body: JSON.stringify(fulfillmentText),
    headers: {
      "Access-Control-Allow-Credentials": true,
      "Access-Control-Allow-Origin": "*",
      "Content-Type": "application/json"
    }
  };
};
Next, I’m going to create the two handlers that I defined as functions in the serverless.yml file.
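The handler code itself isn't reproduced here, but a plausible sketch looks like the following. The function names and event payload shapes are my assumptions, mirroring what the serverless.yml definitions would reference:

import { saveItemInDB, getItemFromDB } from "./dynamodb-actions";

export const createToDoItem = async (event: any) => {
  const { id, item } = JSON.parse(event.body); // assumed payload shape
  const result = await saveItemInDB(id, item);
  return respond(result, 200);
};

export const getToDoItem = async (event: any) => {
  const { id } = event.pathParameters; // assumed path parameter
  const result = await getItemFromDB(id);
  return respond(result, 200);
};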
These two functions call other functions that don’t exist yet, so let’s go ahead and create them. In your app directory, you can create another folder called dynamodb-actions , and in there create an index.ts file. In our index.ts file, we’re going to instantiate the DynamoDB DocumentClient object from the JavaScript AWS SDK.
import * as AWS from "aws-sdk";

const dynamoDB = new AWS.DynamoDB.DocumentClient();
There should be no need for you to install it on a project level since the Lambda environment ships with it. However, I would advise you to install the AWS SDK globally on your local machine for local development purposes.
The first function we will create is saveItemInDB , which will take the two relevant arguments required to create an item in our to-do-list table. We then create the parameters ( params ) for the DocumentClient’s put function by specifying the table name and the item we want to store in the table.
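The saveItemInDB code block is missing here, but based on that description it would look something like this (the exact argument names are my assumption, following the same pattern as getItemFromDB below):

/** save a to-do item to the db table */
export function saveItemInDB(id: string, item: string) {
  const params = {
    TableName: "to-do-list",
    Item: {
      id,
      item
    }
  };

  return dynamoDB
    .put(params)
    .promise()
    .then(res => res)
    .catch(err => err);
}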
The second function we are going to create is getItemFromDB . We will use this function to retrieve an item from the to-do-list table by specifying the table name as well as the item ID as a key in the parameters ( params ) for the DocumentClient’s get function.
/** get a to-do item from the db table */
export function getItemFromDB(id: string) {
  const params = {
    TableName: "to-do-list",
    Key: {
      id
    }
  };

  return dynamoDB
    .get(params)
    .promise()
    .then(res => res.Item)
    .catch(err => err);
}
Royce Da 5'9" On Eminem, Kaepernick, and Dinner With Racists | There’s an old parable that tells the story of six blind men examining an elephant. As the tale goes, each of them feels a different part of the pachyderm and then comes to his own conclusion as to the properties of the massive animal. A man touching the tusk compares the beast to a spear, while another flaps the elephant’s ear and concludes that it’s more like a fan. Another man patting its torso compares it to a wall. Of course, all six men are simultaneously right and wrong, hindered by the limits of their perspectives.
Adapting that idea for modern times, different people — whether jurors, witnesses, op-ed writers, or Twitter users — can have dramatically different viewpoints of the same situation. Detroit rap vet Royce Da 5'9" (born Ryan Montgomery) fixated on this theme while creating The Allegory, a soon-to-be-released album thick with examination of race in America from a variety of vantage points.
Subjects like racism, gun violence, and law enforcement have only grown more fractious and bitter as national conversations within the past decade, and yet, Royce’s stances are nuanced, sophisticated, and expressed in powerful bars on his upcoming eighth solo studio album, due to be released February 21.
One of the most powerful statements on The Allegory isn’t even poetry. “Perspective,” an interlude that recreates a memorable talk with Royce’s friend and longtime collaborator Eminem, finds Marshall Mathers reflecting on hot topics, like the landmark case Brown v. Board of Education and the reason White America pushed the button for Elvis but not Black rock pioneer Sister Rosetta Tharpe, in one of his most powerful and direct public statements on race to date.
Both Royce and Em agree that it takes something powerful and visceral, like sports or music, to bring folks together in a world that’s simultaneously more connected and divided than ever. And while they won’t be handing out Cokes and imploring the world to sing anytime soon, both men believe hip-hop can be a bridge to unite and not divide. All it takes is the blind recognizing their own blindnesses — then reaching out to one another, across misperceptions and mistrust, until all can truly understand the elephant in the room.
On what coincidentally would have been Trayvon Martin’s 25th birthday, Royce Da 5'9" phoned LEVEL for a conversation on race that’s anything but black and white.
This interview has been edited and condensed for clarity.
LEVEL: Growing up in Detroit, you lived through Mayor Coleman Young cursing in press conferences, White families leaving for suburbs like Macomb County, the war on drugs.
Royce Da 5'9": Absolutely. All I remember of the Coleman Young/Reagan era is drug dealers were making a lot of money. You had the Chambers brothers on the news every other day. I think they was flushing money down the toilet while counting — they ain’t even want the singles. Like Big Meech way before Big Meech. You had guys that were homegrown, like Butch Jones, a crime boss.
Those guys moved to Oak Park, the first suburb that hood niggas started going to. When I was about 10 years old, I remember being in my crib on Six Mile. It was just me, my little brother Kid Vishis, and my mom. Some dude pulled one of the bars off the window. Steel, welded bars! My mom heard the glass [breaking] — it was the same room Vishis was in — seen this dude trying to climb through the window, and ran out the front door, screaming, in her panties and bra, “Can somebody please help me!?”
The police came. My uncle, a pro boxer, pulled up fast as hell. Another uncle came into the living room, just started loading the chopper. Then my granddad comes, gives my dad a bag with $13,000 in it. He took that money and put it down on a [new] house, and we moved to Oak Park.
That’s the origins of the Montgomery family in Oak Park, huh?
That’s how we got to Oak Park. And the very first day, I got called a nigger. Day one. They ain’t waste no time. Playing with one of the little kids from down the street, I think his name was Josh. We disagreed on something. He came right out with it: “Nigger!” It kind of throws me for a minute, ’cause the topic didn’t really come up much, like, “If anybody ever calls you a….” We’d never really seen White people like that. We’d seen them on TV. I don’t know from where, but I knew that he wasn’t supposed to do that. I knew it wasn’t right because of the way it made me feel.
Over the past decade, the high-profile deaths of young Black men like Trayvon Martin and Philando Castile have prompted a national conversation about racism, racial profiling, and police brutality. Were you having those conversations at home growing up — and with your own kids now?
Nah. I wish I was that enlightened when my older son was coming up. I teach them at a rate that lets them maintain innocence as long as possible. My dad was more of a disciplinarian. I got a lot of game from him, ’cause he’s a wise man, but he’s old-fashioned. A lot of things he expected me to just know. My 13-year-old son has autism; I don’t assume he knows anything that I didn’t teach him myself. We could be about to cross the street, and I’ll be like, “Listen, don’t cross this parking lot or street unless blah blah blah,” and make him repeat it back to me.
“We need to start being okay with not agreeing on everything. It’s alright to not agree. It’s even alright to be racist. I’m not the racist police.”
These days, your music reaches people who may never need to give or receive “the talk” about how to deal with police. Some of your fans likely support the current president, even though you’ve been critical.
I touch on perspective a lot on [The Allegory]. I’m enamored of the idea that two people could be looking at the same thing, seeing it two totally different ways, and neither one of them is really wrong. Your truth is how you see it, based off your perspective. How you feel is your truth. If you can live in that truth, and me and you can agree on music and coexist in the same environment, the same show? We don’t have to agree on shit as long as the right song is on.
We need to start being okay with not agreeing on everything. It’s all right to not agree. It’s even all right to be racist. I’m not the racism police. I’m very aware that I have to guard my energy. I can’t put myself in the position where I’m getting upset all the time at the way that somebody views me. The only thing that I demand is respect. That’s it. I can have dinner with a racist person as long as you’re not disrespecting me. People just get so uptight when you want to start talking about the tough topics. Everybody ignoring it is not going to make it go away!
On one Allegory track, “Perspective,” Eminem runs down an abridged history of racism in America and speaks about hip-hop unifying people. Does that interlude reflect conversations you’ve had with Em about race?
How it came about was we were on the phone, talking about him growing up. A lot of people think he’s from the trailer park. He’s from Detroit; grew up in the hood around Black people. We talk all the time about how tough it was, him being White and into hip-hop, and Black people thinking he’s trying to act Black. They used to beat him up all the time, just jump him. He couldn’t understand why. It wasn’t until he met Proof and Proof took a liking to him [and] started vouching for him that he got accepted at the Hip-Hop Shop.
“If it wasn’t for Marshall Mathers, I don’t think I would like Whites. And on the flip side, if it wasn’t for Proof, I don’t think he would’ve liked Black people.”
It goes both ways. I talked to him about a lot of things that I went through in Oak Park, the racist shit that happened to me that started when I was young and didn’t understand. We both came to a very clear understanding. I feel like God put him in my life to teach me that it’s not cool to generalize. Because if it wasn’t for Marshall Mathers, I don’t think I would like Whites — and on the flip side, if it wasn’t for Proof, I don’t think he would’ve liked Black people. He assumed Black people didn’t like him, because they used to beat him up. God places people in your life for a particular reason. Marshall restored my faith in people. It’s not really about converting people; it’s just about gaining understanding.
What do you remember about the conversation that inspired “Perspective”?
He said so many beautiful things, man. He does this all the time. We’ll be talking, and he’ll drop so much knowledge. This particular time, I was like, “If I send you a beat, can you talk on it and express some of these things?” The phone conversation version was way better. When I sent the beat, he talked for 12 minutes. We edited it down. A lot of the album excerpts and shit like that are there for you to know that somebody has that perspective. It’s not necessarily my perspective.
It sounds like you largely agree with his perspective, though.
I think the main point he was making is that hip-hop brings people together, and I 100% agree. I just don’t think we utilize it as much as we can. I don’t think it’s this big kumbaya party in hip-hop, where everybody’s there. Hip-hop can be that bridge. You got great men like Farrakhan, real leaders. They’re trying to touch the hip-hop artists’ platforms, to get them to understand that we’re the new leaders, the people that kids are gonna listen to.
Last year, you were one of the first artists to defend Jay-Z’s partnership with the NFL. Your song “Black Savage” ran as part of the NFL’s Inspire Change ad spots that recently aired during Super Bowl LIV. In the wake of those ads, Kaepernick has been weighing in on Inspire Change.
What exactly did Kaepernick say?
He commented on a picture of Jay-Z and Beyoncé sitting down during the national anthem — “I thought we were past kneeling?” — with a chin-rubbing emoji. [Editor’s note: Jay-Z has denied that his family sat in protest.] Most understood that quip to tie back to the larger criticism of, “When I was saying this, I got shut out of the NFL!”
Guys like Kaepernick dedicate their entire childhood and teenage years, putting their body through a rigorous hell to play some shit on God level. Then you get there and realize that you’re a slave! Kaep just decided he didn’t wanna be a slave.
“It’s obvious that the players are not gonna come together and say, ‘Listen, we’re out of here — we make the NFL — or else you’re gonna change these things.’ They don’t have the courage.”
This is not just the NFL. This is all professional sports. This is the music business. This is America. On an ownership level, Black people are not allowed economic inclusion. That’s a known thing, especially in the NFL. They’re saying, “We’re not letting you buy into nothing.” There’s White guys on those levels that get mad if you try to obtain ownership, like, “How dare you? That’s not where we place you in our minds.”
We as prominent Black people need to realize that the amount of revenue we bring to America — that guys like Kaep bring to the NFL — is astronomical. And the strength is in the collective. This is not about me taking Kaep’s side or Jay-Z’s side. It’s about taking steps. So I’m always gonna support his protesting. His protesting brought on action; he took it as far as you can take protest as a player. It’s obvious that the players are not gonna come together and say, “Listen, we’re out of here — we make the NFL — or else you’re gonna change these things.” They don’t have the courage.
You mean a general strike, like in the 1982–1983 season?
Yes! All it’s gotta be is all the Black people: “How dare you treat Kaep like that right in front of us?” That’s like breaking a man as a slave owner, right in front of his family, to put the fear in the wife. That way, she’ll just submit her baby to you, so when that baby grows a little older, he’ll just accept slavery as his reality and won’t give them any kind of problems. They made an example out of Kaep. Jerry Jones said it himself: “None of my ’Boys better not do that!”
“Black Savage” is a militant song and video — you don’t dilute the message. You’ve got credibility; Jay’s got credibility. At the same time, the NFL still has so much power.
My manager, Kino, had a conversation with Tidal, who said they were looking for a song to launch the initiative. Kino sent them the song, they liked the song, and that was it. The only involvement the NFL had, as far as I was concerned? They reached out around the time the thing was going on with T.I., his daughter, and her hymen. Everybody was outraged for a minute. They asked if I would take T.I off the song. I said, “Absolutely fucking not.” T.I.’s not coming off the song — take it or leave it. They decided to take it, and that was good.
Oh, wow.
I went to a game in Detroit. When they did the anthem, I was still sitting down, not even on purpose. I noticed toward the end of the anthem, they were kind of looking at us ’cause we sat down. I remember thinking to myself, “I dare one of these motherfuckers to say something.” I’m not trying to offend, but White privilege is expecting me to stand up for the national anthem and looking at me like you’re the police of that. Like if you say something to me, I gotta do what you say. Like I can’t stand up and knock your teeth out of your mouth. There was a point in time where by law they had that kind of authority, and that mentality continues to pass down.
It’s not just happening through families. All these things are programmed: the way our children view themselves, what’s pretty and what’s not, what’s socially acceptable, what’s considered cool. Not too many years ago, Black people were only the help on TV. There’s still a slave on the front of the fucking Cream of Wheat box, bro. You can get that at any supermarket. It’s not an accident, like, “Oh, we forgot to change that. Sorry about that!” No! If somebody makes a big deal about it, maybe they’ll change it. But if you don’t say nothing, they’re not gonna say nothing. That says everything you need to know. | https://level.medium.com/royce-da-59-on-eminem-kaepernick-and-dinner-with-racists-72526da2113a | ['Gregory Johnson'] | 2020-02-21 17:38:11.482000+00:00 | ['Equality', 'Culture', 'Sports', 'Race', 'Music'] |
🌌 CARTOON: Parking | 🌌 CARTOON: Parking
Is there life on Mars?
Cartoonist’s Note №1
I really don’t care if there’s life on Mars. I just want there to be parking.
Cartoonist’s Note №2
Like all of my cartoons, this one was “not distributed in topics.”
Cartoonist’s Note №3
Medium recently demonetized cartoons, poetry, flash fiction and other short articles. This was likely a deliberate attempt to drive creators of short content from the platform by crippling them financially (aren’t billionaires wonderful?). Help even the score by buying me a coffee.
Cartoonist’s Note №4
My new one-man Medium magazine is called — Rolli.
Psst: you might like these cartoons, too:
Psst: you really ought to watch this strange old cartoon in its entirety:
Psst: here’s an important message:
Medium CEO Ev Williams
Medium’s unexpected and unwelcome change in the way it compensates members of its Partner Program — payment is now based on how long it takes people to read an article rather than how much they like it (if at all) — rewards creators of long, padded articles, and essentially demonetizes shorter content.
As nearly all of my posts are poetry, cartoons, or efficient fiction/essays, I’m faring very poorly under this new system.
Since the change was implemented, my Partner Program earnings have plummeted by 90%, and hundreds (if not thousands) of others are in the same boat.
If you want to help, send a tweet or email to Medium CEO Ev Williams (ev@medium.com) and/or to yourfriends@medium.com expressing your displeasure with the unfair new system, and asking them to restore all the missing earnings which have essentially been stolen from its writers. I and others would be so grateful!
Slowly re-reading this post 100 times would also help enormously ;)
So would buying me a coffee. | https://medium.com/pillowmint/cartoon-parking-503bf4161184 | ['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites'] | 2020-01-23 15:26:22.059000+00:00 | ['Art', 'Comics', 'Cartoon', 'Science', 'Cities'] |
GET VERIFIED ON FACEBOOK WITH THIS HANDY GUIDE
Social media has thousands of fake accounts. Knowing this, it makes sense that companies like Facebook, Twitter, and Snapchat let some high-profile individuals and companies verify themselves to show that they’re real. This verification process gives anyone who interacts with the page peace of mind of knowing they’re not interacting with a fake account.
On Facebook, when they verify a person of public interest or a page, they put a blue check mark badge by their name. Think about big brands or celebrities. If you don’t meet Facebook’s “public interest” qualification, you’ll get a grey check mark badge for verification. The grey check mark is more for smaller businesses or local shops. So, how do you get a Facebook page verification?
Facebook’s Verification Guidelines
Facebook has a set of guidelines that your page has to follow for the page to be eligible for the verification process. There are four significant verification guidelines, and they are:
1. Be Authentic — Your Facebook page has to represent a registered business or entity, or a real person.
2. Be Complete — The page or account has to have an active presence. You have to complete your “About” section, profile photo, and cover photo, and have at least one post.
3. Be Notable — Your business page has to represent an often-searched, well-known entity, brand, or person. Facebook will review accounts that gain feature spots in multiple news sources. However, the review team doesn’t consider promotional or paid content as review sources.
4. Be Unique — Your page has to be a unique presentation of the business it represents. You can only verify one account per company or person unless you have more than one language-specific account for the same business. General interest accounts usually don’t gain verification status.
How to Get Verified on Facebook (Blue Check) — Step by Step
Verification is free on Facebook, but you want to have your page set up ahead of time to ensure Facebook grants your verification request. Check that your website, bio, email address, and description are all updated and current. You want to link to your business’s official website, and you should link back to Facebook from your website.
Next, go to your Facebook page’s “About” section. Fill in your business’s address or addresses, phone numbers, mission statement, company overview, and other social channel handles. The goal is to have as much information filled in as possible because this makes your page look legitimate to Facebook when they verify it. Once you have this all on your page, you can start the verification process.
Step One — Go Into Page Verification
If you look at the top left of your FB business page, you’ll see a “Settings” button. Click it. This will then open a new page in your tab. Look to your left to find the “General” menu and click it. When the “General” menu opens, look for “Page Verification.” Click “Edit” by “Page Verification,” then click “Get Started.” You can also click this link to request a blue check verification.
Step Two — Instant Verification
Facebook gives you two ways to verify your page for a blue check mark. One is an instant verification, and one is more detailed. For instant verification, input your business’s phone number and click “Call Me Now.” Facebook will call the listed number with a verification code. You input the code and wait for Facebook to verify you.
Step Three — Detailed Verification
If you don’t want Facebook to call and give you the code, you can opt for the detailed verification instead. To do this, click “Verify This Page with Documents.” Facebook will prompt you to upload a document that proves your business is legitimate. The document should clearly show your business name and address. Facebook allows you to use:
Articles of incorporation
Business license
Business phone or utility bill
Business tax file
Certificate of formation
Step Four — Additional Information
You’ll see an “Additional Information” box when you go through the verification process. Take advantage of this. Write a quick introduction and outline a compelling reason why Facebook should verify you. Include links to your brand’s Wikipedia page, website, or relevant press articles. Keep it short, concise, and quick.
Step Five — Wait for Verification
Once you verify your page through uploading documents or having Facebook call you and inputting the code, you’ll have to wait for them to complete the process. Facebook will review your request and either verify your page or deny your request. This process can take between two to 30 days.
How to Verify a Facebook Page (Grey Check) — Step by Step
If you’d like to have the grey check by your business instead of the blue because you don’t have the public interest qualification, it’s a straightforward process. Like with a blue check mark, you do want to have your page as complete as possible regarding your address, bio, phone numbers, websites, and everything else we outlined above.
Go to your Facebook page’s “Settings.” Find the “General” tab. When you click it, you’ll be able to look to your right and see a “Page Verification” tab. If you hit “Edit,” it will ask you if you’d like to verify the page. Check that you would and enter a phone number for your business, language, and your country. Click “Call Me Now.”
Facebook will call you with the verification code, and this is the code you put in the box. Submit it and wait for Facebook to review it. This usually only takes around 48 hours to review, but Facebook has a 30-day window. If they approve it, a grey check will appear next to your page’s name.
Use Kontentino to Boost Your Social Media Presence
Are you ready to step your game up and improve your social media presence? If so, Kontentino can help you schedule, plan, and coordinate your social media posts. Our insights and reporting can help you gauge how well your posts do once they go live. Sign up for a free trial now.
Written by Andrej Miklosik | https://medium.com/strategic-content-marketing/get-verified-on-facebook-with-this-handy-guide-dd7b3b6c8534 | [] | 2020-02-15 14:39:57.630000+00:00 | ['Facebook Marketing', 'Verification', 'How To', 'Facebook', 'Social Media'] |
Six Chart Design Lessons from Visualizations of COVID-19 | A month later, I still couldn’t get a clear definition for what qualified as a “recovered” case, even asking colleagues working at the CDC. The Morbidity and Mortality Weekly Reports (MMWRs) had started to contain early learning about the novel coronavirus and the growing case counts in the US. I felt alarmed when I saw that Tableau had launched a ‘ready to use’ workbook for anyone to jump start their analysis of the case data and wrote 10 Considerations Before You Create Another Chart about COVID-19, which still has relevant lessons months later.
Now, I’ve delivered nearly 50 talks, written articles, and given podcast interviews about the messiness and challenges of visualizing COVID-19 data, thanks to my intersecting public health and dataviz expertise. I collaborated with Tableau as a public health data advisor on the COVID-19 Resource Hub to share useful context and information for analysts from a public health perspective.
The US is still hungry for information, though I know many of us are just fatigued at this point. Maybe some are hoping for a glimmer of a number that will give some hope of a return closer to ‘normal’? But instead, we’re peaking in a third wave as I write this article.
What have we learned as the charts evolved?
Have we ever watched visualizations on a topic evolve in real time on a topic with such a personal impact to our lives? The only parallel I can think of is visualizing polls and elections, but seldom are those charts updated for six or more months with significant changes made to the design.
In the midst of a global pandemic, I’ve watched visualization designers grapple with incomplete datasets — particularly in the early months — even John Burn-Murdoch at the Financial Times, an outlet widely praised for their data journalism in the pandemic, was putting out tweets asking for insights about different countries’ data gaps.
Often, visualizers were learning about the complexity of epidemiology in real time — or, in the worse cases, spinning up dashboards and charts of this trending news topic without much consideration of those complexities. At the same time, public health professionals were managing a pandemic response without clear national guidelines and leadership here in the US.
Charts, maps, and graphs are powerful tools for communicating information about current events like the COVID-19 crisis, and the world has looked to visualizations for answers. Ben Schneiderman, a leader in the data visualization field, called this “data visualization’s breakthrough moment.” Simple is often effective, with many small multiple tables and line charts packed with data. But all of that simplicity can mask complexity.
As researchers and evaluators, the American Evaluation Association (AEA) community understands the challenges of communicating complex information in easy-to-understand visualizations (this blog on visualizing regression results from Stephanie Evergreen comes to mind). While your evaluations and studies may not be COVID-related, you have likely faced some of the same design challenges as those charting the pandemic.
Lessons that evaluators and researchers can apply to their own visualizations
The journey through the COVID charts of 2020 has important reminders for any of us creating visualizations in the social sciences. Here are six lessons you can apply when visualizing your research results.
1. Visually display uncertainty, sample sizes, and unknowns.
When we create visualizations, we imply a certain degree of factfulness and objectivity. That creates a challenge when visualizing incomplete, messy datasets. Matthew Kay conducted a tour on uncertainty visualization research at Tapestry in 2018, and paired with colleague Jessica Hullman to talk about how these principles connect to visualizing COVID in April 2020. I highly recommend watching both talks and following Multiple Views to learn more about the nuances of the different marks and visual encodings used to alert readers to uncertainty in a chart.
Visually displaying uncertainty in some encoding on your chart, rather than burying it in a footnote or accompanying text, is particularly important today.
Take the example below of a case trend chart from the Georgia Department of Public Health. The mark used to plot the data from the last 14 days, which is likely reporting incomplete information, changes from a bar to a dot, and there’s a lightly-shaded box over the window of uncertainty, reminding readers that data from that period are likely incomplete (which is further explained in the accompanying text). While there has been much to criticize about Georgia’s COVID dashboards, this chart provides more context than many others with similar information.
Source: Georgia Department of Public Health on October 26, 2020
Many early analyses and studies of COVID-19 drew from limited data available, with results summarized in MMWRs and pre-prints. Even today, as we grapple with gaps in detailed, disaggregated information for specific populations like schools, small denominators and convenience samples can limit the generalizability of findings.
We should be transparent about those limitations in the titles and sub-titles of charts — or risk them being taken out of context. For example, when making sweeping statements like, “Schools aren’t Superspreaders,” the author buried the fact that the data was sourced from a small cohort of students in a convenience sample of schools. Further, we should carefully consider the language we use and avoid stigmatizing those infected. (Guidance on language choice is more detailed and nuanced than I’ll cover in this post.)
2. Enable understanding with reference lines and bands.
Many of the metrics you need to track and report on COVID-19 were unfamiliar to most Americans last year. What is test positivity, and why is it better if the number is low? What is R0? Adding clear reference lines to charts can aid in interpretation. In the example below from COVID Act Now, the physical reference lines also provide additional severity encoding and color, which can aid accessibility.
Chart from COVIDActNow.org sourced on August 5, 2020. For more details on test positivity and the complexity of this metric, see the COVID Tracking Project.
You can also add marks to add important dates like when key policy actions were taken or other visual representations that help readers answer why something happened and what the data means. Consider how you can aid understanding within the chart, and not just solely relying on sharing your summary findings in the accompanying text.
3. Add thoughtful annotation layers.
In addition to reference lines, annotation layers with short text statements further explaining individual data points can help readers quickly make sense of key pieces of information. The text may include direct labeling of individual data points, adding precision where it is needed, and allowing the shape of the marks on the chart to otherwise communicate the broad story.
Data journalists have excelled in their use of annotation layers and key callouts to communicate changes in the global and local progression of COVID-19. The trackers from the Financial Times’ John Burn-Murdoch explained daily updates on country progressions, accompanied by Twitter threads calling attention to more insights or data gaps. Instead of wondering what to look at or where to focus our attention, annotation layers give us context clues.
Source: Twitter from March 11, 2020
Other annotation layers and descriptive text enable us to explore a chart more effectively, as on Our World in Data. We can borrow from these approaches as researchers by adding these kinds of key callouts in our charts, particularly for dissemination to audiences who may not have all of our context.
4. Anticipate opportunities for misinterpretation and design with them in mind.
How will someone new interpret your chart or graph? In the rush to create infographics communicating risk about COVID-19, we saw a number of misguided comparisons made between early COVID deaths and the flu or other diseases that understate the severity of the illness and the necessity of taking proactive public health prevention measures. Pre-prints of new research on COVID-19 were scoured for new insights by journalists and others, with numbers from tables or charts sometimes taken out of context or without the necessary framing of limitations.
If you’re communicating research findings to a wide audience, seek feedback on your chart before you publish. How would someone without any subject matter expertise interpret your graph? What number would they focus on in the table? And what conclusion might they draw?
When communicating research about key public health issues or other topics that motivate individual action, add clear information on where the reader can find more details and learn more about a given topic whether to better understand the science behind a behavior change recommendation or to view the original data source. For COVID-19, this could include the clear guidance to wear a mask, physically distance, and wash hands frequently with a link to the CDC website on prevention.
5. Collaborate with subject matter experts.
Evaluators bring a deep toolkit of research methods to the table, but don’t always evaluate programs within their primary area of expertise. Widely engage subject matter experts in the research design process — they can provide important insights and context in the dissemination phase.
Today, we’re further challenged to decouple our own experiences from the impact of new research about COVID-19. It’s easy to have confirmation bias and quickly click retweet when a chart confirms our beliefs about a prevention measure like mask wearing. It’s easy to misinterpret a model.
Some of the most illuminating visualizations of COVID-19 have resulted from the collaboration of experts in public health and data visualization. Visualization designers leverage their knowledge of how to communicate data clearly and effectively, and health experts ensure the visualizations communicate accurately and with appropriate language. These collaborations have extended into some exceptional data simulations, like the People of the Pandemic game.
6. Remember the people behind the numbers.
Perhaps more important than any other lesson from visualizing the pandemic: remember that behind these numbers are people. Giorgia Luipi writes in her Data Humanism Manifesto that ‘data is people’ rather than just numbers, which carries far beyond visualizing public health data.
The more we aware we are of how individuals see themselves or their communities represented in our graphs, the more we can avoid missteps in design and language that distract from the information we’re looking to communicate and respect the personal ways people connect to our numbers. Luipi and her team reimagined Andrew Cuomo’s slides from his COVID briefings, shifting away from austere bar charts and depicting the lives lost as individual dots reminiscent of stars in the sky. While the visuals are certainly less precise, they give us pause in a different way.
Image by Giorgia Luipi & her team at Pentagram, shared in Fast Company
In her essay on The Ethics of Visualizing During a Pandemic, Bridget Cogley lists various values we carry with us in our work, and challenges us to think about how those values change in the middle of a pandemic. If you read only one linked article in this essay, please make it this one.
Image from Bridget Cogley’s The Ethics of Visualizing During a Pandemic
I’m particularly struck by the ‘do no harm’ principle: we face an infodemic rife with misinformation as part of the current pandemic, and charts can be used not only to inform, but also to mislead. Before we hit publish on a new visualization, we must consider how what we’ve created could do harm — through misinforming, confusing, countering public health prevention recommendations, or using poorly-chosen language.
The six considerations shared here aren’t new ideas, but in a time when we’re facing an infodemic in parallel with a pandemic, evaluating the unintended consequences of our visualization and ensuring we communicate what we know (and acknowledge what we don’t) clearly has never been more critical. As we communicate research and evaluations that impact policy, future program investments and structures, and even individual actions, we should take the same care.
Amanda Makulec is the Senior Data Visualization Lead at Excella and holds a Masters of Public Health from the Boston University School of Public Health. She worked with data in global health programs for eight years before joining Excella, where she leads teams and develops user-centered data visualization products for federal, non-profit, and private sector clients. Amanda volunteers as the Operations Director for the Data Visualization Society and is a co-organizer for Data Visualization DC. Find her on Twitter at @abmakulec | https://medium.com/nightingale/six-chart-design-lessons-for-evaluators-to-consider-from-visualizations-of-covid-19-336bd732e6f4 | ['Amanda Makulec'] | 2020-12-16 14:03:26.877000+00:00 | ['Evaluation', 'Covid 19', 'Public Health', 'Design', 'Data Visualization'] |
Ry Cooder and His Street-Corner Symphony of Old L.A. | By Bradley Bambarger <2005>
Ry Cooder is a master of musical reclamation, having dusted off everything from classic Tex-Mex and Hawaiian styles to folk-blues and, most famously, Cuban son. But the veteran guitarist/composer — and producer of Buena Vista Social Club — is also a man with a socio-political tale to tell.
Cooder’s new album, Chávez Ravine (Perro Verde/Nonesuch), brims with infectious Latin sounds, both vintage and futurist. Underlying the grooves, though, is a concept record about a vanished Los Angeles and the doubled-edged bulldozer of “progress.”
Chávez Ravine was a Mexican-American enclave in L.A. paved over in the ’50s to make way for Dodger Stadium, with the residents displaced and their communal culture broken up. For Cooder, it’s a metaphor for an America strong-armed into history by big business and official duplicity.
“What happened in the era of Chávez Ravine was the sowing of seeds for the world we have now,” says Cooder, 58. “It was when modern corporate fascism was born. It was when dissent was deemed anti-American, the age of Hoover-ism and McCarthy-ism. It was a time when the poor were disempowered on a mass scale.
“Everybody says, ‘It can’t happen here.’ But you wake up one morning, and it has,” Cooder adds. “There’s politics in this record, but politics is part of life. We’re all affected by it, can’t get away from it. It’s the same old story, but people have short memories. Listen to all those Nixon apologists on cable.”
Cooder was born and raised in the Santa Monica area of Los Angeles, where he still lives. Chávez Ravine represents what Cooder sees as the lost L.A., a quieter, more intimate, almost bucolic town. “L.A. isn’t so much a city these days as it is an enormous shopping mall,” he says. “The highways are just a conveyor belt that draws everyone on to the next consumer transaction.”
There’s a weary disgust in Cooder’s voice when he talks about the civic injustices of old L.A. or the endless traffic of the contemporary city. Yet, for the record, he praises the “interesting food” in L.A. these days, as well as the “golden light” that blesses his ocean-side neighborhood. He adds, “I try to stay in a one-mile radius of my house.”
Cooder’s easy, generous conversation mirrors his new album’s charming lyricism — diatribe and didacticism are really foreign to both the man and his music. He plays his rootsy guitar and sings on Chávez Ravine (both soulfully and amusingly in character), and the tunes waft on a hybrid wind from conjunto and corrido to Latin pop, ’50s jazz and R&B. Joe McCarthy, Jack Webb, union strife and UFOs give the words a sense of time and place, as do rich references to Latin life on the street and in the home.
Cooder dove into his four-year research mission on Chávez Ravine after seeing an exhibit of period photographs of the area by Don Normark. The pictures stirred memories of a culture that he never experienced as a young kid but heard about via radio ads for Latino bands. “The Latin scene was in the air and in my head, even if I never went there,” he says, “just like the Western swing halls here and the old Central Avenue jazz scene.”
Vital for Chávez Ravine were collaborations with figures steeped in local lingo and imagery, such as William “Little Willie G.” Garcia. A vocalist with ’60s Chicano hit-makers Thee Midnighters, he sings several songs on Chávez Ravine, including a stylish take on Leiber & Stoller’s “Three Cool Cats.”
Garcia, 58, grew up in South Central L.A. but had relatives who lived in Chávez Ravine. He remembers the place as “a wonderfully multi-ethnic neighborhood. There were a lot of Mexican-Americans, but it was also a real melting pot, with Chinese, Filipinos, Italians, Swedes.”
Two Chávez Ravine highlights co-written by Garcia point to the album’s range. “Muy Fifi” deals with the eternal (a mother-daughter tussle about bad boys), whereas “Onda Callejera” is rooted in the period-specific (L.A.’s 1943 Zoot suit riots, when hundreds of sailors were sent by taxi to quell, or perhaps start, trouble).
Garcia says the goal was “to capture what the neighborhood would sound like if you were driving along the street, windows down, overhearing conversations. Ry was out to make a street-corner symphony.”
Typical of Cooder’s work, Chávez Ravine was a collective effort. Past confederates Jim Keltner (drums), Flaco Jiménez (accordion) and Blau Pahinui (vocals, guitar) appear, along with such new friends as Mike Elizondo (bass). Los Lobos guitarist David Hidalgo guests, along with pianists Chucho Valdés and Jacky Terrasson.
Cooder’s son, Joachim, 26, provides percussion and electronics that morph the atmosphere from tropical to urban. Juliette Commagere, Joachim’s mate in the L.A. band Vagenius and the album’s secret weapon, sings like a bilingual bird.
But it’s the contributions by Latino elders that give Chávez Ravine the ring of truth. Some of the loveliest moments — such as “Muy Fifi” — come courtesy of singer Ersi Arvizu, of the ’60s Latina group The Sisters and, later, El Chicano. Garcia describes The Sisters as “East L.A.’s Supremes.”
Lalo Guerrero, dubbed the Father of Chicano music, wrote a song especially for Chávez Ravine and sang two pertinent old tunes. Pachuco boogie bandleader Don Tosti contributed the spoken words of “the traveler” to “El UFO Cayó.” These were the octogenarians’ last sessions, with Guerrero dying in March and Tosti passing last August. Of Guerrero and Tosti, Garcia says, “They made being Mexican-American cool.”
Referencing Guerrero and Tosti, Cooder adds, “This album is just some crazy fantasy of mine, but it’s the old guys that really make you believe. They have something that you can’t just get from TV or a magazine, just like a Sonny Boy Williamson or Compay Segundo. When they’re gone, man, it’s gone.”
Cooder himself has become an elder with wisdom to impart, having created a world of music since his days as a session star in the late ’60s. He jammed with The Rolling Stones back then, supposedly coming up with the riff for “Honky Tonk Women,” although uncredited; as to who really wrote what, Cooder says, slyly, “Ask Keith [Richards].”
In the ’70s, Cooder created an influential string of genre-hopping Americana discs for Warner Bros. during the label’s glory days. The ’80s brought film scores, including the slide-guitar theme to Paris, Texas that Garcia, echoing many others, calls “cosmic.” The ’90s and beyond have seen Cooder acting as catalyst for projects with musicians from India, Africa and, of course, Cuba. (See sidebar below.)
As far as the guitarist’s authenticity in the Latin realm goes, Garcia insists that no one should ever regard Cooder as a dilettante. “Ry is such a student of the music that he’s become a teacher, even to those of us who’ve grown up in these traditions,” he says. “I think of him as a Latino trapped in a white man’s body.”
Ry Cooder on Record
Since the early ’70s, Ry Cooder has gone back in time and off the beaten track to create timeless musical travelogues. Showcasing him as guitarist, producer, composer and occasional singer, these 10 discs underscore his collaborative empathy and ever-fertile imagination.
Chávez Ravine (Nonesuch, 2005). A musical picture-book about a multi-ethnic Eden paved over in old Los Angeles, Chávez Ravine brims not only with rue and ire but charm and humor. Cooder gets help with the tuneful storytelling from a host of old pals and Latino legends, including the late Lalo Guerrero.
Mambo Sinuendo, with Manuel Galbán (Nonesuch, 2003). Cooder and Cuban guitar star Galbán play slinky, atmospheric duets. Cooder has described this unsung masterpiece as “a road trip through different wordless fantasy landscapes. Sometimes you’re in bright daylight, sometimes the streets are dark and empty.”
Fascinoma, Jon Hassell (Waterlily Acoustics, 1999). Cooder produced this moody beauty of improvisational Ellingtonia and ambient soundscapes for Hassell and his whisper-toned trumpet, helping the veteran sound-painter come up with his most organic, wide-ranging disc yet.
Buena Vista Social Club, with Compay Segundo, Ibrahím Ferrer, Rubén González, etc. (Nonesuch, 1997). This Cooder-helmed worldwide smash is one of the late 20th-century’s priceless records, documenting fast-vanishing wisdom from the sages of Cuban son. Far from a historical exercise, it’s sweet’n’sultry musical magic.
The End of Violence (Outpost/Universal, 1997). One of Cooder’s most consistently compelling film scores (to a Wim Wenders movie) sees his guitar alternately keen and echo over a subtly electronic rhythm bed. Songs from the soundtrack, plus some of Cooder’s key instrumental cues, are available on a separate release.
Music by Ry Cooder (Warner Bros., 1995). This handy double-disc anthology of highlights from Cooder’s early film scores features his crying solo slide guitar theme from Paris, Texas, one of his most iconic spots on record. Great songs with the likes of Freddy Fender are also here. This set complements River Rescue: The Very Best of Ry Cooder, which surveys his ’70s solo albums.
Talking Timbuktu, with Ali Farka Toure (Hannibal, 1994). This trans-Atlantic summit saw Cooder and Malian singer/guitarist Toure underline the two-way connection between contemporary West African music and American blues. It’s simple and circular, poetic and hypnotic.
A Meeting by the River, with V.M. Bhatt (Waterlily Acoustics, 1993). A single late-night session yielded this early, Grammy-winning example of “one-world music.” Cooder plays meditative, beautifully recorded instrumental duets with Bhatt, an India-born innovator of the mohan vina, a lap-held slide guitar.
Bring the Family, John Hiatt (A&M, 1987). Cooder’s performance as a sideman for rootsy singer/songwriter John Hiatt preserves his most incendiary rock’n’roll guitar-playing. He was part of a super-group backing Hiatt for this hit, along with bassist Nick Lowe and drummer Jim Keltner. The album is all highlights, among them “Thing Called Love,” “Memphis in the Meantime” and “Have a Little Faith.” The sequel, Little Village, was a letdown, though.
Chicken Skin Music (Warner Bros., 1976). This is one of Cooder’s best discs from his ’70s Warner period, when he ranged like a party-minded preservationist across various Americana idioms. “Chicken skin” means goose bumps in Hawaiian parlance; the album has some Hawaiian slack-key guitar tunes, but it’s Cooder’s duet with Tex-Mex star Flaco Jiménez on the country weeper “He’ll Have to Go” that makes the skin tingle.
(Originally published in July 2005 in The Star-Ledger of New Jersey.) | https://bradleybambarger.medium.com/ry-cooder-and-his-street-corner-symphony-of-old-l-a-e1a9679590b5 | ['Bradley Bambarger'] | 2020-12-13 18:02:34.887000+00:00 | ['Ry Cooder', 'Los Angeles', 'Chavez Ravine', 'Latin Music', 'Music'] |
How I Overwhelmed the New MacBook Air With the Apple M1 Chip. | By Thomas Underwood
Stressing out the new MacBook Air
I drove 3 hours round trip to get the new MacBook Air the day after I ordered it. It was either that or wait 3–4 weeks for delivery. I don’t know about you, but I was too excited to start playing with this new laptop and putting it through its paces. I have had several MacBooks over the years. Being a contractor (programmer), I have worked on fully loaded versions of the 2017, 2018, and most recently the 2019 MacBook Pro with an i9 and 32GB of RAM.
Like many people, I have been intrigued by the performance claims and glowing reviews for the new Apple M1 chip along with the outlandish battery life. That got elevated to new levels of hype watching different YouTube videos of the MacBook Air being able to drive five 1080p and 4K monitors on the new Apple integrated graphics built into the M1 SoC (system on a chip), which includes the processor, RAM, GPU, and other Apple hardware features all in one package.
The hype is real
When I strolled up to the window at the Apple store the salesperson was excited to give me the new MacBook Air (16GB with 1TB SSD) and tell me about how incredible the performance was. He said that he and the other employees had been throwing 4K video editing content and other stress tests at the MacBook for hours and hadn’t been able to crash it or cause the computer to become unresponsive. I took that as a challenge to see if I could max out the new M1 SoC with my day-to-day work-flow. I wasn’t surprised when I was able to do exactly that.
Recently, in another one of my posts, I decided to try and build a high-performance image hosting service. The basic premise is that I need to be able to load thousands of photos (thumbnails) almost instantly. I don’t want to wait for buffering, loading screens, or downloads. See my article below if you are interested in reading more about that project.
One of the tasks that I mention in my previous article is that I have to process images and reduce their size by scaling down each one into a thumbnail of 250x250 instead of loading the full resolution image. This reduces each image to roughly 1/10th of its original size and allows me to load thousands of image files seemingly instantly, without loading screens. I am able to do this by writing this lazy Node.js code.
Lazy Create Thumbnails Code
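(The embedded gist did not survive in this text version, so here is a minimal sketch of what such a script might look like. The sharp library, the folder paths, and the function name are my assumptions, not necessarily the author’s exact code.)

const fs = require('fs');
const path = require('path');
const sharp = require('sharp'); // assumed image library; the original may have used another

const inputDir = './images';
const outputDir = './thumbnails';

// "Lazy" on purpose: kick off every resize at once and let Node juggle
// thousands of in-flight operations instead of limiting concurrency.
async function createThumbnails() {
  const files = fs.readdirSync(inputDir);
  const jobs = files.map((file) =>
    sharp(path.join(inputDir, file))
      .resize(250, 250)
      .toFile(path.join(outputDir, file))
  );
  await Promise.all(jobs);
  console.log(`Processed ${jobs.length} images`);
}

createThumbnails().catch(console.error);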
Any astute Node.js developer will probably say “that isn’t a very efficient script” — and you would be right. That is the point of testing out the performance of the new 8-core M1 SoC and 16 GB of RAM. I wrote something that I would generally write in a hurry to batch-process large amounts of data within a reasonable amount of time. If the code isn’t performant enough for what I am doing, then I begin to optimize it for my needs. It reminded me of one of the old adages one of my professors would always repeat: “Who cares if you write a poorly performing program? In 2 years it will run twice as fast when the next generation of processors comes out.” He is definitely right when it comes to server-side code, but in the front-end space, we have to be as efficient as possible to prevent terrible user experiences like unresponsive pages.
For this stress test, I was processing about 7,000 images at once asynchronously with Node.js, converting them to 250x250 and writing them to the SSD.
A batch of images to process into thumbnails
You will see below that the new MacBook did fairly well. It made it to about 5,000 images before the performance began to drop exponentially. | https://medium.com/swlh/how-i-overwhelmed-the-new-macbook-air-with-the-apple-m1-chip-d8cf011729a0 | ['Thomas Underwood'] | 2020-12-23 12:06:08.865000+00:00 | ['Front End Development', 'Technology', 'Nodejs', 'Programming', 'Apple'] |
ESLint for Noobs | ESLint Rules 😱
If you explore the content of .eslintrc.json, you will surely see a field called rules, and inside it some more fields. The inner fields are the rules — more specifically, rule names (e.g. no-unused-vars) paired with rule options, i.e. what ESLint is supposed to do if that rule is breached (e.g. "error", meaning report an error).
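For orientation, a minimal .eslintrc.json with a rules field might look like the sketch below (the surrounding fields are illustrative, not taken from the original post):

{
  "env": { "browser": true, "es2021": true },
  "extends": "eslint:recommended",
  "rules": {
    "no-unused-vars": "error",
    "quotes": ["error", "double"]
  }
}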
Rules come in various combinations of settings. As an example,
"quotes": ["error", "double"]
Here, the rule translates to: if the quotes are not double quotes (“”), report an error. Again, let’s look at another variation of the same example —
"quotes": ["warn", "double"]
Meaning, if the quotes are not double quotes (“”), report it as a warning rather than an error.
It is possible to disable the rule with off as shown below.
// Reports Error
"quotes": ["error", "double"]
// OR
"quotes": [2, "double"]

// Reports Warning
"quotes": ["warn", "double"]
// OR
"quotes": [1, "double"]

// Reports Nothing i.e. off
"quotes": ["off", "double"]
// OR
"quotes": [0, "double"]
To summarize,
"off" or 0 - turn the rule off
or - turn the rule off "warn" or 1 - turn the rule on as a warning (doesn't affect exit code)
or - turn the rule on as a warning (doesn't affect exit code) "error" or 2 - turn the rule on as an error (exit code will be 1)
Now that you understand how to read these simple rules, explore an example by yourself—
use-isnan: ["error", {"enforceForSwitchCase": true}] | https://medium.com/altdel-in/eslint-for-noobs-98dc332a472b | [] | 2020-11-30 10:24:32.874000+00:00 | ['Beginners Guide', 'Programming', 'JavaScript', 'Typescript', 'Eslint Config'] |
Denialism is A Strategy. It is a Conscious Choice in… | It’s Not About Freedom. It’s About Power and Control.
Source: Amazon
Revised Nov. 11, 2020.
Widespread conspiracy theories. Disbelief in fact and scientific reasoning. Suspicion of experts. Targeted suppression of marginalized individuals and groups. Gaslighting on a mass scale.
These are the themes of our times. These are the actions of power and control abuse.
From systemic racism, to COVID-19, to vaccines, to climate change, to natural selection, and to even an ellipsoid Earth, deniers distort, subvert, and obfuscate both honest debate and forward progress on these issues and more. They even deny the existential threats to their nations.
A common counterargument to all these obtuse, obfuscating, and false positions is that its messengers are deluded, cultist, tribal, and furthermore bamboozled.
What if I posited something else to you? What if these folks who disbelieve in empirically supported facts and theories weren’t joking? That they weren’t tricked into it either? Consider instead they have staked out these positions because their disbelief is dependent on the pertinent factor being untrue.
Denial is powerful. It’s a fundamental coping skill involved with trauma and grief. Denial lets the aggrieved sort out their routines in preparation for the coming stages. In denial, people realize the shock, but it hasn’t landed yet.
As psychologist Dr. Heather Neill states, “Denial is a defense mechanism, and it’s an instant, non-reflective process” (2020).
Deniers though are using rationalizations — like the examples described in this article — to justify power and control behaviors. While denial is a stage of grief, denialism is different. It is psychological abuse. Perpetrated en masse, it is a power and control strategy.
It’s common to label deniers as fools. I propose a person isn’t fooled into taking outrageous stands like the Sandy Hook shootings being faked or the September 11th attacks being planned by the US government. One is coached into believing them, for sure, but grooming is not the same as tricking.
Grooming is a process where you convince someone slowly, so in a while you don’t have to trick them at all. They’re ready to accept the lie willingly. There are numerous examples of grooming out there, but when it comes to neurotypical adults with full, legal agency? Ones spreading baldfaced lies? Ones making conspiratorial charges toward their enemies? Ones claiming there are “illegal” and “legal” votes — and the legal ones just happen to only count for their leader?
There has to be a significant position which is aided by their disbelief. That position is power and control.
COVID-19 and the Sinister Dangers of Denialism
Consider a typical COVID-19 denial stance. Usually it goes like this:
“COVID-19 is either…”
a) a hoax
b) a bioweapon created by China’s government or a Chinese company
c) a milder disease, like seasonal flu
d) any of the above
The denier’s position is staked on the predicate that COVID-19 has to be any or all of these options. They have been groomed into taking one of these or option “d”.
Deniers are convinced that the economy is being destroyed by lockdowns. Their freedom is infringed by wearing masks. That social distancing is mass social control. These positions are a false dilemma. They were groomed to accept them by powerful & political entities who need them to believe it.
The powerful need people scared for their homes and food supply, so they will want to reopen businesses and go back to work. They need people to care about this more than the probability of catching COVID-19. Therefore a COVID-19 denier’s position rests on the disease being less contagious, less dangerous, or outright fake.
Consider how particularly insidious the essential worker status is in this situation:
There is no federal-level pandemic planning team, as of Nov. 2020, to study and qualify worker essentiality. Pres. Trump dissolved that team.
There were millions working at their job sites, putting their health at risk under the auspices of essentiality.
After lockdowns ended, most people returned to work and school.
Exposure, spread, and infection rates have topped 100,000 new cases per day, as of Nov. 2020.
This sinister abuse of power has allowed corporations to force workers to the office. It has meant service workers cutting hair, serving food, and pouring coffee for unmasked customers, inside facilities incapable of supporting social distancing postures.
It has meant hospice, nursing home, and in-home caregivers coming into regular contact with the immunocompromised, the disabled, and elderly, the ones most susceptible to COVID-19 infections and death.
“Essential” now includes school teachers, who are responsible for the students pouring into schools across the country. Most school buildings were never planned for social distancing. Often schools are older structures with poor ventilation, especially in poorer districts.
Schools have scrambled late into rolling out remote learning, because of neglectful and absentee leadership — all of whom are empowered by their toxic relationship with deniers.
Further locking deniers into their positions is the belief that government is inherently tyrannical and dysfunctional, and thus incapable of helping people. That the only way to beat this disease is to exercise personal choice and individual responsibility. This take exempts the powerful from culpability for their shortfalls, ignorance, and greed. The powerful run government. They need deniers to believe the government is broken by nature, and that is an untreatable character defect.
Louie Gohmert, a US Congress Representative, caught COVID-19 in late July 2020, and he hasn’t changed his positions on the pandemic. He has alluded to COVID-19 conspiracy theories and used xenophobic rhetoric about the disease. On August 5, 2020, a Georgia school student caught COVID-19 the day after schools reopened. Georgia Governor Brian Kemp has not reversed his position on masking orders. He sued Atlanta Mayor Keisha Lance Bottoms for implementing a citywide mandate.
South Dakota Governor Kristi Noem and North Dakota Governor Doug Burgum have refused to implement mask orders or other measures. Nor has Nebraska.
Last summer, Gov. Pete Ricketts had threatened to withhold CARES Act funds from any city or county which imposed one. He has threatened to sue Douglas County, whose officials are considering a local mask order. Omaha has issued one and renewed it through February 2021. As of November, Ricketts has still refused to implement a statewide mask order — even after coming into contact with an infected person.
What does this mean? It means…
Deniers are validated by terrible leadership. They can see the actions of these politicians, argue the government is useless, and only individuals can counteract the pandemic. Their resolution then is to force people back to work. Of course, there have been workers active this whole time.
Deniers are unmoved by empiricism. It is not enough to provide qualified data and evidence. They will cherry pick only the data which fits their narrative, while rejecting the rest.
Deniers are not persuaded by empathy. A denier reserves none for anyone who doesn’t agree with them. They are suspicious of others as a rule. It is a fool’s errand to entreat, cajole, or entice a denier. They already see your attempts as patronizing transactions meant to demean them.
Recovery empowers denialism. It gives deniers a reason to state the disease isn’t that bad, while minimizing 240,000+ American deaths (Figure dates from Nov. 11, 2020).
The denier’s position is dependent on people worrying about their personal needs more than infection risk and community good. They want people to believe government is inherently inept. The sinister outcome: mass confusion and fear. Denialism is a power and control strategy geared to this outcome.
If people believe their government is broken and corrupt not as a matter of circumstance but by nature, then they don’t question why government is failing them. They don’t ask why there aren’t better options than going to work and sending their children to school, despite the observable and measurable actions taken in other countries which produced better results than in the United States.
Why Denialism Aides Power and Control
Why do deniers hold onto dangerous positions like these, despite being told by qualified information sources, direct and personal observation, and even personal experience with calamity itself?
On a personal level — i.e. people you know — the locus of their very identity depends on reality being untrue. It’s a self-preservation move with destructive results but they refuse to budge. They feel powerless and denialism gives them the illusion of power and control.
Deniers in your family or local community have woven their denialism into their identity and worldview. They’ve shaped their community and relationships with it. This makes denialism a strategy to protect all of these. It’s a strategy to gate-keep from outsiders, gaslight the insiders, and attack intruders.
On a systemic level, denialism is used by authoritarians to gaslight and demoralize people. This lets them swindle, dismantle, and defraud communities and nations. Systemic rot requires systemic reform and resistance. It can’t be accomplished by individuals or small groups alone.
Redemption from Denialism is Hard
For someone to cease denialism, they have to be willing to change. Deniers have been abused themselves, just like in any other dysfunctional system or relationship. Self-awareness and courage are needed to develop the will to leave an abusive situation. Leaving denialism is a process, not an act. It’s akin to leaving a cult or an abusive spouse.
A human being’s basic needs involve comfort and shelter within a community or family. Losing the known is terrifying. Entering the unknown requires support, empathy, and strength from a new network.
Like with denial in grief, a denier has to open up and confront their changed world. They have to be ready to move to the next stage. | https://medium.com/swlh/denial-is-a-strategy-c6f574fe3a87 | ['Jack Rainier Pryor'] | 2020-11-15 14:55:00.612000+00:00 | ['Covid 19', 'Mental Health', 'Politics', 'Climate Change', 'Self Care'] |
‘This Is America’ Brilliantly Wraps History and Sociology in the Garb of Pop Culture | While watching the popular music video for Childish Gambino’s new track “This is America”, I was completely captivated by the immaculately crafted cinematography, choreography, composition, and other aesthetic aspects of director Hiro Murai’s production. The first three minutes are comprised of a continuous shot, which moves throughout a vast warehouse, keeping our attention on the star and narrator — a shirtless and constantly contorting and gyrating Donald Glover, with an entourage of sometimes synchronized dancers behind him.
It was for this reason — the presence of pleasant visual elements — that I found myself completely caught off guard by the two acts of extreme violence that were later depicted. The song and dance continued, and I did not have a chance to react to what I had witnessed. But this technique was deliberately employed to enhance a specific message; the cycle of American violence occurs so quickly (and within the context of sound-bite and meme culture), that we barely have enough time to process a tragedy before the next one strikes.
The opening scene features a seated Black man playing an acoustic guitar accompanied by the mellow yet cheerful vocal harmonies, “We just wanna party, party just for you. We just want the money, money just for you.” The tone dramatically shifts when our star dances into view, casually pulls out a pistol, holds it to a hooded prisoner’s head (the man who was previously playing acoustic guitar), and brutally executes him. The first deep, dark, bass-driven verse is introduced as Glover proclaims, “This is America,” mere seconds after pulling the trigger.
The gun is then delicately handed to man who protects it with a red cloth, symbolizing the fact that, in American discourse and policy alike, firearms themselves are often more highly regarded than actual human lives. Another important message in the first scene is the very specific posture Glover assumes before shooting the hooded man. He is referencing a prominent historical Jim Crow poster by imitating the pose of the man in the illustration (see image below). Due to the additional juxtaposition of gun violence and prison, this could be a nod to Michelle Alexander’s groundbreaking book “The New Jim Crow: Mass Incarceration in the Age of Colorblindness.”
The hooded prisoner, which is an unmistakable reference to the gut-wrenching images of Abu Ghraib, represents American violence abroad. But the execution itself represents the scourge of racist police violence domestically. This scene may also contain symbolism regarding America’s barbaric practice of capital punishment (the U.S. is one of the last holdouts in the developed world).
The second act of violence occurs when Glover’s character enters another room where an all-black gospel choir is singing the chorus. Our star is handed what appears to be a fully automatic AK-47 and riddles the choir with bullets as they immediately fall to the ground. This is a clear allusion to white supremacist Dylann Roof, who, in a disgusting act of domestic terrorism, murdered nine Black Americans during a church service in 2015. At this point, the school children who accompany Glover call to mind our nation’s epidemic of horrific mass shootings, many of which have taken place in schools.
And all the while, during this shocking violence and the chaos that ensues, Glover and his crew continue their elaborate dance routine, as if to distract the audience from the deeply unsettling aspects of the unfolding scene.
There is a distinct possibility that Glover is including an element of self-deprecation, or hyper self-awareness, as he has personally participated in the vast spectacle of American consumer culture (through his career in television, film, music, and comedy) which distracts from more urgent social issues.
Within the context of the American racial landscape, this notion calls to mind a scene from the second season of the Netflix series Dear White People, which I recently binge-watched during the two days following its release. To avoid spoilers, I’ll be as vague as possible: Two characters discuss the grotesque history of minstrel shows, and how, even in modern times, the work of Black Americans can be exploited for the purpose of entertaining white audiences. Based on the reference to the Jim Crow minstrel poster juxtaposed with lines like, “I’m so pretty,” and “Watch me move,” it’s possible that Glover is expressing a similar sentiment here. | https://medium.com/an-injustice/an-art-school-inspired-critique-of-childish-gambinos-this-is-america-c46e02d0eef5 | ['Matthew John'] | 2020-06-19 20:08:53.463000+00:00 | ['Politics', 'Race', 'Childish Gambino', 'BlackLivesMatter', 'Music'] |
On the Verge | The verge and foliot escapement. The first mechanical clocks, building on the older principle of water flow for energy, employed a solid weight to yield the computation of time. If we just tied a weight and let it drop, we would not have to wait long, and we would be able to measure time only in seconds. So the trick for a mechanical clock without water, but instead using a weight, is to delay the time of the weight drop. This was done with an escapement. The verge is the vertical post and the foliot, the horizontally rotating bar. This animation (which can be viewed by clicking on the above image) is from Peter Ceperley who has a good explanation of the gear train powered via the escapement control. Think of this escapement is the medieval equivalent of a square wave — a mechanism to achieve an oscillating signal. | https://medium.com/creative-automata/on-the-verge-943d848bf998 | ['Paul Fishwick'] | 2016-12-20 17:44:17.523000+00:00 | ['Engineering', 'Kinetic', 'Machine'] |
10 Java Articles You should Read in 2021 | 10 Java Articles You should Read in 2021
Best stories of Javarevisited publication
Photo by Joshua Earle on Unsplash
Hello guys, another month has passed by and we are ready with another edition of the best of Javarevisited. The growth is good, with more readers and authors joining Javarevisited every day. If you are also a Java author or write on Medium, you can join the Javarevisited publication to give your story more exposure.
Anyway, here are the most viewed articles of Javarevisited last month.
This was the best and most viewed article of this month on Javarevisited. It wasn’t like last month’s viral article, which got 147K views, but this one got around 12.5K views.
The list contains some of the recent worth-reading books for Java developers on key skills like Spring Boot, best practices, and recent Java development. If you haven't read it yet, I highly recommend reading it now. | https://medium.com/javarevisited/top-10-java-articles-you-should-read-aug-2020-3f87a543f478 | [] | 2020-12-12 09:37:34.989000+00:00 | ['Articles', 'Coding', 'Javarevisited', 'Java', 'Programming'] |
Insertion-Sort — Everything you need to know! | Hello reader! If you are here, you’re probably a computer science student or a computer science enthusiastic (or not…). Either way, this article was made for you.
The study of sorting algorithms is one of the most fundamental areas of computer science. There are many sorting algorithms out there and they all perform in different ways. So, I decided to create a series where I’ll cover the basics of the most famous sorting algorithms.
The first one of this series will be Insertion-sort, a pretty straightforward and easy to understand algorithm. Shall we begin?
Insertion-sort
Intuition
Let A be an array of numbers. Let n be the number of elements of A. We will iterate over A starting from the second index until n. Let j be the current index of the iteration.
The main idea of the algorithm is to ensure that A[1]…A[j - 1] is always sorted. So, if necessary, each element A[j] will be inserted into the correct position of the sequence A[1]…A[j - 1] by shifting the elements of this sequence to the right until the element A[j] is in the right spot. This ensures that in the next step of the iteration, the sequence A[1]…A[j - 1] remains sorted. Note that, in the first iteration, the sequence A[1]…A[j - 1] = A[1].
Consequently, in the last iteration, j = n and the sequence A[1]…A[n - 1] is sorted. So, after A[n] is placed in the correct position of the sequence, the array A is fully sorted.
Pseudocode
Here’s the algorithm in pseudocode:
Code image created in carbon.now.sh
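(The code image is not preserved in this text version. Reconstructed from the line-by-line explanation below, the pseudocode presumably reads:)

1  for j = 2 to n
2      key = A[j]
3      i = j - 1
4      while i > 0 and A[i] > key
5          A[i + 1] = A[i]
6          i = i - 1
7      A[i + 1] = key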
Note that, in our pseudocode, array indexing starts from 1. So, the first element of a given array A is A[1].
Line by line explanation
In the first line of the code, we create a loop starting from the second index until the last index.
In the second line, we save the value of A[j] in a variable called key. This is necessary because we need this value through the whole iteration and it can be updated in line 5.
In the third line, we set the value of i to j - 1. (Remember that we want to shift each element of the sequence A[1]…A[j-1] to the right until we find a place to put A[j]. So, we will start with A[j - 1].)
In the fourth line, we have a while loop that goes all the way to line 6. If both conditions are satisfied, the A[j] element is in the wrong position and we need to shift the A[i] element to the right. The fifth line is responsible for shifting the elements of the sequence one position to the right, and after that, line 6 decrements i to change the current element of the sequence A[1]…A[j - 1].
If we get to the seventh line, it means that one of the conditions in the while loop stopped holding: i = 0 or A[i] ≤ key.
If i = 0, all elements of the sequence A[1]…A[j - 1] are greater than A[j] and were shifted to the right. Line 7 will put the element A[j] in the first position of the sequence.
If A[i] ≤ key, all elements of the sequence A[i + 1]…A[j - 1] are greater than A[j] and were shifted to the right. Line 7 will put the element A[j] right after A[i], keeping the sequence A[1]…A[j - 1] sorted in the next iteration.
Complexity
Now, let’s calculate the time complexity of Insertion-sort.
Well, we know that each line of the code will take some time to execute. Let’s call this time ci, where i is the number of the line. So c1 is the execution time of line 1, c2 is the execution time of line 2, etc…
To calculate the time complexity, we will calculate how many times each line i is executed and multiply it by ci. After that, we sum the values for all lines.
Let tj be the number of times the while loop is executed for a given j.
Code image created in carbon.now.sh
So, for a given n, line 1 is executed n times.
Lines 2, 3, 7 are executed (n - 1) times.
Line 4 is executed sum of tj, j = 2,3,..,n times.
Lines 5 and 6 are executed sum of (tj - 1), j = 2,3,..,n times.
After summing all these values, we get this formula below:
Equation 1
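(The equation image is lost; from the execution counts just listed, Equation 1 presumably reads, in LaTeX notation:)

T(n) = c_1 n + (c_2 + c_3 + c_7)(n - 1) + c_4 \sum_{j=2}^{n} t_j + (c_5 + c_6) \sum_{j=2}^{n} (t_j - 1)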
Best case
The best case is when the input array is already sorted. In this situation, the while condition is false on the first check and is evaluated only one time for each j, giving a total of n - 1 evaluations. Lines 5 and 6 are never executed.
So, in Equation 1 we replace the number of executions of line 4 with n - 1, and of lines 5 and 6 with 0.
And now, we can rewrite this equation as an + b, where a and b are constants.
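(The intermediate display is also lost; carrying out the substitution in Equation 1 gives:)

T(n) = (c_1 + c_2 + c_3 + c_4 + c_7) n - (c_2 + c_3 + c_4 + c_7) = an + b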
So, for the best case, Insertion-sort runs in Θ(n) (Linear time) for a given n.
Worst case
Let A be the input array. The worst case is when A is sorted in reverse order. In this situation, every element of the sequence A[1]…A[j-1] is going to be greater than the A[j] element. So, the while loop will run exactly j times, tj = j for each j.
When replacing tj with j in Equation 1, we have to solve these summations. Just remember that each is the sum of an arithmetic progression.
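(The missing display presumably showed the standard identities:)

\sum_{j=2}^{n} j = \frac{n(n+1)}{2} - 1 \qquad \text{and} \qquad \sum_{j=2}^{n} (j - 1) = \frac{n(n-1)}{2}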
And now we replace those values in equation 1 and rewrite the equation in the form of an² + bn + c where a,b,c are constants.
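(Again the display is lost; working out the substitution yields:)

T(n) = \frac{c_4 + c_5 + c_6}{2} n^2 + \left(c_1 + c_2 + c_3 + \frac{c_4}{2} - \frac{c_5 + c_6}{2} + c_7\right) n - (c_2 + c_3 + c_4 + c_7)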
So, for the worst case, Insertion-sort runs in Θ(n²)
Average case
Let A be the input array. On average, half of the elements of the sequence A[1]…A[j-1] are going to be smaller than A[j] and half are greater. Therefore, on average, we will check only half of the sequence A[1]…A[j-1]. So, tj is about j/2.
After replacing tj with j/2 in Equation 1, we have to solve these summations
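(The missing values, by the same arithmetic-progression identity, are:)

\sum_{j=2}^{n} \frac{j}{2} = \frac{1}{2}\left(\frac{n(n+1)}{2} - 1\right) \qquad \text{and} \qquad \sum_{j=2}^{n} \left(\frac{j}{2} - 1\right) = \frac{1}{2}\left(\frac{n(n+1)}{2} - 1\right) - (n - 1)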
And now we put those values in equation 1 and rewrite the equation in the form of an² + bn + c where a,b,c are constants.
So, for the average case, Insertion-sort runs in Θ(n²)
Implementation
Here’s the implementation of Insertion-sort in Java.
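(The embedded code is not preserved here. Based on the differences described below and the linked repository, the implementation presumably looks like this sketch:)

public class InsertionSort {
    public static void insertionSort(int[] A) {
        // Java arrays are 0-indexed, so the loop starts at the second
        // element (index 1), and n is simply A.length.
        for (int j = 1; j < A.length; j++) {
            int key = A[j];
            int i = j - 1;
            // Shift every element greater than key one position to the right.
            while (i >= 0 && A[i] > key) {
                A[i + 1] = A[i];
                i = i - 1;
            }
            A[i + 1] = key;
        }
    }
}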
There are few differences from the pseudocode version.
The first one is that, in this version, we don’t need to pass n as an input parameter. That’s because, in Java, we can access the number of elements of an array through the length attribute. So, for an array A, n = A.length.
The second one is that, in Java, array indexing starts from 0. So, we need to change the first part of the condition in the while loop to i ≥ 0, instead of the pseudocode version’s i > 0.
Here’s the link of the github repository in case you want to test it out!
Link: https://github.com/leosampsousa/Insertion-sort
Conclusion
I hope you have enjoyed this article and learned more about Insertion-sort! We will see in the next articles of this series that there are other sorting algorithms that perform much better!
This was the first article I wrote here, so feel free to give me some insights or to correct any mistake I could have made above.
I’ll see you in the next algorithm! | https://medium.com/techatsoma/insertion-sort-everything-you-need-to-know-3d762880c764 | ['Leonardo Sampaio De Sousa'] | 2020-12-15 12:21:50.128000+00:00 | ['Algorithms', 'Computer Science', 'Sorting Algorithms', 'Java', 'Programming'] |
Your Friday crypto dose, March 23 — Japan puts pressure on Binance | Japan orders Binance crypto exchange to close or face police action, bitcoin rally falters …
According to a report by financial news outlet Nikkei, Japan's Financial Services Agency (FSA) has ordered the closure… | https://medium.com/la-bulle-crypto/ta-dose-crypto-du-vendredi-23-mars-le-japon-met-la-pression-sur-binance-9dac09ef90 | [] | 2018-03-23 08:01:01.137000+00:00 | ['Google', 'Blockchain', 'Binance', 'Huawei', 'Bitcoin'] |
How to Make a Custom Screensaver for Mac OS X | Step 1: Project Setup
Screensavers for Mac OS X are developed with Xcode. The first thing you’ll need to do is create a new Xcode project with the category “Screen Saver.”
I named my project Pong, but you can name yours anything you’d like. As of Xcode 10.2.1, Xcode will generate some files: PongView.h , PongView.m , and Info.plist .
Since Objective-C isn’t the prettiest language and Swift is becoming more mainstream, we’re going to delete these auto-generated Objective-C files and create a new file, PongView.swift .
The structure of this Swift file is quite simple. You can copy-paste the template from below:
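(The embedded template is not preserved in this text version; a minimal sketch of what it contains, based on the description below, might be:)

import ScreenSaver

class PongView: ScreenSaverView {
    override init?(frame: NSRect, isPreview: Bool) {
        super.init(frame: frame, isPreview: isPreview)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: NSRect) {
        // Render content for each frame of the animation here.
    }

    override func animateOneFrame() {
        super.animateOneFrame()
        // Update the screensaver's "state" here.
        setNeedsDisplay(bounds)
    }
}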
We will use the function draw() to render content for each frame of the screensaver animation.
We will use the function animateOneFrame() to update the “state” of the screensaver every time the animation timer fires (this timer is created and handled automatically by the OS). It’s important to call setNeedsDisplay(bounds) at the end of this function so that the OS knows to re-draw the screen.
Important: Since we got rid of the Objective-C stuff and are using Swift, the project’s Info.plist file needs to be updated. Prefix the value for the key “Principal class” with the name of the Xcode project. For example, my Xcode project is called Pong, so I changed the value from PongView to Pong.PongView .
Step 2: Implement Logic
The state of our pong screensaver is going to be tracked with just a few simple variables:
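(The original snippet is gone; the property names below are my assumptions, declared inside PongView:)

private var ballPosition: CGPoint = .zero
private var ballVelocity: CGVector = .zero
private let ballRadius: CGFloat = 12
private let paddleY: CGFloat = 40       // height of the paddle above the bottom edge
private let paddleWidth: CGFloat = 120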
In the init() function, we’re going to configure the ball’s initial position and velocity. We set the initial position to the center of the screen and set the initial velocity to a random vector with magnitude 10 .
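(A sketch of that configuration; the random-angle construction is one plausible way to get a magnitude-10 vector, and the property names follow the assumptions above:)

override init?(frame: NSRect, isPreview: Bool) {
    super.init(frame: frame, isPreview: isPreview)
    // Start the ball at the center of the screen.
    ballPosition = CGPoint(x: frame.width / 2, y: frame.height / 2)
    // Pick a random direction and scale it to magnitude 10.
    let angle = CGFloat.random(in: 0 ..< 2 * .pi)
    ballVelocity = CGVector(dx: 10 * cos(angle), dy: 10 * sin(angle))
}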
Now we need some helper functions to determine when the ball makes contact with the edges of the screen and/or with the paddle. The following two functions will do the trick:
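(The two helpers are not preserved; here is a sketch under the variable names assumed above. ballHitPaddle() is the name the author uses; ballHitWall() is my guess for the other one:)

private func ballHitWall() -> Bool {
    // True when the ball touches the left, right, or top edge.
    return ballPosition.x - ballRadius <= 0
        || ballPosition.x + ballRadius >= bounds.width
        || ballPosition.y + ballRadius >= bounds.height
}

private func ballHitPaddle() -> Bool {
    // Not 100% robust, as noted below: since the paddle tracks the ball's
    // x-position, checking the vertical overlap alone is good enough here.
    return ballPosition.y - ballRadius <= paddleY
}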
The function ballHitPaddle() is not 100% robust for verifying contact between the ball and paddle, I know. But for our purposes, it’s good enough.
Now, we’re going to use these helper functions to update the “state” of our screensaver in animateOneFrame() .
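(The final snippet is not preserved either. Because the bounce direction depends on which edge was hit, this sketch refines the ballHitWall() check per edge:)

override func animateOneFrame() {
    super.animateOneFrame()
    // Bounce off the side walls by flipping dx, and off the top wall or
    // the paddle by flipping dy; a sketch, not collision-perfect.
    if ballPosition.x - ballRadius <= 0 || ballPosition.x + ballRadius >= bounds.width {
        ballVelocity.dx = -ballVelocity.dx
    }
    if ballPosition.y + ballRadius >= bounds.height || ballHitPaddle() {
        ballVelocity.dy = -ballVelocity.dy
    }
    ballPosition.x += ballVelocity.dx
    ballPosition.y += ballVelocity.dy
    setNeedsDisplay(bounds)
}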
The paddle is guaranteed to track the x-position of the ball. Therefore, it will always make contact with the ball before it bounces on the bottom of the screen (unless you set the ball’s velocity really high, in which case it could “jump through” the paddle). | https://medium.com/better-programming/how-to-make-a-custom-screensaver-for-mac-os-x-7e1650c13bd8 | ['Trevor Phillips'] | 2019-07-15 16:03:58.432000+00:00 | ['Software Development', 'Computer Science', 'Programming', 'Design'] |
The Perils of the Digital Paywalls | The Perils of the Digital Paywalls
Welcome to the era of information inequality
Photo by Misha Voguel from Pexels
“It’s a big question for democracy: The business model of news is changing, That’s a big challenge for the world.” — BuzzFeed CEO Jonah Peretti
The tremors from the meteoric rise of the internet giants have shaken up the news industry. Globally, a greater number of people are consuming news through social media platforms, cutting the streams of revenue that used to keep traditional newspapers in circulation.
To compensate for the lost revenue, papers are increasingly erecting digital paywalls. The online news that people used to get for free is paid for now. On one side, the benefits of subscription-based news are quite broad. Apart from helping them stay afloat, the news organizations can now focus on delivering fact-based news and not waste time chasing advertising money, which forced them to publish more clickbaity content. It looks to be the ideal model for the fourth pillar of democracy.
On the other hand, the digital paywall does what every other wall is meant to do. It restricts access. It keeps information accessible to only a privileged few. Many people who can’t pay for the news are deprived of factually correct and unbiased journalism. And just as the chasm between the rich and the poor grows wider with every passing day under capitalism, with the rise of the information age the chasm between the elites and the masses will grow even wider, courtesy of the paywalls.
You thought rising inequality was dangerous for society? Hold on and fasten your seatbelts. You are entering an era where not only is the rise in economic inequality more acute, exacerbated by the pandemic, but the rise in information inequality is even more extreme. And the ones who are already grappling with inequality are now feeding on fake news too. A recipe for disaster?
Informed citizens are happy citizens
“Knowledge is power. Information is power. The secreting or hoarding of knowledge or information may be an act of tyranny camouflaged as humility.” — Robin Morgan
Journalism is not just about delivering factually correct content. It can be called the fourth pillar of democracy, the Executive, Legislature, and Judiciary being the other three, as it keeps all the other institutions in check. But if we learn anything from history, it’s that the executive and sometimes even the judiciary can become the pillars on which not democracy but the ruling government rests.
Unbiased journalism thus becomes a hallmark of a progressive democratic nation. If a nation has an independent press, you know it’s having a functioning democracy. If we go through the World Happiness Index and the World Press Freedom Index, we notice that most of the countries that protect press freedom are the ones that are the happiest. On the contrary, countries, where the press is gagged, are the ones with the unhappiest people.
The United States and the United Kingdom, for instance, rank 45 and 35 respectively on the Freedom of Press Index and rank 18 and 13 on the Happiness Index. China and Russia rank 177 and 149 on the Press Freedom Index and 94 and 73 on the Happiness Index. Many studies including the one done at the University of Missouri have found that free and fair press has a large impact on the people’s happiness.
Now the important point to understand is that journalism inherently doesn’t have the power to keep the cathedrals of power in check. It keeps the citizens of a nation well informed about the various policies of the government, which in turn shapes public opinion. It’s the well informed public which is reading factual news that reins in the government if they feel the policies are detrimental to them.
The Information Age
Enter the Information Age, where traditional journalism is losing ground to misinformation, fake news, disinformation campaigns, political narratives, and what not. Content is flowing freely. Everyone is a source of news now. Anyone can be a journalist. Anyone can be a reporter. Anyone can be an editor.
But not everyone can be an unbiased journalist. Not everyone will know how to keep their own prejudices miles away from the story they are publishing. Not everyone will know the values and ethics inherent to journalism.
And advertisers, unfortunately, favor data over truth and facts. And it’s not their fault. They have to find their customers and sell their products and services. Social media companies give them enough data to pinpoint their prospects.
According to the Pew Research Center, advertising revenue for newspapers in the United States fell from $37.8 billion in 2008 to $14.3 billion in 2018, a 62% decline. During the same time, newsroom employment dropped by nearly half.
So now all the advertisement money that was encouraging quality journalism has veered towards social media platforms, leaving the traditional media houses high and dry.
So what can the news organizations do?
Build a wall
Newspapers worldwide are erecting digital paywalls to compensate for the lost revenue. We have metered paywalls, which news organizations such as The Economist, The New York Times, and The Washington Post have adopted. In this model, a few articles are free to read for everyone every month before the reader hits the paywall. This is the most popular subscription model in America, with around 62% of newspapers adopting it.
Then we have the hard paywall where non-subscribers only get a sneak peek of the content, and to read the entire article the reader must subscribe. The Times of London is one such organization that follows this approach.
Then comes the freemium model, where some content is free and value-add content is charged for.
At the other end of the spectrum, we have news organizations such as The Guardian which keeps its content free but readers can support the organization by contributing as less as $1.
The problem
The digital paywall does what every other paywall is meant to do: create divisions.
Restricting information to a privileged few
What if Tim Berners-Lee had patented the World Wide Web, and it was available only to those who could pay for it? Can you visualize the jarring division that it would have created and the advantage that the wealthy would have over the poor? Forget about the whole web. Just imagine if Wikipedia were behind a paywall.
Digital paywalls restrict the flow of information to the masses and in doing so leave them in a storm of information that lacks rigor and reeks propaganda.
News is not any other form of content. It’s a public good. Everyone has the right to free and fair investigative journalism to make an informed decision about their lives and about their representatives. By limiting access, we deny people their right to quality information.
Serving the privileged few
Furthermore, let’s dig deeper into the economic model that the major publications are saying will keep quality journalism alive. By erecting paywalls, the publications are basically separating the wheat from the chaff; the paying ones from the non-paying ones.
In turn, the advertising money will shift from free services to paid services. And the economic incentives will therefore nudge the news organizations to cover stories that the paying customer gets hooked on.
In other words, the news organizations will favor reports that the paying customer is concerned about and the issues of the average person will take a back seat.
Echo chambers for the privileged few
If you have subscribed to a news service, you are highly unlikely to subscribe to another. Research has shown that people tend to read information that confirms their biases. When the news is free, however, people will stumble on a unique perspective one way or the other.
If you are paying for the news, then you will lock yourself in an echo chamber. And as already mentioned, since advertisement money is still chasing your attention, and economic considerations weigh over the essence of journalism, there are chances that the news you are reading is curated to confirm your biases.
The Solution
The situation might look grim, but that does not mean that news organizations have exhausted all their options. There are still ways with which we can keep news free for everyone without the publication going kaput.
Publicly funded media houses
Publicly funded news media has its perils. In some places, the information that the public receives is only as good as the government’s intention of protecting democratic values. And many times the publicly owned news organizations have become the go-to channel for governments to spread their propaganda.
But that doesn’t mean that the model doesn’t work.
As reported in an article in The Washington Post, a non-profit, municipally owned newspaper was launched in 1912 in Los Angeles when the public was grappling with the fake news of the early 20th century — yes, fake news is not a 21st-century problem. They declared it the “people’s newspaper” and the “first municipal newspaper of the world.”
But people rejected the idea in 1913, ostensibly because of the misinformation campaign fanned by the local publishers.
As described in the article,
One post-mortem report described the paper as a “successful experiment” brought down by “active, determined opposition” from the city’s local business community.
Is the time ripe to experiment with such public models again?
The membership model
The Guardian membership model is the perfect example of how quality journalism can be accessible to everyone without each reader necessarily paying for it.
The well-to-do members can donate to the publication and keep it afloat and profitable. In return, these paying members get access to certain stories or services which the non-paying readers won’t get.
And this model works. For the financial year 2018/19, The Guardian reported a profit of £800,000, its first in over two decades and only three years into the membership model.
Revenue sharing partnerships with tech companies
Google recently decided to pay $1 billion to news publishers. In his blog post, Sundar Pichai, the Chief Executive of Google announced the partnerships with some leading publishers and further explained,
I have always valued quality journalism and believed that a vibrant news industry is critical to a functioning democratic society… It’s equally important to Google’s mission to organize the world’s information and make it universally accessible and useful… Alongside other companies, governments, and civic societies, we want to play our part by helping journalism in the 21st century not just survive, but thrive.
Social media companies are facing pressure from governments globally to pay for the content that gets published on their platforms. The Australian government recently drafted legislation to force Google and Facebook to pay for the news.
So, Facebook and Google are now developing platforms such as the Facebook Journalism Project and the Google News Initiative which will treat news publications as partners and therefore give them a slice of the revenue pie.
It might be the first step in a journey of thousand miles, but it definitely is in the right direction. And by partnering with news agencies, the social media companies can also deal with the scourge of fake news.
The Freemium model
Keeping most of the digital content free and charging for certain value-add content is another way of keeping authentic news accessible to everyone. This is the second most popular model in America, as pointed out in this report of the American Press Institute.
Most reputed news organizations have kept their coronavirus coverage free for everyone. Why? Because this information is a public good. If factual information about the virus doesn’t permeate society, misinformation can spread its wings and fan public hysteria.
Similarly, basic news reporting should be accessible to everyone. If it’s not, then fake news takes center stage.
Conclusion
We are in an era where news organizations are facing competition not only from other publications but also from social media platforms, bloggers, and other such content generators. Their revenue pie is shrinking every single day. And a digital paywall seems to be a way out of the doom and gloom.
But as discussed above, other revenue models can better serve people, with no one hitting the paywall.
Objective, well-investigated journalism is a public good. What the president did that jeopardized the role of the judiciary, how he did it, what its consequences are, how we can prevent it in the future… all of this information should be available to everyone and not only to a privileged few. The masses should be able to access all the arguments, all the judgments, all the public documents in a clean, understandable format.
They shouldn’t be reduced to a group of kids trying to get a sneak peek of the overall picture from a crack in the wall of a theater.
As said by Aaron Swartz, | https://medium.com/discourse/the-perils-of-the-digital-paywalls-e8e1606cad9c | ['Mehboob Khan'] | 2020-11-01 04:57:46.626000+00:00 | ['World', 'Society', 'News', 'Equality', 'Fake News'] |
Google TW Interview Story - Test Engineer, Android | One day I received an interview invitation from Google TW for Test Engineer, Android, so I started asking HR to help arrange everything that followed. First of all, I have to thank HR Jenny for very attentively and carefully explaining the upcoming interview process to me, and for providing many interview-related reference documents. I’m truly grateful; it made me feel very respected. Of course! An interview inevitably still means grinding practice problems, and I won’t say more about that!
Google TW interview process:
Recruiter Prescreen → Phone Interview → Onsite Interview ( 4–5 sessions ) → Hiring Committee Review → Offer Review → Offer Delivery (Yippee!)
To state the result up front: unfortunately, I was knocked out right at the second round (GG). Even so, I still feel the whole experience left me somewhat amazed and full of question marks.
The interview lasted about 45 minutes (over a Google Hangouts video call). After the video call connected, I greeted the interviewer first. She asked me to introduce myself, and I started describing my work experience, blah blah blah, five years of experience and so on: how I had set up automated testing before... how I tested... what kinds of products I had tested... what test tools I had written and what test frameworks I had built... etc. (I finished in about 3 minutes).
Then she pasted a problem directly into the Google Doc. First time writing code in a Google Doc~ XD. As follows:
Vampire number: positive integer v that can be factored into two integers x*y,
where base-10 digits of v are equal to the set of base-10 digits of x and y. These two factors are called the fangs.
Examples:
688 = 86 x 8
1260 = 21 x 60
1530 = 30 x 51
125460 = 204 x 615 = 246 x 510
12546 = 246 x 51
Please implement a method to check if it is a vampire number.
boolean isVampire(int x, int y) {
[fill here]
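// One possible fill — my own sketch, not the official answer: compare the
// digit multiset of x*y with the combined digits of x and y.
// char[] productDigits = String.valueOf(x * y).toCharArray();
// char[] fangDigits = (String.valueOf(x) + y).toCharArray();
// java.util.Arrays.sort(productDigits);
// java.util.Arrays.sort(fangDigits);
// return java.util.Arrays.equals(productDigits, fangDigits);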
} | https://medium.com/drunk-wis/google-tw-%E9%9D%A2%E8%A9%A6%E8%B6%A3-test-engineer-android-ee58b732bade | [] | 2019-07-28 16:38:46.553000+00:00 | ['Work', 'Google', 'Test Engineer', 'Interview', '面試心得'] |
Becoming a Monopoly Was Always Facebook’s Goal | Becoming a Monopoly Was Always Facebook’s Goal
‘Copy, acquire, and kill’
Photo: Chip Somodevilla/Getty Images
Wednesday’s filing of a major government antitrust suit against Facebook is a landmark in the internet’s history. We knew the suit was coming; we didn’t know it would call for a full-on breakup that would split off Instagram and WhatsApp from the parent company. You can read the Federal Trade Commission’s 53-page complaint here.
Some commentators were quick to question how the FTC and 46 state attorneys general could credibly claim Facebook’s 2012 Instagram acquisition and 2014 WhatsApp acquisition constituted monopolistic behavior, given that the deals withstood antitrust scrutiny at the time. Indeed, both purchases were mocked by many as frivolous overpays, and few foresaw Instagram or WhatsApp growing into the giants that they’ve become under Facebook’s ownership.
That Facebook saw something others didn’t may be a testament to the company’s farsightedness and business acumen. But just because certain regulators and pundits didn’t recognize what Mark Zuckerberg was up to at the time doesn’t mean the company is innocent of anticompetitive behavior.
On the contrary, we now have evidence of what I and many others argued from the start: that Zuckerberg’s goal was always to monopolize social networking. Internal Facebook emails published as part of this year’s Congressional antitrust investigation make clear that the acquisitions were primarily about neutralizing potential competitors, not improving Facebook’s products. They were part of a long-term strategy that has come to be known as “copy, acquire, and kill,” aimed at ensuring Facebook would be not just a social network but the social network, even as social networking became a massive, world-changing global industry.
Zuckerberg has said on multiple occasions that Bill Gates was his childhood hero — a tech titan known as much for his ruthless monopolization of desktop computing as his ingenuity and philanthropy. He reportedly used to shout “domination!” at the end of Facebook staff meetings. And his company has spent the past year frantically integrating Instagram, WhatsApp, and Messenger with Facebook in a fairly naked bid to complicate a breakup bid.
These anecdotes are not an indictment in themselves: There remains much to be proven if the FTC is to succeed in forcing a breakup. But they at least mean absolutely no one should be surprised that Facebook’s actions over the years are now being called monopolistic — least of all Zuckerberg or the company itself. | https://onezero.medium.com/becoming-a-monopoly-was-always-facebooks-goal-7d710d58fcb2 | ['Will Oremus'] | 2020-12-09 22:58:58.487000+00:00 | ['Facebook', 'Antitrust', 'Technology', 'Social Media'] |
How to Build a Reporting Dashboard using Dash and Plotly | A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
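(The embedded gist didn’t survive extraction; below is a minimal sketch of what such a radio button could look like — the id and option labels are my assumptions.)
import dash_core_components as dcc

dcc.RadioItems(
    id='radio-button',
    options=[
        {'label': 'Condensed table', 'value': 'condensed'},
        {'label': 'Complete table', 'value': 'complete'},
    ],
    value='condensed',
)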
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
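(Again, the gist itself is missing; here is a hedged sketch of the callback’s shape, reusing the Output('datatable-paid-search', 'columns') signature quoted below — the column lists are placeholders.)
from dash.dependencies import Input, Output

@app.callback(
    Output('datatable-paid-search', 'columns'),
    [Input('radio-button', 'value')])
def update_columns(view):
    # condensed_columns / complete_columns are assumed lists of column dicts
    if view == 'condensed':
        return condensed_columns
    return complete_columns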
This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns').
Conditionally Color-Code Different Data Table cells
One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value — red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues.
There is a lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
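(The gist is missing; a sketch with assumed names — the exact hiding mechanism varies by dash-table version.)
import dash_table

dash_table.DataTable(
    id='datatable-paid-search',
    columns=[
        {'name': 'Revenue YoY (%)', 'id': 'Revenue YoY (%)'},
        # doppelganger column: stays in the data, hidden from view
        {'name': 'Revenue_YoY_percent_conditional',
         'id': 'Revenue_YoY_percent_conditional'},
    ],
    hidden_columns=['Revenue_YoY_percent_conditional'],
    data=df.to_dict('records'),
)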
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
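(A sketch using the modern filter_query syntax — the 2019 original used an older filter string. The filter targets the hidden doppelganger column while the styling lands on the visible one.)
style_data_conditional=[
    {
        'if': {
            'filter_query': '{Revenue_YoY_percent_conditional} < 0',
            'column_id': 'Revenue YoY (%)',
        },
        'color': 'red',
    },
]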
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below. | https://towardsdatascience.com/how-to-build-a-complex-reporting-dashboard-using-dash-and-plotl-4f4257c18a7f | ['David Comfort'] | 2019-03-13 14:21:44.055000+00:00 | ['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science'] |
A Second Step into Feature Engineering: Feature Selection | Not all features are created equal — image by Zhe Chen
Feature Selection
There will always be some features that are less important with respect to a specific problem. Those irrelevant features need to be removed. Feature selection addresses this by automatically selecting the subset of features that is most useful to the problem.
Most of the time, reducing the number of input variables shrinks the computational cost of modeling, but sometimes it also improves the performance of the model.
Among the many feature selection methods, we’ll focus mainly on statistics-based ones. They involve evaluating the relationship between each input variable and the target variable using statistics. These methods are usually fast and effective; the only issue is that the appropriate statistical measures depend on the data types of both the input and output variables.
The classes in the sklearn.feature_selection module can be used for feature selection/dimensionality reduction on sample sets.
Whenever you want to go for a simple approach, there’s always a threshold involved. VarianceThreshold is a simple baseline approach to select features. It removes all features whose variance doesn't reach a certain threshold.
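For example (a minimal sketch mirroring the scikit-learn docs):
from sklearn.feature_selection import VarianceThreshold

X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
# drop boolean features that have the same value in more than 80% of samples
selector = VarianceThreshold(threshold=0.8 * (1 - 0.8))
X_reduced = selector.fit_transform(X)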
Univariate Feature Selection
Univariate feature selection examines each feature individually to determine the strength of the relationship of the feature with the response variable.
There are a few different options for univariate selection:
We can perform chi-squared (𝝌²) test to the samples to retrieve only the two best features:
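(The original snippet is missing; this is the standard scikit-learn example of the same idea.)
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)  # keeps the 2 best features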
We have different scoring functions for regression and classification; for classification there are chi2, f_classif, and mutual_info_classif, and for regression there are f_regression and mutual_info_regression.
Recursive Feature Elimination
Recursive Feature Elimination (RFE), as its name suggests, recursively removes features, builds a model using the remaining attributes, and calculates model accuracy. RFE is able to work out the combination of attributes that contribute to the prediction of the target variable.
Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features.
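A minimal sketch (the estimator choice here is arbitrary):
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
# recursively drop the weakest feature until 2 remain
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
selector.fit(X, y)
print(selector.support_)  # boolean mask of the selected features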
Feature Extraction (Bonus)
Feature extraction is very different from feature selection: the former consists in transforming arbitrary data, such as text or images, into numerical features usable for machine learning. The latter is a machine learning technique applied to those features.
We’ve decided to show you a standard technique from sklearn.
Loading Features from Dicts
The class DictVectorizer transforms lists of feature-value mappings to vectors.
In particular, it turns lists of mappings (dict-like objects) of feature names to feature values into Numpy arrays or scipy.sparse matrices for use with scikit-learn estimators.
While not particularly fast to process, Python’s dict has the advantages of being convenient to use, being sparse (absent features need not be stored) and storing feature names in addition to values.
DictVectorizer is also a useful representation transformation for training sequence classifiers in Natural Language Processing (NLP).
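A small sketch, adapted from the scikit-learn docs:
from sklearn.feature_extraction import DictVectorizer

measurements = [
    {'city': 'Dubai', 'temperature': 33.0},
    {'city': 'London', 'temperature': 12.0},
    {'city': 'San Francisco', 'temperature': 18.0},
]
vec = DictVectorizer()
X = vec.fit_transform(measurements).toarray()
# feature names: ['city=Dubai', 'city=London', 'city=San Francisco', 'temperature']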
Feature Hashing
Named one of the best hacks in Machine Learning, Feature Hashing is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. For this topic, sklearn’s documentation is exhaustive; you can find it at the link above.
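For completeness, a tiny sketch of the hasher in action:
from sklearn.feature_extraction import FeatureHasher

h = FeatureHasher(n_features=10)
D = [{'dog': 1, 'cat': 2, 'elephant': 4}, {'dog': 2, 'run': 5}]
f = h.transform(D)  # 2 x 10 scipy.sparse matrix; no vocabulary is stored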
Feature Construction
There’s no strict recipe for feature construction — I personally consider it 99% creativity. We’re gonna take a look at some use cases in the next lectures though.
So far, you should take a look at the Feature Extraction part of this marvellous notebook from Beluga, one of the best Competitions Grandmasters on Kaggle. | https://medium.com/mljcunito/a-second-step-into-feature-engineering-feature-selection-d597619e6e2b | ['Simone Azeglio'] | 2020-08-21 15:56:55.915000+00:00 | ['Data Analysis', 'Mljcunito', 'Python', 'Data Science', 'Machine Learning'] |
Using $auth module’s redirect in tandem with $router.push in Nuxt.js | Recently I came across the issue of using the auth module in Nuxt.js and invoking a $router.push in subsequent line of code in the same method. The conundrum began when the lines after the auth.loginWith method did not execute as intended since the page was redirected to the redirect URI.
It has been only a week in the Vue.js land, so I suppose this issue is something faced by many newbies.
Photo by Yeshi Kangrang on Unsplash
So, here’s where it all started:
I have a authenticate() function, whose body looks like:
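(The embedded snippet didn’t survive; here’s a plausible reconstruction — the form data and toast helpers are my assumptions — so the line references below make sense.)
async authenticate() {                                      // line 1
  await this.$auth.loginWith('local', { data: this.form })  // line 2
  this.$toast.show('Logging you in...')                     // line 3
  this.$toast.success('Success! Redirecting...')            // line 4
  this.$router.push('/dashboard')                           // line 5
}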
Now, notice that once line 2 gets invoked, execution is handed over to the auth middleware of Nuxt.js. Thus, the $router.push on line 5 is redundant.
Before we proceed any further, let’s take a look at where the auth configs are defined:
Go to: nuxt.config.js
Find the key auth
Notice, the home key.
Bingo!
This is exactly where we want to tweak.
Before we do any tweaking, let’s make it clear what we are trying to do and why:
We want to use all the good things offered by the auth module, other than the redirect thing.
The lines after $auth.loginWith need to execute before the view can be updated. In this case, a loading and a success notification need to be shown before the user is redirected to the dashboard upon successful login.
Now, the only thing left to do is disable the redirect in the auth options
home: false will do the job!
The auth object in nuxt.config.js would look like this now:
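(The original snippet is missing; based on the @nuxtjs/auth docs it would look roughly like this.)
// nuxt.config.js
auth: {
  redirect: {
    login: '/login',
    logout: '/',
    callback: '/login',
    home: false // disable the post-login redirect
  },
  strategies: {
    local: { /* ...your endpoints... */ }
  }
}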
Bam! We are done. | https://medium.com/consol/using-auth-modules-redirect-in-tandem-with-router-push-in-nuxt-js-d6d703e0a85a | ['Abrar Shariar'] | 2020-03-14 10:03:59.248000+00:00 | ['JavaScript', 'Front End Development', 'Vuejs', 'Nuxtjs', 'Config'] |
The Next Level of Data Visualization in Python | The Next Level of Data Visualization in Python
How to make great-looking, fully-interactive plots with a single line of Python
The sunk-cost fallacy is one of many harmful cognitive biases to which humans fall prey. It refers to our tendency to continue to devote time and resources to a lost cause because we have already spent — sunk — so much time in the pursuit. The sunk-cost fallacy applies to staying in bad jobs longer than we should, slaving away at a project even when it’s clear it won’t work, and yes, continuing to use a tedious, outdated plotting library — matplotlib — when more efficient, interactive, and better-looking alternatives exist.
Over the past few months, I’ve realized the only reason I use matplotlib is the hundreds of hours I’ve sunk into learning the convoluted syntax. This complication leads to hours of frustration on StackOverflow figuring out how to format dates or add a second y-axis. Fortunately, this is a great time for Python plotting, and after exploring the options, a clear winner — in terms of ease-of-use, documentation, and functionality — is the plotly Python library. In this article, we’ll dive right into plotly , learning how to make better plots in less time — often with one line of code.
All of the code for this article is available on GitHub. The charts are all interactive and can be viewed on NBViewer here.
Example of plotly figures (source)
Plotly Brief Overview
The plotly Python package is an open-source library built on plotly.js which in turn is built on d3.js . We’ll be using a wrapper on plotly called cufflinks designed to work with Pandas dataframes. So, our entire stack is cufflinks > plotly > plotly.js > d3.js which means we get the efficiency of coding in Python with the incredible interactive graphics capabilities of d3.
(Plotly itself is a graphics company with several products and open-source tools. The Python library is free to use, and we can make unlimited charts in offline mode plus up to 25 charts in online mode to share with the world.)
All the work in this article was done in a Jupyter Notebook with plotly + cufflinks running in offline mode. After installing plotly and cufflinks with pip install cufflinks plotly, import the following to run in Jupyter: | https://towardsdatascience.com/the-next-level-of-data-visualization-in-python-dd6e99039d5e | ['Will Koehrsen'] | 2019-01-24 17:59:54.173000+00:00 | ['Data Science', 'Data Visualization', 'Python', 'Education', 'Towards Data Science'] |
Roxting | From that day on, I developed an obsessive routine: I would come back from class, buy a Suntory beer, and spend the night in my dorm refreshing Aika’s YouTube post. I did that for three torturous days.
On the fourth day, she replied with “Somebody That I Used to Know” by Gotye.
I blinked at my laptop. Did she mean… that she knew me?
To cast my doubts aside, I posted Toby Keith’s “Do I Know You?” My full name and country were displayed on my YouTube account, so she had all the information she needed. I waited five minutes. Hit “Refresh.” Waited five minutes. Hit “Refresh.” The screen remained unchanged.
I brought my fist to the desk and leaned my chin into it. Perhaps the answer lay in Aika’s previous reply, not in a new one. I Googled the lyrics of “Somebody That I Used to Know” and examined the first verse.
Now and then I think of when we were together
Like when you said you felt so happy you could die
Told myself that you were right for me
But felt so lonely in your company
But that was love and it’s an ache I still remember
Yes, this was probably the Aika I used to be with. And no, we hadn’t grown apart because she’d gone to Australia; she’d gone to Australia because we’d grown apart. But why had that happened? I replayed the memory in my head.
Back then, besides studying, I’d had a personal project: Ourse, a mobile game where couples could buy a home, then add furniture, build rooms, and even raise a pet. It’d been designed to strengthen intimate relationships. Involving myself in it, however, weakened mine. I didn’t have time for going out, not even to meet Aika. So to spend more time with me, she’d visit my apartment to write her reports, prepare tea for me, and sleep alone in my bed.
The only form of communication we had was music (more specifically American and British rock). She would curate the playlist one day — Pink Floyd, The Beatles, Coldplay. I would the next — The Ramones, Nirvana, The Doors. Instead of talking, we let the songs echo our feelings.
One afternoon, she put on “Do You Still Love Me?” by The Dave Clark Five:
Do you still love me
It’s been such a long time
Since I held you in my arms
Do you still miss me
It’s been such a long time
I still need your lovin’ charms
That same night I played “Still Into You” by Paramore:
You felt the weight of the world
Fall off your shoulder
And to your favorite song
We sang along to the start of forever
And after all this time I’m still into you
Since Aika had neither complained nor lamented, I thought this dynamic had been enough for her. I was wrong. | https://humanparts.medium.com/roxting-6e7df1b0bd5b | ['Alexandro Chen'] | 2020-03-26 20:15:31.011000+00:00 | ['Short Story', 'Love', 'Fiction', 'Fiction Friday', 'Music'] |
Graph Databases, Neo4j, and Py2neo for the Absolute Beginner | Are you interested in graph theory and NoSQL database design? Graph databases may offer you the tools that you need. Whether it comes to pattern matching, recommender systems, fraud detection, social media, or more, graph databases are a great option. Neo4j is the most popular open-source graph database management system available to the public. In the following article, we’ll discuss graphs, relational databases, neo4j, and py2neo for the beginners. If you’ve taken an introductory python course [CSC 111 at Smith College], you’ve got all the tools you need to get started.
So what is a graph?
If you’ve taken discrete math [MTH 153 at Smith College], a lot of this will be familiar to you already. Imagine that you are the owner of three dogs named Rex, Fido, and Spot. If you want to visually show out the relationship between your dogs, you could first draw out everyone as bubbles on a sheet of paper. You, Rex, Fido, and Spot would each be a bubble on the paper. However, you want to show more than just those bubbles. You want to represent your relationships with your dogs. You can draw out those relationships as lines in between those bubbles. For example, you can have lines which point from any of the dogs to you, representing the “Pet Of” relationship. So if a line were drawn from Rex to you, it would show “Rex has the ‘Pet Of’ relationship with You”, which means that Rex is your pet. Lines can be directed or not. If they’re directed, the relationship goes one way, like the “Pet Of” relationship, but if they’re not directed, the relationship goes both ways, like, for example the “Loves” relationship between one of your pets and you.
You, Rex, Fido, and Spot
This image above is a very basic graph. You are your dogs are each nodes, entities which are stored in the database. The relationships between you and your pets are called edges. A graph is any structure made up of a series of nodes which may be connected by edges. They’re an incredibly useful structure in computer science. The internet, for example, can be considered a graph, with different web pages as nodes and hyperlinks between them as edges. The first algorithm behind Google search ranking worked by constructing a graph of the internet and then weighting different pages based on how many hyperlinks went to them. If you’ve ever wondered how Facebook suggests people you may know, one of the key parts of this algorithm is a graph. It adds everyone to a massive graph and suggests people based on your friends’ connections. Graphs have many applications, so there are many tools out there for using them.
A graph database management system is any tool which takes this kind of structure and plunks it into the computer. They also provide a query language to quickly search for information in it. The database itself is all of that info plunked on the computer. neo4j is a graph database management system (hereafter abbreviated to GDBMS), and the one we’re talking about in this tutorial.
How does this compare to relational databases?
If you’re coming to this article, you likely have some experience in relational database design. If not or if you’ve forgotten most of it, a quick refresher — relational databases are databases designed using relations, tables of information which look like Excel sheets. The names for your columns are called your schema, and for every row of your “Excel” sheet, you must have something in each column, even if it’s just a null. Different columns are attributes. For example, if you had a person named “Alice” in your relation, her name and her age would each be an attribute, meaning it would be a column on the table. In order to query this table for information, you can use “Standard Query Language”, often abbreviated to “SQL.” SQL is a query language, not a programming language, but it is incredibly robust and gives one the ability to search across multiple tables and for specific attributes. If you’re familiar with sets, relational database tables are each sets, and we use operations like the Cartesian product or set complement in order to work with multiple tables. Where graph databases are formed off graphs, relational databases are formed off of sets. There are many other concepts, from Boyce-Codd Normal Form to the basics of set theory, that I recommend you explore but that we won’t be covering in this tutorial.
Graph databases bear some similarities to relational databases, but are also incredibly different. One doesn’t need to adhere to a schema when using a graph database and they do not depend on set theory. They’re also much better at modeling relationships. With relational databases, a relationship between two items takes slightly more math to model than it does with a graph database. With many relationships and millions of data points, this extra computational cost exceeds a reasonable amount. Graph databases require significantly less math to model those same relationships.
Installing neo4j
Installing neo4j is fairly simple, but just in case, we’ll go over the process. Go to this page, click download, and fill out the form that it asks for. It’ll give you a key after this page. Copy and paste that key somewhere or leave this page open and then download neo4j. Once you’ve downloaded it, open it up to get the installation wizard and then you’ll be asked for the activation key. Just paste that back in and you’re ready to start using neo4j!
Using neo4j
This screen should be similar to what you see. In my projects section, I have two different projects, but you’ll see just one called “My Project.” I’m going to be working in a project called “Example Project” throughout the course of this tutorial. Now, it’s time to get started on the more interesting stuff. In order to make your first graph, all you have to do is click the “Add Graph” button. Choose to make a local graph. You’ll then be prompted to name the graph and to provide a password. I made the graph’s name “Graph” and the password “password” since we won’t be working with any particularly sensitive information.
Your screen should now look like this
Click the start button in order to get the database running. The graph box should then look like this.
From there, click the manage button. It will lead you to the management menu, where you should see a button that says browser. Press that button and it will open up a new window which should look like this.
This is where you can experiment with neo4j, write in code, and check visualizations. It’s an incredibly useful tool. You can type in commands in that top box, next to the dollar sign. I highly recommend the tutorial series that it recommends. Those are very helpful.
Creating a Graph
Now, we want to go through the process of making a graph. Let’s say that we have a group of friends named Alice, Bob, and Cam. Each of them will be a node connected by edges in our graph. The first thing we need to do is make nodes for each of them. We’ll start with Alice. The syntax for creating a node in neo4j goes as follows:
CREATE (n:label {property:"Value"})
If you want to have multiple labels or properties, you can do so like this:
CREATE (n:label:label2:label3 {property1:"Value1", property2:"Value2", property3:"Value3"})
So, if we’re interested in adding Alice first, we can create a node for her using the following syntax:
CREATE (n:person {name:"Alice"})
We can do the same thing for Bob and Cam. A quick tip for neo4j is that if you want to see all of the nodes in your graph, you can enter the following into the command line:
MATCH (n) RETURN (n)
Doing this and pressing enter, we can see Alice, Bob, and Cam as nodes. Neo4j provides tools for graph visualization, which we will be using throughout this guide. This graph is of the three friends. If you wish to close it, go to the top right corner of the box and press the x. Otherwise, you can pan around the screen, drag the nodes, and even go into full screen to explore the graph. This gets more interesting with more elements and with relationships.
Each friend is a node in this graph
Of course, this isn’t a very good graph because none of our nodes are connected to one another. We can rectify that right now by creating relationships. Let’s say that even though both Alice and Bob are friends with Cam, they’re best friends with one another. How would one model this relationship? Neo4j provides the tools to model it. Just like we can have nodes with labels and properties, we can have relationships between two nodes with labels and properties. The syntax is as follows:
MATCH (a:person), (b:person) WHERE a.name="Alice" AND b.name="Bob" CREATE (a)-[r:FRIEND {type:"best"}]->(b) RETURN r
This creates a best friend relationship between Alice and Bob. R is the variable representing the relationship between Alice and Bob, and similar to nodes, it has the label “FRIEND” and the properties within the curly brackets. We can make similar relationships between Alice and Cam or Bob and Cam while omitting the brackets with “type:’best’” in them in order to make normal friendships. An interesting thing to note about this relationship is that it is a directed one, from Alice to Bob, which is why there is an arrow pointing at the variable b. This process is useful, but a bit tedious, which is why we’ll go over doing it in py2neo later on.
Cypher
So far we’ve created nodes and we’ve created relationships, but we haven’t explored the language we’re using to do so. Cypher is an open-source graph query language used for neo4j, among other graph applications. It has many similarities to SQL, but the syntax is slightly different and the underlying mechanics are very different. Instead of querying across relations, we query across graphs. This means the mechanics must be much different.
Let’s say that we have our friend graph and we want to return the name of every single one of our friends. Saying something along the lines of the following will return every node in the graph:
MATCH (n) RETURN n
However, this gives us too much information. To get just their names, we can say the following:
MATCH (n) RETURN n.name
This is a pretty simple query to make in neo4j, but it’s a good one to start out learning the syntax. If you’re familiar with SQL, this equivalent to “SELECT name FROM graph;” If not, this is still pretty simple. By saying MATCH n, we take the variable n, and since we have nothing narrowing it down, n represents every element in the entire graph. RETURN n.name gives us back the name for n, and since n is every node, that means we get the name of everything in our graph.
If you want to delete everything from your graph instead, you can say the following:
MATCH (n) DETACH DELETE n
Now, if you’re looking to query a specific node, a new line is added to your query. Let’s say that we’re only looking to return the node for Alice in this graph. We can do as follows:
MATCH (a) WHERE a.name = "Alice" RETURN a
If you’re familiar with SQL, this where clause is almost exactly the same as it is in SQL, except for the fact that you use a variable. Where you would have previously said “SELECT * FROM graph WHERE name=”Alice””, you now must choose a variable to represent what you’re searching for.
There are many more complex ways to query in Cypher, for which I recommend you check out the following tutorials. Otherwise, the information above will cover basic searches across a graph database.
Py2neo
Cypher is a great language, but I’m personally a big fan of python, especially for automating the process of creating nodes and relationships. In order to do this, I use py2neo, a python package which works with neo4j. You can install it with just the command “pip install py2neo” from the command line. From there, open up your preferred python editor (I will be using python’s built in editor IDLE for simplicity’s sake), and we can get started! The most important thing to start is to have neo4j open and the graph that you want to use running.
The bolt port is circled in red
Go to the manage page for your graph (the one where you can open up your browser) in order to continue. Once you have it open, you’ll be able to see this screen. Note down the bolt port of your graph. We’ll be using this to access the graph from python. It’s also helpful to open the browser at this point, if you don’t already have it open. While we’ll be working primarily in python, seeing neo4j’s graph visualizations is only possible if you are using the browser. Now, open up a new python file and name it whatever you want. Next, we’re going to start importing things into our program.
These are the first three lines
Make a main function for your code. Now, the first thing to do is link to our neo4j graph. Since you have the bolt port ready, let’s start by connecting like so:
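(The screenshot of those first lines didn’t survive; a minimal sketch, assuming py2neo v4-style auth — your port and password will differ.)
from py2neo import Graph, Node, Relationship

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))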
Put in bolt://localhost:(the bolt port you have listed on your manage screen) as an argument for graph and we can now begin creating nodes and relationships. Say that we want to model our original graph. There’s you and your dogs Rex, Fido, and Spot. For the sake of this graph, we’ll say that your name is Alex. We want to have four different nodes then. Rex, Fido, Spot, and Alex. We also want two different kinds of nodes: human and dog. We’ll start by doing this in a more simple manner. First, we can start by instantiating the nodes. The syntax for doing so is as follows.
Variable_name = Node("Label in quotes", property1="value for property1")
So if we want to put you and your dogs into our database, we can write out the following.
Alex = Node("Person", name="Alex")
Rex = Node("Dog", name="Rex")
Fido = Node("Dog", name="Fido")
Spot = Node("Dog", name="Spot")
However, this just creates the nodes and doesn’t put them into the database. We want to actually push them into our database. We can do that using the following code:
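(The code screenshot is missing; it presumably looked like this.)
graph.create(Alex)
graph.create(Rex)
graph.create(Fido)
graph.create(Spot)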
We can use graph.create(variable_name) to put the node into the graph itself. Now, we want to draw out the relationships between all of the nodes. We can do this by similarly creating a relationship. The syntax for a relationship is as follows.
Relationship(node1, "TYPE OF RELATIONSHIP", node2)
So if we wanted to say that Alex loves Rex, we could say the following.
Relationship(Alex, "LOVES", Rex)
We can put that straight into the graph.create() code to get the following.
graph.create(Relationship(Alex, "LOVES", Rex))
This code puts the relationship into our graph. This code down below results in the following visualization:
The code
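(The screenshot didn’t survive; here’s a runnable reconstruction consistent with the steps above — port and credentials are assumptions.)
from py2neo import Graph, Node, Relationship

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

Alex = Node("Person", name="Alex")
Rex = Node("Dog", name="Rex")
Fido = Node("Dog", name="Fido")
Spot = Node("Dog", name="Spot")

for node in (Alex, Rex, Fido, Spot):
    graph.create(node)

# Alex loves all three dogs
for dog in (Rex, Fido, Spot):
    graph.create(Relationship(Alex, "LOVES", dog))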
Resulting visualization
As you can see, neo4j is a great tool for working with graphs and py2neo makes it even simpler to use. You can combine the python code above with any variety of different packages or tools to get an even more robust graph. There are also many more tools in py2neo that we haven’t explored. The documentation covers many more utilities, tools, and variations of the ideas that we’ve previously explored. Good luck using neo4j! | https://medium.com/smith-hcv/graph-databases-neo4j-and-py2neo-for-the-absolute-beginner-8989498ebe43 | ['Ananda Montoly'] | 2020-01-14 19:59:12.499000+00:00 | ['Neo4j', 'Python', 'Graph Database', 'Research', 'Data Visualization'] |
5 Reasons to Not Use Observables in C# | When I was first introduced to Observables in C#, they sounded pretty damn good. “They just model streams of data”, “It’s just data over time” and “It’s just the push equivalent of an IEnumerable”. After working with them for a little while, I don’t think they’re as good as I was told.
The statements above might be true. Observables might be simple from a bird’s-eye perspective. Unfortunately, simple doesn’t always mean easy, and there are some things that will end up biting you in the ass. Here are five of them.
The Framework Has No Consistent Name
When searching for documentation and solutions to problems with Observables, I always felt like I didn’t know what to search for. Previously I’ve always known the terms to use: django {your-problem-here}, serilog {how-to-do-x} — generally {framework} {problem-description}.
However with Observables I had a lot of problems finding people encountering similar issues, and I was never really sure what to put for the {framework} part of my queries.
I spent a little time trying to figure out what the right term is. Turns out — nobody seems to know.
On Stack Overflow the most used tag is system.reactive , but looking at official documentation and Stack Overflow questions, I’ve seen it referred to as:
Rx.NET
ReactiveX
Reactive Extensions (Rx)
System.Reactive
With all of these names that are subtly different — what am I supposed to search for?
We’ll call it Rx.Net in the rest of the post — at least it’s short
The Documentation Is Hard To Find
Finding documentation online for Rx.Net is pretty close to impossible. While figuring out the search terms to use is hard, finding comprehensive documentation is much, much harder.
Rx.Net is the .NET flavour of ReactiveX, which luckily seems to have quite a bit of documentation! Unfortunately, you pretty much have to know what you’re looking for already. If I want to read the documentation for how the C# Select works, I need to know that in ReactiveX, Select is called Map.
When I know that, in the ReactiveX documentation I can find a language-agnostic explanation of what a Map/Select does. There's also language-specific documentation for each of the different flavours, including Rx.Net!
Just kidding
There’s also no API reference online anywhere. I can find the documentation in the editor, so I know it’s been written — but it isn’t hosted anywhere. The closest we get is this API reference from MSDN. From 2011.
Observables Are Hard To Debug
Observables can be a pain to debug. Most debuggers aren’t particularly suited for tracing streams of data, and the stack traces you get are unimpressive. Let’s dive a little deeper into the stack traces. Take this piece of code which throws an error after a few integers have passed through the stream.
[Fact]
public void ObservableTest()
{
IObservable<int> observable = Observable.Range(0, 5)
.Select(i => i * 2)
.Do(i =>
{
if (i > 5)
{
throw new Exception("That's an illegally large number");
}
});
observable.Subscribe(
onNext: (i) => Console.WriteLine(i),
onError: (err) =>
{
throw err;
});
}
The humongous stack trace is as follows:
Error Message:
System.Exception : That's an illegally large number
Stack Trace:
at MyProject.Test.ObservableTest.<>c.<Foo1>b__0_3(Exception err) in /home/geewee/programming/MyProject.Test/ObservableTest.cs:line 30
at System.Reactive.AnonymousSafeObserver`1.OnError(Exception error) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\AnonymousSafeObserver.cs:line 62
at System.Reactive.Sink`1.ForwardOnError(Exception error) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Internal\Sink.cs:line 61
at System.Reactive.Linq.ObservableImpl.Do`1.OnNext._.OnNext(TSource value) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Linq\Observable\Do.cs:line 42
at System.Reactive.Sink`1.ForwardOnNext(TTarget value) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Internal\Sink.cs:line 50
at System.Reactive.Linq.ObservableImpl.Select`2.Selector._.OnNext(TSource value) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Linq\Observable\Select.cs:line 48
at System.Reactive.Sink`1.ForwardOnNext(TTarget value) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Internal\Sink.cs:line 50
at System.Reactive.Linq.ObservableImpl.RangeRecursive.RangeSink.LoopRec(IScheduler scheduler) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Linq\Observable\Range.cs:line 62
at System.Reactive.Linq.ObservableImpl.RangeRecursive.RangeSink.<>c.<LoopRec>b__6_0(IScheduler innerScheduler, RangeSink this) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Linq\Observable\Range.cs:line 62
at System.Reactive.Concurrency.ScheduledItem`2.InvokeCore() in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Concurrency\ScheduledItem.cs:line 208
at System.Reactive.Concurrency.CurrentThreadScheduler.Trampoline.Run(SchedulerQueue`1 queue) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Concurrency\CurrentThreadScheduler.cs:line 168
at System.Reactive.Concurrency.CurrentThreadScheduler.Schedule[TState](TState state, TimeSpan dueTime, Func`3 action) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Concurrency\CurrentThreadScheduler.cs:line 118
at System.Reactive.Concurrency.LocalScheduler.Schedule[TState](TState state, Func`3 action) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Concurrency\LocalScheduler.cs:line 32
at System.Reactive.Concurrency.Scheduler.ScheduleAction[TState](IScheduler scheduler, TState state, Action`1 action) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Concurrency\Scheduler.Simple.cs:line 61
at System.Reactive.Producer`2.SubscribeRaw(IObserver`1 observer, Boolean enableSafeguard) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Internal\Producer.cs:line 119
at System.Reactive.Producer`2.Subscribe(IObserver`1 observer) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Internal\Producer.cs:line 97
at System.ObservableExtensions.Subscribe[T](IObservable`1 source, Action`1 onNext, Action`1 onError) in D:\a\1\s\Rx.NET\Source\src\System.Reactive\Observable.Extensions.cs:line 95
at MyProject.Test.ObservableTest.ObservableTest() in /home/geewee/programming/OAI/MyProject.Test/ObservableTest.cs:line 25
Removing all the Rx.Net internal stuff, this is the only information in the stack trace we care about:
Error Message:
System.Exception : That's an illegally large number
Stack Trace:
at MyProject.Test.ObservableTest.<>c.<ObservableTest>b__0_3(Exception err) in /home/geewee/programming/MyProject.Test/ObservableTests.cs:line 30
at MyProject.Test.ObservableTest.ObservableTest() in /home/geewee/programming/OAI/MyProject.Test/ObservableTests.cs:line 25
We get the line number where we subscribed to the observable, and the function inside the chain where the error was thrown. I have no idea what transformations the data has gone through. If the stream has been dynamically constructed I might not even know what it looks like.
These issues aren’t unique to observables. Composing long LINQ expressions suffers from the same problem. When composing several different functions together, the stack traces very quickly stop being meaningful. Below is the complete stack trace of the Enumerable/LINQ version of the Observable code.
Error Message:
System.Exception : That's an illegally large number
Stack Trace:
at MyProject.Tests.ObservableTests.<>c.<TestLinq_StackTraces>b__2_1(Int32 i) in /home/geewee/programming/MyProject.Test/ObservableTests.cs:line 55
at System.Linq.Utilities.<>c__DisplayClass2_0`3.<CombineSelectors>b__0(TSource x)
at System.Linq.Enumerable.SelectRangeIterator`1.MoveNext()
at MyProject.Tests.ObservableTests.TestLinq_StackTraces() in /home/geewee/programming/MyProject.Test/ObservableTests.cs:line 60
It’s much cleaner, but it doesn’t give us any more information than the observable version.
The poor debugging and stack-traces are a perpetual thorn in my side when working with Observables. But if the same issues exist with long chains of Enumerable, why is that less of an issue? The short answer — I don’t know.
My best guess is that when using Observables there’s a strong tendency to keep everything as an Observable stream. The longer your streams are the harder debugging with a minuscule stack trace becomes.
It’s Hard To Use Scopes
A very common thing I need to do is attach some sort of context to a request or a series of events.
var id = "myCoolId";
using (LogContext.PushProperty("id", id))
{
// Every log statement in here will have the `id=myCoolId` attached
await DoSomething(id);
}
This is a very common usage pattern for logging — for example, in web apps, attaching a specific GUID to a request so you can later correlate all logs for that specific request.
Something like this isn’t possible when using Rx.Net, as its threading model doesn’t carry over the ExecutionContext. [1] [2]
This means that we can’t use something like an AsyncLocal or a ThreadLocal to keep context for a specific piece of data or stream. If we want the myCoolId to be attached to all of the logs, we'll need to pass it into every step of our observable chain, and manually pass it to every logging call. I know there's a value in being explicit, but this is a time where choosing Rx.Net locks you out of some pretty handy language features.
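Something like this, sketched (Process and the Serilog-style logger are stand-ins):
var id = "myCoolId";
observable
    .Select(value => (value, id)) // thread the context through every step...
    .Do(t => Log.Information("{Id}: {Value}", t.id, t.value)) // ...and hand it to every log call
    .Subscribe(t => Process(t.value));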
It Never Really Took Off
Looking at the timelines, it seems like Rx.Net peaked in the hype cycle a few years ago — if it ever really took off.
Google Trends for system.reactive compared to the Javascript version of the ReactiveX framework.
Comparing it in google searches to RxJS it seems much less widely used. Looking at the Stack Overflow trends reveals a little more nuanced picture however:
Stack Overflow trends for system.reactive compared to the Javascript and Java versions of the ReactiveX framework.
While still much less popular than RxJS and the Java ReactiveX version, Rx.Net does seem to predate their popularity by quite a few years. Microsoft was out early with the reactive paradigm but never really seemed to manage to make it take off.
Now while your technology choices shouldn’t be about how hyped something is — the popularity is always worth taking into account. Some old technologies are sturdy, battle-tested and well-documented. Then they don’t have to be shiny.
However, with some of the pitfalls, particularly around the documentation issues, Rx.Net feels neither robust nor shiny to me.
While this is a negative article, I’m not saying Rx.Net or the Reactive paradigm is always bad. Some of the issues are about the tooling and documentation. I mention that most debuggers aren’t suited for debugging streams, but this is a tooling problem and not inherent to the reactive paradigm. E.g. Jetbrains has an excellent Java Stream Debugger.
Observables seem like a very natural fit for some languages and domains — like reacting to user interactions in JavaScript, as the broad adoption of RxJS suggests. I’m just saying that it doesn’t feel like a particularly natural fit for C#, and if you have some data that can be modelled as either Enumerables or Observables, I would think twice about using Observables.
Did you enjoy this post? Please share it! | https://medium.com/swlh/5-reasons-to-not-use-observables-in-c-be757392a09e | ['Gustav Wengel'] | 2020-11-12 19:28:12.724000+00:00 | ['Observables', 'Dotnet', 'Csharp', 'Software Development', 'Reactive Programming'] |
Natural Language Processing Explained | Co-authored by a NLP dream team: Stella Liu (AI Product Manager), David Kearns (Product Manager in Data and AI), Nadine Handal (Data Scientist), Shubham Agarwal (AI Research Scientist), Michael Flores (AI Architect) and me (Machine Learning Developer)
Introduction
While AI has sometimes become synonymous with chatbots like Siri and Facebook Discovery bots, this is only scratching the surface of what the underlying technology — natural language processing (NLP) — can do for your product or your organization.
Google “natural language processing” though and you’ll get high level messaging on how ubiquitous it is or deep technical details on its implementation — co-reference resolution, neural network dependency parser and entity recognizer… anyone?
We (a data scientist, NLP consultant, NLP research scientist and AI product manager with a collective 50+ years of experience in this technology) decided that this is not helpful for people to grasp the potential of this technology and understand how to get started.
We collectively wrote this article for you — a tech-savvy but non-technical reader — to get the inside scoop on what this technology is, common success patterns that the team has seen that work in real-world situations, questions you can ask yourself to vet its relevance to the problem you’re facing and a few links so you can pursue deeper research on your own.
Our goal is to de-mystify NLP and provide a way for you to get started. Let us know if we succeeded in the comments.
What is Natural Language Processing?
Natural Language Processing (NLP) is an interdisciplinary field that spans techniques to process, understand, and analyze human language. NLP enables most of the current state-of-the-art AI applications by providing algorithms that convert the data humans understand (e.g. English, French, speech, tweets, e-mails etc.) into data computers understand and can operate upon. If you find yourself with any sort of written, typed, or spoken data in your business, you should be asking someone: “How can NLP help me turn this data into business value?”
Real world applications of NLP
If you’ve ever found yourself saying “Ok Google” or “Hey Siri”, you’ve experienced NLP without even knowing it. Many of the experiences we have today are supported in some way by NLP. When you request something from a virtual assistant like Siri, the system is using NLP to understand the meaning of your phrase so that it can respond appropriately. The four key patterns of NLP use cases that this article will explore are improving the consumer end user experience, democratizing access to big data in a large organization, speech recognition (translating speech to text) and sentiment analysis.
With NLP, we can make applications easier for users to interact with. For instance, imagine re-creating the experience of filling out a long, lengthy online form on the internet. No one enjoys tediously filling out all of the information that’s required from them. Instead of spending time trying to understand the information the online form needs, end users can be guided through the entire process of completing the form by a chatbot. NLP could also be used to auto-correct misspellings or misinformation. If desired, the entire online form can be replaced by a chatbot that engages the end user in conversation to extract the information needed.
Getting access to data in a large organization is notoriously time-consuming. Business analysts usually have to resort to creating an IT ticket to access the information they need or using SQL. Advances in NLP can help these business analysts go straight to the source and query the database in natural language — asking the computer questions like “What region performed best last quarter?” or “Compare store x and store y revenue in 2010 and 2012.” Salesforce’s WikiSQL initiative, Narrative Science and Kueri are all examples that give users the ability to ‘talk’ to their data.
NLP can also translate speech to text, which is the backbone of chatbots. In the business context, translating speech to text helps automate manual business processes. For instance, in healthcare, auditing clinical trial interviews used to be a very lengthy task — analysts had to listen to hours of clinical trial interviews in order to determine whether or not the person consented to the trial. With advances in NLP, these audio interviews can be converted to text and consent can be determined with a simple “Ctrl+F” operation.
Finally, sentiment analysis of text can help businesses tap into human-generated text such as news, blog posts, forums, and social media and understand the consumer in a deeper way. As consumers have started voicing their opinions on these public platforms, businesses can analyze this raw data to make sense of trends across their opinions and take action proactively.
Why NLP is better than traditional programming
To understand when an NLP solution is needed, we have to understand the end goal and the problem we’re trying to solve. While the definition of natural language processing is very broad — covering written and spoken language, the fact that you have data with human language doesn’t always mean you need to use NLP.
For example, if the goal is to identify a specific term “x”, a simple key word search operation can achieve this goal since this does not require deep understanding of what the text is about. On the other hand, if your application needs to assist lawyers in extracting the information they need from a huge collection of legal documents, then simple keyword search is not sufficient. Think of it this way — if you were to hire an assistant, they can’t do their research without proper training. They’d have to understand the legal language, the relationships and concepts that are relevant for their research subject. They will also need to know the domain in which they’re working in such as business law vs international law. NLP helps go beyond simple keyword search and by being more domain-specific. You can implement your own NLP application that can be trained specifically for your domain (like business law) to extract the relevant information from large amounts of textual data even more accurately and efficiently than simple keyword search.
Additionally, traditional programming techniques to organize data involve developing long list of rules (such as the key word search operation) that require hand-tuning by engineers. Thus, when you are dealing with fluctuating environments with changing data, traditional programming techniques quickly become inefficient and time-consuming to maintain. NLP techniques can adapt to fluctuating data, simplify code and perform better. | https://medium.com/ibm-watson/natural-language-processing-explained-76feb28dc16c | ['Ethan Koch'] | 2020-02-29 23:26:25.711000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'NLP', 'Technology', 'Naturallanguageprocessing'] |
Anime and the Familiarity of Future Funk | Anime and the Familiarity of Future Funk
Future funk’s reliance on anime points to its undying love affair with escapism.
Urusei Yatsura’s Lum Invader has become the unofficial poster girl for future funk.
Sometimes future funk feels like a secret. The genre, an outgrowth of the early 2010s vaporwave movement, which was in turn an outgrowth of 1980s R&B, is afraid of Google. Artists with names like Z.E.R.O. and Mélonade stock the virtual record shelves of Bandcamp. Beyond the visibility of the biggest artists like Yung Bae or Saint Pepsi — the latter of which changed his name following a complaint from the brown bubbly giant, further playing up the genre’s secret society-like nature — finding a specific future funk song can be infuriating and confusing.
A search for future funk on YouTube is likely to yield a smattering of disorganized playlists unified only by their technicolor album covers. Sift through a few songs and you’ll find missing tracks, banished into oblivion by YouTube’s content identification system. (Newsflash, many future funk samples aren’t cleared.) And yet, amidst the pitched up rehashes of Anri and Armenta’s greatest hits, future funk’s disorganization feels frighteningly familiar, from its collage-style anime art to its disco-infused baselines.
Lo-fi hip-hop operates on a similar wavelength. Though notably downtempo compared to future funk’s high-speed churn, the brand of beats often labeled “music to study to” is familiar both in its sounds and visuals. The never ending loop of beats bouncing around channels like Chilled Cow are accompanied by fuzzy frames of a non-descript anime girl bobbing to an unending deluge of tunes. Recently, the internet has co-opted the “lo-fi anime girl,” transporting her room from its usual, blackened rainy void to Romanian villages or the Mykonian villas of Greece.
The Romanian Version of “lo-fi” girl, created by u/marccolo.
Though their sonic outputs differ, both lo-fi hip-hop and future funk embrace distinctly Japanese art styles. Musically, they are ancestral tributes, paying homage to a litany of genre’s and artists through sampling and indirect inspiration.
Visually, characters like “lo-fi anime girl” or Sailor Moon have become comforting, even unifying, reminders of music with digital origins. And while future funk has developed largely outside of the geographic boundaries of Japan, it still manages to capture a collective of personal experiences that meld together thanks to the unwavering power of escapism.
Lum Invades Future Funk
Future funk doesn’t exist without Japan’s economic boom in the 1980s. A rise in the entertainment sector ripened the markets for visual exports. Ahead of the success of Hayao Miyazaki’s Studio Ghibli in the mid-80s, a number of other anime-production houses dove into Japan’s burgeoning animation industry. One show, Mamoru Oshii’s adaptation of Rumiko Takahashi’s manga Urusei Yatsura helped to lay claim to the sense of longing that fuels future funk.
The story follows Ataru, a hapless, lecherous high school student whose interstellar fate entangles him with the horned, bikini-wearing demon Lum Invader. A game of interspecies tag culminates in Lum’s affection toward Ataru — subsequently she moves in with him, laying the foundation for future romantic comedy anime in the process.
As influential as Urusei Yatsura was, the show’s greatest export is a two-second loop of Lum sandwiched between Ataru and Shinobu from the show’s intro, “Lum’s Love Song.”
The clip, popularized by the Artzie Music video Tanuki’s “BabyBabyNo夢,” which has been played 12 million times on YouTube and sits as the cover song for one of the most popular future funk playlists on the platform, plays into future funk’s prominent themes of escapism. The trio is surrounded by city lights and shooting stars as streaks of neon groove above a strawberry-vanilla dance floor. It’s simultaneously realistic and supernatural, the cityscape clashing with Lum’s horns protruding from her turquoise hair.
Like Sailor Moon, whose influence in future funk is at least in part owed to the genre’s musicians watching the show’s English broadcast in the 1990s as children, Lum has become a staple in future funk’s stylistic beginnings. She and her tiger-striped go-go boots grace the unofficial covers of countless uploads. Couple that with Lum’s accessibility — the entire Urusei Yatsura series is available (again, unofficially) on YouTube — and you have a recipe for an endless reserve of music video fodder.
But anime’s presence in future funk visuals reaches beyond the inherent nostalgia of childhood television. As an art form, anime borders on the edge of realism. Characters are grounded in reality — usually as high school students — while any number of supernatural phenomena befall them. That means it’s entirely possible for a cool, sunglasses-sporting character to be illuminated by an iridescent interstellar sunset without batting an eye.
When it’s accompanied by lyrics about the never-ending flow of time or flying away on an airplane of dreams, the anime imagery of future funk provides a whimsical backdrop for the genre’s club-friendly tunes. Even as record companies and platform algorithms get wise to future funk’s audio copyright infringement, anime’s influence has remained, spawning reimaginings and original character artwork that keep the good times rolling. | https://medium.com/the-gleaming-sword/anime-and-the-familiarity-of-future-funk-4ae4704ec3fe | ['Brandon Johnson'] | 2020-10-07 01:13:22.874000+00:00 | ['Anime', 'Japan', 'Future Funk', 'Music', 'Internet Culture']
Taming long and nested if/else statements | Taming long and nested if/else statements
Back to basics — Writing conditionals
Image by Steve Buissinne from Pixabay
Case 1. Simple if / else if / else blocks
if (mode === "A") {
  return { isDisabled: true };
} else if (mode === "B") {
  return { isDisabled: true };
} else {
  return { isDisabled: false };
}
This is a common pattern that we see in applications. However, it is ripe for future spaghetti-like updates. Let us see how we can tame it.
Option 1. Lose the else
We will try removing the else keyword. It is in fact, redundant.
if (mode === "A") {
  return { isDisabled: true };
}

if (mode === "B") {
  return { isDisabled: true };
}

return { isDisabled: false };
Option 2. Use a switch statement
The previous solution is a little less cluttered but why not use a switch statement here?
switch (mode) {
  case "A":
    return { isDisabled: true };
  case "B":
    return { isDisabled: true };
  default:
    return { isDisabled: false };
}
Both if and switch blocks have more or less the same code density. Switch statements are notorious for unintended case fall-throughs and forgotten defaults.
Option 3. What about a ternary?
return mode === "A"
  ? { isDisabled: true }
  : mode === "B"
  ? { isDisabled: true }
  : { isDisabled: false };
Ternaries may seem alien if you are not used to them, but with proper formatting (use Prettier) they read much more easily and are far less wordy than imperative if/else statements. Yet, the object pattern may be the best approach here.
Option 4: The object pattern
const status = {
  "A": { isDisabled: true },
  "B": { isDisabled: true },
};

return status[mode] || { isDisabled: false };
Much better! Adding a new case should be as easy as adding a new entry to the object, as shown below.
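For instance, supporting a hypothetical new mode "C" is a one-line change:

const status = {
  "A": { isDisabled: true },
  "B": { isDisabled: true },
  "C": { isDisabled: true }, // new case: just one more entry
};

However, we cannot substitute nested expressions with the object pattern, so let us take a look at another example.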
Case 2. Nested if else blocks
const getMessage = (selected, mode, action) => {
  if (selected.length === 0) {
    return "Empty List";
  }
  if (selected.length > 1 && action !== "edit") {
    return "Multi edit";
  }
  if (selected.accessible && mode === "A") {
    if (action === "edit") {
      return "Multi select A Edit";
    }
    return "Multi select A View";
  }
  return "No OP";
}
During code reviews, this style is considered a code smell. This innocent little function will soon grow into a monster as more conditions are slapped on while the application ages. It is also hard to unit test since the number of combinations is large; by its nature, no one will understand it fully enough to update any existing tests. If we’re lucky, the author would include some comments.
We can’t just create an object model here. How about ternaries?
const getMessage = (selected, mode, action) => {
  return (selected.length === 0)
    ? "Empty List"
    : (selected.length > 1 && action !== "edit")
    ? "Multi edit"
    : (selected.accessible && mode === "A")
    ? (action === "edit")
      ? "Multi select A Edit"
      : "Multi select A View"
    : "No OP";
}
Nested ternaries may be less wordy, but they are confusing; to me this is an unreadable blob of code, and the mental gymnastics it takes to decipher it is not trivial. Code should be self-documenting as much as possible, and here is what I recommend:
const emptyListMessage = (selected) =>
  selected.length === 0 ? "Empty List" : null;

const multiSelectEditMessage = (selected, action) =>
  selected.length > 1 && action !== "edit" ? "Multi edit" : null;

const multiSelectModeAMessage = (action) =>
  action === "edit" ? "Multi select A Edit" : "Multi select A View";

const accessibleModeMessage = (selected, mode, action) =>
  selected.accessible && mode === "A"
    ? multiSelectModeAMessage(action)
    : null;

const defaultMessage = () => "No OP";

const getMessage = (selected, mode, action) =>
  emptyListMessage(selected)
  || multiSelectEditMessage(selected, action)
  || accessibleModeMessage(selected, mode, action)
  || defaultMessage();
Our code is now mostly functions and expressions, the very essence of functional programming; they have a mathematical quality, and once properly unit tested they tend to hold strong and become resilient to bugs. Here, we’ve almost doubled the code surface, yet it reads quite well. Each part of the logic is self-documented, which is quite useful where the conditions are complex. We can unit test these smaller functions much more easily, and they will sometimes even get reused. To me, the biggest challenge here is in naming these so that they communicate the intent effectively.
Tip: If you have complex conditional expressions, move them to their own function. Applying this to our example above, we get a sweet little one-liner that can be simply unit tested, as sketched below.
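Since the original snippet did not survive the formatting here, this is my minimal reconstruction of the kind of extraction the tip describes (the function name is illustrative):

// One of the compound conditions from getMessage, promoted to a named,
// independently testable one-liner.
const isAccessibleModeA = (selected, mode) =>
  selected.accessible && mode === "A";

| https://medium.com/javascript-in-plain-english/taming-long-and-nested-if-else-statements-5a1e483dc777 | ['Rajesh Naroth'] | 2020-12-12 08:59:03.496000+00:00 | ['JavaScript', 'Web Development', 'React', 'Clean Code', 'Programming']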
Announcing SQLDelight 1.0 | Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog
SQLDelight started as a project 4 years ago on the ContentValues and SQLiteOpenHelper APIs from Android with the goal of making writing SQL easy and safe. The library was an early adopter of Kotlin internally, but has been generating Java since its inception. We love Kotlin and weren’t satisfied with a Java API that we knew could be done better in Kotlin, so a year ago we embarked on a complete rewrite focusing entirely on Kotlin.
At the same time Kotlin multiplatform was newly announced and promised easy and safe code sharing across Android and iOS. For Cash App it was the perfect first step to sharing meaningful code, once the schema is shared everything above it could be too: creating viewmodels, syncing with the server, running tests. So, after over a year of rewriting SQLDelight from the ground up, we’re excited to start talking about the new version and what changes it brings for Android and multiplatform development.
The premise of SQLDelight is unchanged: Write SQLite and let the Gradle plugin generate APIs to run your queries for you. SQLDelight files use the .sq extension and are contained in your src/main/sqldelight folder in a Gradle module.
A simple file with some CREATE TABLE, INSERT, and SELECT statements is all you need to get started:
-- src/main/sqldelight/com/sample/TennisPlayer.sq

CREATE TABLE TennisPlayer(
  name TEXT,
  points INTEGER,
  plays TEXT AS Handedness
);

insert:
INSERT INTO TennisPlayer
VALUES (?, ?, ?);

top10:
SELECT *
FROM TennisPlayer
ORDER BY points DESC
LIMIT 10;
SQLDelight will generate a TennisPlayerQueries class which can run these queries.
val tennisPlayers: TennisPlayerQueries

tennisPlayers.insert("Naomi Osaka", 5270, Handedness.RIGHT)
tennisPlayers.insert("Aryna Sabalenka", 3365, Handedness.RIGHT)
tennisPlayers.insert("Simona Halep", 6641, Handedness.RIGHT)

val top10: List<TennisPlayer> = tennisPlayers.top10()
    .executeAsList()

println(top10[0].name) // prints "Simona Halep"
Getting the TennisPlayerQueries object requires a platform driver. SQLDelight includes drivers for SQLite usage on Android, JVM, and iOS platforms. For more in-depth information refer to the readme.
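As a concrete illustration (my sketch, not from the announcement; it assumes the default generated database class is named Database), wiring up the Android driver might look roughly like this:

import com.squareup.sqldelight.android.AndroidSqliteDriver
import com.squareup.sqldelight.db.SqlDriver

// The schema and the TennisPlayerQueries class are generated from the .sq file.
val driver: SqlDriver = AndroidSqliteDriver(Database.Schema, context, "players.db")
val database = Database(driver)
val tennisPlayers: TennisPlayerQueries = database.tennisPlayerQueries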
For previous users of Square’s SQLite libraries, the code generation has been overhauled using Kotlin, which means a new API generated off of SQL for Kotlin users. To migrate your code from an earlier release of SQLDelight we published some artifacts and have a short guide.
The compiled code also keeps track of active queries and will notify downstream when the results have changed. There’s an extension available to expose this behavior as an RxJava Observable, meaning this release also deprecates SQLBrite. We’ve also added an Android Paging extension to allow exposing queries through a DataSource.
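For instance, observing the top10 query with the RxJava extension might look something like this sketch (package and extension names as I understand them; treat the exact imports as assumptions):

import com.squareup.sqldelight.runtime.rx.asObservable
import com.squareup.sqldelight.runtime.rx.mapToList
import io.reactivex.schedulers.Schedulers

// Re-emits the current top 10 whenever the underlying rows change.
tennisPlayers.top10()
    .asObservable(Schedulers.io())
    .mapToList()
    .subscribe { players -> println("Leader: ${players.firstOrNull()?.name}") }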
There’s a lot of other features like migration verification, custom types, and set parameters which are all documented in the readme. Give it a try and reach out if you run into any problems! | https://medium.com/square-corner-blog/announcing-sqldelight-1-0-d482aa408f64 | ['Alec Strong'] | 2019-04-18 20:40:51.606000+00:00 | ['Kotlin', 'Android', 'Engineering', 'Cash App'] |
My So-Called (Millennial) Entitlement | My So-Called (Millennial) Entitlement
Did we really expect too much, or were we just gaslit?
I am at the San Francisco International Airport some barely recent morning, registering for a travel program called Clear when the automated kiosk assisting me makes a strange request: “Stand still while we scan your irises.” I’ve barely digested this first ask when another takes its place: this time, the kiosk wants my fingerprints. I find this slightly less alarming; I already use those to access my banking app, buy coins for my mobile games, and unlock the phone that hosts all this information in the first place. But my eyeballs — which I had only just learned could be used as ID, and from a machine at the airport, no less — my dude. Those are the windows to my soul! Ever heard of foreplay?
Clear is a private company that prescreens air travelers using biometric authentication. Becoming a member is like ordering the half-soup, half-sandwich version of TSA PreCheck: it works, if all you want is a taste and are willing to pay for it. With Clear, you don’t need your ID to go through security, but you still have to remove your shoes. You get to wait in a shorter line (sometimes), but you still have to take out your laptop. Basically, the Cleared still participate in the most annoying aspects of air travel and pay almost 10 times the PreCheck fee for the privilege.
If the worst has already happened, that means it’s survivable.
How we decided on this valuation of convenience—it’s $179 per year—is not the point, though. My point is that some random startup casually acquired my eye-prints, and some small voice is telling me I should care more than I do. Someone out there definitely cares about this, no doubt. I’m sure at least one other traveler was not sated when a brisk Google search revealed that Clear is based in her hometown and run by a female CEO, ergo it must be a secure and entirely trustworthy business.
But I was sated. It’s the future, right? What’s the worst one could do with my retinal scans? I already gave my social security number to Camel in exchange for a pack of promotional cigarettes one time (or 12). Somewhere in Midtown Manhattan, a market-research firm knows how many condoms I used in May of 2011 (give or take). And when I think about the fact that every hard document I’ve reproduced on a digital copy machine — at work, at the bodega, at the library — is saved on a hard drive somewhere (lots of somewheres, in fact), I feel a sense of hopelessness that, in its own demented way, translates to freedom.
That’s why I unlock my phone with my fingerprint. It’s also why I talk shit in front of Alexa, why I haven’t put tape over my laptop camera, and why I still have a Facebook account. I don’t expect the worst to happen.
Because the worst has already happened. It is happening, and it will continue to happen.
I find this to be an honest, useful framework. If the worst has already happened, that means it’s survivable. And if the worst is a given in the future, too, we know that ignoring it won’t make it go away. There’s opportunity in having nothing to lose. You just need the right attitude.
Or perhaps you need the right conditioning.
Imagine: You’re 11 years old when two teenagers bring guns to their high school and kill 13 people. They injure 21 more. Your sixth-grade humanities teacher explains the inexplicable to your class after lunch period. You have to imagine that this is a first for at least some of your classmates, crying over the national news. It won’t be the last.
When you’re 15, two planes crash into two towers. You know the towers; had toured them on school trips just like all the other famous Manhattan buildings for which you know the names, if not the functions. In fact, you’d visited the towers just one week before the planes hit. There had been a renaissance fair in one of the lobbies.
At 17, your high school economics teacher tells you that social security will run out before you retire. You’ve already been paying taxes for three years. In 2018, you learn that he was exaggerating, thank goodness — by 2034, retirees can expect to receive a whopping 79% of the full benefit they receive today. You will not be of retirement age until the 2050s.
And when you’re 21, the market crashes. You’ve had a bachelor’s degree for three months. It cost $100,000 to earn, all before interest. Your class valedictorian moves back in with her parents, and no, your internship is not hiring. Five years later, the unemployment rate for people your age is almost double the national average.
Millennials are known as entitled, but as a group, I don’t think we could have lower expectations.
Neuroscience has confirmed that you were making sense of these events with an underdeveloped brain. Along with your emotional maturity and your hormones, it’ll be a work-in-progress until you’re around 25. And the same way the small hurts of being small can still seep into your present — the way your grandmother eyed you with disgust when you went for a second helping — the chipping away of every institution you were raised to believe in can have unintended consequences.
Me: Do you use Touch ID to unlock your phone?
Friend: Ya.
Me: Do you know anything about the technology behind it? Or like, how secure it is?
A beat. A blank stare.
Friend: No?
Me: Same.
My friends do not need to understand the technology behind touch ID any more than they need to understand black holes. They are not convinced that adjusting their social media privacy settings is some sort of moral duty, a symbolic middle finger to Facebook on behalf of all the little guys who understand internet economics to varying degrees, or not at all. Mostly, they were confused as to why any thinking person would have an assumption of security.
“It’s not that I don’t care about being hacked, or about my data being stolen or sold,” one friend tells me. “I assume that vulnerability because there are no physical systems or structures that have succeeded, so why would something that is essentially invisible do a better job than something tangible?”
Millennials are known as entitled, but as a group, I don’t think we could have lower expectations.
I’ll go: I don’t expect to own a home. I don’t expect to retire well, or at all. I don’t expect anyone to give me anything I haven’t explicitly asked for, and even then. I don’t expect it will ever be affordable to continue my education in any formal way. If a package gets lost in the mail, I don’t expect to see it again. I don’t expect the government or the banks or the universities to do anything that benefits regular people. I don’t expect them to hold each other accountable on our behalf. I don’t expect them to expel abusers from their ranks, or to put my safety over their legacy. I don’t expect to feel safe in large crowds or alone late at night. And I don’t expect that my privacy will be respected, online or in general.
America only cares about what the superstars are up to. The rest of us, from the benches to the bleachers, are left to our own devices. And we can play whatever games we want.
As far as I can tell, security — whether financial, technological, physical, or emotional — is not a thing. You don’t get to decide whether some drunk asshole drinks his drunk ass off and gets behind the wheel. Likewise, you don’t get to decide if the drunk Congress or the drunk banker or all the drunk administrations of all the drunk institutions do what’s right for you. Sometimes they will do the right thing for somebody, but statistically speaking, that somebody is not you.
Sometimes the right thing comes served in a shit sandwich, or one guy does the right thing but it’s later counteracted by the next guy and just so we’re clear, it’s always a guy. Or sometimes, we learn that what we thought was the right thing was actually the wrong thing, in ways we didn’t anticipate, except for those of us who did anticipate it but were not asked or heard because we do not employ lobbyists and because the powers that be can’t listen to us until they sort out whether our bodies are legal or not.
Mark Zuckerberg’s Congressional hearing was probably the biggest mainstreaming of data privacy issues yet, and Facebook, with its many transgressions, made for an appropriate scapegoat. But I want to know why it’s Mark Zuckerberg’s fault that American adults of voting age lack the critical thinking skills to differentiate between fake Russian bot news and The Guardian. I want to know the plan for bringing internet literacy to those who are not digital natives. I want to know why the U.S. government is being celebrated for protecting our egos and baby-proofing the internet instead of telling us the truth: Dirty tricks are less likely to work on people with more education.
What happens when your brand of exceptionalism breeds millions of people who voted a sentient conspiracy theory into office? Where does the fault lie? After all, it’s not Facebook who’s spent decades underpaying teachers and closing schools in low-income neighborhoods. Facebook doesn’t have the jurisdiction to end standardized testing or combat the quiet continuation of white flight. Facebook’s biggest mistake? Profiting off of state-sanctioned dumbness.
We’re only supposed to be dumb enough to believe that the fight is red vs. blue and not top vs. bottom. We’re only supposed to be dumb enough to believe in Democracy the Concept™ without casting a critical eye toward its practical application. This is a dumbness cultivated by and for Washington, and Zuckerberg’s misusing of it for corporate gain almost blew the lid off the entire thing. Commence finger-wagging.
On an episode of his podcast Revisionist History, Malcolm Gladwell argues that we should treat education as a weak-link network, where strengthening the weakest links has the most positive outcome for all. This is in contrast to a strong-link network, where a couple of superstars at the top carry the weaker players on the bottom. He illustrates this dynamic using soccer and basketball. An average soccer team with one star player is less likely to win a match than an above-average team with no star players — soccer is a weak-link sport. Conversely, an NBA team with a superstar or two fares better than a team on which all the players are equally, decently good — basketball is a strong-link sport.
Much to its detriment, America acts like a strong-link country. It is the type of place where electing one mixed-race president means we solved racism. (Imagine if the lesson we took from electing one white man was that all white men who lack upward mobility just need to work harder.) We raise up a few undoubtedly smart and deserving people in each field, send them around the world like brand ambassadors for democracy, poster-adults for how advanced and distinguished and American we are. Meanwhile, most of us back home — 78%, in fact — are living paycheck to paycheck. Is that freedom ringing? We’ll call right back after we pay this phone bill.
These are complex problems. In addition to the 3000ish words here, I have written and cut an additional 4500 trying to make sense of it all. I remain overwhelmed by the number of solutions that contradict one another, the knowns and unknowns, the countless logical ends I haven’t considered. But I eventually found my demented silver lining: America only cares about what the superstars are up to. The rest of us, from the benches to the bleachers, are left to our own devices. And we can play whatever games we want.
While grim on its face, this perspective has pushed me to take inventory of myself, my own power. What can I do right now? Am I solving problems I actually care about, or were these problems unconsciously inherited from another time, problems propagated by those with a vested interest in resolving them with more money, more power, more loopholes? Should I devote my energy to righting a system that, by design, has only consistently benefited one demographic and has yet to even prove itself as a scalable model for a generation that’s tired of the same people making the same decisions on behalf of the most diverse country in the world?
Is that a problem? Because it feels more like an opportunity, to me: a chance to exercise this cache of personal agency I’ve been sitting on, agency I didn’t realize I had or needed as I waited for America to work. It feels like an opportunity to try something else.
More powerful than having nothing to lose is cultivating that which can’t be taken. Grace. Clarity. Purpose. The stuff that isn’t Amazon Prime-able. These are the indoor plants of our being; only you can feed them and grow them and expose them to the light. It’s a lot of responsibility, and the work involved is often unglamorous. Some people think they never have to learn to care for these things because they have the means to outsource what they wish: their plants are alive on paper though they don’t know the how or why of it. And besides, can’t you see they’re a little busy trying to colonize Mars?
A respectable goal, though I might suggest to anyone faced with the choice to try taking on the inner self before jumping ahead to outer space. There’s more to unearth in there than you might think, and we need more people to understand the potential of their own organic material. We need people who appreciate the slow growth of nothing into something, who drink up the sunlight and make the air a little more breathable than before.
Because that’s it, for most of us. That’s how we build power. That’s how we, a generation of janitors for the American dream, put our trust in something real: each other. We stop trying to control the world in our heads and in the headlines, and we start controlling ourselves. We sleep. We go to the doctor. We log off. We talk about our problems. We water our plants. We collect our neighbor’s mail when they’re out of town. We take a deep breath before reacting in anger, and question whether this particular battle is worth our energy. It’s not. Why were we fighting again? We volunteer. We water our plants. We focus on ourselves so we can eventually focus on others — in a real way, in a non-transactional way, in a way that slowly but authentically strengthens our fellow weak links. We don’t wait for permission. We get over ourselves; we stop demanding perfection; we start. We water our plants. And on weekends, we play soccer. | https://medium.com/s/trustissues/my-so-called-millennial-entitlement-9be84343c713 | ['Stephanie Georgopulos'] | 2019-09-07 00:23:38.820000+00:00 | ['Trust Issues', 'Millennials', 'Society', 'Politics', 'Privacy'] |
3 Tips to Overcome the XY Problem | 3 Tips to Overcome the XY Problem
Stop trying to figure out solution Y, when you don’t understand problem X
Photo by Sergey Pesterev on Unsplash
You’ve had problems with files. Probably due to their file extensions. Let’s say you wanted the last three characters to determine the file type. You ask some people questions about it.
You search for code to find the last three characters. Your coworkers probably have some suggestions, so you ask them as well.
You’re stuck on your solution, without looking back at the problem.
You’ve run into the XY problem. Let’s go into the details of it. | https://medium.com/better-programming/3-tips-to-overcome-the-xy-problem-19abc9e5b693 | ['Živković Miloš'] | 2020-12-16 16:00:50.748000+00:00 | ['Software Development', 'Business', 'Business Development', 'Software Engineering', 'Soft Skills']
Where to Scale Your Workloads | Where to Scale Your Workloads
Season of Scale
“Season of Scale” is a blog and video series to help enterprises and developers build scale and resilience into your design patterns. In this series we plan on walking you through some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises.
In Season 1, we’re covering Infrastructure Automation and High Availability:
In this article I’ll walk you through the compute options to scale your workloads.
Check out the video
Review
In the last article we learned about immutable infrastructure. It’s a key operational best practice for creating scalable environments that don’t put you at risk of configuration drift. If you are rolling out changes to your operating system, application, etc., you want to make sure your changes are tracked, rolled out through a CICD pipeline, and will be applied with certainty to fresh, immutable instances. Now Critter Junction is facing a new dilemma: deciding where to run their new game app on Google Cloud. While each application has its own infrastructure and scaling requirements, GCP offers a multitude of compute options. But how do you balance performance, flexibility, cost efficiency, and language support?
The Layout App
With more users than ever, and with their new immutable infrastructure, Critter Junction was in a great place to launch a new app. They wanted to launch a companion game that used the same database, but let you build and share house layouts using your inventory, codenamed The Layout App.
With a new app launch, scale and unpredictability are always a big challenge. You may never know exactly how many players will show up on launch day. They needed to find a solution on Google Cloud that could automatically handle any amount of load. The team already knew they wanted the layout app to scale independently from their web servers, but they had lots of options.
One solution is to run the layout app on a separate set of VMs. Using Google’s global load balancer, you can send traffic to backend managed instance groups (MIGs). These can automatically scale multiple, identical machines based on metrics like CPU utilization, so they could easily handle more traffic across regions.
Containerization
But it turned out the Layout App didn’t need to run with access to the operating system. And the team wanted their apps to be lightweight and portable across environments, so they decided to containerize it. Containers are basically isolated packages for just running an application. It’s a great way to build a cloud-native app, but you still need to serve it.
Two choices for serving containers would be managed instance groups running container-optimized OS, or Google Kubernetes Engine. Both solutions automatically scale up as demand increases, adding new machines as needed and removing them when demand is low. This can increase availability at peak times while keeping your costs manageable.
Either would work to run their app and meet their unpredictable scaling needs, but the team wanted further abstraction from the infrastructure. They decided they didn’t want up front provisioning and wanted the ability to scale down to zero resources.
While you must have a minimum of 1 instance for MIGs, you can specify a minimum of zero nodes for GKE (an idle node pool can scale down completely). But, at least one node must always be available in the cluster to run system Pods.
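For illustration (my example, not from the original article; the cluster name is hypothetical), enabling that kind of node autoscaling at cluster creation might look like:

gcloud container clusters create layout-cluster \
  --enable-autoscaling \
  --min-nodes 0 \
  --max-nodes 10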
Serverless
It was time for the team to look at serverless technology. These technologies almost completely abstract away the infrastructure and orchestration. Unlike the previous options, these handle automatic resource scaling with virtually no configuration or server provisioning. Serverless keys in on running your application, so you can focus on your code. Google Cloud has three main options for running serverless.
Cloud Functions
First up, Cloud Functions is great for snippets of code and small, single-purpose applications. Functions scale up by creating new instances as demand rises, and each function handles a single request. You can scale down to zero and limit the maximum number of instances for full flexibility. It’s perfect for gluing together different cloud services like calling an API based on a schedule or sending notifications — but it was too simple for running the layout app.
App Engine
So they looked to App Engine, which offers the ability to run containers and custom web applications in a serverless way, automatically creating and shutting down instances as traffic fluctuates. You can configure the settings for automatic scaling based on your app’s needs. For example, scaling can be based on CPU utilization, throughput utilization, or maximum concurrent requests.
This could run their containerized app without a problem, but the team wanted to make sure they could run their app in multiple clouds down the road and didn’t want to re-architect.
Cloud Run
Finally, there’s Cloud Run, which is built on a project called Knative. Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. With Cloud Run, Kubernetes management is taken care of behind the scenes. Each deployment is automatically scaled to the number of instances needed to handle all incoming requests, and it can scale from zero to your specified max instances. Because the app team expected traffic spikes, Cloud Run allowed them to keep a number of idle instances to minimize cold starts. If they wanted to later port to multiple clouds, they would be able to use the same application by leveraging Knative.
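As an illustration (again my example, not the original article's; the project and service names are hypothetical, and --min-instances initially shipped via the beta release track), a Cloud Run deployment that keeps warm instances and caps scale-out could look like:

gcloud run deploy layout-app \
  --image gcr.io/critter-junction/layout-app \
  --platform managed \
  --region us-central1 \
  --min-instances 2 \
  --max-instances 100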
Given their needs for container portability, fast scaling, and minimal infrastructure overhead, the team decided on Cloud Run and was able to launch faster than they thought possible! While each compute option I covered has autoscaling capabilities built in, you can decide which of these solutions works best for the requirements of your app, giving you the power to choose and scale.
Stay tuned for our next piece on autoscaling web services. And remember, always be architecting.
Next steps and references: | https://medium.com/google-cloud/where-to-scale-your-workloads-6420150bf825 | ['Stephanie Wong'] | 2020-09-01 23:24:18.737000+00:00 | ['Software Development', 'Serverless', 'Scalability', 'Google Cloud Platform', 'Cloud Computing'] |
“Our unity is our strength, and our diversity is our power.”
Momentum is a blog that captures and reflects the moment we find ourselves in, one where rampant anti-Black racism is leading to violence, trauma, protest, reflection, sorrow, and more. Momentum doesn’t look away when the news cycle shifts. | https://momentum.medium.com/our-unity-is-our-strength-and-our-diversity-is-our-power-e82fca1cd08d | ['Jada Gomez'] | 2020-12-02 06:32:55.071000+00:00 | ['Society', 'Kamala Harris', 'Quotes', 'Black Lives Matter', 'Racism'] |
Reversing Bits in C | Reversing Bits in C
A small performance investigation into innocent-looking code.
Written by Charles Nicholson.
Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog
Thanks to readers rjs, meteorfox in the comments, and reader lnxturtle over on the Reddit thread (!) for correcting my poor choice of words. I incorrectly used the phrase Cache Coherence when the correct phrase was (Spatial / Temporal) Cache Locality. Your attention to detail is appreciated!
The Setting
Interesting lessons can come from unexpected places! I was pleasantly surprised at how something as “simple” as reversing bits in a byte could lead me on an unexpectedly deep exploration: operation vs instruction count, memory access patterns and cache behavior, and low-level CPU instructions. It’s often very easy to make assumptions about the performance of code that we write, and I hope that this article serves as a reminder that the map is never the territory, and that the only way to understand what’s happening inside your code is by observing and measuring it.
While doing some investigations into one of our core iOS libraries, some code jumped out at me:
unsigned char ReverseBitsInByte(unsigned char v)
{
    return (v * 0x0202020202ULL & 0x010884422010ULL) % 1023;
}
Since I’m not a cyborg wizard ninja with the ability to do 64-bit multiplication and bit-twiddling in my head, I decided to try and figure out what this code was doing. A simple search on one of the constant numbers led me immediately to the famous Stanford Bit Hacks page. This, in retrospect, should have been obvious.
The description of the algorithm is: "Reverse the bits in a byte with 3 operations (64-bit multiply and modulus division)". This confused me a little more, since all current iOS devices have 32-bit ARM CPUs. Was this code actually efficient? It has the fewest "operations", which is probably why it was originally chosen.
Being curious, I decided to take a few minutes and look at all of the approaches and learn how they actually performed on real-world hardware. This code, like all code, runs on an actual CPU. How do the different approaches perform on real hardware here in the real world?
The Landscape
Here are all of the algorithms listed on the Bit Hacks page, with their advertised “operation counts”:
The “obvious” way (O(N) where N is the index of the highest set bit):
unsigned char ReverseBitsObvious(unsigned char v)
{
    unsigned char r = v;
    int s = sizeof(v) * CHAR_BIT - 1;
    for (v >>= 1; v; v >>= 1) {
        r <<= 1; r |= v & 1; s--;
    }
    return r << s;
}
Lookup table (O(1)):
#define R2(n) n, n + 2*64, n + 1*64, n + 3*64
#define R4(n) R2(n), R2(n + 2*16), R2(n + 1*16), R2(n + 3*16)
#define R6(n) R4(n), R4(n + 2*4 ), R4(n + 1*4 ), R4(n + 3*4 )

static const unsigned char BitReverseTable256[256] =
{
    R6(0), R6(2), R6(1), R6(3)
};

unsigned char ReverseBitsLookupTable(unsigned char v)
{
    return BitReverseTable256[v];
}
3 operations (O(1)) (64-bit multiply and modulus division):
unsigned char ReverseBits3ops64bit(unsigned char v)
{
    return (v * 0x0202020202ULL & 0x010884422010ULL) % 1023;
}
4 operations (O(1)) (64-bit multiply, no division):
unsigned char ReverseBits4ops64bit(unsigned char v)
{
    return ((v * 0x80200802ULL) & 0x0884422110ULL) * 0x0101010101ULL >> 32;
}
7 operations (O(1)) (32-bit):
unsigned char ReverseBits7ops32bit(unsigned char v)
{
    return ((v * 0x0802LU & 0x22110LU) |
            (v * 0x8020LU & 0x88440LU)) * 0x10101LU >> 16;
}
Parallel bitwise approach (O(5log(N)) where N is the number of bits):
unsigned char ReverseBits5logNOps(unsigned char v)
{
    v = ((v >> 1) & 0x55) | ((v & 0x55) << 1);
    v = ((v >> 2) & 0x33) | ((v & 0x33) << 2);
    v = ((v >> 4) & 0x0F) | ((v & 0x0F) << 4);
    return v;
}
“Cheating”
Let’s take a step back here. We’re running this code on a specific family of CPUs, the ARMv6 and greater (available in all iPhones). ARMv6 and greater CPUs have an instruction called RBIT. This single instruction does exactly what we need: it reverses the bits in a 32-bit register. Unfortunately, while a lot of CMSIS implementations provide an RBIT intrinsic, it doesn’t look like one is provided with Xcode. It’s easy enough to drop down into inline assembly, though, and call the instruction ourselves:
unsigned char ReverseBitsRBIT(unsigned char v)
{
    uint32_t input = v;
    uint32_t output;
    __asm__("rbit %0, %1\n" : "=r"(output) : "r"(input));
    return output >> 24;
}
Let’s add this approach to our collection and see how it performs.
Note: Intel x86/x64 processors don’t have this instruction, so this is definitely not a portable solution. (That’s why I call it cheating!)
The Shoot-out
Here are the timings for reversing the bits in a byte, performed 50 million times each.
Starting tests...
ReverseBitsLookupTable... 10.058261ns per function call
ReverseBitsRBIT... 10.123462ns per function call
ReverseBits7ops32bit... 17.453080ns per function call
ReverseBits5logNOps... 20.054218ns per function call
ReverseBits4ops64bit... 21.203815ns per function call
ReverseBitsObvious... 65.809257ns per function call
ReverseBits3ops64bit... 509.621657ns per function call
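For context, these numbers came from a simple micro-benchmark loop (the author's actual test code is on GitHub, per the end of this article). A minimal sketch of that kind of harness, assuming a POSIX clock_gettime timer, could look like this; the volatile sink keeps the compiler from optimizing the loop away:

#include <time.h>

typedef unsigned char (*reverse_fn)(unsigned char);

/* Returns the average cost per call in nanoseconds. */
static double time_reverse(reverse_fn fn, long iterations)
{
    volatile unsigned char sink = 0;
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; ++i) {
        sink = fn((unsigned char)i); /* cycles through all byte values */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    (void)sink;
    return ((end.tv_sec - start.tv_sec) * 1e9 +
            (end.tv_nsec - start.tv_nsec)) / iterations;
}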
Observations of interest:
* ReverseBits3ops64bit was 50x slower than the fastest 2 algorithms, despite taking O(1) time!
* ReverseBitsObvious (O(N), remember) was only 3–5x slower than the O(1) solutions.
* ReverseBits5logNOps (O(logN)) was slightly faster than ReverseBits4ops64bit O(1).
* ReverseBitsLookupTable is ever-so-slightly faster than ReverseBitsRBIT? Why?
The Lesson
So what’s the takeaway here? What have we learned from this experiment?
Choosing an algorithm based on the number of “operations” is a nonsensical approach. ReverseBits3ops64bit was the clear loser because not only did it have to do some large 64-bit math, it also needed to do a 64-bit modulo with a non-power-of-2 divisor. That’s one mathematical operation, but a large number of CPU instructions. CPU instructions are what matter here, though as we see, cache locality matters even more.
Asymptotic Big-O analysis doesn’t tell the whole story. It’s very easy to simplify and select algorithms based on their asymptotic behavior. Unfortunately, many algorithms aren’t actually dominated by their asymptotic behavior until N gets very large. In many cases, and certainly in this one we’re studying, the algorithms are dominated largely by instruction count and cache access patterns. ReverseBits5logNOps runs in O(logN) time, and yet is faster than the O(1) ReverseBits4ops64bit implementation, which doesn’t even do any expensive (non-power-of-2 modulo) math! Asymptotic behavior analysis is crucial when dealing with huge data sets, but is misleading and incorrect here.
Cache locality is crucial. ReverseBitsLookupTable was tied for first place because the 256-byte lookup table fits entirely in D-cache. If we were to evict the table from the cache between each iteration, we would pay the time penalty for the cache miss and subsequent reload.
Take advantage of your specific CPU! RBIT will win in the general case here because it’s a single instruction with relatively low latency and throughput. It does no memory access, so we don’t have to worry about cache misses. It requires no extra storage, like the ReverseBitsLookupTable solution does. If you ever need to reverse a full 32-bit word, RBIT will take the same amount of time. In fact, our ReverseBitsRBIT function would be even faster when reversing a full register-sized 32-bit word, since the final right shift could be eliminated.
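For illustration, that full-word variant is a one-line change (my sketch, with the same ARMv6+ inline-assembly caveats as before):

uint32_t ReverseBits32RBIT(uint32_t v)
{
    uint32_t output;
    __asm__("rbit %0, %1\n" : "=r"(output) : "r"(v));
    return output; /* no trailing shift needed for a full word */
}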
Unfortunately, I wasn’t able to figure out a way to get Apple’s LLVM compiler to optimize a C loop down into a single RBIT command. This isn’t surprising, as it’s a fairly complex algorithm for an optimizing compiler to match and apply a strength reduction against. Compilers are getting smarter every day, but in the meantime, it’s always good to know how to do this kind of manual work when necessary.
If you’re curious about this and want to dive in and run your own experiments, I’ve posted the test code I used to gather my results (as well as the cleaned-up assembly output emitted by the various functions), all on my public GitHub space. Fork away, and let me know what you discover! | https://medium.com/square-corner-blog/reversing-bits-in-c-48a772dc02d7 | ['Square Engineering'] | 2019-04-18 23:36:26.575000+00:00 | ['Programming', 'Cpp', 'Engineering'] |
Questions for BRCA1+ Trans-Feminine Youth | Questions for BRCA1+ Trans-Feminine Youth
A new case study this week in the journal LGBT Health explores the story of a trans-feminine youth identified as BRCA1+ at the onset of hormone therapy. Little is known about best practices for BRCA1+ trans youth, even though many physical and hormonal considerations exist.
Image Source: Coursera
In the latest issue of LGBT Health, a medical team in New England presents a case study and ethical opinion piece about a trans-feminine youth identified as BRCA1+, meaning they possess a genetic mutation known to increase cancer risk, especially in breast and ovarian tissue. Although BRCA1 testing is not generally recommended in youth, there is theoretical concern for BRCA1+ trans-masculine and -feminine youth seeking gender affirming procedures; however, little evidence is presently available. It is possible that full breast-tissue removal and hysterectomies are effective cancer risk-reduction strategies in BRCA1+ trans-masculine youth. Likewise, it is possible that feminizing hormones increase the risk and/or rate of onset of some cancers in BRCA1+ trans-feminine youth.
The individual and their family in this particular case study originally presented to the medical team physician when the youth was 14 years old. The youth, born male, was interested in starting puberty-suppressing hormones as part of treatment for gender dysphoria. In the initial visit, the youth’s mother self-reported that she was BRCA1+ and a two-time survivor of breast cancer. It was the physician’s suggestion, with this information, that the youth be tested for BRCA1 before starting feminizing hormones, which could theoretically increase breast cancer risk by promoting breast tissue development. After a year on pubertal blockers, the youth requested to begin estrogen therapy.
‘‘I know I can’t stay on pubertal blockers forever. I have to pick one side or the other and I want to pick the girl side.’’
The medical team genetic counselor met with the family and disclosed the BRCA1+ finding, as well as future cancer screening recommendations. They suggested the family follow those put forth for cisgender women with early BRCA1+ detection (i.e., more frequent and earlier screening), which are known to increase stress and anxiety. The counselor also recommended an oncology consult before starting feminizing hormones. Two referrals were denied on bases of lack of expertise.
“The risk in an XY woman has to be less than the risk in an XX woman. And anyway, I’d rather live a shorter life as a woman than a longer life as a man.”
Through additional outside consultation, the medical team decided that the autonomy of the youth and their family in the decision-making process to start feminizing hormones was to be respected fully. It is well known that hormone replacement therapy can profoundly impact the quality of life of trans youth experiencing gender dysphoria. Therefore, the case was presented to the family as such:
Stop pubertal blockers and allow the youth to experience puberty in their assigned gender (unacceptable risks of gender dysphoria)
Continue pubertal blockers indefinitely, with surgical gender transition (unacceptable risks of osteoporosis)
Continue pubertal blockers until the youth reaches age of consent (risk of delay in puberty and uncertain utility)
Proceed with feminizing hormones with recommendations for appropriate cancer screening (question increased risk of cancers)
The decision had been made more complicated by the mother’s deteriorating health condition. However, the family consented to starting feminizing hormones and the physician agreed to prescribe hormones when the family and therapist were ready. Sadly, the youth’s mom passed away shortly after the consent process. To date, the youth has not begun feminizing hormones and remains solely on pubertal blockers.
This case presents an emotional and personal justification for the visibility of trans lives in research and care recommendations. With profound implications for BRCA1+ trans youth, much more information is needed to understand the physical and hormonal considerations for youth and their families facing difficult decisions like the ones in this case. For medical professionals and families interested in learning more about the ethical considerations during this decision-making process, the authors put out an excellent concurrent piece, available here. | https://medium.com/qspaces/questions-for-brca1-trans-feminine-youth-a9e926b2fb70 | ['Cameron Mcconkey'] | 2018-06-11 02:58:50.343000+00:00 | ['Health', 'Cancer', 'LGBT', 'Transgender'] |
Generate Random Password using Python | Photo by Markus Spiske on Unsplash
A password can be defined as a ‘secret word or words’, and it may also include numbers. In simple words, it is a key to a lock. Google also suggests randomly generated passwords. The purpose of this article is to show how to generate passwords using the Python language.
Requirements:
There are no special requirements, but you must have Python installed on your PC and a little knowledge of computer programming. If you don’t have Python, there’s no need to worry: you can simply download it by clicking on Python.
First of all, import the random and string packages.
import random, string
Then create a function called password, into which we’ll write our password-generation code. Pass in arguments such as the length of the password, whether we want digits in it, and the strength of the password, as the code below shows with its docstring:
def password(length, num=False, strength='weak'):
    """length of password, num if you want a number and strength (weak, strong, very strong)"""
The full code lives in a gist; I’ll explain it line by line below.
#From line 5 to 10
lower = string.ascii_lowercase
upper = string.ascii_uppercase
letter = lower + upper
dig = string.digits
punct = string.punctuation
pwd = ''
From line 5 to 10, we’re just creating variables for the lowercase letters, uppercase letters, all letters, digits, punctuation, and the password itself, named lower, upper, letter, dig, punct and pwd respectively.
The letter variable is the combination of lowercase and uppercase letters that will be used in the password. string.ascii_lowercase and string.ascii_uppercase are strings containing the ASCII lowercase and uppercase letters. string.punctuation is a string of punctuation characters, used here for stronger passwords. string.digits is a string of the digit characters.
if strength == 'weak':
    if num:
        length -= 2
        for i in range(2):
            pwd += random.choice(dig)
    for i in range(length):
        pwd += random.choice(lower)
From line 11 to 17, we’ve written conditional code. The condition is for a ‘weak’ password. In line 11, if we choose a weak password, this branch will run. In line 12, if num is True, the program will include 2 digits, which is why we reduce length by 2 under that condition. In lines 14 and 15, the for loop adds the two digits one by one to the password. In lines 16 and 17, another for loop adds lowercase letters to the password.
elif strength == 'strong':
    if num:
        length -= 2
        for i in range(2):
            pwd += random.choice(dig)
    for i in range(length):
        pwd += random.choice(letter)
From line 19 to 25, we’ve written conditional code again, this time for a ‘strong’ password. In line 19, if we choose a strong password, this branch will run. In line 20, if num is True, the program will include 2 digits, reducing length by 2 as before. In lines 22 and 23, the for loop adds the two digits one by one to the password. In lines 24 and 25, the loop adds letters, which may be lowercase or uppercase, instead of just the lowercase letters used in the ‘weak’ condition.
elif strength == 'very strong':
    ran = random.randint(2, 4)
    if num:
        length -= ran
        for i in range(ran):
            pwd += random.choice(dig)
    length -= ran
    for i in range(ran):
        pwd += random.choice(punct)
    for i in range(length):
        pwd += random.choice(letter)
From line 26 to 36, we’ve written conditional code again, this time for a ‘very strong’ password. In line 26, if we choose a very strong password, this branch will run. In line 27, we create the ran variable, a random integer between 2 and 4, using the random.randint function.
In line 28, if num is True, the program will include 2 to 4 randomly chosen digits, which is why we reduce length by ran in line 29.
In lines 30 and 31, the for loop adds those 2 to 4 digits one by one to the password. In lines 33 and 34, another loop adds the same number of punctuation characters. In lines 35 and 36, the final loop fills the remaining length with letters, which may be lowercase or uppercase.
pwd = list(pwd)
random.shuffle(pwd)
return ''.join(pwd)
In line 38, we convert the password into a list. In line 39, we shuffle the letters, digits and punctuation randomly. In line 40, all the characters are joined back together into a single string (the empty-string separator means nothing is inserted between them).
print(password(5, num=True))
print(password(10, num=True, strength = 'strong')) print(password(13, num=True, strength = 'very strong'))
In the lines above, we call the function to generate a password for each of the strength conditions we coded.
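Since every character is chosen at random, each run prints different passwords; one run might produce something like this (illustrative output only):

a7k2b
3Fqa9LumTw
p4!Kd8#w%Q2mZ

| https://medium.com/dataseries/generate-random-password-using-python-b90cf4784ea5 | ['Jitendra Singh Balla'] | 2020-11-12 09:32:42.126000+00:00 | ['Coding', 'Python', 'Programming', 'Software Development', 'Code']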
Passing Dynamic Queries Between Client and Backend Server | To get the most out of this article, you will need a good understanding of creating Expressions with the .NET Framework. If you don’t already, then check out this article[^] before going any further.
During the development of our current ASP.NET Core web application, I had a particular set of forms where the client needed to dynamically query one particular dataset. The data being returned related to permission data. The data could be queried in a number of different ways e.g. the permissions for a particular user or the permissions for a particular service. Although the same data was being queried and returned in both cases, they would need to be implemented as two completely separate GET Requests.
All our queries are RESTful GET commands which invoke our ASP.NET Web API backend services. Each new query would involve creating a new controller, service-layer code, data-layer code etc. Much of this code would be very similar as it was effectively querying the same data and returning the same data structures (models).
This got me thinking. Instead of implementing each of these queries separately, that I instead create a dynamically queryable service instead. The client application passes a query to the service. The service executes this query against the data and returns the result back to the client. This would give the client the flexibility to query the data in any way as required.
I wasn’t even sure if this was possible. After much investigation, I came across some posts on Stackoverflow confirming that it was indeed possible. The client application would create an Expression tree. This would be serialised and sent to the ASP.NET Web API service where it would be de-serialised and executed.
The first problem would be serialising the Expression. It turns out that .NET Expression trees cannot be serialised / de-serialised. An Expression is not based on a static structure in the same way as a class, and therefore does not contain any definition for its structure. A class can be serialised because it contains type and structure meta data that can used by the serialiser. An Expression tree contains none of this meta data.
It turns out that there is a nuget package called Remote.Linq[^] that is able to handle the serialisation of Expressions. This handles all the serialisation / de-serialisation allowing your Expression to be passed to your backend service from the client application.
The first step is to add two package references to your project in Visual Studio.
1. Remote.Linq
2. Remote.Linq.Newtonsoft.Json
These will add the necessary extension methods and functionality needed to serialise / de-serialise your Expression trees.
You may need to create some helper functions similar to the ones below. These encapsulate the logic involved with serialising / de-serialising your Expression trees.
/// <summary>
/// Deserialise a LINQ expression tree
/// </summary>
public Remote.Linq.Expressions.Expression DeserialiseRemoteExpression<TExpression>(string json) where TExpression : Remote.Linq.Expressions.Expression
{
    JsonSerializerSettings serializerSettings = new JsonSerializerSettings().ConfigureRemoteLinq();
    Remote.Linq.Expressions.Expression result = JsonConvert.DeserializeObject<TExpression>(json, serializerSettings);
    return result;
}

/// <summary>
/// Serialise a remote LINQ expression tree
/// </summary>
public string SerialiseRemoteExpression<TExpression>(TExpression expression) where TExpression : Remote.Linq.Expressions.Expression
{
    JsonSerializerSettings serializerSettings = new JsonSerializerSettings().ConfigureRemoteLinq();
    string json = JsonConvert.SerializeObject(expression, serializerSettings);
    return json;
}

/// <summary>
/// Convert the specified Remote.Linq Expression to a .NET Expression
/// </summary>
public System.Linq.Expressions.Expression<Func<T, TResult>> ToLinqExpression<T, TResult>(Remote.Linq.Expressions.LambdaExpression expression)
{
    var exp = expression.ToLinqExpression();
    var lambdaExpression = System.Linq.Expressions.Expression.Lambda<Func<T, TResult>>(exp.Body, exp.Parameters);
    return lambdaExpression;
}
With those created, the first thing you will need to do is create your Expression and serialise it.
The examples I will use all relate to the scenario I described at the beginning of the article i.e. the ability to dynamically query permissions data.
Here’s a simple Expression that returns a list of permissions for the specified permission ID (in reality there would only ever be one permission returned for any given permission ID but for the purposes of this example let’s assume that one or more permissions will be returned by the Expression).
const int permissionid = 1;
Expression<Func<PermissionEntities, List<PermissionEntity>>> expr1 = m => m.Permissions.FindAll(q => q.PermissionId == permissionid);
Next we need to convert the Expression into a Remote.Linq expression and serialise it.
Hide Copy Code
var serialised = SerializerManager().SerialiseRemoteExpression(expr1.ToRemoteLinqExpression());
The extension method ToRemoteLinqExpression() is provided by Remote.Linq and converts a .NET Expression into a Remote.Linq expression.
With our Expression now serialised into a string, we can pass it into a function to execute against our permission data. The function will need to perform the following actions.
1. De-serialise the Remote.Linq expression
2. Convert the Remote.Linq Expression into a .NET Expression
3. Invoke and execute the Expression against permissions data
Here’s an example of a function that accepts a serialised Remote.Linq expression and executes it against a permissions dataset.
/// <summary>
/// Return the specified permissions from the remote expression
/// </summary>
public PermissionModels GetPermissionsDynamic(string payload)
{
    if (string.IsNullOrEmpty(payload)) return null;

    //create an empty default permission model
    PermissionModels result = new PermissionModels();

    //de-serialise back into a Remote.Linq Expression
    Remote.Linq.Expressions.LambdaExpression expression =
        SerializerManager().DeserialiseRemoteExpression<Remote.Linq.Expressions.LambdaExpression>(payload) as LambdaExpression;

    //convert the Remote.Linq Expression into a .NET Expression
    var localexpression = SerializerManager().ToLinqExpression<PermissionEntities, List<PermissionEntity>>(expression);

    //grab all the permissions from the DB
    PermissionEntities permissions = this.Data.GetPermissions();

    //compile and invoke the expression
    var compiled = localexpression.Compile();
    var matches = compiled.Invoke(permissions);

    //if no matches were found then just return the default object with no items
    if (matches == null || !matches.Any()) return result;

    //return the matches in the result model
    result.Permissions = matches;
    return result;
}
Putting all the pieces together, here’s a simple unit test that demonstrates how to create an Expression and pass this to the above function to execute against actual data.
[TestMethod]
public void GetPermissionsDynamicTests()
{
//Arrange
const int permissionid = 1;
PermissionsService service = new PermissionsService();
Expression<Func<PermissionEntities, List<PermissionEntity>>> expr1 = m => m.Permissions.FindAll(q => q.PermissionId == permissionid);
var serialised = SerializerManager().SerialiseRemoteExpression(expr1.ToRemoteLinqExpression());

//Act
var results = service.GetPermissionsDynamic(serialised);

//Assert
Assert.IsNotNull(results);
Assert.IsNotNull(results.Permissions);
Assert.IsTrue(results.Permissions.Any());
Assert.IsNotNull(results.Permissions.Find(q => q.PermissionId == permissionid));
}
To complete this we would need to write a controller method that invokes the function GetPermissionsDynamic(). It should be noted that although we are creating dynamic queries over HTTP, they will need to be implemented as POST rather than GET. The reason (as I found out) is that a GET querystring is limited in length, and a serialised Expression will almost certainly break that limit. Therefore place the serialised Expression in the POST body of your Request. It is also more secure to pass your Expressions this way, as they are less visible to prying eyes. You may want to consider encoding / encrypting the Expressions you pass from your client to your service for added security.
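To illustrate the transport point only (this is not tied to the .NET client above; the endpoint URL and the payload wrapper are made up for the sketch), any HTTP client can carry the serialised Expression in a POST body. For example, from a Python script:

import requests

# The serialised Remote.Linq expression (a JSON string) travels in the
# POST body rather than the querystring. Endpoint URL is hypothetical.
serialised = "..."  # output of SerialiseRemoteExpression()
response = requests.post(
    "https://example.com/api/permissions/dynamic",
    json={"payload": serialised},
)
print(response.json())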
I wouldn’t use this pattern for every query. I would still create a single GET query for each Request for the majority of the queries I implement. However, when you have several queries all acting on the same data and returning the same data structures (models), then this pattern allows you to simply and easily implement those as dynamic queries. Like all patterns, it can solve a very specific problem if used as intended and its usage is clearly understood by the developer. | https://medium.com/swlh/passing-dynamic-queries-between-client-and-backend-server-48b806afdfcf | ['Dominic Burford'] | 2020-07-15 08:04:46.638000+00:00 | ['Design Patterns', 'Software Architecture', 'Software Development', 'Dotnet', 'Software'] |
How to organize a successful hackathon for your startup in 7 steps | How to organize a successful hackathon for your startup in 7 steps
As a startup, Streamroot organized a hackathon for five consecutive years, and we were eager to keep this tradition alive following our acquisition by CenturyLink last September. After another successful edition in February 2020, we’d like to share with you our toolbox and takeaways on how to organize a successful hackathon.
Step 1: Nominate an organizational committee
Organizing a hackathon requires focus and energy. To start, select a team of two or three people to become your organizational committee. We try to choose a new team every year to have new ideas and allow more members of the team to learn from the experience.
Step 2: Determine the budget and plan the logistics
Before starting anything, know how much you are ready to invest in the event. The main questions you’ll want to answer are:
1. Duration
A hackathon is a competition, and as such, it should be restricted in time. There are different options depending on how much time you are able to allocate outside of your day-to-day roadmap, and how intense you want the event to be.
24 hours: This very intense timeframe has little impact on day-to-day operations. However, it can be too short to build a meaningful proof-of-concept without teams pulling all-nighters; remember you will need at least two hours to bootstrap the project and two hours dedicated to the presentations.
48 hours: Our sweet spot. 48 hours is short enough to not impact your roadmap or operations and to keep a high intensity but also long enough to build really awesome projects.
Weekend: This can be controversial (and could also potentially violate local labor laws). We tried several hackathons over the weekend or have let teams finish up their projects over the weekend by doing the presentations the next Monday morning. The advantage is that you give teams the opportunity to go the extra mile if they are really motivated. In the end, we stopped doing weekend hackathons because it was too advantageous to teams who worked over the weekend, and there was no clear end to the event (and we really enjoyed organizing our post-hackathon celebrations).
2. Location
You have many options that all have pros and cons. We tried pretty much everything:
In a team member’s countryside house: It’s free and usually provides a nice change of scenery from your day-to-day work. However, for that you need to have a founder or an employee who can host the whole team (almost impossible when you grow past 15 people!), and more importantly, you rarely have a good IT & network connection at a vacation home. It’s also more difficult to focus when you’re looking out onto the beach…
Renting a conference room outside of your workplace: This again provides a change of context that helps the teams to go all-in and fully focus on their work without day-to-day distractions. At the same time, it can be expensive, and again it’s not easy to find a place with good IT & network connectivity. We organized a hackathon during our 2017 offsite in Portugal in which we ended up in a small hotel conference room with slow internet and poor lighting, which complicated the experience for everyone.
At the office: This is our preferred option now. It’s free, and you can use all the connectivity and additional screens to boost productivity. It doesn’t really break the routine, but you can easily overcome this by creating new team spaces for the duration of the hackathon by repurposing your meeting rooms or just rearranging your desks.
Beach Hackathon 2016
Underground Hotel Hackathon 2017
Office Hackathon 2020
3. Budget
You have a lot of flexibility here; everything depends on how much you can invest in the event. We started with a €0 budget, and our latest hackathon was around €2,000.
Meals: You can go from paying nothing to picking up the bill for all meals and snacks day and night. We usually land somewhere in between, organizing a big lunch all together every day to encourage sharing between teams and paying for a pizza for the teams that want to work into the evening.
Prizes: Again the budget can vary a lot here. The first year, we cobbled together a few nominal prizes and a recycled drone won at another hackathon… This year had an overall prize pool of €1,000, and the grand prize was a ULM flight for all members of the winning team. Side note: it’s always nice to offer real cups and medals; you can get customized trophies for around €20 online.
Space rental: As mentioned above, if you organize the event outside your office, venue rental takes up a big chunk of your budget.
Extra: Do you want to organize an after party? A healthy kick-off breakfast? Adapt your hackathon to your startup!
You can organize an awesome event with almost no cash if you are creative enough. Our first two hackathons were all personally funded and didn’t cost anything to the company. That said, it’s always good to know what you can spend in advance to better adapt everything to your budget.
A typical French lunch
Our 2020 prizes
4. Participants
Should this be a mandatory event?
If you make the event mandatory for the entire team, ensure that everyone can focus on their projects during the whole hackathon. This can be challenging or even impossible for sales and support roles. On top of that, you may need to convince the non-developer teams that time spent on the hackathon is worthwhile for them; for sales reps, more worthwhile than time spent on leads. This requires much more vision.
We chose to make the event mandatory for everyone, with a few exceptions for customer support and sales representatives who needed to stay in touch with customers. We clearly differentiated, however, between participants who were “all-in” and those who would occasionally help. We didn’t count the occasional helpers as full team members, to make sure we had teams that were 100% focused on their projects.
Should it be just dev & product teams, or sales & marketing too?
If you want to include non dev teams, make sure everyone’s skills can be used fully and that the project scopes accommodate this. We always make sure to have subjects that are more sales, marketing and HR oriented, and balance team composition to have all the right skills in each group.
How do you include your remote teams?
We have to admit we weren’t at our best in our last edition. We usually try to bring our remote team members on site for the event, but that wasn’t possible. The difficulty is not in having individuals working remotely but in having disparity; for instance, if four team members are on site and one remote, the latter can easily feel left out and miss important conversations.
There are many tools that can help integrate remote teams (Discord, virtual whiteboards, etc.), and having an always-open video chat helps a lot!
Step 3: Define the rules and structure of your Hackathon
This is the most important part of the organization, as it will help guide your creative minds. Do you want to select a specific theme or topic for all the projects, or let everyone work on whatever they want? On what basis will you select the winners?
This year, for instance, we wanted to focus on projects that either improve our lives at work or represent innovations for our product. Here are the rules that we had for our 2020 projects and the criteria for project selection:
Step 4: Set up a Call for Projects
Once the rules are clearly defined, employees can submit ideas. On our side, we ask that each proposal be detailed enough to include the following:
Short description of the project
How can it be useful to the company? (this was criteria #1)
How do you plan to do it?
Techs or skills required in the team
We then give everyone a few weeks to brainstorm and mature their ideas until the end of the project submission deadline. This time period also gives those submitting projects the opportunity to campaign for their ideas.
Step 5: Organize a project presentation and voting
Two weeks after the call for projects, we organize a presentation session where each person who submits a project explains and defends his or her idea in less than three minutes. At the end of this session, we let everyone vote for the project they would like to work on.
As in every selection process, the perfect matching between projects and people is almost impossible. You will always have some team members who are not 100% satisfied with where they end up.
Our process was built upon several iterations. Everyone votes for their top three projects in order of preference; the committee then makes a final project selection taking into consideration:
The size of the team: We banned solo teams to encourage teamwork and try to have teams with 5 or fewer people to keep them lean.
Skills required: We aim to balance teams so that at least one person masters each required skill/language. We encourage learning new skills, but it’s very hard to get concrete results in 48 hours if you’re trying to build an app with a team in which no one has ever done iOS coding…
Participant preferences: We try to fit as many people as possible into their number 1 or 2 project choice.
This process is far from perfect, but it fits with the goals we set up for the hackathon: teamwork and building useful projects that will help the company to work better.
Inevitably not everyone will have their first choice, so you can always do a few team iterations to make sure the participants are happy before starting.
Step 6: Launch the hackathon
Now that the projects are ready and teams are done, it’s time to start coding! After a quick kick-off announcing the rules and the contest criteria, the teams set off on their project marathon.
We won’t go through this phase in detail, but here are some takeaways we’ve learned over the years:
Every team should have their own team space to work. Buy extra whiteboards and markers if you don’t have enough!
If you can, take care of the meals. During intense hacking phases, participants are happy not to worry about what to eat. Also, having lunches and dinners in a common space is an excellent way for the teams to connect and share with each other.
Ask for short and dynamic presentations. After 48 hours of work, no one wants to spend 3 hours listening to slides! Three minutes of context and two minutes of demo worked perfectly for us.
Make it feel like a real competition by enforcing the timing and the rules strictly. No late submissions or presentations that take forever, as they cause frustration and dampen the energy.
Have your hackathon committee deliberate and announce the winners right after the presentations to keep the momentum going.
Celebrate the winners and the losers! Post-hackathon parties always left unforgettable memories for our teams :)
Post-hackathon party 2020
Step 7: Follow up and integrate the best projects into your roadmap!
You will see that your teams are capable of building incredible things in just 48 hours. But of course, even if projects work, they won’t be production-ready from day one. It’s very important to keep hackathon projects from rotting on a shelf by quickly adding the most promising ones into your roadmap and company life.
Almost every year, more than half of our projects are later transformed into new features or new products; some lead to significant technical leapfrogging that would not have been possible without breaking from our routine and taking the time to work intensively on the subject.
To sum it up
We hope this post will motivate other teams to organise internal hackathons. They are an incredible tool for many reasons. They can boost team morale and give people room to express their creativity and ability to innovate. They can allow teams to learn new things and work with new people. And finally, they can jumpstart innovation within your company, producing surprising and insightful projects you could never have imagined in your day-to-day routine.
PS: Our teams are always growing, so if you would like to know more about Streamroot and CenturyLink, check out our careers page, and don’t hesitate to contact me directly at nikolay.rodionov@centurylink.com.
_________
This blog is provided for informational purposes only and may require additional research and substantiation by the end user. In addition, the information is provided “as is” without any warranty or condition of any kind, either express or implied. Use of this information is at the end user’s own risk. CenturyLink does not warrant that the information will meet the end user’s requirements or that the implementation or usage of this information will result in the desired outcome of the end user. | https://medium.com/streamroot-developers-blog/how-to-organize-a-successful-hackathon-for-your-startup-in-7-steps-4eadefe37ce6 | ['Nikolay Rodionov'] | 2020-06-18 09:29:35.509000+00:00 | ['Hackathons', 'Team Building', 'Teamwork', 'Software Development', 'Startup'] |
3 Common Git Scenarios and How to Deal With Them | Overwrite Git Commit History
I have had to overwrite my remote branch commit history a few times in the past. The main reason I had to do this was because of unused or bad commits I accidentally or unknowingly had made, only to realise after that those commits weren’t needed.
This can also happen when the requirements change, which I can only assume happens a lot anywhere, anytime. As developers, we like to keep things tidy and clean, so we tend to remove what is not necessary (at least I do that).
Without any further ado, let’s see how we can clean up our commit history.
To start with, let’s pretend that we have only one commit (the latest one) we want to erase from the history.
Reset tree to the commit prior to latest
We can rewind history to the commit prior to the latest commit that is currently in local and remote:
$ git reset HEAD^
Note that this is a non-destructive reset (git's default --mixed mode, often loosely called a soft reset), which means the changes in the latest commit we want to remove from the history are preserved in your working tree. This is useful in case you want to use some of the changes.
If you do not want to use any of the changes from the latest commit at all, then you can do what is called a hard reset. Please be warned that a hard reset does not preserve the changes from the latest commit. In other words, they’re gone for good.
$ git reset --hard HEAD^
Remove the latest commit in remote
What we’ve done so far is just reset or remove the latest commit locally. If the latest commit was already pushed remotely, we want to remove that as well:
$ git push -f
The -f option is shorthand for git push --force , which overwrites the remote branch with your local one, discarding any remote commits that are not in your local history.
In other situations, we might have multiple commits we want to remove from the history. The steps are quite similar to removing just the latest commit. Let’s go through them.
Reset tree to a specific commit
Git logs all commits that have been made and each of them has a unique commit ID. To get the list of commits, we can do this:
$ git log
The output looks something like this:
commit e1cc9ff850d36adc59a50c30685fb1d9414fd9e4 (HEAD -> master, origin/master)
Author: billydh <someemail@gmail.com>
Date: Mon Mar 23 20:58:10 2020 +1100

use then instead of flatMap when commiting

commit 8d4edb1b952cca32bebfefacb90307250a6c9891
Author: billydh <someemail@gmail.com>
Date: Mon Mar 2 21:05:00 2020 +1100

update readme

commit 80985dd486b6af62a649b89626891ac571258146
Author: billydh <someemail@gmail.com>
Date: Mon Mar 2 21:01:06 2020 +1100

restructure project
Let’s say we want to remove the last two commits. This is how we can do that:
$ git reset 80985dd486b6af62a649b89626891ac571258146
A reminder here is that git reset by default keeps your changes, as described earlier. If you want to delete them altogether, you can always add the --hard option to the git reset command.
Here’s another way to remove the last two commits or n commits:
$ git reset HEAD~2 # replace 2 with `n` last commits you want to remove
Remove the latest commit in remote
Finally, we also need to do the git push force to overwrite history in the remote branch:
$ git push -f
That’s all you’ve got to do. | https://medium.com/better-programming/3-common-git-scenarios-and-how-to-deal-with-them-ee83c1c1b31e | [] | 2020-11-23 16:09:23.270000+00:00 | ['Git', 'Software Development', 'Programming', 'Github', 'Software Engineering'] |
Pair Programming Interviews | Pair Programming Interviews
An Intern’s Interview Experience
Written by Parth Upadhyay.
Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog
About Me
My name is Parth Upadhyay and I’m a rising senior at the University of Texas at Austin, majoring in Computer Science. This summer, I interned at Square on the Register team.
Interviewing At Square
When I first started exploring internship opportunities at Square, I didn’t know a lot about the engineering culture at Square — that is until I came in for pair programming interviews. Previously I had only experienced a more standard interview process. Someone would sit across the table from me and go through the motions of grilling me on my resume and asking me pre-baked data structures questions whose answers had become rote review. These interviews felt like tests, and more than that, they felt orthogonal to the work that I would actually end up doing.
Square’s interview, however, was really different. The first thing we did during the interview was spend time setting up a development environment on the pairing machines. My interviewer asked me to take my time and set up the machine to my liking.
I was confused. I thought I was going to sit across the table from someone and return rehearsed, stock answers to their standard interview questions. Instead I sat next to someone on a real computer and wrote real code that ran. On top of that they actually wanted me to spend time setting up my own development environment.
Sensing my confusion towards this foreign interview style, he explained that he wanted me to be comfortable coding on the machine so we could actually focus on the problem we would solve, and not have to worry about the cruft surrounding it. He was completely right! Coding in an unfamiliar environment is kind of like running a race with your shoes untied; you’re spending so much time tripping over yourself that it’s hard to even show off what you’re capable of.
After setting up the environment, I did three pairing interviews with three different Square engineers. They would usually start off by providing some of the boilerplate code (sometimes even writing tests), and then we would work through the problem together.
These interviews stood out because I was coding just as I would in the real world. I got to sit at a computer and just program: working through the problem, bouncing ideas off of the engineer interviewing me and asking him questions, googling syntax and APIs that I had forgotten, and actually running, testing, and debugging the code until it worked. I didn't even have time to be stressed during the interview process because I was having fun doing what I loved: coding.
This unique interview experience gave me insight into the engineering culture at Square, where collaboration is core. Working at Square has been everything I thought it might be based on my interview experience, and more. It's been an incredible summer working side-by-side with people who are pushing the boundaries, both in tech and in commerce, and I've learned so much from my team, not only about programming but about workflow, time management, product design, everything. And I got my first taste of all of it throughout the interview process. | https://medium.com/square-corner-blog/pair-programming-interviews-7a34168e43eb | ['Square Engineering'] | 2019-04-18 21:52:39.726000+00:00 | ['Engineering', 'Interview', 'Pair Programming'] |
Power Trip’s “Executioners Tax (Live)” | Power Trip’s “Executioners Tax (Live)”
Power Trip’s crushing live track nominated for Best Metal Performance Grammy
Photo Courtesy of Angela Owens
The first time I ever saw Power Trip was on March 17th, 2017 at Once Ballroom in Somerville (R.I.P.). It was the beginning of their nationwide tour on the heels of their most recent release, Nightmare Logic.
We all knew Power Trip was a band that wielded an enormous amount of energy and, for lack of a better word, power in both their live shows and their records. Going to see them on this tour felt like nothing I'd ever experienced at a hardcore show or a metal show. It felt like you were standing at the foot of a volcano, feeling the tremors as it came to erupt. When it came time for them to hit the stage, my mind drifted back to listening to that record for the first time the week before… and I remember a slight terror overcoming me. That record left me not knowing what was going to happen next at this show.
The speakers began playing the rumble, the sounds that remind you of a distant nuclear explosion before it wipes you from this earth. I now felt alone in that sold-out room of about 300 people; it was as though I were being led to my own beheading, and then out came my executioners.
Power Trip playing live sounded just as strong and mountainous as they did in their recordings, they were insanely tight and locked in with each other. Uncompromising in each performance the crowd was always overcome with excitement, their music igniting that anger, energy and spontaneity that comes from these shows.
Riley Gale wielded the microphone as though it were a weapon, chopping heads off each time he approached to scream into it. Being at the front gave me the chance to behold him as the band blasted through the opening song “Soul Sacrifice.” Seeing this track performed live dealt a devastating blow to any non-believers of Power Trip’s skill, if they weren’t fans before they certainly were now.
Their second song followed the albums order properly: “Executioners Tax.” The track was one of my personal favorites off their new album and hearing it live cemented its legacy in my mind forever. Guitarist Blake Ibanez leading the charge with the chugging intro riffs, the rhythm section of Nick Stewart, Chris Whetzel and Chris Ulsh hyping everyone in the crowd up. Finally the verses, which we’d already all become familiar with began as Riley grasped his Excalibur, nay, his battle-axe to sing into: “The Executioners here and he’s ready to make you pay.”
Suddenly, the microphone was in my face as Riley scanned the crowd with his face contorted with determination as though leading a cavalry charge. “SWING OF THE AXE!” I screamed into it without thinking, without rehearsal, my only prompt was the mic in my face, everything else was just muscle memory. Riley reclaimed his weapon and continued on. This tradition in hardcore shows of giving the crowd the microphone wordlessly saying “go on kid, you take this one” is one of the foundations of underground music, it reminds us that these bands are no bigger than we are and that maybe someday we’ll give a kid in the crowd the mic and tell them to sing for us as well. Riley passed that tradition over to me that night and it’s a moment that I will never forget.
On August 24th, 2020 at the age of 34 Power Trip broke the news that vocalist Riley Gale had died. 3 years, 5 months and 17 days since I last witnessed them. The only time I ever witnessed them live. In that time they climbed their way to become one of the biggest new metal bands in America all on the strength of Nightmare Logic. Those people who filled Once Ballroom that cold March evening witnessed a new beginning for the seasoned band formed in 2008. Everyone who saw them on that tour that year witnessed a band on the brink of propelling itself to the heights they would later achieve. It was something special and serves as testament to their legacy and Riley’s legacy as a stunning vocalist.
They played that show as they played any other, they crushed every second of it and left us all wanting more by the time the show was over. Unfortunately all good things do come to an end and it hurts that Riley is no longer with us. Since we won’t be getting any more from this lineup we just have the memories of those tracks being played live and the moments we shared with the members of the band and the people in the crowd. Live recordings of this lineup are the documents for the generations to see the tremendous excitement Power Trip filled a room with during this time and many other times before.
Power Trip’s Grammy nomination for the track “Executioners Tax” off their recent live album serves as a tribute to Riley and to the effect Power Trip had on so many people in the scene. They came so far from that small club I saw them play in and the thousands of others I’m certain they played in before; so have many of their fellow headbangers nominated in the Best Metal Performance category. This live track’s Grammy nomination brings the band the acclaim they deserve, acclaim they worked tirelessly to achieve in their years rising up from the Texas hardcore and metal scene and into the hearts of headbangers, moshers and thrashers worldwide. While the Grammys have been repeatedly dismissive towards metal music and have all but neglected hardcore music, the attention a Grammy nom brings is what Power Trip deserves; recognition that would only have meant more had it come while Riley was still with us. For now though, let’s cheer them on and remember the times we shared with this band over the years as they crushed the scene with their incredible music. | https://medium.com/clocked-in-magazine/power-trips-executioners-tax-live-2ae6bee063d3 | ["Ryan O'Connor"] | 2020-12-05 17:51:53.827000+00:00 | ['Magazine', 'Metal', 'Music', 'Music Review', 'Grammys'] |
An Example App Design with Basic Widgets in Flutter | fatihhcan/noteApp
Note App with Flutter. | https://medium.com/hardwareandro/flutterda-temel-widgetlar-ile-bir-uygulama-tasar%C4%B1m%C4%B1-%C3%B6rne%C4%9Fi-d92d72866739 | ['Fatih Can'] | 2020-06-11 20:22:39.316000+00:00 | ['Mobile App Development', 'Software Development', 'Flutter', 'Flutter Türkiye', 'Flutter Widget'] |
Building Pin stats | Challenges
There were two main problems we wanted to solve with Pin stats.
1. Near-real-time insights: Businesses have told us they’d like to see how well their Pins are doing within the first few hours of publishing them. One of the biggest challenges in building Pin stats was processing tens of billions of events in a way that would allow us to cut down the analytics delivery time to two hours, an 18x decrease.
2. Canonicalization: Every time someone saves a Pin to Pinterest, we log it as a separate instance. Before, we only gave businesses analytics for their instance of that Pin. This meant businesses never got the full view of how their content was performing. With Pin stats, now all the different instances of a Pin are aggregated into a canonical stat so businesses can see the full impact of their content on Pinterest.
Implementation
Logging
The first part of the project involved real-time logging of all events on Pins originally owned by businesses and sending them to Apache Kafka — the log transport layer used at Pinterest. This was challenging because we get hundreds of thousands of events of all kinds every second.
In our case, we only wanted to log events pertaining to businesses (i.e. only log impressions on a Pin originally created by a business). Due to our strict requirement of surfacing Pin stats in under two hours, we not only had to log events in real-time but also filter them before logging. We couldn’t afford to filter events offline, because it would take hours to sift all events and extract only those related to businesses.
At the same time, online filtering is extremely expensive because it can involve various network calls and increase the latency of the front-end logging endpoint. We implemented various optimizations and heuristics to minimize the number of network calls necessary. This ensured we only made network calls after we were fairly positive the event belonged to a Pin originally created by a business. This reduced the burden on our front-end logging endpoint by several factors.
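To make the shape of that tradeoff concrete, here is an illustrative sketch (not Pinterest's actual code; the event fields, the cache, and ownership_service are all assumptions) of a filter that orders its checks from cheapest to most expensive:

BUSINESS_EVENT_TYPES = {"impression", "closeup", "save", "click"}  # assumed set

def should_log(event, business_pin_cache, ownership_service):
    """Decide whether an event belongs to a business-owned Pin.

    Cheap, in-process checks run first; the expensive network lookup
    runs last and only when the cheap signals pass.
    """
    # 1. Cheap: only a subset of event types matters for Pin stats.
    if event.type not in BUSINESS_EVENT_TYPES:
        return False
    # 2. Cheap: an in-memory cache of previously resolved Pin IDs.
    cached = business_pin_cache.get(event.pin_id)
    if cached is not None:
        return cached
    # 3. Expensive: the network call, made only when we are fairly
    #    positive the Pin could be business-owned.
    is_business = ownership_service.is_business_pin(event.pin_id)
    business_pin_cache[event.pin_id] = is_business
    return is_business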
Processing
The high volume of events meant we also needed to process them in an extremely efficient way. That’s why we segment the new Pin stats by three different time range aggregations — hourly (sliding 24 hour window), last seven days and last 30 days.
For the hourly segment, we needed to process the data separately in order to surface it to businesses as soon as possible. We achieved this by having two different pipelines emanating from our Kafka topic, one handling hourly data and the other handling daily data. At the same time, we ensure both pipelines are reliable and consistent with each other through various rules to avoid data inconsistencies. The former is a data ingestion pipeline that creates hourly tables of logged events in the past hour (approximately four billion events/hour), which we then process and aggregate using efficient MapReduce jobs. This means we’re able to persist data to our storage every hour and have data workflows that run by the hour to aggregate analytics on a very granular time-level. The latter pipeline generates a daily table (approximately 100 billion events/day) that’s processed and verified more thoroughly due to the longer SLA.
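Conceptually, each hourly job reduces that hour's raw events to per-Pin counts. A toy version of the aggregation step might look like this (the real jobs are MapReduce over billions of rows, and canonical_id is a hypothetical helper that maps every saved copy of a Pin back to its original):

from collections import Counter

def aggregate_hourly(events):
    """Collapse one hour of raw events into (canonical Pin, event type) counts."""
    counts = Counter()
    for event in events:
        # Canonicalization: roll every saved instance of a Pin up to
        # a single canonical stat for the original Pin.
        counts[(canonical_id(event.pin_id), event.type)] += 1
    return counts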
Storage
After logging, processing, verifying and aggregating tens of billions of events in a short span of time, we needed a low-latency storage solution capable of handling our extremely large data sets. We decided to use Terrapin — our in-house low-latency serving system for handling large data sets. It met our requirements of being elastic, fault tolerant and able to ingest data directly from Amazon S3.
Lessons
During the process, we learned many invaluable lessons. One of the main challenges was to build a data pipeline that can support a real-time stream of hundreds of thousands of events every second in a way that’s reliable and scalable as Pinterest grows. In particular, it was difficult to have our divergent pipelines work in a way that keeps them both reliable and consistent with each other.
Another big challenge was filtering the high volume of events each second so we don’t populate our pipelines with data we’ll never process. This was complicated, because any sort of filtering usually requires network calls that ultimately slow down logging. To solve for this, we use various signals beforehand to ensure a network call is necessary.
Finally, we learned a lot while choosing the approach: whether to spend months building a truly real-time pipeline or to create a system that can serve data in under two hours. We prototyped a real-time analytics service to see if it was possible given the current infrastructure, and ultimately decided on a near-real-time system in order to ship the experience more quickly.
Next steps
Our next steps will be to build a truly real-time system that surfaces Pin stats within seconds of a Pin being uploaded. We also hope to provide additional metrics so that businesses can create better and more actionable Pins.
Acknowledgements: This project would not have been possible without the help and support of engineers across different teams at Pinterest. In particular, I would like to extend special thanks to Ryan Shih, Andrew Chun, Derek Tia, David West, Daniel Mejia, David Temple, Gordon Chen, Jian Fang, Jon Parise, Rajesh Bhatia, Sam Meder, Shawn Nguyen, Shirley Gaw, Tamara Louie, Tian Li, Tiffany Black, Weiran Liu, and Yining Wang. | https://medium.com/pinterest-engineering/building-pin-stats-25ec8460e924 | ['Pinterest Engineering'] | 2018-05-08 22:50:15.320000+00:00 | ['Infrastructure', 'Data', 'Engineering', 'Analytics', 'Apache Kafka'] |
Code review chopped up. Many approaches can be taken during the… | Key code reviewers
Knowing the architecture and the specifics of the project is the responsibility of the most involved and experienced engineers. And they are the key code reviewers. They can easily find low cohesion, violations of SOLID rules, layer nomenclature leaks, and other possible mistakes that may come back to bite us in the future. They can propose splitting, extracting, reusing, or moving parts of the code for easier maintenance and readability, along with all the other benefits that brings. Foreseeing the consequences of a proposed solution and considering possible code misplacement or duplication is what they do best.
Those are the things that should be at the top of our minds when reviewing the code.
For sure, architecture violations bring the worst consequences. Worse than a couple of arguments in one method or too many lines in a class.
Three levels of code review
Considering all of the rules and approaches to code review mentioned above brought to my mind the idea of splitting code review into a hierarchy of levels based on importance and effect.
Architecture
First and most important is architecture-focused code review. Here we should consider project specifics and the agreements we made before. This level of code review should be done mainly by the most experienced programmers who understand the architecture of the project. Juniors can try this level of code review as well, and it can be a good exercise. However, they don't have to stretch themselves too far if they don't feel ready for it.
At this stage, it’s very important to recognize possible nomenclature leaks and violations. There may be a rule that is clear to you because you know the project, but not everyone is aware of it, or it may be invisible to them. This is a good point to start a discussion and figure out the author’s motivation.
Architects and engineers doing architectural-level code review should also keep in mind the needs of scaling and the reusability of a proposed pull request. Knowing this can fundamentally change the implementation.
Application security is also one of the key factors to focus on here.
There are other architectural things we can consider when doing a code review. I think you already know how valuable it can be to find an issue at this level.
Code clearance
In the second phase of code review, we can dive deeper into the code itself and be more detail-focused. It can be called a code clearance review.
Now we can debate cohesion levels and class composition. It's beneficial to discuss all of the doubts and confusions to better understand the thinking process behind the code. We should always ask first before blaming someone. It may be something so clever that we simply couldn't understand it.
Speaking of “clean code” (claps to Uncle Bob Martin and his “Clean Code” book) we can’t forget about SOLID rules and testing. If any of these rules are violated, we should quickly figure out why. There could be a reason why an engineer decided not to follow the rule, but it could be a mistake as well. Ah, good catch!
One of the hardest parts of programming is naming. It's difficult to name variables, methods, functions, classes, and data structures in a proper way. Some names may be obvious to you and totally unacceptable to others. Even though we should always try to find the best names, second-level code review is the place to start discussing them.
Polishing on the surface
Third and last is the polishing-on-the-surface code review. Here we can point out things that can usually be caught automatically by linters or compilers, such as code formatting, missing semicolons, and other trivial issues.
It’s also time for small sorting/grouping updates and for questioning the data structures used, whether it’s a List<> or a Set<> and so on. | https://medium.com/netvise-software/software-architecture-and-code-review-882d779decf | ['Patryk Studniak'] | 2020-05-01 09:26:05.583000+00:00 | ['Software Architecture', 'Software Development', 'Code Review', 'Software Engineering', 'Software'] |
The Difference Between Poetry and Prose | Poetry is written in lines, and sometimes involves rhyme, rhythm, and other musical devices. It focuses on the euphony, the sound of the words, as much as it focuses on the literal meaning of the words. Liberties can be taken with pretty much all of what we consider the rules of language and clear communication.
What is being communicated is not as important as the way it is being communicated.
Most people have a general idea of what poetry looks like. Even from a young age, kids are taught nursery rhymes, are taught to put together acrostics and “haiku”. We have a visual idea of what poetry looks like on the page, what it sounds like when it’s read aloud.
But prose is so much less specific. Is it just everything else?
Pretty much.
Because I am a word nerd, I love looking at word etymologies. Words don’t pop up in a vacuum, not even newfangled words like yeet, which can be tenuously traced back to Proto-Indo-European if you’re feeling salty enough to scold a Boomer who gripes about “kids these days”.
And the etymology for prose is particularly illuminating: basically, it's a truncation of the Latin phrase prosa oratio, meaning straightforward or direct speech.
That’s a good way to think of it. Prose is going to be straightforward. It’s more interested in succinctly providing information, telling a story, explaining something, than it is in the artistry of the language used to do so.
Of course, all of this exists on a sliding scale. If you’re writing a novel, you’re going to put a varying amount of focus on making the language itself beautiful — even if it never quite crosses the line into being poetic. On the other hand, if you’re drafting a business contract, you’re focusing entirely on clarity and literal meaning. The artistry of the language is the least of the considerations.
Somewhere on this spectrum, we draw a line. Everything to one side falls under the umbrella of poetry, on the other, prose.
A rough illustration, courtesy of the author.
There is quite a lot of room for flexibility and artistry in between the two terms. Yes, even for infusing prose into poetry, which is a whole sub-genre of poetics worth checking out.
Your novel shouldn’t be pedantic and dry. If it reads like a legal arrangement, there’s something wrong going on. It needs an infusion of poetic devices, of figurative language.
Likewise, your academic research shouldn’t read like a bodice-ripper. Yes, even if you’re talking about sex. Those things are pretty much designed to knock you out — and to provide valuable information, without mincing words or getting too artsy about it. | https://medium.com/ninja-writers/the-difference-between-poetry-and-prose-83cabd9a8270 | ['Zach J. Payne'] | 2020-08-12 19:51:27.273000+00:00 | ['Poetry', 'Creativity', 'Learning', 'Art', 'Prose'] |
JSON Web Tokens With Python APIs. Part 1. Creating the Token | JSON Web Tokens With Python APIs
Part 1. Creating the Token
Photo by Ilya Pavlov on Unsplash
When designing and building an Application Programming Interface (API), security is something the developer must keep in mind. Without security, a malicious user could potentially gain access to sensitive data. That's where JWTs or session cookies come into play. Today, we will be using JSON Web Tokens. Essentially, these are compact JSON objects that securely transmit data. For this tutorial, we will build an API that authenticates a user and creates a token.
Let’s Get Building
Before we can get started writing our Python code, we need to figure out what the database will look like. Below is some SQL code to help us get started.
The first table we create is the Users table. It holds basic user information as well as the hashed password and the salt value used for the password. Next, we create the Claims table. This table will hold a list of applications and their corresponding Guids. If you look at the JWT documentation, these claims are known as private claims within the payload part of the token. The final table we will create is the UserClaims table that ties claims to a user.
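Concretely, the schema might look like this, sketched here as SQLAlchemy models rather than raw SQL since the API will use SQLAlchemy anyway (exact column names and types are assumptions based on the description above):

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Users(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    first_name = db.Column(db.String(100))
    last_name = db.Column(db.String(100))
    email = db.Column(db.String(255), unique=True)
    password_hash = db.Column(db.String(64))  # hex-encoded SHA-256
    salt = db.Column(db.LargeBinary(32))      # the salt used for this password

class Claims(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    application = db.Column(db.String(100))   # application name
    guid = db.Column(db.String(36))           # the application's Guid

class UserClaims(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey("users.id"))
    claim_id = db.Column(db.Integer, db.ForeignKey("claims.id"))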
With our authentication tables created, it’s time to write some Python code. The API will be written using Flask and SQLAlchemy. I’m not going to go into detail about setting these up in your environment, but here are some resources to help get started:
For this API, we will have two different endpoints. The first will be a POST request that adds a user to the database. The second will be a GET request that creates a JWT for the user.
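A bare skeleton of those two routes might look like this (route paths mirror the endpoint names used below; the AddUser and UserLogin handlers are filled in next):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/adduser", methods=["POST"])
def add_user_route():
    # Delegate to the AddUser function described below.
    return AddUser(request.get_json())

@app.route("/userlogin", methods=["GET"])
def user_login_route():
    # The article tests this with a JSON body in Postman, even on a GET.
    return UserLogin(request.get_json())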
Now that we have some endpoints, we can add some meat and potatoes to them. Looking at the “adduser” endpoint, we need code that will create a salt value to be used when we hash the user’s password. For those who may not know, a salt is a unique value that we add to the end of the password, which is then run through a hashing algorithm. Once we have the salt, the hashed password and the rest of the user’s basic information, we need to add it to the database.
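Here is a sketch of what that function might look like (the Users model fields and the response shapes are assumptions):

def AddUser(body):
    # Basic error handling: reject requests that are missing data.
    if body is None:
        return jsonify({"error": "No data provided"}), 400
    try:
        salt = DetermineSalt()
        hashed = HashPassword(body["password"], salt)
        user = Users(
            first_name=body["firstName"],
            last_name=body["lastName"],
            email=body["email"],
            password_hash=hashed,
            salt=salt,
        )
        db.session.add(user)
        db.session.commit()
        return jsonify({"message": "User created successfully"}), 201
    except KeyError as missing:
        return jsonify({"error": f"Missing field: {missing}"}), 400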
The function above will achieve most of what we are looking for. It is by no means perfect, but it does accomplish some basic error handling. In the future, it would be nice to have it check for an existing email address and handle that issue accordingly. There are two helper functions left that are needed to complete the “addusers” endpoint.
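A minimal version of those helpers might look like this:

import hashlib
import os

def DetermineSalt():
    # 32 random bytes from the operating system's CSPRNG.
    return os.urandom(32)

def HashPassword(password, salt):
    # SHA-256 over the password concatenated with the salt.
    return hashlib.sha256(password.encode("utf-8") + salt).hexdigest()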
Dissecting the “DetermineSalt” function, it uses the “os” module to generate a value that is 32 random bytes long. Analyzing the “HashPassword” function, we pass the user’s password and the newly created salt to it. Importing the “hashlib” module, we use the sha256 hashing function to hash the combined password and salt.
With the “adduser” endpoint built, we can move on to the endpoint that will login the user (basically create the jwt).
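A sketch of the login function (the query style and the response shapes are assumptions):

def UserLogin(body):
    if body is None:
        return jsonify({"error": "No data provided"}), 400
    # Look the user up by email.
    user = Users.query.filter_by(email=body.get("email")).first()
    if user is None:
        return jsonify({"error": "Invalid email or password"}), 401
    # Hash the provided password with the stored salt and compare.
    if HashPassword(body.get("password", ""), user.salt) != user.password_hash:
        return jsonify({"error": "Invalid email or password"}), 401
    claims = []  # the user's rows from UserClaims; wired up in Part 2
    return jsonify({"token": CreateToken(user, claims)})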
Like the “AddUser” function, the “UserLogin” function performs some basic error handling. In a nutshell, this function checks the database for the user’s email, then hashes the provided password with the stored salt to see if it matches the password hash in the database. There is one more helper function needed for this function to be complete.
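It might look something like this (the secret string and payload fields are placeholders):

import jwt

SECRET = "my-secret-string"  # can be whatever you want; keep it private

def CreateToken(user, claims):
    payload = {
        "email": user.email,
        "claims": claims,  # private claims; populated properly in Part 2
    }
    # HS256 = HMAC with SHA-256
    return jwt.encode(payload, SECRET, algorithm="HS256")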
As you probably guessed, this function does exactly what its name says: it creates the JWT. The function also takes a claims parameter that will be defined in Part 2. In order to create the token, we use a secret string (this can be whatever you want it to be). The token is encoded using HS256, which is HMAC with SHA-256. One final note is that the “jwt” module needs to be imported in order to encode the token. To install this module, use pip.
pip3 install pyjwt
Testing the Endpoints
Now that the code writing is complete, it’s time to see if our hard work has paid off. To test the API endpoints, we will use Postman. To test our “adduser” endpoint, we will add a JSON object to the raw body in Postman. It will look a little something like this.
{
"firstName": "Bob",
"lastName": "Smith",
"email": "mytestmemail@gmail.com",
"password": "testPassword!"
}
When you add your hostname and the “adduser” URL to Postman, you should get results similar to this when you send the POST request.
Testing the “userlogin” endpoint will be a little similar except the JSON object we send will look like this.
{
"email": "mytestmemail@gmail.com",
"password": "testPassword!"
}
After changing the URL in Postman to the “userlogin” endpoint and sending the request, your result will look something like this.
Just for fun, try changing the password for the same email. If everything is working, the result should look like this.
Something really cool worth checking out is heading over to the website jwt.io and plugging in the token we received when we successfully requested one. This site will decode the token and break down its different parts for you. It's also a good way to tell if you have a valid token.
Final Thoughts
To sum things up, this API is by no means complete (nor perfect). There is still quite a bit of work to be done in order to use the token for authentication. A few of those to-dos: we need to implement the token claims portion as well as the authentication of the token. However, all of that will be handled in Part 2. Therefore, until we meet again, cheers! | https://medium.com/python-in-plain-english/json-web-tokens-with-python-apis-part-1-creating-the-token-ce2cfe22b7d6 | ['Mike Wolfe'] | 2020-11-29 09:06:44.547000+00:00 | ['Jwt', 'API', 'Web Development', 'Software Development', 'Python'] |
Life at 102 | My mother turned 102 on Sunday. She arrived at her party dressed in a powder blue wool jacket, her face animated by a smile she could not control. She didn’t even try.
She basked in the glory of the day, aware that she had made it to another milestone, unlike any family member before. With a small group of friends and family gathered around her, she agreed that she was getting “kind of old.”
What’s it like to be 102? As in all of life, much depends on your state of mind. I see my mother daily and can report that she takes pleasure in each new day.
“Having a sense of humor is key to old age,” she announced after eating cake at her party. “And never hold a grudge.”
Her eyesight has almost completely failed, but she sees sunshine when she looks out the window, even when it's raining. It used to bother me when I was younger that my mother rarely acknowledged any emotion that wasn't upbeat. Now I see that her positivity allows her to savor life while it boosts the spirits of her elderly neighbors in the assisted living home. I find peace myself when I follow her lead.
My mother made up her mind decades ago that she would never complain. She doesn’t even grimace when she stands up, despite the back pain from three compacted vertebrae.
“What difference would it make if I talk about my aches and pains? No one listens anyway,” she often says with a chuckle.
Her short-term memory loss can be another blessing. She may feel a shooting pain from time-to-time but when she is settled in a comfortable chair or talking to friends, she doesn’t remember that she ever had any discomfort.
My mother isn’t looking for sympathy. Having someone to talk to is at the top of her list. The social skills they taught her at Simmons College back in the 1930s make it easy for her to break the ice with strangers. They are drawn to this tiny little lady who puts two hands on the top of her walker and bends her head back to look into their eyes. Who could walk away without greeting her in return?
Longevity doesn’t just come from a positive attitude, however. If it did, we wouldn’t lose good folks at a younger age.
Genetics play a role and the ones you get are often the luck of the draw. My mother had three sisters, one died at age 66, another at 72 and the third in her mid-80s. It’s hard to say what genetics factors are at play in mother’s long life. Scientists have only begun to explore the relationship between genes and health. It’s an exciting field but one with miles of work to be done.
Another factor in my mother’s longevity is the amount of physical activity she took on later in life. In her 60s, she spent more time gardening, and continued to tend a huge perennial flower garden into her mid-80s. She never asked for help to lug around soil or dig up roots. Being on her hands and knees seemed to give her a boost.
Taking a walk in the woods at age 93. Photo by the author.
Once she moved into an apartment, walking became her activity of choice. All year round she would travel on foot, up and down hills and around town. Drivers occasionally stopped their cars to offer her a ride. She always declined.
As a result, when my mother fell at age 99 and fractured her pelvis, the physical therapist was impressed by how quickly her strong little legs could rebound. She was hospitalized twice in the past year with pneumonia, and the doctors' prognoses were always grim. She surprised them both times and rebounded, heading home after two days.
My mother can still get around under her own power, with a walker, but I can see her strength starting to wane. She spends more time in her recliner and says she is tired more often than before. Sometimes she mumbles about people who are no longer living, and I wonder if she is peering into the other side.
A friend of mine who is a psychic says she has seen the spirits of my father and my grandmother visiting my mother’s room.
The day after her birthday party we were talking about who attended and what we did. Because of her memory loss she has a hard time getting a picture, so I tried to describe as many details as I could, hoping one may ring a bell.
After a few minutes, she asked the date and, after telling her, I added “only 364 days til your 103rd birthday!”
She grinned, tickled by the challenge of living another year.
Read more about life as we age at Crow’s Feet. | https://medium.com/crows-feet/life-at-102-f3d475b93c18 | ['Nancy Peckenham'] | 2019-10-31 11:26:20.862000+00:00 | ['Family', 'Aging', 'Health', 'Longevity', 'Positive Thinking'] |
The Sad Reason the Housing Market Is Booming During a Recession | The Sad Reason the Housing Market Is Booming During a Recession
The wealth gap is getting bigger
Designed by Dooder / Freepik
During economic recessions, house prices tend to go down. The reason is quite simple: personal income is one of the most significant factors driving home prices. The more money people make, the higher the price they can afford to pay for a house.
During a recession, jobs and income levels decline, which means people have less money and can’t afford to pay as much money for a house.
In 2020 we are experiencing the most severe recession since the Great Depression (yes, it's worse than the financial crisis), but housing prices are at an all-time high.
In this article, I’ll explain how that is possible and why it is bad news.
The 2020 recession is more intense than the financial crisis
We now have data for the first three quarters of real U.S GDP expressed on an annualized basis.
Q1: -3%
Q2: -37.8%
Q3: +30%
Those are some pretty extreme numbers, and when you compound them all together, we find that U.S real GDP is currently down about 3.5% compared to where it was at the end of 2019.
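For anyone who wants to check the arithmetic: annualized rates can't simply be added together; each one has to be converted to a quarterly growth factor and compounded. A quick sketch of the method (note that the rates quoted above are rounded, so plugging them in won't reproduce the official 3.5% figure exactly):

# Annualized quarterly growth rates for 2020 (rounded, as quoted above)
annualized = [-0.03, -0.378, 0.30]

level = 1.0
for rate in annualized:
    # Convert an annualized rate into a single quarter's growth factor
    level *= (1 + rate) ** 0.25

# With these rounded inputs this prints roughly -5.9%; the unrounded
# official series works out to about -3.5%.
print(f"Real GDP vs. end of 2019: {level - 1:.1%}")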
This wild ride is illustrated in the following chart.
Putting this recession into proper context
At first glance, you might be thinking that a 3.5% decline in real GDP is no big deal. Which highlights how useless economic data is without the proper context.
Consider that the peak to trough decline in real U.S GDP during the financial crisis was 3.97%.
So, we can summarize the severity of this recession as follows.
Even after 30% annualized Q3 growth in GDP, and trillions of dollars in monetary and fiscal stimulus, the U.S economy is roughly in the same position it was during the darkest days of the financial crisis.
The 2008–2009 recession was the most severe since the Great Depression, and that is roughly where we are at today.
Yet, Housing prices are at an all-time high
According to data from the St.Louis Federal Reserve, from August 2019 to August 2020, U.S home prices rose by approximately 5.6%.
That is represented in the following graph, where the official start of the recession is marked by the yellow shaded area.
By the same measure, home prices were down nearly 22% from January 2007 to February 2010.
It’s all about income
So, why are home prices rising to all-time highs in 2020 when they fell sharply during the financial crisis?
Remember, at the beginning of this article, I mentioned that personal income is one of the most important factors that drive housing prices.
During the financial crisis, governments responded with monetary stimulus but not nearly enough fiscal stimulus.
During 2020, governments around the world have provided unprecedented levels of fiscal stimulus. Put simply, much of this fiscal stimulus has replaced the income of people who lost their job during the recession.
So, even though this recession is even more severe than the financial crisis, personal incomes have not been impacted in the same way due to the fiscal stimulus.
Which is a good thing.
However, the fiscal stimulus does not tell the whole story as to why house prices are rising. What really impacts the price of housing is the income of people who are in the market to buy a house.
Which is where the sad part of the story comes into play.
Low-income earners have been hit hardest in 2020
As illustrated in a research letter from the San Francisco Federal Reserve, workers in the bottom 25% of earnings made up half of the job losses during this recession. High-income workers, in comparison, were less impacted.
Low-income workers have been hit hardest, but that would have little impact on the demand for houses because low-income earners can’t afford to buy houses; they rent.
It is the middle- to high-income earners who buy houses, and their incomes were less impacted. Couple that with fiscal stimulus and rock-bottom interest rates, and those higher-income earners who were lucky enough to keep their jobs have been bidding up the price of houses.
Wealth inequality is on the rise
Consider the following facts.
Housing prices, the largest component of personal wealth, are at an all-time high.
The stock market is also at an all-time high.
Many high-income earners have been lucky enough to work from home and keep their jobs.
This allows them to invest more in housing and the stock market.
The lowest income earners have been hit the hardest.
Low-income earners are also much less likely to own their home or have any money invested in the stock market. This means they have not benefited from the rebound in these asset prices.
All of this adds up to an increase in the wealth gap.
Future fiscal stimulus needs to be aimed at low-income earners
This story is not unique to the U.S. We see the same thing playing out in Canada and other OECD countries; the types of jobs that have been lost during this recession have disproportionately hit those with the lowest incomes and the least ability to weather a job loss.
This should provide a clear roadmap for governments; the easiest way to spur economic growth is to direct income supports to those with the lowest incomes.
Not only is it the right thing to do (helping those who need it the most), it’s also the fastest way to spur economic growth.
Any fiscal stimulus received by people like me who have been lucky enough to be (financially) unaffected by this recession will only serve to drive up asset prices even further.
If the government sends me a check, I can tell you 100% of that is getting invested in the stock market. That is because I don’t need the money right now.
If the government sends a check to a low-income earner or someone who lost their job, that money will be spent on groceries and other living expenses.
That is what spurs economic growth and can help prevent inequality from growing at the pace it has been. | https://medium.com/makingofamillionaire/the-sad-reason-the-housing-market-is-booming-during-a-recession-294242066c4f | ['Ben Le Fort'] | 2020-12-07 16:40:10.899000+00:00 | ['Politics', 'Society', 'Inequality', 'Money', 'Economics'] |
Stripping Strings in Python | In computer science, the string data type is defined by a sequence of characters. Strings are typically comprised of characters, words, sentences, and/or numerical information. In python, string objects have access to several methods that enable operations such as text stripping, sanitation, searching and much more. Having a good understanding of these methods is fundamental to any data scientist’s natural language processing toolkit. In this post, we will discuss how to use strip methods, available to string objects in python, in order to remove unwanted characters and text.
Let’s get started!
Suppose we wanted to remove unwanted characters, such as whitespace or even corrupted text, from the beginning or end of a string. Let’s define an example string with unwanted whitespace. We will take a quote from the author of the python programming language, Guido van Rossum:
string1 = ' Python is an experiment in how much freedom programmers need.\n'
We can use the ‘strip()’ method to remove the unwanted whitespace and the newline character, ‘\n’. Let’s print before and after applying the ‘strip()’ method:
print(string1)
print(string1.strip())
If we simply want to strip unwanted characters at the beginning of the string, we can use ‘lstrip()’. Let’s take a look at another string from Guido:
string2 = " Too much freedom and nobody can read another's code; too little and expressiveness is endangered.
"
Let’s use ‘lstrip()’ to remove unwanted whitespace on the left:
print(string2)
print(string2.lstrip())
We can also remove the new lines on the right using ‘rstrip()’:
print(string2)
print(string2.lstrip())
print(string2.rstrip())
We see in the last string the three new lines have been removed. We can also use these methods to strip unwanted characters. Consider the following string containing the unwanted ‘#’ and ‘&’ characters:
string3 = "#####Too much freedom and nobody can read another's code; too little and expressiveness is endangered.&&&&"
If we want to remove the ‘#’ characters on the left of the string we can use ‘lstrip()’:
print(string3)
print(string3.lstrip('#'))
We can also remove the ‘&’ character using ‘rstrip()’:
print(string3)
print(string3.lstrip('#'))
print(string3.rstrip('&'))
We can strip both characters using the ‘strip()’ method:
print(string3)
print(string3.lstrip('#'))
print(string3.rstrip('&'))
print(string3.strip('#&'))
It is worth noting that the strip method does not apply to any text in the middle of the string. Consider the following string:
string4 = "&&&&&&&Too much freedom and nobody can read another's code; &&&&&&& too little and expressiveness is endangered.&&&&&&&"
If we apply the ‘strip()’ method passing in the ‘&’ as our argument, it will only remove them on the left and right:
print(string4)
print(string4.strip('&'))
We see that the unwanted ‘&’ remains in the middle of the string. If we want to remove unwanted characters found in the middle of text, we can use the ‘replace()’ method:
print(string4)
print(string4.replace('&', ''))
I’ll stop here but I encourage you to play around with the code yourself.
CONCLUSIONS
To summarize, in this post we discussed how to remove unwanted text and characters from strings in python. We showed how to use ‘lstrip()’ and ‘rstrip()’ to remove unwanted characters on the left and right of strings respectively. We also showed how to remove multiple unwanted characters found on the left or right using ‘strip()’. Finally, we showed how to use the ‘replace()’ method to remove unwanted text found in the middle of strings. I hope you found this post useful/interesting. The code in this post is available on GitHub. Thank you for reading! | https://towardsdatascience.com/stripping-python-strings-6635cbc1b501 | ['Sadrach Pierre'] | 2020-05-08 03:53:26.326000+00:00 | ['Software Development', 'Programming', 'Data Science', 'Python', 'Technology'] |
Do All Hollywood Actors Know Each Other? Breadth-First Search in “Action” | Photo by Vincentas Liskauskas on Unsplash
The task is straightforward: given two actor names, find the connection between them through their movie casts. Such as Emma Watson knows Jennifer Lawrence through Daniel Radcliffe (Harry Potter), who knows James McAvoy (Victor Frankenstein), who finally knows Jennifer Lawrence (Dark Phoenix). GitHub.
$ python degrees.py large
Loading data...
Data loaded.
Name: Emma Watson
Name: Jennifer Lawrence
3 degrees of separation.
1: Emma Watson and Daniel Radcliffe starred in Harry Potter and the Chamber of Secrets
2: Daniel Radcliffe and James McAvoy starred in Victor Frankenstein
3: James McAvoy and Jennifer Lawrence starred in Dark Phoenix
The task is about solving a search problem, such as a maze, in which we are given an initial state (Emma Watson) and a goal state (Jennifer Lawrence). We need to find the shortest path between the initial state and the goal state, which means we ought to take advantage of Breadth-First Search, which is based on a queue-frontier. By using a queue-frontier, we check all the neighboring nodes at one depth before moving on to the next, which guarantees that the first time we reach the goal state, we have found the shortest path.
First, we need to see what data is available to us. The data is provided by IMDb and includes three tables: movies.csv with the list of titles, years, and ids; stars.csv with the list of person_ids and their movie_ids; and people.csv with the list of ids, names, and birth years.
Second, we need to establish what the node will store. In our case, the Node class is an object with state, parent, and action attributes.
class Node():
def __init__(self, state, parent, action):
self.state = state # the current actor's id
self.parent = parent # the node we came from (not just an id)
self.action = action # the current movie id
Storing parent would prove really handy once we reach the final state and need to trace the path we took. See it as a singly linked list where the pointer points at the next node (though, in our case, it points to the previous node).
So when it comes to Breadth-First Search, we should use a queue to store all nodes that we need to visit. Using a queue is essential to BFS, as we need to traverse all nodes on the same level before moving deeper in order to find the shortest path. There is a nice Medium article on BFS.
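For reference, a minimal QueueFrontier could look something like the sketch below. The CS50 distribution code ships its own version, so treat this as an approximation:
class QueueFrontier():
    def __init__(self):
        self.frontier = []  # a plain list used as a FIFO queue

    def add(self, node):
        self.frontier.append(node)

    def contains_state(self, state):
        return any(node.state == state for node in self.frontier)

    def empty(self):
        return len(self.frontier) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        node = self.frontier[0]  # first in, first out
        self.frontier = self.frontier[1:]
        return node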
Having said that, now we can write our shortest_path function, which takes source_id and target_id as arguments. We first initialize the frontier, then add the first node, which stores source_id for the state and None for the parent and action. It is then important to keep an explored set in order to avoid checking the same people and their movies twice.
frontier = QueueFrontier()
frontier.add(Node(state=source_id, parent=None, action=None))
explored = set()
We then traverse until we either find the solution or exhaust all possible nodes from the frontier. If the frontier is empty, we can say there is no connection. If it is not empty, we extract the node from the queue, specifically the one we put there first. If the node’s state is equal to the target_id, it means that we have reached the solution, and we need to trace the path we took. This is where the node’s parent comes in handy (yay!). We traverse back through the parents and save all the nodes that we find in the path. Once we have put all the nodes in the list, we reverse it and return it.
while True:
if frontier.empty(): # No more available nodes to check against target_id
raise Exception("No solution")
node = frontier.remove() # Take out the first element from the queue
if node.state == target_id: # We found the final state!
path = []
while node.parent is not None:
path.append((node.action, node.state))
node = node.parent
# Don't forget to reverse it because we were moving from the final state to the initial
path.reverse()
return path
It is not over yet. Bear with me. If the node’s state is not equal to the target_id , then we add it to the explored set and add all of its neighbors to the frontier.
explored.add((node.action, node.state)) # Record that we've already seen this actor and movie
for action, state in neighbors_for_person(node.state):
if not frontier.contains_state(state) and (action, state) not in explored:
# New node for the next actor and movie
child = Node(state=state, parent=node, action=action)
frontier.add(child) # By adding to the frontier, we put it in the end of the queue
You might be wondering what the neighbors_for_person function does. Well, it simply returns all the actors that starred in the same movies. For example, given Emma Watson as the initial state, we extract the ids of all her movies (all the Harry Potter movies, probably) and then go over those movies and the people who took part in them. We finally return the set of all those people's ids together with the movie ids.
movie_ids = people[person_id]["movies"]
neighbors = set()
for movie_id in movie_ids:
for person_id in movies[movie_id]["stars"]:
neighbors.add((movie_id, person_id))
return neighbors
The algorithm, in general, is certainly quite slow. The shortest_path function has a runtime complexity of around O(n⁴), which is terribly bad, but the point of the project was to apply BFS to real-world data rather than to optimize. I hope you enjoyed reading this post. Please note that this project is part of my coursework towards Harvard’s CS50 Artificial Intelligence; you can find more about it here. Please see my GitHub repository as well.
Another example: | https://medium.com/analytics-vidhya/do-all-hollywood-actors-know-each-other-breadth-first-search-in-action-1b37df515928 | ['Damir Temir'] | 2020-12-22 16:39:32.185000+00:00 | ['Data Science', 'Database', 'Algorithms', 'AI'] |
1904: The Year St. Louis, Missouri Was the Most Important City in the World | 1904: The Year St. Louis, Missouri Was the Most Important City in the World
Imagine the traffic situation if your city hosted the World’s Fair and the Summer Olympics concurrently
The Government Building at the 1904 World’s Fair, David R. Francis/ Public domain via Wikimedia Commons
There’s a great scene in the 1944 movie, Meet Me in St. Louis. The main character, played by the iconic Judy Garland, stands on an ornate bridge gazing around at the lights of the World’s Fair. She turns to her family and gushes about the spectacular scenery, “I can’t believe it. Right here where we live. Right here in St. Louis.”
Looking at St. Louis today, a delightful but not quite world-class city, some may wonder how such an insignificant place came to host such an important event.
Palace of Electricity/ stereoscope card, Public Domain/ Wikimedia Commons
From fur trading to manufacturing might
Founded in 1764 as a trading post for French fur traders, St. Louis grew and became a melting pot of cultures. German beer barons perfected lager-style beer, Irish immigrants erected soaring Catholic churches, and Italians living on “the Hill” created delicious recipes still enjoyed today.
St. Louis reached its peak popularity at the turn of the 20th century. By 1904, trolleys and horse-drawn carriages jostled each other on the roads. Steamboats, though becoming a rarer sight, transported passengers on the mighty Mississippi. Railroads carried goods in and out of the city, allowing industry to take off.
The opulent mansions on Lindell Boulevard also sported the latest inventions. Gas lighting replaced dangerous candles. Alexander Graham Bell’s innovative communication device, the telephone, gave St. Louisans the ability to converse with friends across the nation.
Typical of the time period, the best venue to showcase all of these new ideas would be a grand Expo. But St. Louis had another reason to celebrate. The 100 year anniversary of the Louisiana Purchase was coming up, and they wanted to show the world how far they’d come.
A Forest of a Park
St. Louis overcame their first hurdle to hosting the World’s Fair in 1898 when they successfully beat out Kansas City, their neighbors across the state.
Construction began on the 1,200 acres of what is today known as Forest Park. In the span of six years, builders erected 1,500 buildings and laid out 75 miles of connecting roads and pathways.
The majority of fairgoers would never realize that these palatial structures were a clever fake. Constructed of wooden frames coated with a special moldable plaster, these edifices were never meant to be permanent.
The Grand Basin/ Stereoscope Card/ Public Domain, Wikimedia Commons
Wonders and sights
Many marvelous sights greeted fairgoers on opening day April 30, 1904.
The Grand Basin, pleasure boats cruising its tranquil waters, served as the fair’s centerpiece. Over a dozen buildings flanked this artificial lake, including the Palaces of Electricity, Transportation, and Industry.
Inside these educational buildings, fairgoers were introduced to such innovations as the X-ray machine, the electric streetcar, and a wireless version of the telephone. However, the most popular new invention was undoubtedly the personal automobile. Models powered by steam, gasoline, and even electricity were displayed.
Necessity is the mother of invention, and park designers also developed a new technique to clean the city’s water supply. Most fairgoers probably weren’t even aware of this crucial project.
The crime of the decade
Another lifesaving device was on display at the fair. But this exhibit was marred by greed, ignorance, and inhumanity.
The field of neonatology had made little progress by the turn of the century. Though invented in 1888, incubators were not routinely used because hospitals did not expect premature babies to survive. Instead, preemies toured the country in sideshows. People would pay to gawk at these tiny weaklings struggling to survive in their incubators.
Surprisingly, these sideshow freaks often stood a better chance at survival than their counterparts left at home. The benefits of the incubators, warmth, and a more sterile environment, plus routine care by nurses meant that they often grew up to lead normal lives. Though profit was the motive for keeping these babies alive, the outcome was admirable.
That impressive 85% survival rate changed during the 1904 World’s Fair.
Instead of using the reliable contractors who normally ran these incubator exhibits, fair officials chose a well-connected St. Louisan. This local man, Edward Bayliss, brought in an inexperienced doctor to run the attraction.
The situation soon turned grim. Unsanitary conditions, unsuitable diet, and malfunctioning incubators meant that a heartbreaking 39 of the 43 babies on display died. The public responded by sending letters of condemnation to the fair’s president. Newspaper headlines shouted that the crime of the decade was being committed at the World’s Fair.
In response, a new doctor was brought on to manage the exhibit. Conditions improved, but the damage was done. Having witnessed the devastating scene in St. Louis, hospitals avoided using these lifesaving incubators for many years.
Transferring the Liberty Bell 1905/ Public Domain/Wikimedia Commons
An important guest arrives
The Liberty Bell arrived in St. Louis on June 8, the farthest trip it had ever made.
From the train station, the bell toured the streets in style. It sat atop a specially designed float pulled by 13 horses representing the original states. Upon reaching the fairgrounds, the Liberty Bell was displayed in the Pennsylvania Building.
The Pike/ Stereoscope Card/Public Domain, Wikimedia Commons
Coming down the Pike
Promising both amusement and education, the mile-long Pike was the place to see and be seen.
Here guests would popularize foods like the waffle-style ice cream cone, cotton candy, and hamburgers. Dr. Pepper made its debut at the fair, as did puffed wheat cereal.
Exotic foods were also available for the adventurous to sample. Such delicacies as kumquats and olives became instant hits.
A cultural hub as well, the Pike showcased foreign places most St. Louisans could only dream of. Guests could visit Cairo, tour a Japanese village, and step back in time to Old St. Louis.
The Pike was also the place for the more scandalous exhibits that often appeared after dark. Belly dancing, palm reading, and taking in a show at the moving picture theater were all popular nighttime attractions.
Education or exploitation?
Though the theme of the World’s Fair was education, both technological and cultural, fair organizers couldn’t pass up the opportunity to promote a political agenda.
Imperialism, the concept of manifest destiny, and colonialism were scattered amidst the hubbub of rides and exhibits.
The end of the Spanish-American War saw the United States gain control over several new territories, including Guam, Puerto Rico, and the Philippines. People from these lands were subsequently brought to the fair and displayed in their “natural” surroundings.
Other exhibits included “savages” from Native American tribes, the Ainu people of Japan, Pygmies from South Africa, and Patagonian tribes.
“People were organized in a way we would recognize as racist. In our eyes today it is appalling. For 1904 eyes, it was an opportunity to try to understand other cultures.” said Adam Kloppe, public historian at the Missouri Historical Society.
Though the indigenous people were instructed to behave like savages, this didn’t stop interracial dates from occurring between Filipino men and white “society” ladies. The scandal nearly resulted in a riot.
1904 Olympic Games Poster/ Public domain, via Wikimedia Commons
Chicago’s loss, St. Louis’ gain
With all the excitement surrounding the fair, it seems unimaginable that St. Louis would be willing and able to host another gigantic event.
The city of Chicago originally won the bid to host the 1904 Summer Olympics. However, they lost out due to a technicality. World’s Fair organizers wouldn’t allow another international event to take place at the same time. Hence, the Olympics were held in St. Louis from August 29 until September 3.
Boxing, freestyle wrestling, and the decathlon all debuted at the 1904 Summer Olympics. Notable athletes included gymnast George Eyser who, although hindered by a wooden leg, managed to win six medals. Frank Kugler only won four medals, but they were in three different sports — freestyle wrestling, weightlifting, and the tug of war. This distinction made Kugler the only competitor to win a medal in three different categories at the same Olympic Games.
A cultural disaster
In order to tie the World’s Fair to the Olympics, organizers held Anthropology Days on August 12th and 13th. This event included participants from the indigenous tribes.
Day one featured mostly European-style sports — the shot-put, high jump, and marathon. With little time to learn the rules and even less time to practice, most of the events went poorly.
The organizers had higher hopes for the second day’s more “savage-friendly” contests of archery, tree-climbing, and mud throwing. But the participants performed just as badly, and viewer turnout remained low.
Thankfully, the Olympic Games organizers never repeated another Anthropology Days, but there was still another scandal to emerge.
Race to the finish
The marathon is always a popular event. A simple concept, a footrace that challenges participants both physically and mentally. No special equipment needed, and nearly impossible to cheat. Until Fred Lorz decided to hitch a ride back to the stadium in a car.
He rode for 11 miles, waving to both spectators and runners until the car broke down. Newly refreshed, Lorz ran the rest of the way and passed the finish line first.
Naturally, his exploits were revealed, and his gold medal stripped away.
His response? He only finished as a joke. The organizers were not amused and banned him from competing for a year. However, he came back strong in 1905 and won the Boston Marathon.
Tom Hicks, Marathon Olympic Champion and his supporters at the marathon. St. Louis Olympic Games, 1904/ Public Domain/Wikimedia
Gatorade or strychnine?
The actual winner, Thomas Hicks, also had an interesting time. Ten miles from the finish and at the end of his rope, Hicks wanted nothing more than to lay down and quit.
His trainers had other ideas though. They administered several doses of the poisonous stimulant strychnine mixed with brandy to keep him going.
This concoction gave Hicks the edge he needed to finish, though by the end of the race he was hallucinating and barely able to walk. His trainers supported him as he crossed the finish line and immediately handed him over to medical personnel.
End of an era
On December 2, 1904, the lights in the fair went out for the final time. Shortly after, demolition crews broke apart the Palaces, the Ferris Wheel, and tore up the Pike.
St. Louis would also enter a period of decline, beginning in the 1950s as manufacturing went overseas and the city’s population migrated to the suburbs.
Today, Forest Park holds only a few remnants from the fair.
The magnificent Palace of Fine Art now houses the Saint Louis Art Museum.
Originally built by the Smithsonian Institution to showcase rare species of birds, the Flight Cage was meant to return to Washington DC after the fair. However, St. Louisans rallied and fought to keep the monument. It became the centerpiece for the Saint Louis Zoo and remains one of the most popular exhibits.
All St. Louisans still remember that for one year in 1904, their city was the most important in the world. The World’s Fair and the Summer Olympics happened. And they happened right here. Right here in St. Louis. | https://medium.com/history-of-yesterday/1904-the-year-st-louis-missouri-was-the-most-important-city-in-the-world-1fb3d1792577 | ['Jennifer Mittler-Lee'] | 2020-12-26 12:01:08.951000+00:00 | ['History', 'Culture', 'Life', 'People', 'Ideas'] |
How to Create Your Own BigInt Class in Java — Data Structures and Algorithms Practice | There are a couple ways to create a BigInt class — the most common being using a String or Integer Array. I will be using a String.
Setup
Here I am storing the value as a String, and I have created some constructors, getters, and setters. I give the option to set the value or create the object with either an int or a String.
I need a way to compare the values, so I am implementing the comparable interface and creating a compareTo() method.
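The gist originally embedded here doesn’t survive in this text, so below is a rough sketch of what the setup and compareTo() could look like. Treat the exact field and method names as my assumptions based on the description:
public class BigInt implements Comparable<BigInt> {
    private String value;  // the number stored as a string of digits (no leading zeros assumed)

    public BigInt(String value) { this.value = value; }
    public BigInt(int value) { this.value = String.valueOf(value); }

    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
    public void setValue(int value) { this.value = String.valueOf(value); }

    @Override
    public int compareTo(BigInt other) {
        // a longer digit string is always the larger number
        if (value.length() != other.value.length()) {
            return value.length() - other.value.length();
        }
        return value.compareTo(other.value);  // same length: compare lexicographically
    }
}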
Now we can start working on operations. Each of the following methods work just as they would when operating on numbers by hand.
Addition
For addition, we need to add the numbers digit by digit, right to left, while carrying over for any sum larger than 10. I push each sum to a StringBuilder and then reverse() it. The overflow (or the number being carried over) can only ever be 0 or 1, so I append a “1” to the number if it has an overflow on the last digit added. I also created a static helper method equalLengths() that gives me an array of both the operands as equal length strings. If one string is shorter than another, zeroes are added to the front of the string. I do this to make the character arrays parallel so that every digit lines up with the other in my for loop.
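The original gist is missing here as well; a sketch of plus() and the equalLengths() helper, following the description above:
private static String[] equalLengths(String a, String b) {
    // pad the shorter string with leading zeros so the digits line up
    while (a.length() < b.length()) a = "0" + a;
    while (b.length() < a.length()) b = "0" + b;
    return new String[]{a, b};
}

public void plus(BigInt other) {
    String[] eq = equalLengths(this.value, other.getValue());
    StringBuilder sb = new StringBuilder();
    int overflow = 0;  // the carry, always 0 or 1
    for (int i = eq[0].length() - 1; i >= 0; i--) {
        int sum = (eq[0].charAt(i) - '0') + (eq[1].charAt(i) - '0') + overflow;
        sb.append(sum % 10);
        overflow = sum / 10;
    }
    if (overflow == 1) sb.append('1');  // carry left over on the last digit
    this.value = sb.reverse().toString();
}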
Note that I am not returning the result, but setting the current instance’s value to this result. If you want the result, you need to use the getter.
Multiplication
Multiplication is similar. We need to multiply each digit of the multiplier, starting from the right, by every digit of the multiplicand. On paper, every time I move over a digit, I write the product one step to the left. So for the method implementation, we need to keep track of the current power of 10, because every time we move one digit left, that product should be one power of 10 higher. Here is the method’s flow:
the times method breaks the multiplier down to a char array
the last multiplier digit is sent to multiplyByInt()
Overflow is calculated each time by dividing by 10
The current product is pushed onto the StringBuilder
the digit of the multiplicand being multiplied is dropped
eventually an empty string ends up being sent to the function — then the exit condition will reverse the string and append zeroes(the total of this being the product of the current multiplier digit times the entire multiplicand)
it comes back to the times method, where it uses plus() to add the product to ‘this’.
this repeats for every digit of the multiplier, and the power of 10 increases each time a digit further to the left is sent. This way, every time it comes back to the times() method, it can be added directly to ‘this’ (the same way that we shift each product left one digit when summing).
If I wanted a method that just multiplied a BigInt by an int, it would be a lot simpler, because we could just multiply the entire multiplier by each multiplicand digit. In fact, multiplyByInt() will work just fine if you give it any valid integer. You can do the same by hand in this manner.
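As an illustration, a standalone, iterative version of that single-int multiplication could look like the sketch below. The author’s actual version is recursive and interacts with times(), so this is only an approximation:
private static String multiplyByInt(String number, int multiplier) {
    StringBuilder sb = new StringBuilder();
    int overflow = 0;
    for (int i = number.length() - 1; i >= 0; i--) {
        int product = (number.charAt(i) - '0') * multiplier + overflow;
        sb.append(product % 10);
        overflow = product / 10;  // everything above the ones digit carries over
    }
    while (overflow > 0) {  // flush any remaining carry
        sb.append(overflow % 10);
        overflow /= 10;
    }
    return sb.reverse().toString();
}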
Subtraction
Subtraction is done almost the same as addition. The difference here is that the “overflow” is taken away from numbers if they aren’t large enough to be subtracted from. So I created an overflow variable that is either 0 or -1.
If a digit borrows from another, the overflow is set to -1, and then the next digit has this amount taken away from it. We know this needs to happen if ‘thisInt’ is less than ‘otherInt’. This method can result in some leading zeroes in the final string, so I use a while loop to get rid of them.
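A sketch of what minus() could look like, assuming ‘this’ is greater than or equal to the other operand (negative results are not handled), and reusing the equalLengths() helper from the addition sketch:
public void minus(BigInt other) {
    String[] eq = equalLengths(this.value, other.getValue());
    StringBuilder sb = new StringBuilder();
    int overflow = 0;  // 0 or -1: the borrow
    for (int i = eq[0].length() - 1; i >= 0; i--) {
        int thisInt = (eq[0].charAt(i) - '0') + overflow;
        int otherInt = eq[1].charAt(i) - '0';
        overflow = 0;
        if (thisInt < otherInt) {  // borrow from the next digit
            thisInt += 10;
            overflow = -1;
        }
        sb.append(thisInt - otherInt);
    }
    String result = sb.reverse().toString();
    while (result.length() > 1 && result.charAt(0) == '0') {
        result = result.substring(1);  // drop leading zeros
    }
    this.value = result;
}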
Division
Division is tricky. To keep the blog post simple, I only allowed division by an integer. Division goes left to right. The overflow works just as we would do it on paper.
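A sketch of that long-division step, going left to right as described:
public void divideByInt(int divisor) {
    StringBuilder sb = new StringBuilder();
    long overflow = 0;  // the running remainder, carried left to right
    for (char digit : value.toCharArray()) {
        long current = overflow * 10 + (digit - '0');
        sb.append(current / divisor);
        overflow = current % divisor;
    }
    String result = sb.toString();
    while (result.length() > 1 && result.charAt(0) == '0') {
        result = result.substring(1);  // drop leading zeros from the quotient
    }
    this.value = result;  // integer division: the final remainder is discarded
}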
Factorial
Now we can implement a factorial method using our multiplication method. This is doing the same thing as the original solution, but using our newly made class instead of integers. Note that this is a static method. Give it an integer, and it will return a BigInt with a value of factorial(n).
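Assuming the times() method described above multiplies ‘this’ in place, the factorial sketch is short:
public static BigInt factorial(int n) {
    BigInt result = new BigInt(1);
    for (int i = 2; i <= n; i++) {
        result.times(new BigInt(i));  // times() mutates result in place
    }
    return result;
}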
Static Methods
Instead of having to deal with the objects directly when operating on large numbers, we can create some static methods. This way we can just send in the number strings and get strings back.
I also added a calculate() method. We can send it strings of the form ‘operand operator operand’ and it will return the resulting string.
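A sketch of that calculate() method, again leaning on the methods sketched above (and assuming division uses an int-sized divisor):
public static String calculate(String expression) {
    String[] parts = expression.split(" ");  // e.g. "500 * 20"
    BigInt left = new BigInt(parts[0]);
    switch (parts[1]) {
        case "+": left.plus(new BigInt(parts[2])); break;
        case "-": left.minus(new BigInt(parts[2])); break;
        case "*": left.times(new BigInt(parts[2])); break;
        case "/": left.divideByInt(Integer.parseInt(parts[2])); break;
    }
    return left.getValue();
}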
Testing
Now we can test everything out, using the calculate method:
Which gives the correct output:
And we can also use it to calculate some giant numbers: | https://medium.com/swlh/how-to-create-your-own-bigint-class-in-java-data-structures-and-algorithms-practice-d71e0e4ba91d | ['Steve Pesce'] | 2020-12-13 06:49:57.362000+00:00 | ['Bigint', 'Large Numbers', 'Java', 'Data Structures', 'Algorithms'] |
Creating A Live Dashboard with Bokeh | Recently, I completed a project creating an analytics dashboard for a client. I used Bokeh for the visualizations and wanted to share my experiences.
Overview
I’d like to describe the dashboard I created and then talk about some of the steps it took to create. You can see the finished product below:
In the top left corner we have statistics for three servers with line graphs representing CPU, memory, and disk usage. If any graph has a reading above 75%, it, its title, and its plot turn red like this. When the readings drop, things go back to normal. Moving downward, we have two line graphs plotting users currently connected to our system and how fast our system is processing their messages over time. At the bottom is a live Google Map of users in the field that are transmitting, with colors representing different user states.
On the top right we have various tables showing running processes, versioning, jar info, uptime, and open file statistics (when available). Below that, we have an alerts area displaying faults, overflows, and server errors. The figures change to red when they exceed certain parameters and the drop downs are shown as needed. Finally, we have the ‘firehose’ on the bottom right that’s fed from tails of various logs from two boxes, color coded to keep it straight. It’s not shown in the photo, but the log scrolls like a terminal, albeit fairly quickly (my client said they wanted it ‘for peace of mind’, even though the logs fly by at light speed).
Close Ups
You can see some of the elements close up in the images below:
Full Sized Shot
And here is a shot showing the dashboard updating over time:
Development Strategy
My game plan for the project was to create each module grouping (server stats, long line graphs, GPS, tables, firehose) separately and then fit them together in whatever way made the most visual sense. I felt this approach allowed me to pay closer attention to the details of each module and adjust the overall app layout as needed without much issue. All modules are composed of only two Bokeh elements, HTML divs (including the Google Map) and line graphs. Periodic callbacks update everything, except the Google Map, which hits a local Flask API periodically via JavaScript instead. I did this because at the time of writing there were some issues with Bokeh’s GMapPlot (but that should be resolved in the next release). Error handling was mostly concentrated on handling data from the various Bash commands used to source server stats and dealing with their output, expected or otherwise.
Back End Setup
I tried to stay with the “server app” design similar to those shown here and kept my folder structure as prescribed here to try and keep things simple, with all my Python in a common folder and an HTML file in a “templates” folder. This was done so Bokeh’s Tornado server would find the HTML file and use it as a template for the app. I also used threads and unlocked callbacks for pretty much every element in the app to keep things responsive. The data for all modules is sourced from three daemonized threads, responsible for information from SSH, HTTP, and SQL. They are started by the server lifecycle hooks described here: they initialize when the server is launched, gather data from their respective sources, and make it available to the dashboard. And since the dashboard gets data from a general cache, N users are able to share info cached from a single database, SSH, and HTTP connection. Awesome. This also allows the dashboard to scale nicely, especially with Tornado’s help.
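For anyone curious, here is a rough sketch of the thread-plus-next-tick-callback idiom from the Bokeh documentation; ‘source’ and ‘fetch_stats’ are placeholders for my actual ColumnDataSource and data plumbing, not real names from this project:
import time
from functools import partial
from threading import Thread
from bokeh.plotting import curdoc

doc = curdoc()

def update(new_y):
    # scheduled back onto the main event loop, so touching Bokeh models is safe
    source.stream({'x': [time.time()], 'y': [new_y]})  # 'source' is a placeholder ColumnDataSource

def worker():
    while True:
        value = fetch_stats()  # placeholder for the SSH/HTTP/SQL cache read
        doc.add_next_tick_callback(partial(update, value))
        time.sleep(1)

Thread(target=worker, daemon=True).start()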
Next Steps
I didn’t have enough time to give the dashboard a memory, so when you log in you can’t see anything from previous runs. My client didn’t see this as a deal breaker, having dedicated plasma screens and projectors for this kind of thing (with some instances showing months of cached data). But it’s kind of problematic if anything should happen to those systems, or even the browsers on them.To remedy this, I’d like to save data using locally to a SQlite database and initialize the dashboard with cached data from it to show a week or more of data.
Author: Josh Usry | https://medium.com/bokeh/creating-a-live-dashboard-with-bokeh-7564fc9ee07 | [] | 2020-07-06 22:33:57.118000+00:00 | ['Python', 'Open Source', 'Data Science', 'Bokeh', 'Data Visualization'] |
Top 16 Types of Chart in Data Visualization | Top 16 Types of Chart in Data Visualization
Look at the following dashboard, do you know how many types of chart are there?
From FineReport
In the era of information explosion, more and more data piles up. However, this dense data is unfocused and hard to read. So we need data visualization to help data be easily understood and accepted. By contrast, visualization is more intuitive and meaningful, and it is very important to use the appropriate chart type for your data.
In this post, I will introduce the top 16 types of chart in data visualization, and analyze their application scenarios to help you quickly select the type of chart that shows the characteristics of your data.
NOTE: All the charts in the article are taken from the data visualization tool FineReport, which is free to download for personal use.
1. Column Chart
Column charts use vertical columns to show numerical comparisons between categories, and the number of columns should not be too large (the axis labels may appear incomplete if there are too many columns).
From FineReport
The column chart takes advantage of the height of the column to reflect the difference in the data, and the human eye is sensitive to height differences. The limitation is that it is only suitable for small and medium-sized data sets.
From FineReport
Application Scenario: comparison of classified data
2. Bar Chart
Bar charts are similar to column charts, but the number of bars can be relatively large. Compared with the column chart, the positions of its two axes are changed.
From FineReport
From FineReport
Application Scenario: comparison of data (the category name can be longer because there is more space on the Y axis)
3. Line Chart
A line chart is used to show the change of data over a continuous time interval or time span. It is characterized by a tendency to reflect things as they change over time or ordered categories.
It should be noted that the number of data records of the line graph should be greater than 2, which can be used for trend comparison of large data volume. And it is better not to exceed 5 polylines on the same graph.
From FineReport
From FineReport
Application Scenario: trend of data volume over time, comparison of series trends
4. Area Chart
The area chart is formed on the basis of the line chart. It fills the area between the polyline and the axis in the line chart with color. The filling of the color can better highlight the trend information.
The fill color of an area chart should have a certain transparency. Transparency helps the user observe the overlapping relationships between different series; fills without transparency will cause the series to cover each other.
From FineReport
From FineReport
Application Scenario: series ratio, time trend ratio
5. Pie Chart
Pie charts are widely used in various fields to represent the proportions of different classifications and to compare classifications by arc length.
The pie chart is not suitable for multiple series of data, because as the series increase, each slice becomes smaller, and finally the size distinction is not obvious.
From FineReport
A pie chart can also be made into a multi-layer pie chart, showing the proportion of different categorical data, while also reflecting the hierarchical relationship.
From FineReport
Application Scenario: series ratio, series size comparison (rose diagram)
6. Scatter Plot
The scatter plot shows two variables in the form of points on a rectangular coordinate system. The position of the point is determined by the value of the variable. By observing the distribution of the data points, we can infer the correlation between the variables.
Making a scatter plot requires a lot of data, otherwise the correlation is not obvious.
From FineReport
From FineReport
Application Scenario: correlation analysis, data distribution
7. Bubble Chart
A bubble chart is a multivariate chart that is a variant of the scatter plot. In addition to the values of the variables represented by the X and Y axes, the area of each bubble represents a third value.
We should note that the size of the bubble is limited, and too many bubbles will make the chart difficult to read.
From FineReport
Application Scenario: comparison of classified data, correlation analysis
8. Gauge
A gauge in data visualization is a chart that imitates a physical instrument. The scale represents the metric, the pointer represents the dimension, and the pointer angle represents the value. It can visually represent the progress or actual value of an indicator.
The gauge is suitable for comparison between intervals.
From FineReport
It can also be made into a ring or a tube type, indicating the ratio.
From FineReport
Application Scenario: clock, ratio display
9. Radar Chart
Radar charts are used to compare multiple quantitative variables, for example to see which variables have similar values or whether there are extreme values. They also help show which variables in the data set have higher or lower values. Radar charts are well suited to displaying job performance.
From FineReport
The radar chart also has a stacked column style that can be used for two-way comparison between classification and series, while also representing the proportion.
From FineReport
Application Scenario: dimension analysis, series comparison, series weight analysis
10. Frame Diagram
The frame diagram is a visual means of presenting the hierarchy in the form of a tree structure, which clearly shows the hierarchical relationship.
From FineReport
Application Scenario: hierarchy display, process display
11. Rectangular Tree Diagram
The rectangular tree diagram (commonly known as a treemap) is suitable for presenting data with hierarchical relationships, and it can visually reflect comparisons within the same level. Compared with the traditional tree structure diagram, the rectangular tree diagram makes more efficient use of space and can also show proportions.
Rectangular tree diagrams are suitable for showing the hierarchy with weight relationships. If it is not necessary to reflect the proportion, the frame diagram may be clearer.
From FineReport
Application Scenario: weighted tree data, proportion of tree data
12. Funnel Chart
The funnel chart shows the proportion of each stage and visually reflects the size of each module. It’s suitable for comparing rankings.
From FineReport
At the same time, the funnel chart can also be used for comparison. We arrange multiple funnel charts horizontally and the data contrast is also very clear.
From FineReport
Application Scenario: data ranking, ratio, standard value comparison
13. Word Cloud Chart
The word cloud is a visual representation of text data. It is a cloud-like color graphic composed of vocabulary. It is used to display a large amount of text data and can quickly help users to perceive the most prominent text.
The word cloud chart requires a large amount of data with a relatively high degree of variation between terms; otherwise the effect is not obvious. It is also not suitable for precise analysis.
From FineReport
Application Scenario: keyword search
14. Gantt Chart
The Gantt chart visually shows the timing of tasks, the actual progress, and how that progress compares with the plan, so managers can easily understand the progress of a task (project).
From FineReport
Application Scenario: project progress, state changes over time, project process
15. Map
The map is divided into three types: regional map, point map, and flow map.
(1) Regional Map
A regional map is a map that uses color to represent the distribution of a certain range of values on a map partition.
From FineReport
Application Scenario: comparison and distribution of data
(2) Point Map
A point map is a method of representing the geographical distribution of data by plotting points of the same size on a geographical background.
The distribution of points makes it easy to grasp the overall spread of the data, but it is not suitable when you need to observe a single specific data point.
From FineReport
Application Scenario: distribution of data
But if you replace the points with bubbles, then the point map can not only show the distribution but also roughly compare the size of the data in each region.
From FineReport
(3) Flow Map
The flow map displays the interaction data between the outflow area and the inflow area. It is usually expressed by the line connecting the geometric centers of gravity of the spatial elements. The width or color of the line indicates the flow value.
Flow maps help to illustrate the distribution of geographic migration, and the use of dynamic flow lines reduces visual clutter.
From FineReport
Application Scenario: flow, distribution and comparison of data
16. Heatmap
The heatmap is used to indicate the weight of each point in a geographic area. In addition to a map as the background layer, you can also use other images. Color in a heatmap usually refers to density.
From FineReport
Application Scenario: regional visits, heat distribution, distribution of various things
At Last
All of the above are the 16 frequently used types of chart in data visualization. If you want to get started with data visualization, I suggest you start by learning to make these basic charts and practice with an easy-to-use tool like FineReport.
Some people may think that the basic charts are too simple and primitive, and they tend to use more complicated charts. However, the simpler the chart, the easier it is to help people quickly understand the data. Isn’t that the most important purpose of data visualization? So please don’t underestimate these basic charts. Because users are most familiar with them. They should be considered for priority as long as they are applicable.
You might also be interested in…
How Can Beginners Design Cool Data Visualizations?
A Beginner’s Guide to Business Dashboards
4 Uses of Data Maps in Business Analysis | https://towardsdatascience.com/top-16-types-of-chart-in-data-visualization-196a76b54b62 | ['Lewis Chou'] | 2019-07-23 05:42:45.208000+00:00 | ['Data Visualization', 'Data Analysis', 'Design', 'Data Science', 'Charts'] |
Data Collaboratives as an enabling infrastructure for AI for Good | The “AI for Social Good” conference that recently took place at the Qatar Computing Research Institute examined the potential of Artificial Intelligence (AI) for good. It was widely agreed that the potential is real, and that AI could help jumpstart economic development and support humanitarian causes when used responsibly.
Yet equally, it was evident to all that increasing the adoption of AI faces certain challenges and constraints. In particular, AI (and the associated methods of machine learning, deep learning, data science, etc.) relies on access to vast amounts of data that can help train and develop new systems. Not only is this data often unavailable in emerging economies, but the relevant stakeholders may also lack the capacity (technical and otherwise) to make use of it.
This is where Data Collaboratives come in. Data collaboratives can help mitigate some of these challenges by providing stakeholders with more and non-traditional sources of data. At the GovLab, we have done substantial research on the potential offered by Data Collaboratives toward solving complex and seemingly intractable public problems.
(Examples of Data Collaboratives here, while my Presentation to the AI for Social Good conference is here).
We have been focused on using privately held data for the public good, and our research makes clear that there is an exponential return to the cross-organizational alignment of goals and pooling of resources that result from greater collaboration and partnerships. Collaboration also increases trust and ethics in the way data is handled and, importantly, the perceived legitimacy of such efforts. In other words, data collaboration is the key to unlocking the potential of data, data science, and artificial intelligence while limiting its risks and potential harms.
But setting up data collaboratives comes with its own challenges and difficulties — especially given how new the concept is, and how fledgling the field. In this post, we establish a set of steps to make Data Collaboratives an enabling infrastructure for AI for Social Good by making them more systematic, sustainable and responsible.
What are Data Collaboratives?
First, though, it may be worth addressing the question: What is a Data Collaborative?
The term “data collaborative,” introduced by the GovLab in 2015, refers to an emergent form of public-private partnership in which actors from different sectors exchange and analyze data (and/or provide data science insights and expertise) to create new public value and generate new insights. As evidenced by the 150+ case studies included in our Data Collaboratives Explorer, data collaboratives have been used with increasing frequency across sectors ranging from agriculture to telecoms to government, in a growing number of countries around the world.
Data collaboratives also come in many different forms. They include data pools, which agglomerate data from various sources and sectors; challenges and prizes, in which corporations and other data holders make data available to third parties who compete to develop new apps or discover innovative uses for the data; APIs, which allow developers and others to directly access private sector data; intelligence products, where shared data is used to build a tool, dashboard, report or other platform that can help support another organization’s objective; and a trusted intermediaries model, where corporations share data on a limited basis with known partners, typically for the purposes of data analysis and modeling. Data collaboratives can also take the form of more traditional research or business partnerships that allow organizations to share information and expertise.
The value of data collaboratives stems from the fact that the supply of and demand for data are generally widely dispersed — spread across government, the private sector, and civil society — and often poorly matched. This failure (a form of “market failure”) results in tremendous inefficiencies and lost potential. Much data that is released is never used. And much data that is actually needed is never made accessible to those who could productively put it to use.
Data collaboratives, when designed responsibly, are the key to addressing this shortcoming. They draw together otherwise siloed data and a dispersed range of expertise, helping match supply and demand, and ensuring that the correct institutions and individuals are using and analyzing data in ways that maximize the possibility of new, innovative social solutions.
Roadmap for Data Collaboratives
Despite their clear potential, the evidence base for data collaboratives is thin. There’s an absence of a systemic, structured framework that can be replicated across projects and geographies, and there’s a lack of clear understanding about what works, what doesn’t, and how best to maximize the potential of data collaboratives.
At the GovLab, we’ve been working to address these information shortcomings. For emerging economies considering the use of data collaboratives, whether in pursuit of Artificial Intelligence or other solutions, we present six steps that can be considered in order to create data collaborative that are more systematic, sustainable, and responsible.
The need for making Data Collaboratives Systematic, Sustainable and Responsible
Increase Evidence and Awareness
As mentioned, the fledgling and ill-defined nature of the field poses challenges to the adoption of data collaboratives. Simply put, if organizations and individuals don’t know about the potential of data and collaboration, then their actual use and impact will be limited. In order to spur greater adoption of data and the data collaboration model, we may need to create and document evidence regarding the value of collaboration and raise awareness among key target communities.
Increase Readiness and Capacity
For all the evident benefits of data, many organizations continue to display a certain reluctance or reticence. This is true when it comes to using data in general, and especially true when it comes to data collaboration, where mistrust, misaligned incentives or priorities continue to impede progress. A lack of technical capacity is also a major obstacle to greater uptake of data, especially for smaller organizations that lack the type of specialized knowledge often necessary to store and analyze data.
For all these reasons, there is an urgent need to develop new capacities and a new readiness within and across organizations. This can be done with more emphasis on training and skill-building, as well as with greater cross-sectoral collaboration (so that data specialists in the private sector, for instance, may contribute some of their skills to civil society organizations). It’s also worth mentioning that some of these goals (though by no means all) may be aided by the awareness building discussed above.
Address Data Supply and Demand Inefficiencies and Uncertainties
There are two sides to the data collaboration equation: supply and demand. To amplify the benefits of collaboration we need to address inefficiencies and “market failures” on both sides. This involves pro-actively reaching out to potential supply-side organizations, especially in the private sector, and working with them to minimize concerns over competitiveness or reputation, and to define data responsibility approaches taking into account both the value proposition and potential risks of collaboration.
Equally, understanding the demand for data is a vital part of establishing a responsible and impactful data collaboration process. Yet here too, as on the supply side, there are a number of inefficiencies limiting impact. Many of these are related to readiness, capacity and awareness — issues which we have addressed above.
Establish a New “Data Stewards” Function
We also believe that certain institutional changes are required to maximize the potential of data collaboration. In particular, the GovLab has proposed the establishment of a new role that would be embedded in any organization dealing or considering dealing with data: data stewards.
Data stewards would be the individual (or individuals) within an organization responsible for setting data policy and for steering and encouraging collaborative approaches. Data stewards also play a central role in ensuring that data is shared and handled responsibly, to address some of the inherent risks of open data and data collaboration. We have written extensively about data stewards elsewhere. Essentially, they would function as the linchpin of a new, more systemic framework for responsible data collaboration.
Develop and strengthen policies and governance practices for data collaboration
Establishing the new role of data stewards is but one aspect of a broader task: developing a clearer and better-defined governance framework for data collaboration in the social sector. There is ample evidence across sectors that clearly articulated governance mechanisms, along with auditing and accountability mechanisms, enhance trust and the usage of data. While some policies and mechanisms are universal and can be transposed, others may require sectoral adaptation and changes. The overall goal is to develop a well-defined set of guidelines and principles that cover the entire data lifecycle.
Strengthen the Ecosystem
Few successful initiatives (technical or otherwise) emerge in isolation. They are always supported, incubated and nurtured by a well-developed ecology that includes thought-leaders, funding, institutional and regulatory support, as well as a variety of other components. One of the challenges confronting data collaboratives is that the current ecosystem is weak, perhaps especially in emerging economies. There is a need to build capacities and networks of association where knowledge can be shared and funding sources identified. Likewise, there is a need for a robust body of evidence (case studies, examples, etc.) that can provide lessons and best practices, and that can serve as the foundation for new projects and initiatives.
Put together, these six steps amount to a roadmap for establishing successful data collaboratives — whether the ultimate goal is fostering Artificial Intelligence solutions, or some other desired outcome. Data collaboratives, if made systematic, sustainable and responsible, offer a unique opportunity to harness the potential of data and AI to help solve important and complex public problems in a responsible manner.
(Reposted from the Qatar Center for Artificial Intelligence Blog) | https://medium.com/data-stewards-network/data-collaboratives-as-an-enabling-infrastructure-for-ai-for-good-99aeb1192c10 | ['Stefaan G. Verhulst'] | 2019-04-16 13:18:47.149000+00:00 | ['Data', 'Data Collaborative', 'Data Steward', 'AI', 'Data Science'] |
Explore and Visualize a Dataset with Python | Let’s start exploring:
Overall conversion rate development:
Conversion rate development over time
It certainly seems like things went downhill in early 2017. After checking with the chief sales officer, it turns out that a competitor entered the market around that time. That’s good to know, but nothing we can do here now.
We use an underscore _ as a temporary variable; I would typically do that for disposable variables that I am not going to use again later on. We used pd.DatetimeIndex on order_leads.Date and set the result as the index, which allows us to use pd.Grouper(freq='D') to group our data by day. Alternatively, you could change the frequency to W, M, Q or Y (for week, month, quarter, or year). We calculate the mean of “Converted” for every day, which is going to give us the conversion rate for orders on that day. We then used .rolling(60) with .mean() to get the 60-day rolling average. Finally, we format the yticklabels so that they show a percentage sign.
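Pieced back together, that cell could look roughly like this (the column names are taken from the description above):
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick

_ = order_leads.set_index(pd.DatetimeIndex(order_leads.Date))
daily = _.groupby(pd.Grouper(freq='D'))['Converted'].mean()
ax = daily.rolling(60).mean().plot(figsize=(12, 4))
ax.yaxis.set_major_formatter(mtick.FuncFormatter(lambda y, pos: f'{y:.0%}'))
plt.show()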
Conversion rate across sales reps:
It looks like there is quite some variability across sales reps. Let’s investigate this a little bit more.
Not much new here in terms of functionalities being used. But note how we use sns.distplot to plot the data to the axis.
If we recall the sales_team data, we remember that not all sales reps have the same number of accounts, this could certainly have an impact! Let’s check.
Distributions of conversion rates by number of assigned accounts
We can see that conversion rates decrease as the number of accounts assigned to a sales rep increases. That makes sense: the more accounts a rep has, the less time they can spend on each one.
Here we first create a helper function that maps a vertical line into each subplot and annotates it with the mean and standard deviation of the data. We then set some seaborn plotting defaults, such as a larger font_scale and whitegrid as the style.
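A sketch of such a helper, under the assumption that the subplots are built with sns.FacetGrid over a 'Number of Accounts' column and that the per-rep rates live in a 'Conversion rate' column of a combined frame df (all three names are hypothetical):

import matplotlib.pyplot as plt
import seaborn as sns

sns.set(font_scale=1.3, style='whitegrid')  # the plotting defaults mentioned above

def vertical_mean_line(x, **kwargs):
    # draw a vertical line at the mean of the data and annotate it with mean and std
    plt.axvline(x.mean(), **kwargs)
    plt.text(x.mean(), 0.9, f'mean: {x.mean():.2%}\nstd: {x.std():.2%}',
             transform=plt.gca().get_xaxis_transform())

g = sns.FacetGrid(df, col='Number of Accounts', col_wrap=3)
g.map(sns.distplot, 'Conversion rate')        # distribution per group
g.map(vertical_mean_line, 'Conversion rate')  # annotated mean line per group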
Effect of meals:
sample meals data
It looks like we've got a date and time for the meals; let's take a quick look at the time distribution:
out:
07:00:00 5536
08:00:00 5613
09:00:00 5473
12:00:00 5614
13:00:00 5412
14:00:00 5633
20:00:00 5528
21:00:00 5534
22:00:00 5647
It looks like we can summarize those:
Note how we use pd.cut here to assign categories to our numeric data, which makes sense: after all, it probably does not matter whether a breakfast starts at 8 o'clock or at 9.
Additionally, note how we use .dt.hour. We can only do this because we converted invoices['Date of Meal'] to datetime earlier. .dt is a so-called accessor; there are three of them: .cat, .str, and .dt. If your data has the correct type, you can use these accessors and their methods for manipulation that is both computationally efficient and concise.
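Put together, the bucketing step might look roughly like this; the bin edges are an assumption read off the counts above:

import pandas as pd

# .dt.hour works because 'Date of Meal' was converted to datetime earlier
hours = invoices['Date of Meal'].dt.hour

# bucket the hours into three meal types; edges chosen to match the counts above
invoices['Type of Meal'] = pd.cut(hours, bins=[0, 10, 15, 24],
                                  labels=['breakfast', 'lunch', 'dinner'])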
invoices['Participants'], unfortunately, is stored as a string; we first have to convert it into legitimate JSON so that we can extract the number of participants.
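A sketch of that conversion, assuming the strings use single quotes (as Python list literals typically do):

import json

# replace single quotes so the string becomes valid JSON, then count the entries
invoices['Number of Participants'] = (invoices['Participants']
                                      .str.replace("'", '"')
                                      .apply(lambda x: len(json.loads(x))))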
Now let's combine the data. To do this, we first left-join all invoices onto order_leads by Company Id. Merging like this, however, joins every meal for a company to every one of its orders, including meals that took place long before or after the order in question. To mitigate that, we calculate the time difference between meal and order and only keep meals that occurred within five days of the order.
There are still some orders with multiple meals assigned to them. This can happen when there were two orders around the same time and also two meals; both meals would then get assigned to both order leads. To remove those duplicates, we keep only the meal closest to each order.
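The combination step might look roughly like this; 'Company Id' comes from the text above, while 'Date of Order' and 'Order Id' are hypothetical column names:

import pandas as pd

# left-join all invoices onto the order leads by company
df = pd.merge(order_leads, invoices, on='Company Id', how='left')

# positive = meal before the order lead, negative = meal after it
df['Days of meal before order'] = (df['Date of Order'] - df['Date of Meal']).dt.days

# keep meals within five days of the order, plus orders without any meal at all
df = df[(df['Days of meal before order'].abs() <= 5) | df['Date of Meal'].isna()]

# if several meals match one order, keep only the closest one
df = (df.assign(gap=df['Days of meal before order'].abs())
        .sort_values('gap')
        .drop_duplicates(subset='Order Id', keep='first')
        .drop(columns='gap'))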
part of the combined data frame
I created a bar-plot function that already includes some styling; plotting via this function makes visual inspection much faster. We are going to use it in a second.
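A hypothetical reconstruction of such a helper:

import matplotlib.pyplot as plt
import matplotlib.ticker as mtick

def plot_bar(data, title):
    # styled bar plot for a Series of conversion rates
    ax = data.plot.bar(figsize=(10, 5), rot=0)
    ax.set_title(title)
    ax.yaxis.set_major_formatter(mtick.PercentFormatter(xmax=1))
    for p in ax.patches:  # annotate each bar with its value
        ax.annotate(f'{p.get_height():.1%}',
                    (p.get_x() + p.get_width() / 2, p.get_height()),
                    ha='center', va='bottom')
    plt.show()

# e.g. plot_bar(df.groupby('Type of Meal')['Converted'].mean(), 'Conversion rate by type of meal')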
Impact of type of meal:
Wow! That is quite a significant difference in conversion rates between orders that had a meal associated with them and the ones without meals. It looks like lunch has a slightly lower conversion rate than dinner or breakfast, though.
Impact of timing (i.e., did meal happen before or after the order lead):
A negative number for 'Days of meal before order' means that the meal took place after the order lead came in. We can see a positive effect on the conversion rate when the meal happens before the order lead comes in; it looks like prior knowledge of the order gives our sales reps an advantage here.
Combining it all:
Now we'll use a heatmap to visualize multiple dimensions of the data at the same time. For this, let's first create a helper function.
Then we apply some final data wrangling: we additionally consider the meal price in relation to the order value, and we bucket our lead times into Before Order, Around Order and After Order rather than keeping the raw values from minus four to plus four days, which would be busy to interpret.
Running the following snippet will then produce a multi-dimensional heatmap.
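A sketch of that wrangling and the heatmap call; the bucket labels come from the dimensions listed below, while 'Meal Price' and 'Order Value' are assumed column names:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# meal price relative to order value, bucketed into five quantiles
df['Meal Price / Order Value'] = pd.qcut(
    df['Meal Price'] / df['Order Value'], 5,
    labels=['Least Expensive', 'Less Expensive', 'Proportional',
            'More Expensive', 'Most Expensive'])

# lead times bucketed into three groups (negative = meal after the order lead)
df['Timing of Meal'] = pd.cut(
    df['Days of meal before order'], bins=[-5, -1, 1, 5], include_lowest=True,
    labels=['After Order', 'Around Order', 'Before Order'])

# mean conversion rate per (timing, relative price) x (meal type, participants)
heatmap_data = df.pivot_table(index=['Timing of Meal', 'Meal Price / Order Value'],
                              columns=['Type of Meal', 'Number of Participants'],
                              values='Converted', aggfunc='mean')

fig, ax = plt.subplots(figsize=(16, 10))
sns.heatmap(heatmap_data, annot=True, fmt='.0%', cmap='YlGnBu', ax=ax)
plt.show()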
Heatmap to visualize four dimensions in one graphic
The heatmap is certainly pretty, although a little hard to read at first, so let's go over it. The chart summarizes the effect of four different dimensions:
Timing of the meal: After Order, Around Order, Before Order (outer rows)
Type of meal: breakfast, dinner, lunch (outer columns)
Meal Price / Order Values: Least Expensive, Less Expensive, Proportional, More Expensive, Most Expensive (inner rows)
Number Participants: 1,2,3,4,5 (inner columns)
It certainly seems like the colors are darker/higher towards the bottom part of the chart, which indicates that conversion rates are higher when the meal takes place before the order lead comes in, consistent with what we saw earlier. | https://towardsdatascience.com/how-to-explore-and-visualize-a-dataset-with-python-7da5024900ef | ['Fabian Bosler'] | 2019-08-31 18:42:42.145000+00:00 | ['Programming', 'Analytics', 'Data Science', 'Data Visualization', 'Python']