title stringlengths 1 200 ⌀ | text stringlengths 10 100k | url stringlengths 32 885 | authors stringlengths 2 392 | timestamp stringlengths 19 32 ⌀ | tags stringlengths 6 263 |
|---|---|---|---|---|---|
5 Habits I Gave Up to Begin Healing From Borderline Personality Disorder | Recently, I received an… interesting comment on one of my stories about life with borderline personality disorder (BPD). It was a little bit ironic, even. The reader who said, “It’s so strange that now just being an @@@hole has a disease attached to it,” closed their commentary by claiming, “I’m sure I will be hated for this.”
The irony of the comment, for me, was that the reader clearly sought negative attention by being something of an asshole themselves. They lumped all of the symptoms of BPD into being an asshole, and declared that it’s not even a mental illness but a choice.
What made the reader need that kind of attention, anyway?
To be honest, this is the kind of compulsion I used to battle in my worst episodes of BPD. We're not the only people who've been known to act out for attention. In fact, I was already planning to write about the habits I gave up to start healing from BPD when this comment caught my attention and reminded me.
To be fair, not every person with borderline has these same habits. And there are plenty of people without BPD who do these things too.
It’s important to recognize, though, the ways we self-sabotage and stunt our own growth.
1. I quit looking for attention.
At the heart of BPD is a desire to be loved. It’s terribly all-consuming. I would sometimes talk a bit more loudly than necessary, or draw negative attention to myself, simply because I wanted to know I wasn’t invisible.
As much as I didn’t want to admit it, attention from other people felt something like a drug to me. Attention made it just a little easier to survive.
To begin my healing, I had to quit looking for some feel-good drug from others.
2. I stopped chasing men who didn’t really care about me.
Everyone has heard the phrase, “looking for love in all the wrong places.” For many BPD sufferers, that’s a huge problem. I never witnessed healthy romantic relationships as a child. Practically everything I learned about love came from television or movies. It was, of course, all wrong.
For a long time, I chased guys who were no good for me. And I didn’t disengage when I realized they didn’t truly care about me because I was already fully invested.
I kept thinking that "love" was my lifeline, but I wasn’t honest about the fact that my idea of love was toxic and twisted. Once I stopped chasing the wrong guys, I began to heal and reexamine my definition of love.
3. I went on a selfie hiatus.
I’ve written about the benefits of selfies before, so please hear me out. I don’t mean that selfies are inherently bad. They’re not.
It’s just that for some folks battling mental health issues, and for some of us with borderline personality disorder, taking selfies can be excessive and problematic.
It can be difficult to get yourself away from a compulsive need for attention when you’re stuck on taking (and posting) selfies. Healing from BPD really does require you to cool it with seeking approval from other people.
Not only that, but people with BPD typically struggle significantly with their self-esteem. As our symptoms flare up, the last thing we need to do is overanalyze our features and appearance.
4. I changed my story and quit feeling sorry for myself.
For the longest time, every story I told myself about my life was a sad one. I told myself that I was worthless and ugly.
Like so many other borderline folks, I had a bad habit of telling myself that everybody rejected me. I didn’t just fear rejection; it was my entire identity.
During the first couple years of motherhood, even though I was doing well with putting my baby first, I still carried my sob stories. I was alone. I was abandoned. I was unlovable.
I had to finally let go of that ugly narrative, because I had grown so comfortable with my pain. Pain was more familiar to me than hope. And I couldn’t heal as long as I let myself stay stuck in that narrative.
5. I quit expecting other people to bring meaning into my life.
By far, one of the worst things about BPD is this chronic emptiness. It’s like a permanent storm cloud above your head or some sort of splinter in your soul that wakes you up at the worst possible moments.
That emptiness has a way of eating up everything and making you believe that you will never really know who you are.
And you try so hard to fill it with different things. Maybe with stuff. But more often, with people.
I had to quit looking to other people and expecting them to give me value. I had to stop trying to use them to bring meaning into my life. I had to learn how to see my own value.
But people who wish to heal from trauma, which is a huge part of BPD, can’t only drop their bad habits. They have to pick up new and healthier habits too.
1. I began writing about my struggles instead of stewing over them.
For me, writing has been my real “lifeline.” Not my old definitions of love. And I’m not talking about writing in my journal, or crying out “woe is me.”
Writing for myself and others has made an incredible difference in my mental health. I get to work out some of my demons, and figure out what I really feel, but the fact that I’m not just writing for me forces me to find more distance and clarity.
What do I wish I had known five or ten years ago? What did I really need someone to say? What stories could have changed my entire life?
These are the things I think about when I write. And it’s helped me to finally process my pain. If I didn’t write essays about my life in an effort to help others, I would likely still be stewing over all of my hurt and trauma.
2. I started looking for silver linings.
Again, writing certainly helps. I’ve developed this habit of looking for the good in the worst scenarios, including many of my fears.
These days, I now choose to see adversity and unexpected problems as opportunities for growth and further healing.
It’s not always easy, but it’s not too different from flexing any other creative muscle. Over time, I’ve gotten better at finding silver linings, so even when I face a deep depressive episode, I am eventually able to find my way back to a less biased story. I can now see how my depression always lies and tries to keep me stuck.
3. I met my own needs.
For much of my life, I had this chip on my shoulder about being on my own. It became ingrained in me when I was 18 and my sister went to prison. Our whole family dynamic changed and I found myself increasingly alone.
My mother quit celebrating holidays a few years later when my sister’s children were taken away and sent to live with their paternal grandmother in Missouri. She suddenly started telling me to make my own plans for Thanksgiving and Christmas because she had nothing to celebrate.
I grew very bitter that my mother didn’t seem to think I was family worth keeping. I grew reclusive and resented that I would have to put my own gift under my tree.
As I began to heal, however, I quit whining to myself about being on my own and I started to take pride in meeting my own needs. That was a game changer for me. No more pain over being “forgotten” or alone. Meeting my own needs made me feel strong and much more secure in myself.
4. I stopped reading into every relationship and started to chill out.
There’s this thing that many people do. It’s not just those of us with BPD. People often gravitate to toxic relationships or poison their healthy ones by reading into every little thing.
I’ve learned the hard way that reading into everything somebody says or does is a recipe for disaster. We wind up doing one of two things: we either get led by our fears and worry about everything we overanalyze, or we see the person in an unrealistic light.
Seeing only those things you want to see in a relationship is a shortcut to destruction. It’s important to relax enough to be honest not just with them, but with yourself.
Healthy love is chill, yet it doesn’t ignore red flags in favor of a fantasy. But it takes a lot of practice to get there and it’s important to know that old habits do die hard. I have to frequently remind myself to be realistic and relaxed.
5. I leaned into my fear of being alone.
If you’ve ever battled obsessive compulsive disorder (OCD), you have most likely heard of exposure therapy. I never really set out to use exposure therapy for myself in my treatment for BPD. It was more like a happy accident brought about by my circumstances as a single mom.
I used to be so frightened to be romantically alone. Although I’ve always been an introvert who likes to work alone, I believed that I couldn’t be happy without a partner.
The idea of dying alone and spending holidays alone horrified me. Like many others with BPD, I sometimes went to extremes to avoid being abandoned. I also looked for extra attention and unfairly tested my lovers.
The only thing that got me past my fear of being alone was to finally lean into my loneliness and learn how to manage it in a healthier way.
It took years, but I finally discovered that I like being alone. There are perks to going at life on your own terms. It’s not that I never feel sad, alone, or abandoned anymore. But now that I’ve faced my fears of loneliness head on, those feelings no longer rule me. | https://medium.com/honestly-yours/5-habits-i-gave-up-to-begin-healing-from-borderline-personality-disorder-49858c9bed9d | ['Shannon Ashley'] | 2019-12-30 14:36:30.471000+00:00 | ['Life Lessons', 'Mental Health', 'Personal Development', 'Self Improvement', 'Self'] |
Advantages of using NumPy for numerical operations. Speed gains and additional features offered by NumPy. | Advantages of using NumPy over Python Lists
Features and performance gains of using NumPy for numerical operations
In this article, I will show a few neat tricks that come with NumPy, yet are much faster than vanilla Python code.
Photo by Alex Chambers on Unsplash
Memory usage
The most important gain is the memory usage. This comes in handy when we implement complex algorithms and in research work.
import numpy as np

array = list(range(10**7))
np_array = np.array(array)
I found a handy get_size helper on a blog. I will be using that snippet to compute the size of the objects in this article.
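A minimal sketch of such a recursive size function, built on sys.getsizeof, might look like this (the original blog version may differ):

import sys

def get_size(obj, seen=None):
    # Recursively sum the size of an object plus everything it references
    size = sys.getsizeof(obj)
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0  # don't double-count shared objects
    seen.add(id(obj))
    if isinstance(obj, dict):
        size += sum(get_size(k, seen) + get_size(v, seen) for k, v in obj.items())
    elif hasattr(obj, '__dict__'):
        size += get_size(obj.__dict__, seen)
    elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
        size += sum(get_size(i, seen) for i in obj)
    return size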
get_size(array) ==> 370000108 bytes ~ 352.85MB
get_size(np_array) ==> 80000160 bytes ~ 76.29MB
This is because NumPy arrays store fixed-size, typed values in one contiguous block of memory, while vanilla Python lists store pointers to full Python objects and reserve extra room so they can stay extensible.
Speed
Speed is, in fact, a very important property in data structures. Why does it take much less time to use NumPy operations over vanilla python? Let’s have a look at a few examples.
Matrix Multiplication
In this example, we will look at a scenario where we multiply two square matrices.
from time import time
import numpy as np

def matmul(A, B):
    N = len(A)
    product = [[0 for x in range(N)] for y in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                product[i][j] += A[i][k] * B[k][j]
    return product

matrix1 = np.random.rand(1000, 1000)
matrix2 = np.random.rand(1000, 1000)

t = time()
prod = matmul(matrix1, matrix2)
print("Normal", time() - t)

t = time()
np_prod = np.matmul(matrix1, matrix2)
print("Numpy", time() - t)
The times will be observed as follows;
Normal 7.604596138000488
Numpy 0.0007512569427490234
We can see that the NumPy implementation is almost 10,000 times faster. Why? Because NumPy uses under-the-hood optimizations such as transposing and chunked multiplications. Furthermore, the operations are vectorized so that the looped operations are performed much faster. The NumPy library uses the BLAS (Basic Linear Algebra Subprograms) library in its backend. Hence, it is important to install NumPy properly so that its binaries are compiled to fit the hardware architecture.
More Vectorized Operations
Vectorized operations are simply scenarios where we run operations on entire vectors at once: dot products, transposes, and other matrix operations. Let’s have a look at the following example, in which we compute the element-wise product.

vec_1 = np.random.rand(5000000)
vec_2 = np.random.rand(5000000)

t = time()
dot = [float(x*y) for x, y in zip(vec_1, vec_2)]
print("Normal", time() - t)

t = time()
np_dot = vec_1 * vec_2
print("Numpy", time() - t)
The timings on each operation will be;
Normal 2.0582966804504395
Numpy 0.02198004722595215
We can see that the implementation of NumPy gives a much faster vectorized operation.
Broadcast Operations
Numpy vectorized operations also provide much faster operations on arrays. These are called broadcast operations. This is because the operation is broadcast over the entire array using vectorized SIMD instructions (such as Intel AVX).
vec = np.random.rand(5000000)

t = time()
mul = [float(x) * 5 for x in vec]
print("Normal", time() - t)

t = time()
np_mul = 5 * vec
print("Numpy", time() - t)
Let’s see how the running times look;
Normal 1.3156049251556396
Numpy 0.01950979232788086
Almost 70 times faster!
Filtering
Filtering includes scenarios where you only pick a few items from an array, based on a condition. This is integrated into the NumPy indexed access. Let me show you a simple practical example.
X = np.array(DATA)
Y = np.array(LABELS)

Y_red = Y[Y=='red'] # obtain all Y values with RED
X_red = X[Y=='red'] # feed Y=='red' indices and filter X
Let’s compare this against the vanilla python implementation.
X = np.random.rand(5000000)
Y = np.int64(10 * np.random.rand(5000000))

t = time()
Y_even = [int(y) for y in Y if y%2==0]
X_even = [float(X[i]) for i, y in enumerate(Y) if y%2==0]
print("Normal", time() - t)

t = time()
np_Y_even = Y[Y%2==0]
np_X_even = X[Y%2==0]
print("Numpy", time() - t)
The running times are as follows;
Normal 6.341982841491699
Numpy 0.2538008689880371
This is a pretty handy trick when you want to separate data based on some condition or the label. It is very useful in data analytics and machine learning.
Finally, let’s have a look at np.where which enables you to transform a NumPy array with a condition.
X = np.int64(10 * np.random.rand(5000000))
X_even_or_zeros = np.where(X%2==0, 1, 0)
This returns an array where even-numbered slots are replaced with ones and others with zeros.
These are a few vital operations and I hope the read was worth the time. I always use NumPy with huge numeric datasets and find the performance very satisfying. NumPy has really helped the research community to stick with python without levelling down to C/C++ to gain numeric computation speeds. Room for improvements still exists!
Cheers! | https://medium.com/swlh/why-use-numpy-d06c573fbcda | ['Anuradha Wickramarachchi'] | 2020-08-15 05:20:02.962000+00:00 | ['Programming', 'Python', 'Computer Science', 'Data Science', 'Machine Learning'] |
Marketing in Crypto: Crash & Burn or Build & Prosper? | 1) Identity
All is simple: let’s think of crypto community members and newcomers as customers first. Then open CoinMarketCap and count the number of projects: 1586 cryptocurrencies are competing for your existing and potential customers. It has been 10 years since Bitcoin was born, and cryptocurrencies have become a more common part of our lives. If we consider cryptocurrency as a product, brand management should have a place in the marketing story.
Identity can be a magic spell. Color palette, typography, imagery, tone of voice, even icons set and online elements are able to communicate beyond the words. Build your project a “face” to deliver a message.
Take external materials seriously, such as the brand book, press kit, one-pager, and the various media templates. Creative material guidelines are necessary to keep the brand image consistent. Do the same for internal documents. Why? Your employees are internal customers and potential brand advocates. Have you seen team members standing up for the honor of a project? The goal is to make that happen and keep it constant.
One day a newcomer brand will be able to sell an image and trust itself, assuring customers that they hold the right cryptocurrency in their hands (and that they are in the right hands themselves).
Think about forming the first impression, scientific approach and even brand archetypes. Prepare necessary project documentation and marketing materials. | https://medium.com/hackernoon/marketing-in-crypto-crash-burn-or-build-prosper-f1de5b58afe7 | ['Katoshi'] | 2018-08-22 11:00:50.079000+00:00 | ['Bitcoin', 'Marketing', 'Cryptocurrency', 'Marketing In Crypto', 'Marketing Strategies'] |
Review: DRN — Dilated Residual Networks (Image Classification & Semantic Segmentation) | Review: DRN — Dilated Residual Networks (Image Classification & Semantic Segmentation)
Using Dilated Convolution, Improved ResNet, for Image Classification, Image Localization & Semantic Segmentation
In this story, DRN (Dilated Residual Networks), from Princeton University and Intel Labs, is reviewed. After publishing DilatedNet in 2016 ICML for semantic segmentation, authors invented the DRN which can improve not only semantic segmentation, but also image classification, without increasing the model’s depth or complexity. It is published in 2017 CVPR with over 100 citations. (Sik-Ho Tsang @ Medium) | https://towardsdatascience.com/review-drn-dilated-residual-networks-image-classification-semantic-segmentation-d527e1a8fb5 | ['Sik-Ho Tsang'] | 2019-03-20 16:07:37.825000+00:00 | ['Image Classification', 'Deep Learning', 'Artificial Intelligence', 'Semantic Segmentation', 'Machine Learning'] |
Richard Sherman knows content, bro | Screenshot of Richard Sherman’s multimedia hub on The Players’ Tribune.
I’m a Richard Sherman fan. “You Mad Bro?” was my Facebook cover photo for a minute. My best pal, who’s also my #1 rival in all athletic competitions, called Sherman an asshole following his infamous Michael Crabtree interview. I loved it. And my best pal being grumpy, well that was gravy.
Still, I didn’t bother reading Sherman’s weekly column. I get it in my email inbox but I never opened it.
Then the kerfuffles happened with our local Seattle media, and Sherman skipped a press conference. Mainstream media outlets got up in arms, too. This got me thinking. About press conferences.
I’ve attended a few of them, plus I watched The West Wing in the late 1990s, so I know what a press conference is: A bunch of journalists crammed into a room asking questions of a person(s) who’s in front of a microphone. Then, those journalists all rush like mad to take the exact same comments from the exact same person(s) and crank out a story that’s … unique.
Except it often doesn’t work out that way. Instead, press conferences frequently yield the same story, one that gets published many times across many different media outlets. Also, press conferences are boring, leaving reporters desperate to glean a moment — any moment — that rises above the mundane. Ironically, when that “moment” occurs, frequently it involves an athlete being irritated by a press conference — like Cam after the Super Bowl; or Iverson following the death of his close friend.
I wasn’t mad (bro) when Sherman skipped his press conference. This isn’t the late 1990s and media has changed a bit since The West Wing. Heck, even presidential candidates and presidents-elect rarely do press conferences anymore.
Then, Sherman gave one of my favorite quotes since “You mad bro?” when he Tweeted this:
If I have something to say then I will write it myself.
He’s right. Athletes are their own storytellers. This is old news — social media and platforms like The Players’ Tribune have been around for a minute, giving athletes the ability to bring their content directly to the public. When Kobe announced his retirement, he didn’t hold a press conference. He published a personal note.
Sherman is taking the athlete storyteller to the next level — the athlete as journalist. He Tweeted as much when he said:
I understand that I can write my own story
Right again. Sherman is smart and articulate. He has a platform, The Players’ Tribune, with millions of readers. He has millions of social media followers. That combination of eloquence plus audience scares some journalists. Because no one can get the inside story on an athlete, like an athlete himself (or herself).
I went back and read several of Sherman’s columns. They are good. Some of his columns are really good, like when he takes the NFL to task. Sherman calls out the NFL for throwing a flag on Antonio Brown for “sexually suggestive” twerking, while cheerleaders gyrate through similar moves every week. He notes the double standard of a penalty on Josh Norman for “shooting” a bow and arrow, while Norman’s team — the Redskins — won’t change its name despite offending many people.
Sherman also provides a nuanced takedown of the “poopfest” that is Thursday night football, with this classic quote:
I’d like to put Roger Goodell in pads for a late game on a Sunday, in December, in Green Bay, on the frozen tundra — then see what time he gets to the office on Monday morning, knowing that he would have to suit up again on Thursday.
Hell, yes.
I read this recent article which says that first-person athlete stories are heavily edited:
Athletes — who have dedicated their lives to their sport of choice, not writing — often rely on behind-the-scenes help when it comes to public communications. If you see your favorite athlete or celebrity’s byline on a surprisingly eloquent piece, well, there’s probably a reason why it’s so well-written.
Good writing and good editing go hand in hand. That’s why reputable media outlets employ copy editors. Writing is a craft that one hones over time, just like any skill. I don’t know how much editing goes into Sherman’s columns but, his spoken words are similar to his written ones — clear and concise.
The NFL would do well to empower more of its athletes to be storytellers. NFL ratings are down this year, which suggests that people are growing tired of the standard storylines. Embracing new content platforms isn’t enough. The league should embrace new content creators. And some of the most interesting creators are the people who actually play the games.
One straightforward way to inject more life into NFL storytelling is: Allow players to post to social media during the games. Yes, I said it. Let them Tweet, Snap, Insta, etc. while a game is happening. It won’t ruin the games — the teams and players will police themselves against Tweeting from the huddle or Snapping from the bottom of a pile. But Instagramming oneself after a touchdown, that would be some cool content. Imagine a selfie of Zeke from inside the Salvation Army kettle.
Players and coaches already give in-game interviews. And those are extremely boring. So why not change it up a little?
The press conference is a dinosaur. By now, we all know that the only reason a player shows up to a press conference is…say it with me…“So I won’t get fined.”
I’m not mad (bro) at Sherman for skipping a press conference. I’m not mad, either, at a reporter who asks an athlete tough questions. There will always be a push-pull between athletes and reporters to tell meaningful stories. That because many good stories require a level of exposure that’s not always comfortable.
Technology has lowered the barrier for content creation, empowering more people than ever to be their own publisher. Which means, among other things, that high-quality storytelling is more precious today than ever before. Those who excel at storytelling will find their audience, without needing to hold a press conference. | https://medium.com/loseatfantasy/richard-sherman-knows-content-bro-5b7b4b3205cf | ['Mike Harms'] | 2017-02-07 03:07:20.474000+00:00 | ['Seahawks', 'Richard Sherman', 'Content Marketing', 'NFL', 'Journalism'] |
8 Reasons You Should Join A Local Writer’s Group | Photo by Dylan Gillis on Unsplash
Five years ago, at the dinner table, my father told my sister and me that he found an interesting article in the newspaper. (Yes, the newspaper.)
There was an advertisement for a writer’s group at my local library. My dad doesn’t know exactly what a writer’s group is, but because my sister and I write, he thought it might be useful.
So, we went to the initial meeting. Five years later, we’re still part of the same group meeting once a month at our library.
My sister and I are the only “veterans” of the group. The group has changed a lot over the years. People have come and gone (including our original group facilitator). However, including my sister and me, there are five steady members, and new people express interest in the group every month. It’s still effective and fun.
I sometimes wonder how it’s already been five years. We skip a month here and there (our group usually takes December off due to the holidays) but it’s weird to me how we’ve been doing this regularly for five straight years.
Honestly, when I first joined the group, I wondered how long it would last. It seemed too good to be true. I’m not complaining though.
I’ve learned a lot in these past five years and I’m looking forward to more. Joining a local writer’s group was one of the best decisions for a few reasons.
1. Writing and editing skills
This is a given. When joining a writing group, you’re going to be writing and critiquing. You write your own piece, critique others, and then improve upon your own piece again. Even though it’s a little bit each month, it’s still something. You’re routinely working on your own writing.
2. Time
Speaking of routine, writers excuse their lack of writing all the time by saying they “have no time.” Joining a writing group that meets once a month or once every two weeks or something, allows you to carve in writing time no matter what.
You need to write your next chapter or piece to submit to your group. Then you have to find the time to read and critique their work. Then you meet in person to discuss your piece and others. It gives your particular work one-on-one attention from you (and others).
Then, you rinse and repeat.
3. Motivation and inspiration
Speaking of time, it can be hard to be motivated to write. I’m sure we all procrastinate in some shape or form and I’m sure writing is no different.
Seeing work from fellow writers motivates you to keep going with your own. Not to mention, you have a deadline to meet. You need to submit your piece to your group by a certain time so everyone has ample time to read and critique it.
On the flip side, the group gives you the inspiration to keep going. If you’re stuck on a certain part of your novel, you can ask your group what they think could or should happen next. You may not even have to ask. If your group is anything like mine, they’ll let their imagination run wild and interpret certain pieces together right in front of you.
Sometimes they’re right, sometimes they're wrong. Other times, they give you a new idea you can expand upon.
4. Self-confidence and thick skin
It’s not easy to share your writing with others. Personally, I find it easier to share my work with strangers than with close family and friends. The members of your writing group will become your friends, but in the beginning, sharing your work will increase your self-confidence. As writers, we all hate sharing our work in the beginning because it never seems to be good enough.
On the flip side, it’ll toughen you up as well. You’ll gain the self-confidence to share your work and the thick skin to take any and all critique — positive or negative.
6. Promotion
Do you have a blog or an author website? Do you share your writing online? Have you gotten a piece published in a magazine or an anthology? Maybe you got a book deal?
Your group will be proud and help promote your work through their own social media and such. Like you would help promote their work, they’ll do the same for you. Who doesn’t want to brag that they know an author and helped get their book off the ground?
7. Connections
People come from everywhere. They have different jobs, different skills, and have different experiences. They know different people outside of the group. If you need to research a specific job, for example, someone in the group may work in that field or know someone who does. Maybe they can set up a date for you to interview them for research purposes.
Books and the internet are great, but it’s wonderful to talk to a real person who has hands-on experience.
Not to mention, writers typically have a job that pertains to writing. Maybe someone has connections on how you can market your book. You never know if you don’t ask.
8. Socialization
As writers, we’re not great at socializing. We don’t like to talk to people or go out and about. Being part of a writing group allows you to make some new friends. Aside from writing, you’ll find other things you have in common with them. Now only will you get out of the house for your group meeting, but you may get together some other times.
I now meet with members of my writing group twice a month — once for the writer’s group and once for Dungeons & Dragons.
Overall
Joining a writer’s group was one of the best decisions I’ve made. I’ve learned a lot with my writing, I’m consistent with it, and I’ve made some great friends.
Check with your local library or other nearby libraries or bookstores and see if there’s a writing group you can join. I promise you won’t regret it.
Happy writing! | https://medium.com/swlh/8-reasons-you-should-join-a-local-writers-group-109104bb28ba | ['Rachel Poli'] | 2020-06-15 13:51:09.136000+00:00 | ['Creative Writing', 'Writers On Writing', 'Writing', 'Writer', 'Writing Tips'] |
Color Your Life With Some Birds | Let’s take a walk
I take a bus and get off where the city vanishes away, and the pastoral serenity begins to show its freshness. I hit the muddy, dusty roads and gradually enter into a magical world.
No one is with me. It’s just me with myself.
The first thing I notice is the air — so fresh and full of life. I take a few deep breaths and feel relaxed. I deliberately walk at a steady pace because I don’t want to miss a thing on my way.
It’s just 11.00 am and a whole day ahead of me. The sun is up there to shine everything with its soft glow, the clear blue sky — cloudless and shiny. A gentle breeze is whispering to the leaves about something, and the leaves are nodding their heads.
I walk slowly — and gently observe what’s going around. On my right, there is an ocean of green, yellow, and white. On my left a tiny river just like a lifeline in the wilderness. I see some herons are enjoying a flight over the river. Their shadows are reflecting on the water, making each of them double.
I feel tremendous joy as I love the herons most. When they wait gently in the riverside to catch fish with their long legs and necks, they look like saints to me. Catching fish is like their way of having meditation. I love them — especially the white ones.
I take my time to watch them crossing the river. Then I dive deep into the green ocean of mystery. I am looking for birds. I approach carefully, not making any sound. After a few minutes, I encounter a few bee-eaters. I see them every time I go on a bird-watching adventure.
I drag myself close to the bee-eaters to capture the moment, but they somehow read my mind and fly away to another place.
Bee-eater. Photo captured by the author, Bokchar, Dhaka, 2019
I see a few more at a distance. This time, I bend my knees and nearly stop my breathing. I move forward more cautiously, like a cat, and take my position to have a snap. I see a bee-eater basking in the sun. I take a few shots quickly and silently observe what it does.
After a while, more bee-eaters come and start hunting some insects. They make noise with their wings, talk to each other, and finally go in different directions in search of food maybe. Oh, I love this beautiful green bird. And I love its dark, nearly invisible eyes.
Spending some time there, I walk ahead — I come across a field, and at the end of that, I see a few big trees.
Green ocean. Bokchar, Dhaka, 2019. Photo captured by the author.
Wait, a sharp sound is coming from there. It must be a woodpecker. I cross the field and go in the direction of that sharp sound. I keep my eyes on the tree-tops. I see a few big holes in some trees and become sure that there must be some woodpeckers.
I move forward with the camera ready in my hand. I hear the sound again. Where is it? I try to concentrate. Yes, I feel something is happening. I can sense it. I move my head to my right and see some leaves moving.
Yes, finally, I spot the lovely bird. It’s trying to find a suitable tree to make a new home or maybe in search of some tasty worm to have a feast.
The woodpecker. Bokchar, Dhaka, 2019. Photo captured by the author.
I take a few good photos with both my camera and my eyes. I see some finches and a few sparrows there as well.
I continue my journey to the unknown and go where my eyes take me. I walk and walk and keep my eyes open to observe the wilderness.
I spend some time in front of an agri-farm and hear the sound of growing vegetables — fresh and enchanting. I see different shades of green everywhere and the hide and seek games of sun and shadow. I hear the music of tranquility.
Now, it is midday, and the sun is getting warmer. I find a small village-market on my way and take a break. I sprinkle water on my face, wash my hands, and then eat my lunch. At this point, I talk to locals — a little chitchat to know where I am and where more birds are waiting for me.
I come to know that a few miles away, there is a nearly dead river, and on its bank, birds are in great numbers. So I take a new route and, a few birds later, I find the river. It has lost its youth long ago. Now it is like a narrow canal that I must cross with a boat.
Horse in the field. Bokchar, Dhaka, 2019. Photo captured by the author.
I see an unfinished bridge there. Maybe the bridge killed the river, or I don’t know the reason behind the river’s untimely death. I cross the narrow river and land on a field full of yellow.
A horse grazing nearby catches my attention. I see the open field where birds are flying with a yellow backdrop. Oh, it’s love at first sight.
Let’s spend some time sitting on the riverbank watching some kingfishers and cormorants. They are the real hunters — swift and sharp.
Kingfisher is truly the king of fishing. The way it hunts is spectacular to watch. Sitting on the grass, I see them hunting fish and eating right away.
Ah, it seems an easy way to live a life. But I reckon they have their own society and competition like ours, as I have seen, in a lake near my house, seven herons competing for a particular place for fishing.
The hunting is on. Bokchar, Dhaka, 2019. Photo captured by the author.
I am convinced that having the ability to fly is no guarantee to achieve freedom. Birds have their community with a set of rules. Maybe, they have their own kind of hierarchy.
Do they have democracy? Or are they still in debate whether communism is better for them or not? I don’t know — but I am sure they have found peace, balance, and harmony.
Anyway, I end my bird-philosophy and see there are lots of doves walking through the fields. It’s hard to identify them on the ground as their colors are quite similar to earth.
Doves are very shy. When I come close to them, they fly away immediately. But the field is full of them. So, I capture some of their beautiful, decorative body.
Innocent Dove. Bokchar, Dhaka, 2019. Photo captured by the author.
I see a few thatched houses at a distance but no one in the field this afternoon. I am the only man here wandering for some birds. And it’s a different experience to walk absolutely alone in nature like this. It’s lovely and a little intimidating too.
Numerous little birds are with me all the time. They are spinning around to make me feel comfortable. When they are in the air, anyone may confuse them with bullets.
One moment they are near you, but in a second, they go somewhere else. Then again, return as a flock.
They inspect the entire field like a police battalion. And the insect-criminals have nowhere to hide from their eyes. They eat the bees from the mustard-plants. Then go for an aerobatic display up above.
I spend the rest of the day in this open field full of life. I breathe the green, yellow, and the entire experience. The experience — words often fail to contain.
I come to understand that I must take a break, explore nature, and spend time more with birds to have a deeper understanding of the world I live in.
After a day with birds, I feel that a day is well-spent, and I need to make more days like this. | https://medium.com/the-masterpiece/color-your-life-with-some-birds-ffae4fddde2 | ['S M Mamunur Rahman'] | 2020-12-28 12:41:55.530000+00:00 | ['Travel', 'Nature', 'The Masterpiece', 'Environment', 'Birds'] |
Web Scraping with Selenium IDE | What is Selenium?
Although Selenium is incredibly helpful with web scraping, this is not its explicit purpose; it’s actually a framework for testing web applications. It accomplishes this through two components, WebDriver and Selenium IDE.
The WebDriver accepts commands via programming code (in a variety of languages) and launches this code in your default web browser. Once the browser is launched, WebDriver will automate the commands, based on the scripted code, and simulate all possible user interactions with the page, including scrolling, clicking, and typing.
The Selenium IDE (integrated development environment) is a browser extension for Firefox and Chrome that allows you to record your interactions with a web page and edit those interactions to further customize your test. Via Selenium’s API, you can actually export the underlying code to a Python script, which can later be used in your Jupyter Notebook or text editor of choice.
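An exported script follows the standard WebDriver pattern. Here is a minimal sketch of what one can look like; the URL and selector are placeholders of my own, not from any real recording, and it assumes the Chrome driver is installed:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch the browser that will replay the recorded steps
driver = webdriver.Chrome()
driver.get("https://example.com")

# Replay one recorded interaction: find an element by CSS selector and click it
link = driver.find_element(By.CSS_SELECTOR, "a.nav-link")
link.click()

# Close the browser when finished
driver.quit()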
How Selenium Helps with Web Scraping
As websites get more complex, simple scraping techniques and libraries (such as Beautiful Soup) might run into the following obstacles:
Java Script or CSS that obscure or transform the elements.
that obscure or transform the elements. Username and password authentication requirements that hide data on a web page fulfilled.
This is where the Selenium IDE shines. As it mimics a user and interacts with the front facing elements of a web page, it can easily identify the necessary Java Script or CSS code and provide the proper path to get you what you need. If a login pop-up box arrives, Selenium IDE can type in your credentials and move the process along.
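Continuing the sketch above, a login is just more of the same send-and-click pattern (the field names here are hypothetical; match them to the real form):

# Hypothetical login form; adjust the selectors to the actual page
driver.find_element(By.NAME, "username").send_keys("my_user")
driver.find_element(By.NAME, "password").send_keys("my_password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()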
The IDE can even assist you on your easier scraping tasks, by providing you the tags to any location you click on a page. Notice how tags is plural there? The IDE will give you a list of all possible tags for that link, providing you with multiple angles to attack your web scraping task.
Getting Started with the IDE
Coders of any skill level can get Selenium up and running, by simply starting with the IDE!
To use the IDE, you will need the extension with either Chrome or Firefox.
Selenium IDE on the Chrome Web Store
Selenium IDE on Firefox
Once you have the extension, open the IDE and select “Record a new test in a new project.”
Select a base URL for your project, in the next pop-up, and click “start recording.” This will open a new web browser and the IDE, which will track all of your actions. Click, scroll, type, and interact with the webpage in the manner that you choose. Note: it is beneficial that you click on the elements you want to scrape. We will cover this in depth later. When finished, click on the stop button in the IDE, in the upper right hand corner, and you are done.
Selenium IDE Features
When you are finished, your output should look something like this:
Why You Should Send Your Next Email Newsletter to Fewer People | Why You Should Send Your Next Email Newsletter to Fewer People
And the exact steps to remove those who won’t open anyway
Photo by Matthew Fournier on Unsplash
Grow an email list.
That’s common advice here on Medium, and it’s good advice for a reason.
Email is a direct and easy way to communicate with your audience.
I see a lot of email campaigns in my day job working for one of the largest email service providers. Our users send billions of emails every month.
The biggest mistake I see our users making with their email campaigns is also one of the easiest to fix. Read on for a step-by-step guide on how to fix it.
You’re spamming people
Some of your subscribers are no longer interested in your emails. Maybe they’ve maximized all of the value you can provide them. Perhaps their interests and priorities have changed, or they’ve moved on to a new hobby.
That’s ok, you can’t please everyone. Desperately holding onto uninterested subscribers hurts your newsletter.
Inbox providers like Gmail and Yahoo care a lot about how users interact with your emails. It’s in their best interest to protect users from spam, and they’re ruthless about it. They’ve built complicated algorithms to detect spam and make sure it never reaches the inbox.
Gmail now prompts users to unsubscribe from emails they haven’t opened recently.
Image source: Mailjet
All that hard work building your subscriber list goes right out the window if the emails never make it to the inbox.
Respect your subscribers
On average there are 111 billion commercial emails sent every day. Your newsletter is just one email of hundreds that your subscribers receive every day.
You can easily tell which subscribers are still interested in your emails based on their engagement. Here are some positive engagement signals in the eyes of Gmail and Yahoo:
Opening your email
Clicking a link from your email
Replying
Forwarding your email to a friend
The average open rate is about 21%, and the average click rate is about 2.6%, according to Mailchimp.
Take a look at your most recent campaign and see how you compare.
Your secret weapon
Remove unengaged users from your email list.
It may seem heartbreaking to remove someone from your email list. You worked hard for that subscriber! Why take them out of your email list?
Inbox providers have a very low tolerance for spam. A spam rate below 0.1% is considered normal.
That means for every 1000 emails you send, only one can be marked as spam. Anything above that is too high in their eyes.
Once you get flagged with a high spam rate it’s very difficult to fix your reputation, so avoid it all costs.
My suggestion: remove users who haven’t opened your emails in 3 months.
Here’s exactly how to do that
Below is a 7-step guide for archiving unengaged users from Mailchimp. If you use another provider like Sendgrid, SendinBlue, or Mailjet, those links will take you to their specific guides.
1. Login to Mailchimp and navigate to Audience at the top.
2. Select the audience you want to modify, then click View Contacts on the right.
3. Next, click into Manage contacts above your contact list, then select Segments.
4. Create a new segment for Campaign Activity that matches anyone who did not open all campaigns in the last 3 months.
5. Click Preview Segment, then hit the arrow at the top of the list next to Email Address, and select all.
6. Click Actions, then Remove contacts. Don’t worry, Mailchimp will ask you to confirm on the next page. You’ll actually be archiving them instead.
7. Click Archive. This maintains their stats but prevents them from receiving your emails again. Later, you can build a re-engagement campaign to get them interested again!

That’s it! I included a screenshot below of what your segment should look like.
Your segment should look like this. Screenshot by the author.
What do you have to lose?
Test it out and see what happens.
Set a reminder to periodically clean out your email list. Your unengaged users will thank you, and inbox providers will look at you more favorably too.
You just increased the chance of your future emails landing in the inbox. | https://medium.com/better-marketing/why-you-should-send-your-next-email-newsletter-to-fewer-people-d79eb2601dde | ['Nick Lafferty'] | 2020-01-30 01:23:43.493000+00:00 | ['Email', 'Newsletter', 'Marketing', 'Email Marketing', 'Technology'] |
Changes That’ll Make a Big Difference With Your Learning Programming For Kids : Any Kid Can Code | Here, we are going to use a library named turtle. Here’s how:
import turtle
Then, type the magical keywords and give a suitable name to your turtle. I choose “jumper”.
jumper = turtle.Pen() (note the capital P; the lowercase turtle.pen() is a different function, so use Pen)
And suddenly we have the new window open or so called our own techie playground
Our Play area to make innovative figures
We have our own turtle
Most of you want to see turtle, lets give this arrow shape of turtle. Refer above image.
jumper.shape("turtle") or turtle.shape("turtle")
Now you are ready to move with turtle. Where you will move there will be line drawn. Thus you can create different images.
For a basic understanding of the screen, we should know the X and Y axes. The X axis is the horizontal line and the Y axis is the vertical line. The coordinate (0,0), where X is 0 and Y is 0, is the position where our turtle is now. Try to grasp it. If not, don’t worry; we have commands to move the turtle around the playground.
2 dimensioal chart
Basic commands to play with Turtle
forward: to move forward in the direction of its mouth
left: to change the direction. Bit tricky, you have to have knowledge of angles.
up: When you move, it wont make any line, till you bring it down.
down: To bring it down, otherwise your turtle is flying turtle
shape, width, goto, color are the other commands to play with.
If you want to go more deeper, please go to the below link and take a lead:
https://docs.python.org/3.1/library/turtle.html
Lets make use of the above commands and make quadrilateral (square or rectangle)
I ran below commands in sequence and here we go, we have our first magic figure ready:
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
our first creation: Square
jumper.left(90) turns your turtle 90 degrees to the left. And if you want to be creative, change 90 to some other number and see the magic. You will have magical figures turning up.
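For example, here is one small variation of my own (not from the original post) that uses 120 instead of 90; three sides and three 120-degree turns draw a triangle:

jumper.forward(100)
jumper.left(120)
jumper.forward(100)
jumper.left(120)
jumper.forward(100)
jumper.left(120)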
Just one important thing to mention: when you type a command, try to use the up arrow key. It will bring back the previous commands, so you need not retype them and give pain to your fingers.
Magic to master this is to practice and practice. So, try to make 5–10 different figures. And, challenge yourself by putting the color inside those figures. It is pretty simple. I will be coming up with next blogs to that where we will learn new things in programming like conditions, loop and their types, variables and their usage etc.
Name your turtle and move it as you want. Code, and don’t forget to have fun! Links are included below for further learning! | https://medium.com/swlh/simplistically-easy-any-kid-can-code-2294919a34e | ['Laxman Singh'] | 2020-12-22 09:44:36.454000+00:00 | ['Kids Programming', 'Python', 'Kids', 'Technology', 'Kids And Tech'] |
Addiction | Photo by Christopher Burns on Unsplash
Addiction
A poem
I see the monster with big pink lips and seven black eyes all lined with ink. I press my finger into his soft round skull, tap gently on his sharp white teeth. Wet your hands in his spittle. Let him leap at your face and rip into your arms. Let him tear the red flesh from your bones. He is here for your Budweiser. He has come for your weed. He bites your cans and pierces their guts, then sucks out their juices and foam. He stands on your bones and spits into rubber. He chews on your tight, sticky buds, then washes his mouth with your vodka and wine. Hide your pills. Tuck them neatly. Place them high on the shelf. Cover your arms, drape your shoulders, this monster has come for your Budweiser and weed, your children, your husband, your family, your wife. | https://jzkrebsbach.medium.com/addiction-fb5800862145 | ['Jessica Zeek Krebsbach'] | 2020-01-02 03:16:43.611000+00:00 | ['Addiction', 'Partying', 'Mental Health', 'Family', 'Poetry'] |
When to Fire Your Therapist | Have you kissed any good frogs lately?
Photo by Nik Shuliahin on Unsplash
I would never make it as a Scientologist, because I am a true believer in therapy. I think counseling is something almost everyone could benefit from. For those of us with rougher personal histories it’s mandatory. PTSD is the №1 threat to my life. Aside from the suicidal tendencies, PTSD ravages my physical health.
According to The American Institute of Stress:
Researchers reported that the number and severity of PTSD symptoms were significantly associated with deaths due to coronary heart disease as well as non-fatal heart attacks.
Unfortunately I have to fire my new therapist.
One of the saddest things I hear people say: “I went to therapy. It didn’t help.” That always comes down to one of two things: either the patient really didn’t want to do the work, or the therapist failed in their role — which many of them do.
So I’m here with some pro tips I’ve picked up since beginning therapy in the late 1980s. You can learn from my mistakes, work smarter at your wellness, and get better results. Self-care is the gift that keeps on giving.
Therapy: The emotional gym
The last person who told me therapy didn’t help her, when pressed, said she had only gone to two sessions. People often have unrealistic expectations for how therapy should work, what kind of benefits they can expect.
The issues we address in therapy have usually built up over a lifetime. I’ve found that a good therapist helps people learn how to balance and navigate from the lives they have to the lives they want. The therapist helps people find the healthier, happier, more balanced version of themselves. That’s why I go.
Using the gym analogy, I’ve recently lost over 120 pounds. For many years my weight and mental health both seemed insurmountable, intertwined and completely demoralizing. However in hindsight it has only taken me a year to lose about 35 percent of my body weight. Because I was emotionally ready, like I had hit bottom, it has been much quicker to correct the problem than to create it — which took decades of imbalance.
When I started at the gym I was 350 pounds. It took a good six months before I saw any difference except on the scale. But there is no scale for your therapy progress except the happiness in your life. What works for me is to trust the process and keep doing tiny amounts of it constantly.
The imposition of an infrastructure — a sustainable, reality-based understanding of my physical needs and how to consistently approach them — was what I needed to lose weight all along.
And that’s what I want from a therapist, too: a competent person to provide healthy guardrails for my life. People like me almost never graduate high school. Anyone who was able to become a therapist by definition had a better life with significantly more social support than I did.
Really excellent therapists enable me to leverage their stable childhoods by relating to them. They show me how a person with undamaged self-respect and healthy attachment would respond, often indirectly. A strong therapist is an ultimate role model.
For me a therapist is like a professional parent.
Key word there is professional: they’re not your actual parent. But they’re trained to behave like a highly appropriate, high-functioning parent.
At least that’s how I think of it, since neither of my parents was appropriate. Therapists give me a model of what it could look like without all of my parents’ poop injected into my thought process, a healthier way to reframe my life than I could imagine. Therapists don’t share your history, and thus can create a wonderfully neutral space.
I learned what good parenting is supposed to look like in therapy. A psychologist was the first person to tell me it’s not appropriate for parents to lean on their children for emotional support. I was horrified at the suggestion that I should stop being my mom’s #1 emotional supporter. It took me years of therapy to get it in my bones, which I now do.
After discussing these ideas many times with many therapists over the years, here’s what I believe a good parent thinks:
1. The child’s needs always come first.
2. The child may not necessarily like what’s best for them, but I do it anyway.
3. I may not like what’s best for the child, but I do it anyway.
Like a doctor, lawyer, or teacher, a good parent is an advocate who runs interference while providing guidance. You can’t do that when you are emotionally dependent on the child. That imbalanced relationship requires clearly defined duties and priorities. There’s a fiduciary responsibility. It is unethical for a parent, doctor, lawyer, or teacher to place their own personal needs —such as sex or money — above the needs of their charge.
The fiduciary duty doesn’t mean therapists can or should be abused. But in the session, the patient’s personal feelings count and the therapist’s don’t. That’s the only way it can be.
But a lot of problems are more subtle than theft or sexual abuse. Therapists are only human. It’s not always easy to see where interpersonal boundaries are or should be. Ultimately it’s your job as the patient to make sure you feel heard and respected, and if you don’t, kiss another frog.
Of course the relationship works both ways. I’ve never been fired by a therapist, but I know people who have. I don’t get close to anyone who confides that in me. I take it as a clinically educated red flag that this person has unsustainable issues. Therapists usually have to drum up their own work. They don’t just fire patients on a whim. If a doctor can’t work with them I probably can’t either.
My new therapist isn’t necessarily wrong. She’s just wrong for me.
I had a therapist that I really liked for the last couple of years, and my life improved immensely in that time. I moved out of my van and into an apartment with no help from anyone, which was Herculean. As I said, I’ve lost 120 pounds and counting. Neighbors who once ignored me now meet me in the parking lot for aerobics before dawn. People want to pay me to be their personal trainer. All of my problems are solving each other. My life got much better with her, and it felt so good.
So I was devastated when my therapist told me that she’d no longer be accepting insurance, any insurance, in 2020. She saw me on New Year’s Eve, twice that week. I know she hated to leave me without a new provider lined up. We both had to move on. Again, it’s a professional parent. I never lose sight of that. It doesn’t hurt me on a personal level, only the issue of kissing more frogs.
Importantly, though, it puts my entire therapeutic process back to Square 1. I’m heading into Month 6 without a new therapeutic relationship established despite my best efforts. For someone with my mental health issues this not sustainable. I’m keeping myself alive until I find a new doctor to help me survive. Unfortunately the last session with the latest provider, Kathy, was a deal-breaker.
Like having a baby, it’s never a convenient time to switch therapists.
I thought I was going to die when my second therapist, Raye, moved away.
I had fired the first-ever therapist after one visit. I told her that I was working with childhood sexual abuse, and I felt like forgiveness was a crucial part of my process. I needed to de-escalate emotionally. Forgiveness was so clear to me, like a lighthouse in my heart.
That initial therapist felt that for me to be thinking about forgiveness that early in the game was a cop-out, that I was letting people off the hook to avoid my anger. This was a complete misread of the situation and a foolish thing for her to say. She didn’t even explore what I meant by forgiveness.
My very first therapist didn’t have enough clinical experience to get that I first needed to forgive myself in order to truly believe I deserved therapy. I could not budge off that forgiveness vibe, nor should she have advised me to. I was so angry I couldn’t find any other safe way to approach it except with the goal of forgiving everyone unilaterally, because I can and that’s how I want to feel.
That first therapist was shocked and horrified when I told her I would need to switch providers because of the irreconcilable difference in strategy. The next person, Raye, was much more skilled. She saved my life.
THERAPY PRO TIP: When I fired that first therapist, she said that was fine but I’d have to come in for another session to discuss it. She said she needed to ensure that I wasn’t ditching her because she challenged me. I had already explained to her that we have irreconcilable differences in our goals. If anyone ever tells you you need another appointment to fire them, tell them no. The relationship is over when you say it is.
Not all therapists are created equal.
Like every other profession, therapists have their share of C students. Then there’s also the common thing of people simply not communicating well together. Some people don’t click. However it’s the therapist’s job to establish rapport with the patient, never the other way around.
I think my latest therapist, let’s call her Kathy, is both inexperienced and not especially skilled.
With the very first therapist I ever saw, our irreconcilable difference was, on the surface, how we understood the word “forgiveness.” But it was actually her inability to meet me where I was at. She expected me to get onto her page. And it’s the same thing with Kathy.
With Kathy I find myself in the session thinking I'm not doing it right, not meeting her expectations, that she's not understanding me. I remember several times thinking, "Wow, she must think this is my first therapy session, Day 1 of my process." I made another appointment with her for next week, because I wanted to make sure I wasn't incorrect. I wanted to fire her several times during the session. But I felt like I needed to sleep on it to be sure it was the right decision. Next time I will just pull the plug, even on a therapist.
People often want to fire a therapist who challenges them in an appropriate, even vital way.
Before I do the list of why to fire a therapist, the list of why not to is very short:
Don't fire a therapist who tells you things you don't want to hear, unless those things are abusive, or, like the forgiveness thing, an irreconcilable difference. That's why I didn't fire Kathy mid-session. I needed to make sure she didn't have a valid point.
A good therapist must be able to put a boot up your ass if need be.
It goes back to the professionalism of the therapist. I need to work with someone whose clinical judgment I trust. I have to get that they’re my advocate, that they get me, that we’re understanding each other. If we’re not communicating in therapy something is bad wrong. Kathy has to go.
So here’s my quick list, reasons I’ve fired therapists:
Must-Fire ASAP №1: I’m not comfortable with this person or in this environment.
I once went to a session at the private office of a therapist who also worked for the county mental health department. He gave me the address, and I met him at a small office park near the county building.
When I pulled in on Saturday morning it was a little spooky, completely abandoned. It was four smaller buildings around a courtyard. As I was parking, a homeless guy came out from the courtyard and stood by the entry.
I sat in the van and smoked a bowl, hoping he’d wander off and I could go to my appointment. He didn’t look dangerous, just dirty and disheveled. I didn’t want to be panhandled or deal with him.
The guy didn’t leave after five minutes. I decided I could take him in a fight, and went to look for the office.
He turned out to be the therapist. Even so, I went into the office with him. I was amazed that he would show up in dirty sweatpants and a T-shirt, as though he had rolled out of bed, down Main Street, and through a mud puddle. In hindsight this simple lack of professionalism should have been an automatic deal-breaker.
His office was, of course, in the back of the building. We went in and sat down on opposing couches, in a room he clearly shared with other therapists. When he left the door open for the session I was glad he did. He gave me the creeps.
I sat inside his office looking at the outer door and the base of the stairs. During the session a stranger walked into the middle of the room with us and asked us how to get to the architect's office. I walked out behind him.
I shouldn't have walked into that office, period. That guy skeeved me out, which is why I was glad he left the door open. If that's ever the case for you, save time and trouble and just leave. You can't do therapy with someone you don't want to be alone with.
Never do a therapy session where your privacy isn’t guaranteed. Someone walked in on me with my last therapist and she looked like she wanted to gouge his eyes with a scissors. Even so she held her shit together and stayed in professional mode until my session was over, because I wanted to finish my thought. I hated losing her. When I get rich I’ll pay cash and go back to her, unless I kiss the lucky frog.
Must-Fire Moment №2: This visit is traumatic.
I once had a psychiatrist ask me a rapid-fire series of questions about my life, ticking boxes without considering either the questions or the answers.
A psychiatrist is not actually a therapist, but seeing one was a required part of my mental health assessment process. That appointment itself was incredibly traumatic, jumping from one painful aspect of my life to the next at a high rate of speed without regard for the questions' impact on me.
That psychiatrist taught me to get up and walk out the door mid-session if need be. I almost did it with Kathy today. But because she’s an actual therapist I wanted to be sure I wasn’t reflexively rejecting constructive criticism, that she didn’t have an important point.
Take a tip from every Larry Nassar survivor:
“I hated it, but I didn’t know it was abuse.”
If you’re traumatized by the experience itself — unless it’s like a colonoscopy or a root canal, and you’ve been warned, and they’ve taken steps to minimize your stress level — something is wrong. I know psychiatry kind of sucks. But the first rule for doctors is “do no harm.” There’s a reasonable, empathetic way to do that process. That wasn’t it.
Must-Fire Moment №3: Can you hear me now?
I’ve been in therapy longer than many people have been in practice. So I have my own personal practice of therapy, over many years, with many different providers as described herein. I’ve got a certain style that may be challenging for the clinician, especially if their own game is not so strong.
I have to fire Kathy for a few reasons:
1. I felt judged. I was stunned when Kathy stopped me mid-sentence and accused me of ranting. This is how I've worked successfully with other therapists for years. But I gave her the benefit of the doubt. Then she didn't have anything other than her belief that I was much angrier than I was. Annoying AF. Not good. I feel like it's rude for a therapist to stop me short and say I'm ranting, especially if that's all she has to say. I felt like I wasn't able to be emotional with her. If I can't express myself freely with Kathy, game over.

2. I felt belittled. Kathy asked me what I got out of that, and I told her how I work through my process: talking. I listed the reasons it's important for me. Even so she described it as "venting" and told me that wasn't therapeutic. My last few therapists — and the circumstances of my life, described above — would beg to differ.
Given a do-over I would ask her to rate my anger level on a scale of 1 to 10. I would've put it at about 2–3. She seemed to think I was much angrier, even when I flipped it on demand. Again it's her job to get on my page, not the other way around.
Must-fire ASAP: therapist who tells me how I feel. Especially when they’re incorrect.
PRO TIP: Next time I notice feeling bad about myself during a therapy session — not processing that self-blame but experiencing it anew — I will drop the therapist like she’s hot, THEN sleep on it and decide if I’ve made a mistake and want another appointment. Because that one rule, about not firing them for challenging you when you need to be challenged, is really crucial.
3. Kathy doesn’t understand the therapeutic relationship well enough. While we were discussing rape trolls on Medium and how I deal with them, Kathy said she didn’t think I was being an effective advocate because she could tell I was angry. I was shocked that she assumed I would speak to everyone the same way I do in therapy. Not everybody has a fiduciary relationship with my feelings, right? Especially rape trolls, eh? Super basic. She also assumed that the rape troll was conversing with me the same way people do with her in her office. Also obviously wrong. Those are two entirely different communications. If Kathy thinks everybody is at the same level of conversation in therapy as they are all the time, she’s very low-functioning.
4. It's not the patient's job to establish rapport or run the session. Kathy and I met on a video chat due to the lockdown, and I brought up the awkward last session. I wanted to see if she had some point that I missed, and clearly she felt the same way. This is your red flag moment, folks. I shouldn't be the one trying to make this work.
Kathy said, “If you were in my office I would’ve shut you right down.” First of all it’s very scoldy, fuck that. I’m sure I wasn’t inappropriate. I’ve never had a therapist suggest that I was before. That tone, especially after she used the offensive word “ranting,” was a good time to end the call. Other therapists have successfully set boundaries with me. Not often, but it can happen. There was a gentle, effective way to redirect me, but Kathy couldn’t find it.
It's important to understand the therapeutic relationship, the sacred space that is created. That video conference was her virtual office. If she maintains lower professional standards because of the lockdown, because the patient is not physically present, then videoconferencing is unethical for her to participate in. To say, "If you were in my office…" implies that what we're doing here is somehow not official, not what she actually does. If she wanted to "shut me down" and didn't know how, and I also didn't know what she wanted from me, then we have nothing to discuss. She's simply unqualified.
PRO TIP: Don’t let anyone do you any special favors, things they wouldn’t normally allow but we’ll keep it between us. That’s a grooming technique.
At the end of that session Kathy showed me a printout that said, “You are not your thoughts,” which is not a deep or new thought, and also had nothing to do with anything we discussed. Kathy failed to demonstrate either insight or empathy, and she got fired.
So that’s my short list of reasons to look for a new provider. Fortunately there are more options than ever before. I found Kathy through an online service that has all kinds of virtual doctors. Like a dating site, I can pull up profiles, request appointments, and see how it goes.
On the one hand I see that I will need to be more cautious next time, taking the time to make the provider prove themselves before thinking I can continue working on my issues. Just like with dating, I shouldn’t have jumped right into bed with Kathy right away. When I got to know her I didn’t like her.
On the bright side, I can take my time and shop around, and find the perfect partner for my best life. And I know that I will. | https://medium.com/narrative/when-to-fire-your-therapist-5b276b45dc72 | ['Art Nunymiss'] | 2020-05-26 19:28:09.041000+00:00 | ['Life Lessons', 'Mental Health', 'PTSD', 'Therapy', 'Self'] |
Revised Guidelines For Brave&Inspired (11/2/19) | With all the new changes in Medium, it seems that a good game face is more important than ever. Each story needs to be readable and professional, whether it is a longer piece or a short poem. I now have tabs included on the top of the page, so to have your story go into the correct tab, they should be properly tagged.
Poetry — should be tagged poetry, and should have some theme of being inspired or behaving in a way that feels brave to you.
True Courage — is the tab for stories that happened to you. Please use This Happened to Me as one of your tags.
Inspired — If your story is based on how someone else’s bravery inspired you, use “Inspiration” as your tag.
Stories may sometimes appear in more than one tab; that's okay.
Self Promotion and Embeds
I am all for self-promotion and embeds, but your story should be mostly about what you are writing now. Excessive extras look messy and detract from your piece. Keep them reasonable.
Aside from that, all the standard stuff applies. Watch your grammar. I have Grammarly on my laptop, and if it catches something glaring, like a spelling error, I may fix that, but not much else. (I don’t “fix” UK spelling, even if Grammarly tells me to). Use a credited image with proper permission. Etc.
If you aren't a writer on Brave&Inspired and feel you have the right kinds of stories to tell, reply below to be added as a writer. Thanks!
How to Find Device Metrics for Any Screen | How to Find Device Metrics for Any Screen
Calculate the right measurements for design across devices
New devices are always coming online, offering new formats, screen sizes, and pixel densities that you’ll want your product to accommodate. If Googling doesn’t give you the numbers you need, or you just want to show off your math skills, here’s a relatively easy way to determine the relevant metrics for your designs.
Do it yourself 🛠
There are six pieces of information you’ll want in the end: the screen diagonal measurement, screen dimensions, aspect ratio, pixel resolution, dp (or density-independent pixel, Android’s logical pixel) and the density bucket to which each device belongs. Some of those specifications can be found on each device’s product page, or through other sites that collect information about specific devices.
It’s easiest to visualize this information if we have an example. So, to get a feel for the process of finding these values, let’s start with the Pixel 4.
The screen diagonal, aspect ratio, and pixel resolution can all be found on the Pixel 4 product page. For devices that don’t have in-depth or easily accessible specification pages, sites like the GSMArena Phone Finder can be a good resource.
Finding Screen Dimensions 📐
After filling in the readily available info, we have three more specs to solve for. The first one is the screen’s dimensions in terms of width and height. The formulas for this — which I learned from an Omni Calculator page by Hanna Pamula — are fairly easy to use, and only require a diagonal measurement and an aspect ratio (AR).
Width = diagonal / √(AR²+1)
Height = (AR)×Width
So for our example, we know the screen diagonal is 5.7 and the aspect ratio is 19:9 (which we’ll write as 19/9 in the formula).
Width = 5.7 / √((19/9)²+1)
Width = 2.44”
And now that we know Width, we can solve for Height.
Height = (19/9)×2.44
Height = 5.15”
So our screen dimensions are 2.44×5.15”
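If you'd rather script this than punch it into a calculator, here is a quick Python sketch (the function name is my own):

import math

def screen_dimensions(diagonal, aspect_ratio):
    # Width = diagonal / √(AR²+1); Height = AR × Width
    width = diagonal / math.sqrt(aspect_ratio ** 2 + 1)
    return width, aspect_ratio * width

width, height = screen_dimensions(5.7, 19 / 9)  # Pixel 4
print(round(width, 2), round(height, 2))        # 2.44 5.15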
Finding dp Resolution 📏
Density-independent pixels are Android's logical pixel. Measuring in dp allows designers and developers to ensure that what they create appears at the same physical size across screens, no matter what density those screens happen to have. So knowing the dp resolution of a device can be really helpful in targeting that device with your design. You can easily set up artboards and assets that focus on specific form factors, and reliably reproduce your design across them.
For this formula (which you can find in the Android Developers documentation on pixel density), we need to know the screen's pixel resolution and its pixel density. The density (dpi, usually listed as ppi) should be available at one of the sources mentioned above, or it can be derived from the pixel resolution and the physical dimensions we calculated before.
px = dp×(dpi/160)
In our example, we know the screen's pixel resolution is 1080×2280px and its density is roughly 444ppi (consistent with the 2.44×5.15” dimensions we found), so we can plug those values into the formula, starting again with width.
1080 = dp×(444/160)
1080 = dp×2.775
dp = 1080/2.775
dp = 389
Next we’ll do the same calculation for the screen’s height in density-independent pixels.
2280 = dp×2.775
dp = 2280/2.775
dp = 822
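The same rearrangement, scripted (again, the helper name is my own; 444 is the Pixel 4's ppi):

def px_to_dp(px, ppi):
    # px = dp × (dpi / 160), so dp = px / (ppi / 160)
    return px / (ppi / 160)

print(round(px_to_dp(1080, 444)))  # 389
print(round(px_to_dp(2280, 444)))  # 822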
Finding the density bucket 🔍
The Android Developers documentation on pixel density also outlines the notion of “density qualifiers,” which Android uses to serve bitmap drawable resources in an app. If there are non-vector assets like photos or illustrations in your design, it can be useful to know which density buckets you’re targeting, serving the right asset to each device to speed up loading and to avoid distortion and “out of memory” errors.
Finding the density bucket is as easy as looking at the table in the documentation linked above and comparing it to our dpi value. For the Pixel 4’s ~444ppi, we would assign the XXHDPI density qualifier.
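That lookup is also easy to automate. A simplified sketch, with bucket ceilings taken from the Android density-qualifier table (real devices are binned by their declared density, so treat raw ppi as an approximation):

def density_bucket(ppi):
    buckets = [(120, 'ldpi'), (160, 'mdpi'), (240, 'hdpi'),
               (320, 'xhdpi'), (480, 'xxhdpi')]
    for ceiling, name in buckets:
        if ppi <= ceiling:
            return name
    return 'xxxhdpi'

print(density_bucket(444))  # xxhdpi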
Putting it all together 🧩
Having worked through those calculations, we now have a complete set of device metrics for our example device, the Pixel 4.
For further reading on pixel density, layout, and how the two interact, see the Material Design guidance on Layout. | https://medium.com/google-design/how-to-find-device-metrics-for-any-screen-62b9ad84d097 | ['Liam Spradlin'] | 2020-05-19 15:22:47.032000+00:00 | ['How To', 'Design Process', 'Design', 'Material Design', 'Tutorial'] |
Neumorphism will NOT be a huge trend in 2020 | For it to work well you need to make sure that even blockframes of your objects with the background removed can be all identified as part of the same group. ⬜️ ⬜️ ⬜️
So in short — those cards will work if the interface can be just as good without them. Now that's an awesome recommendation, isn't it? Especially when we take that Dieter Rams point about removing the "unnecessary" 😂
If you want a pre-made Sketch / Figma files with those shapes to play around go to www.Neumorphism.com
But it’s so fresh!
Remember Pantone Color of the year 2019? Let me refresh your memory on that amazing “new design trend of 2019” as it was proclaimed in January.
While of course there were initial examples of that “living coral” most of them were not relevant anymore by early February 2019.
I think that by around the same time of year we will have exhausted all possible neumorphic combinations and come back to what worked well before.
Directions for 2020 🔥
That doesn’t mean Neumorphism is dead in the water though.
It just means that on its own it won't be able to carry an entire product to success. Sure — a couple of initial products done this way can be successful, but this will get more tiresome than even Material Design.
However mixing parts of this style with parts of other styles will definitely be a trend in 2020 and beyond.
Making a product that is both beautiful and functional means not going over the top in any direction. Even the currently popular soft-colorful shadow only works well if it’s done on buttons and/or icons. Imagine applying it to the entire product — exactly!
Dark mode
Dark mode is going to be a thing whether we like it or not. But not necessarily the desaturated grey-blue dark mode we’re seeing everywhere.
Since the introduction of OLED screens it's obvious that pure black doesn't use much energy (if any). So if the goal of a dark mode is to save battery, we should start seeing more minimal, functional interfaces that are primarily black. Not dark grey.
Eye-strain can be another reason for dark modes and if that’s the case those soft dark-modes definitely look better.
Many apps will make light and dark versions of their interfaces.
Illustration and 3D
We definitely need a more diverse illustration landscape. Currently the most popular style, with those slightly disproportionate body parts and loose lines, is showing up everywhere. This gets boring quickly.
These all look good but way too similar.
However illustration is one of the best ways to stand out — but we need to experiment with it more not to fall into the trap of sameness.
3D, on the other hand, is a bit easier to make in a way that's different. It's also significantly harder to make, requiring more effort. That means that if that time was spent making a 3D render, it's likely to be better quality and on-brand.
Great example of an on-brand style is Pitch.com
Animation
Transitions and scene-builds will rise even more this year. One of the accelerants is the new, exciting JS libraries that allow for complex 2D and 3D transitions with relative ease.
👨🏻💻 Yes you can now code “cool stuff” much easier. Don’t overdo it!
We’re going to put that flat-design on surfaces and turn them around. Kind of like in that FEZ game ;)
Isometry?
In 2019 while building our cryptocurrency analytics platform I took the time to analyse the design of over 2000 crypto related websites (a long article on it is coming in the new year). That literally means I went to 2000 websites and gave each one a score based on quality, originality and consistency.
This is beautiful (if you’re the author reach out and I’ll tag you here). But at the same time seeing similar images EVERYWHERE can get very boring.
One thing that struck me the most is that almost 1/4 of them had some sort of isometric images on them. All done in different yet so familiar style that after a while I wasn’t sure they weren’t all from the same free library.
This trend can be done well, but if you plan to just replicate the popular ones in your designs — just don’t. Please. It’s been one of the most overused design things this year (right after colorful shadows).
Ultra minimalism for mindfulness?
This trend is just starting so it may not go outside a small niche. I jumped into digital-detox and using more minimal products this year and so did many people I know.
I ordered both. Using the Light Phone 2 now — quite refreshing.
Devices like the Mudita Pure or the Light Phone 2 deliver simple, black&white, super-minimal interfaces. If we consider the apps we use as tools that must serve a specific function, a minimal interface makes some sense. Of course not all apps will work in that style (imagine a text-only Instagram 🤣)
Voice interfaces?
At one of the conferences I attended (and lectured at) this year, I heard this quote:
Don’t learn UI. We will be using mostly voice recognition in the near future — talking to our devices.
While this seems futuristic and makes sense in some scenarios (while driving or exercising), there are two main problems that I think will keep voice UIs from a dominant position (yet):
1. There are huge concerns about privacy and "creepy" AI. Not long ago Alexa advised its user to kill themselves as it's the best way to stop global warming and save the planet. While this may logically be true, it's definitely a clickbait-y headline that will make many people scratch their heads and say — yeah, I don't like those smart speakers listening to our every word and secretly building the next Skynet.

2. In many cases it's really weird to talk to your phone (especially in public). A few quick taps are more private and faster. So until we get those brain-computer interfaces, talking to your phone to write a text message on a bus won't be a thing.
So what’s it gonna be?
The only right answer here can be — I don’t know. I may be wrong and we’ll be living in the extruded soft plastic future, or we may get extruding glass on phones and make it even more real.
Add all those trends on top of one another and see what happens ;-)
But what’s likely to happen is that no one trend will dominate this year.
The best designs — as it has always been — will come from the right mix of all the trends with good typography. Because you can cast a different type of shadow on your card, but if the text on it looks all misplaced and weird, no amount of extruded plastic will make that design good.
Readable IS beautiful. Remember that in 2020! | https://uxdesign.cc/neumorphism-will-not-be-a-huge-trend-in-2020-67a8c35e52cc | ['Michal Malewicz'] | 2020-06-13 08:53:14.811000+00:00 | ['Design', 'UI', 'UX', 'Visual Design', 'Prototyping'] |
How to Develop and Deploy a Webhook Alert Action App with Custom Payload and Headers for Splunk | How to Develop and Deploy a Webhook Alert Action App with Custom Payload and Headers for Splunk Ankit Tyagi Follow Jun 29 · 6 min read
Splunk is a wonderful tool with loads of features, yet a few of them are not available out-of-the-box — one of those missing features is the customizable webhook alert action.
While working on a project, I needed to send some custom metrics along with some headers from Splunk alerts and ended up exploring the default available webhook alert action.
Here’s what’s already available
Splunk, by default, provides a webhook alert action for alerts; however, it is lacking some required features:
Authentication mechanism
Custom header support
Custom payload support
With the above features missing from the default webhook alert action, it becomes much less useful: most webhooks require some kind of authentication to receive data, and similarly, a customized payload might be a requirement for some webhook APIs.
Splunk has done a great job with its pluggable platform, which allows you to write your own apps and deploy them to serve your custom requirements. Despite this, building a custom app can be tricky and time-consuming.
After reviewing the webhook alert action I decided to develop my own custom webhook alert action with all the required missing features.
customWebhook app to the rescue
I developed a customWebhook alert action application with the following configurable options:
URL: Target webhook URL
Headers (dict): Include any number of headers
e.g. {‘Authorization’: ‘Bearer <token>’, ‘Content-Type’: ‘application/json’…}
Payload (dict): Payload data with key-value pairs of your choice
e.g. {‘service_name’: ‘MyService’, ‘key1’: ‘value1’, ‘key2’: ‘value2’…}
I’ll walk you through easy and simple steps that would give you a kick start with your Splunk project.
Objectives:
1. Understand the Splunk app directory structure
2. Components with the respective directory structure
3. Configuration and logic code snippet
4. Deploy the application on the Splunk server
5. Create an alert with customWebhook alert action
Understand the Splunk app directory structure
A snapshot of the application directory structure.
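Roughly, the pieces live in these locations (a layout pieced together from the file paths in the snippets below; the bin/ home for the script and the metadata/ folder follow standard Splunk app conventions rather than paths stated explicitly here):

customWebhook_app/
├── bin/
│   └── customWebhook.py
├── default/
│   ├── alert_actions.conf
│   ├── app.conf
│   ├── restmap.conf
│   └── data/ui/alerts/customWebhook.html
├── metadata/
│   └── default.meta
└── README/
    └── alert_actions.conf.spec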
Let’s break it down.
This is an exhaustive list of the configuration I used in the customWebhook app. The complete list of available options is here: https://docs.splunk.com/Documentation/Splunk/8.0.4/AdvancedDev/ModAlertsCreate
Configuration and logic code snippet
alert_actions.conf
# Configuration file path
# $SPLUNK_HOME$/etc/apps/customWebhook_app/default/alert_actions.conf

[customWebhook]
is_custom = 1
label = customWebhook
description = Send CustomWebhook alert notifications
icon_path = customWebhook_icon.png
payload_format = json
param.alert_source = Splunk
param.user_agent = Splunk/$server.guid$
app.conf
[ui]
is_visible = 0
label = customWebhook alerts

[launcher]
author = Ankit Tyagi
description = Send alert payload to custom webhook
version = 1.0

[install]
state = enabled
is_configured = 1
restmap.conf
[validation:savedsearch]
action.customWebhook = case('action.customWebhook' != "1", null(), 'action.customWebhook.param.base_url' == "action.customWebhook.param.base_url" OR 'action.customWebhook.param.base_url' == "", "No Webhook URL specified", 1==1, null())
action.customWebhook.param.base_url = validate( match('action.customWebhook.param.url', "^https?://[^\s]+$"), "Webhook URL is invalid")
default.meta
[alert_actions/customWebhook]
export = system

[alerts]
export = system

[restmap]
export = system
customWebhook.html
<!--$SPLUNK_HOME$/etc/apps/customWebhook_app/default/data/ui/alerts/customWebhook.html-->
<form class="form-horizontal form-complex">
<div class="control-group">
<label class="control-label" for="customWebhook_base_url">URL</label>
<div class="controls">
<input type="text" class="input-xlarge" name="action.customWebhook.param.base_url" id="customWebhook_base_url" placeholder="https://server.com/api/v2/webhook/" />
<br>
<span class="help-block">
Webhook URL (str) https://server.com/api/v2/webhook
</span>
</div>
</div>
<div class="control-group">
<label class="control-label" for="customWebhook_headers">Headers</label>
<div class="controls">
<input type="text" name="action.customWebhook.param.headers" id="customWebhook_headers" placeholder="{'Content-Type': 'application/json', 'CustomHeader': 'Value'}"/>
<br>
<span class="help-block">
Headers (dict) <br>
e.g. {'Content-Type': 'application/json', 'CustomHeader': 'Value'}
</span>
</div>
</div>
<div class="control-group">
<label class="control-label" for="customWebhook_payload">Payload</label>
<div class="controls">
<input type="text" name="action.customWebhook.param.payload" id="customWebhook_payload" placeholder="{'key1': 'value1', 'key2': 'value2'}"/>
<br>
<span class="help-block">
Payload data (dict)
e.g. {'key1': 'value1', 'key2': 'value2'} <br>
add the key from seach result like this. <br>
e.g. {'key1': 'value1', 'key2': '$result.key_name$'}
</span>
</div>
</div>
</form>
alert_action.conf.spec
# $SPLUNK_HOME$/etc/apps/customWebhook_app/README/alert_actions.conf.spec

[customWebhook]
param.alert_source = "Splunk"
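customWebhook.py

This is the script Splunk runs when the alert fires. Custom alert actions are invoked with an --execute argument and receive a JSON payload on stdin, whose 'configuration' dict carries the params defined above. What follows is only a rough sketch of that general shape; the parsing and HTTP details are my own illustration, not necessarily the exact code used in this app:

import ast
import json
import sys
import urllib.request

def send_webhook(payload):
    config = payload.get('configuration', {})
    url = config.get('base_url', '')
    # The UI examples above use single-quoted dicts, so literal_eval is more
    # forgiving here than json.loads would be.
    headers = ast.literal_eval(config.get('headers') or '{}')
    body = ast.literal_eval(config.get('payload') or '{}')
    request = urllib.request.Request(
        url, data=json.dumps(body).encode('utf-8'), headers=headers)
    with urllib.request.urlopen(request) as response:
        return 200 <= response.status < 300

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == '--execute':
        try:
            ok = send_webhook(json.loads(sys.stdin.read()))
            sys.exit(0 if ok else 2)
        except Exception as error:
            sys.stderr.write('ERROR Unexpected error: %s\n' % error)
            sys.exit(3)
    else:
        sys.stderr.write('FATAL Unsupported execution mode (expected --execute)\n')
        sys.exit(1)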
| https://medium.com/adobetech/how-to-develop-and-deploy-a-webhook-alert-action-app-with-custom-payload-and-headers-for-splunk-6dc1529c49f3 | ['Ankit Tyagi'] | 2020-07-11 15:32:47.434000+00:00 | ['Webhooks', 'Adobe Engineering', 'API', 'Python', 'Splunk']
Computing PI in Only 3 Lines | Computing PI in Only 3 Lines
3.1415… the rest is calculus with a keyboard
Photo by sheri silver on Unsplash
I’ve always been amazed by how precisely computers can approximate Pi: the famous irrational constant 3.141592… plus infinitely more digits. Today, over 50 trillion digits have been found.
Though the value of Pi is essential to nature, Pi does not come naturally to computers. A number of algorithms are commonly used to approximate Pi, including the Gauss–Legendre, Borwein’s, and Salamin–Brent algorithms. These methods are extraordinarily efficient, but somewhat complex.
A More Straightforward Approach
There is one method — one that I discovered on my own in high school — that isn’t as fast, but is most elegant in its simple mathematic groundwork.
In calculus class, you probably studied the Taylor Series: an infinitely long summation that models a (sufficiently well-behaved) function by matching all of its derivatives at a point. Our goal is to find the right Taylor Series such that the infinite summation converges to Pi itself.
Let’s start with a basic trig expression involving Pi.
1 = tan(π / 4)
Rearranging this equation allows us to solve directly for Pi.
π = 4 * arctan(1)
Though I won’t walk through the formal derivation, the Taylor Series for 4 * arctan(1) is
4 * (1 − 1/3 + 1/5 − 1/7 + 1/9 − …)
which can be written in summation notation as
π = Σ from i = 0 to ∞ of 4 * (−1)^i / (2i + 1)
As you can see, this definition of Pi finds its beauty in its simplicity.
Translating to Code (Python)
The formal summation we found goes hand-in-hand with modern programming methods, as the sigma is really a for-loop in disguise.
Amazingly, the summation itself only requires 3 lines of code! A variable pi holds the current sum, as new terms of the series are added on loop.
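A minimal sketch of what those three lines might look like (the loop bound is an arbitrary choice; more terms means more accuracy):

pi = 0
for i in range(1_000_000):
    pi += 4 * (-1) ** i / (2 * i + 1)

After a million terms, the running sum sits within a few millionths of π, a hint at how slowly this series converges.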
Though the above program is functional, recomputing (-1) ** i adds avoidable overhead on every iteration. An easy fix is using modulus to determine the next term's sign.
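One way such a fix might look, again as a sketch:

pi = 0
for i in range(1_000_000):
    # the modulus check picks the sign, so no exponentiation is needed
    pi += (4 if i % 2 == 0 else -4) / (2 * i + 1)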
Convergence and Algorithm Efficiency | https://towardsdatascience.com/computing-pi-in-only-3-lines-95c26276f4c9 | ['Blake Sanie'] | 2020-12-21 18:13:01.045000+00:00 | ['Calculus', 'Pi', 'Taylor Series', 'Python', 'Precision'] |
Introducing Pagination in the Syncfusion Flutter DataGrid | Paging is an important feature for loading large amounts of data and displaying it instantly in the DataGrid widget. It also provides easy navigation through the data. We at Syncfusion have developed the Syncfusion Flutter Data Pager widget to achieve pagination in the Flutter DataGrid widget. You can get this Data Pager in our 2020 Volume 3 release.
Let’s discuss in this blog how to integrate the SfDataPager with SfDataGrid and the customization options available in the Data Pager widget.
Integrating SfDataPager into SfDataGrid
Step 1: Include the Syncfusion Flutter DataGrid package dependency in the pubspec.yaml file of your project with the following code.
syncfusion_flutter_datagrid: ^18.3.35-beta
Step 2: Import the DataGrid package in the main.dart file using the following code example.
import 'package:syncfusion_flutter_datagrid/datagrid.dart';
Step 3: Create a common delegate for both the SfDataPager and SfDataGrid and do the following. Please note that, by default, DataGridSource is extended with the DataPagerDelegate.
1. Set the SfDataGrid.DataGridSource to the SfDataPager.delegate property.
2. Set the number of rows to be displayed on a page by setting the SfDataPager.rowsPerPage property.
3. Set the number of items that should be displayed in view by setting the SfDataPager.visibleItemsCount property.
4. Override the SfDataPager.delegate.rowCount property and the SfDataPager.delegate.handlePageChange method in the SfDataGrid.DataGridSource. You can also load data for a specific page using the handlePageChange method, which is called for every page navigation in the Data Pager.
Refer to the following code example.
class OrderInfoDataSource extends DataGridSource<OrderInfo> {
  @override
  List<OrderInfo> get dataSource => paginatedDataSource;

  @override
  Object getValue(OrderInfo orderInfos, String columnName) {
    switch (columnName) {
      case 'orderID':
        return orderInfos.orderID;
      case 'customerID':
        return orderInfos.customerID;
      case 'freight':
        return orderInfos.freight;
      case 'orderDate':
        return orderInfos.orderData;
      default:
        return '';
    }
  }

  @override
  int get rowCount => orders.length;

  @override
  Future<bool> handlePageChange(int oldPageIndex, int newPageIndex,
      int startRowIndex, int rowsPerPage) async {
    int endIndex = startRowIndex + rowsPerPage;
    if (endIndex > orders.length) {
      endIndex = orders.length - 1;
    }
    paginatedDataSource = List.from(
        orders.getRange(startRowIndex, endIndex).toList(growable: false));
    notifyListeners();
    return true;
  }
}
Step 4: Create an instance of OrderInfoDataSource and assign it to DataGrid’s source and Data Pager’s delegate properties.
Refer to the following code example.
List<OrderInfo> orders = [];
List<OrderInfo> paginatedDataSource = [];
static const double dataPagerHeight = 60;
final _OrderInfoRepository _repository = _OrderInfoRepository();
final OrderInfoDataSource _orderInfoDataSource = OrderInfoDataSource();

@override
void initState() {
  super.initState();
  orders = _repository.getOrderDetails(300);
}

@override
Widget build(BuildContext context) {
  return MaterialApp(
      title: 'Paginated SfDataGrid',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Paginated SfDataGrid'),
        ),
        body: LayoutBuilder(builder: (context, constraint) {
          return Column(
            children: [
              SizedBox(
                  height: constraint.maxHeight - dataPagerHeight,
                  width: constraint.maxWidth,
                  child: SfDataGrid(
                      source: _orderInfoDataSource,
                      columnWidthMode: ColumnWidthMode.fill,
                      columns: <GridColumn>[
                        GridNumericColumn(
                            mappingName: 'orderID', headerText: 'Order ID'),
                        GridTextColumn(
                            mappingName: 'customerID',
                            headerText: 'Customer Name'),
                        GridDateTimeColumn(
                            mappingName: 'orderDate',
                            headerText: 'Order Date'),
                        GridNumericColumn(
                            mappingName: 'freight', headerText: 'Freight'),
                      ])),
              Container(
                height: dataPagerHeight,
                color: Colors.white,
                child: SfDataPager(
                  delegate: _orderInfoDataSource,
                  rowsPerPage: 20,
                  direction: Axis.horizontal,
                ),
              )
            ],
          );
        }),
      ));
}
After executing this code example, we will get output like in the following GIF image.
Pagination feature in Flutter DataGrid
Appearance customization of Data Pager
Data Pager allows you to customize the appearance of its elements by assigning an SfDataPagerThemeData to the data property of SfDataPagerTheme. To do this, the SfDataPager should be wrapped inside the SfDataPagerTheme.
Follow these steps to customize the Data Pager:
Step 1: Import the theme package in the main.dart file using the following code.
import 'package:syncfusion_flutter_core/theme.dart';
Step 2: Add the SfDataPager widget inside the SfDataPagerTheme like in the following code example.
@override
Widget build(BuildContext context) {
return Scaffold(
body: SfDataPagerTheme(
data: SfDataPagerThemeData(
itemColor: Colors.white,
selectedItemColor: Colors.lightGreen,
itemBorderRadius: BorderRadius.circular(5),
backgroundColor: Colors.teal,
),
child: SfDataPager(
delegate: _orderInfoDataSource,
rowsPerPage: 20,
direction: Axis.horizontal,
),
));
}
After executing this code example, we will get output like in the following screenshot.
Customized Data Pager widget
Conclusion
In this blog, we have seen the steps to integrate the Syncfusion Flutter DataGrid with the new Data Pager. This feature helps you to easily navigate to required data. You can get the complete source code for the example used in this blog in this GitHub repository.
So, try it out and leave your feedback in the comments section below!
Syncfusion Flutter widgets offer fast, fluid, and flexible widgets for creating high-quality apps for iOS, Android, and the web. Use them to enhance your productivity!
For existing customers, the new version is available for download from the License and Downloads page. If you are not yet a Syncfusion customer, you can try our 30-day free trial to check out our available features. Also, try our samples from this GitHub location.
You can also reach us through our support forums, Direct-Trac, or feedback portal. We are always happy to assist you! | https://medium.com/syncfusion/introducing-pagination-in-the-syncfusion-flutter-datagrid-b66c600add73 | ['Rajeshwari Pandinagarajan'] | 2020-11-19 12:56:42.558000+00:00 | ['Flutter', 'Android App Development', 'Dart', 'Mobile App Development', 'Web Development'] |
A philosophy of participation in dynamic wholeness | A philosophy of participation in dynamic wholeness
Excerpt from Exploring Participation (D.C.Wahl 2002)
[Note: This is an excerpt from my 2002 masters dissertation in Holistic Science at Schumacher College. It addresses some of the root causes of our current crises of unsustainability and applied insights from holistic science to ecological design. This excerpt explores the fundamentals of what Charles Eisenstein later (2014) referred to as ‘the story of separation’ and ‘the story of interbeing. Be warned, this is academic, somewhat dense writing, yet it addresses some crucial issues. Enjoy!]
“Our task is to look at the world and see it whole.” — E.F. Schumacher “The success or failure of saying, and hence of writing, turns upon the ability to recognize what is part and what is not. But a part is a part only inasmuch as it serves to let the whole come forth, which is to let meaning emerge. A part is only a part according to the emergence of the whole which it serves; otherwise it is mere noise. At the same time, the whole does not dominate, for the whole cannot emerge without the parts. The hazard of emergence is such that the whole depends on the parts to be able to come forth, and the parts depend on the coming forth of the whole to be significant instead of superficial.” — Henri Bortoft
The above remains meaningful whether we understand it hermeneutically, as referring to a text and the reciprocity of meaning between the whole and the parts, as well as understood as an analogy for our individual relationship to the wider whole — the world we live in.
In order to continue to emerge in good health, a whole like our living planet depends on the appropriate participation of its parts, and that includes humanity. Just as the parts — you and me — depend on the emergence of this healthy whole for their meaningful and healthy existence.
In this dissertation, I aim to convey some of the understanding that recognizes the intricate link between the health of the whole and the appropriate participation of the parts. I argue that appropriate participation takes place on the appropriate spatial-temporal scales and that the complex dynamic processes of life, which are continuously transforming the whole, depend on an intricate web of relationships through which everything and everybody participate simultaneously as the weavers and as the web of life.
Each individual part of this dissertation is intended to let meaning emerge and let the whole come forth. “Logic is analytical, whereas meaning is evidently holistic, and hence understanding can not be reduced to logic. We understand meaning in the moment of coalescence when the whole is reflected in the parts so that they disclose the whole.”10 The circle of relationship between the whole and the parts, in which meaning emerges, was first recognized by Friedrich Ast, who called it the hermeneutical circle.
I argue, based on my interpretation of Bortoft’s work, that the same circle of relationships between the whole and the part that is expressed in the hermeneutical circle of understanding meaning, may also help to understand our own reciprocally co-creative relationship between our perceived selves and the world we thus perceive.
It is the whole-part relation that manifests in creation, maintenance and transformation of the identities, which in turn manifest as the world we experience. At the core of this re-emergence of meaning, at the foundations of a newly emerging worldview is the awareness of our deeply participatory relationship with the living world.
In the same way that the parts and the whole of this dissertation bring forth each other, mutually dependent on each other for their meaningful existence, each one of us derives meaningful existence through his or her profoundly reciprocal relationships and interactions with the world.
Paying attention to the participatory nature of all existence and our associated creative agency in the world can help us to overcome the alienation and lack of meaning, which characterizes the modern world.
Whether we are conscious of it or not, each one of us is constantly engaged in a creative process through which we collectively bring forth or contribute to the emergence of the whole. Yet, the whole in its entirety has neither beginning nor end and there is no possibility of being outside the whole. Therefore, it is important to understand that the interactions and relationships of its parts continuously transform the whole from within, allowing for creative change to occur and new interpretations of meaning as well as new manifestations of the whole to emerge.
In other words, we need to face up to responsibility for our actions. Whatever we do is shaping the world we live in, while we are simultaneously being shaped by the world we collectively co-create through our thoughts, words and actions. The whole and the part create each other and it may serve us better to think of them not as separate and exclusive, but rather as mutually dependent and encompassing, reciprocating entities that co-create the world in a continuous process of interacting with and relating to each other. The whole and the part, as well as the observer and the observed are truly one, as well as being distinct within that one whole.
A participant-observer understanding of the world as a whole can lead us to seeing how the one can express itself as the many in the diversity of manifestations that emerge through the interactions and relationships of all participant-observers.
The process is analogous to how the one meaning of this dissertation expresses itself through the different interpretations of each individual reader. There is only one meaning and one whole, but it manifests itself differently through the ever-shifting web of relationships of its participant- observers. Like the one universal whole, meaning should not be thought of as fixed or static, but as a dynamic process un/en-folding through relationship, direct sensory experience and interpretation.
Seen in this way, diversity is no longer the threatening, challenging existence of many others competing with our self, but rather the breathtakingly beautiful and meaningful expression of a limitless variety of manifestations that the same one, which we are of, can take as we experience relationship.
We are all of the same one, yet we are the same one differently, so we can learn to begin to cherish diversity as the true expression of our unity with the world. We are not all the same, but we are one. In losing diversity, we lose ways of experiencing our own individuality and ultimately we lose a part of ourselves.
This understanding of the relationships between the whole and its parts, between each one of us individually and the world or the universe as a whole can help us to find language to express our fundamental interconnectedness with the living world.
We are who we are only in relationship to all there is. The world as we know it emerges in a process of reciprocal co-creation that integrates us inseparably into the world we experience. In other words, as David Abram has put it so beautifully:
“Caught up in a mass of abstractions, our attention hypnotized by a host of human-made technologies that only reflect us back to ourselves, it is all too easy for us to forget our carnal inherence in a more- than-human matrix of sensations and sensibilities. Our bodies have formed themselves in delicate reciprocity with the manifold textures, sounds, and shapes of an animate earth — our eyes have evolved in subtle interaction with other eyes, as our ears are attuned by their very structure to the howling of the wolves and the honking of the geese. To shut ourselves off from these other voices, to continue by our lifestyles to condemn these other sensibilities to the oblivion of extinction, is to rob our own senses of their integrity, and to rob our minds of their coherence. We are human only in contact, and conviviality, with what is not human.”11 — David Abram
In the Santiago theory of cognition Humberto Maturana and Francisco Varela proposed that the process through which an individual interacts with its environment is fundamentally a cognitive process. This process of cognition, of structural coupling between the organism and its environment, is regarded as the fundamental process of life itself. In our reflective consciousness this process emerges as the process of knowing.
Maturana and Varela emphasize that, as we are beginning to understand how we know, we have to realize “that the world everyone sees is not the world but a world which we bring forth with others” and “that the world will be different only if we live differently.”12 I would like to add that every time we bring forth a world, it also is the world manifesting itself in a particular way.
We will have to find new ways of expressing that we are living in a fundamentally paradoxical universe. While our individual experience of our embodied self is real and allows us, through our senses and the concepts we form, to enter into relationships with a real world, we paradoxically also are the world we thus perceive as it emerges out of our relationship with it.
A truly participatory understanding recognizes that a participating part can never be separate from the whole it participates in. It is always both part and whole simultaneously. This new dynamical way of thinking is “recovering a way of thinking based on living in the movement of paradox rather than eliminating it.” The dynamical way of thinking places paradox “at the very core of understanding.”13
The universe as the one unique whole can never be fully understood in the way that modern Reductionist science aims to understand the world through logical reasoning, prediction and control. Why? Simply because, it is not possible to take an outsider or objective point of view of the whole in its entirety.
The whole can only be approached by taking partial reference from within. This makes the observer of the whole inevitably a participant in it and blurs subject object distinctions. In observing the whole the observer will have to include him or herself, both as the observing subject as well as the observed object.
To be subject and object simultaneously fundamentally conflicts with logical reasoning dependent on either/or choices. We encounter the paradox. Reductionist science is based on the either/or logic of a Cartesian subject-object dichotomy.
Curiously, despite Werner Heisenberg’s famous reminder to the scientific community that “what we observe is not nature itself, but nature exposed to our method of questioning” and his caution that every act of observation has its associated observational blind-spot, most of science today is still based on dualistic reasoning that draws sharp either/or distinctions between subject and object and between the observer and the observed.
Based on Aristotle’s often disregarded or misinterpreted self-actualisation thesis (which, metaphorically explained, equates the process of the builder building with the process of the building being built), Henri Bortoft points out that this way of experiencing an event as process constitutes “an intermediary philosophical position between monism and dualism.” In what Bortoft calls “a unitary event” one is not reduced to the other. It is not a “monistic event”.14 Neither are there two events. Dualism sees the builder as the subject and the building as the object being built, and treats the process of building-a-building and of a-building-being-built as two distinct processes.
I believe Aristotle’s focus here is on the process of reciprocal co-creation and the emergence of identity out of relationship and interaction in this unitary event. Building the building makes the builder a builder, while being built by the builder, makes the building a building. The individual identities emerge out of, or manifest themselves through the relationships established by one single process.
Understanding process in this way lets us experience the coming into being of individual identity through relationship. Rather than experiencing a static world of finished products preconceived by viewing with subject-object goggles from the start, we being to see and think more dynamically thus experiencing reciprocal co-creation of the diversity of identities through relationship.
It is important to realize how deeply the Cartesian subject-object separation is affecting the way we interpret the world and our experiences within it. Especially with regard to the conceptual framework that makes the world appear to us as it does. Bortoft, extending Gadamer’s work on hermeneutics, shows that “meaning is understanding”15 and that it is precisely the Cartesian subject-object presupposition that stops us from understanding how meaningful this insight is.
Maybe this is also why the Reductionist universe we constructed based on the subject-object presupposition seems so devoid of meaning? Our purely rational understanding of the objectified world lacks the deeper meaning which arises in the participatory experience of true, embodied understanding as an expression of a conscious universe.
Just as there are virtually infinite possibilities to interpret meaning, there are infinite possibilities of the universe manifesting itself in a diversity of identities. In both cases, interpretation or manifestation depends on the relation between the part and the whole.
Seen from this perspective one could venture to say: the meaning of life is living in relation to the whole that enfolds us, participating meaningfully in its unfolding. To understand this be aware that neither life, nor mind, nor meaning, nor language, nor the universe are things, they are processes reflecting one single process: the whole un/en-folding in relation with and through the parts
As we begin to realize our fundamentally participatory relationship with this process of the whole un/en-folding, as we direct our attention toward our co-creative potential as consciously participating agents in the expression of individual identity and in the interpretation of meaning, life becomes profoundly meaningful. Meaning and life unfold together.
We can relate to this unfolding through our sensory experience, intuitive perception, as well as through our language and all other forms of communication. Since we are enfolded in the whole, we are participants by nature and always in relation. Seen from this perspective, our relationships are us.
Who we are is continuously being defined by the relationships that give us identity. Life and meaning are processes of expression and interpretation of the whole through its parts. We are participating parts of this process, therefore, as Brian Goodwin once told me:
“The point is not to understand the meaning of life, but to live a life of meaning.” — Prof. Brian Goodwin
I believe that living a life of meaning is about paying attention to our relationship to the community of life in its entirety. It is about appropriate participation in the process of life — participation in meaning unfolding.
Life is a universe of meaning unfolding through relationship. Or, as Thomas Berry has put it: “The universe is not a collection of objects, but a communion of subjects.”16 Meaning is not something we need to search for or that we may encounter at some time in the future. The only place that meaning can unfold is in our relation to the living present — in the relations we participate in from day to day.
“The notion of the living present is one in which the future, as expectation and anticipation, is in the detail of actual interactions [relation] taking place now, as is the past as reconstructions in this process of memory. There is no dismissing the past or the future here, nor is there any distraction from the present of what we are doing together.”17
Seen this way, the universe is a continuous and diverse interpretation of meaning, in the living present, life manifesting as the diversity of expressions of individual identity through relationship. The universe is the whole coming forth into and through its parts. Bortoft argues:
“There is only one meaning. It is the one meaning that can manifest itself in different forms and therefore there is difference within meaning. The differences are elicited by the different cultural and historical contexts and personal situations [read relation], which that meaning appears in … no matter how many times the work is understood it is always the one meaning … coming into being in the happening of understanding.” 18 — Henri Bortoft
Bortoft refers to the “one that appears as the many”, as the “intensive dimension of one”. He emphasizes that the diversity of interpretations of the work (whole) do not fragment it, because what we see as diversity is in fact dynamical living unity. Just like Maturana and Varela proposed to equate the process of cognition with the process of life and, in the reflective consciousness of humans, with the process of knowing, Bortoft shows that meaning is akin to life. Like Goethe and Gadamer before him, he takes the position that meaning is inexhaustible, but neither predetermined nor indeterminate. Meaning in Gadamer’s hermeneutics is rather like Goethe described the world:
“She is complete but ever unfinished”. — Goethe
Bortoft shows clearly that the world is not an object, but a process. We participate in a world that is complete but unfinished. The way that meaning continuously manifests itself through the diversity of its interpretations reflects the way the world continuously manifests as a “dynamical unity producing itself in different modes according to language.” What is important to understand here, as Bortoft emphasized, is that experiencing the world in this way “is not perspectivism. It is manifestationism.”19 Each interpretation is a manifestation of the whole, each experience of the world is the world.
Language and all other forms of communication are ways of entering into relation and thus participation in the whole. Sensory experience is fundamental for entering into relation with the world. Just as described by Aristotle’s self-actualization thesis, where the event of building manifests the identities of the builder and the building, in the event of perception the identities of the perceiver and the perceived manifest.
Through the same reciprocally co-creative relationship of the whole and the part, described above, the identities of the perceiver and of the perceived manifest within the web of relationships that connects them to each other and the whole. It is through relationship that we bring forth a world.
In his book The Spell of the Sensuous, David Abram explores our relationship as sensing, embodied beings with the living world we participate in. He points out that in the event of perception as it is experienced “neither the perceiver nor the perceived are wholly passive…. To the sensing body no thing presents itself as utterly passive or inert.”20
Experientially considered the world is a living presence to us; distinctions like animate and inanimate, active and passive arise when we interpret experience conceptually.
Entering into relation, including all forms of communication, transforms the web of relationships, thereby changing (causing, defining or terminating) individual identities, their physical manifestation as well as an identity’s particular way of interpreting meaning or manifesting the whole.
Interpreting Merleau-Ponty, David Abram writes: “Experientially considered, language is no more the special property of the human organism than it is expression of the animate earth that enfolds us.”21 He points out that:
“Communicative meaning is always, in its depths, affective; it remains rooted in the sensual dimension of experience, born of the body’s native capacity to resonate with other bodies and with the landscape as a whole. Linguistic meaning is not some ideal and bodiless essence that we arbitrarily assign to physical sound or word and then toss out into the external world. Rather meaning sprouts in the very depth of the sensory world, in the heat of meeting, encounter, participation.”22 — David Abram
Yet another very important mental construct that influences the way we see the world fundamentally is our understanding of time and space. As three prominent phenomenologists, Husserl, Merleau-Ponty and Heidegger, have concluded independently, in direct pre-conceptual experience it is impossible to distinguish time and space.23 Our understanding of time and space is really at the heart of it all, but it would go beyond the bounds of this dissertation to mention more than the most fundamental points.
The philosopher and Zen master David Loy points out that “the objectification of time is also the subjectification of the self, which thus appears only to discover itself in the anxious position of being a nontemporal entity inextricably trapped in time.”24
In other words the idea of self as something permanent and unchanging, as a thing and not a process of changing identity in relation, creates linear time as separate from space.
Indigenous, oral cultures, living by the cycles of day and night, the moon and the seasons, perceive time as cyclical. “Unlike linear time, time conceived as cyclical cannot be readily abstracted from the spatial phenomena that exemplify it — from, for instance, the circular trajectories of the sun, the moon, and the stars. Unlike a straight line, a circle demarcates and encloses a spatial field.”25
Our visible experiential space, as David Abram points out, is also demarcated by a circle — the horizon. He concludes: “Thus cyclical time, the experiential time of an oral culture, has the same shape as perceivable space. And the two circles are in truth one.”26 Our predominant understanding of linear space and time is yet another example of rigid either/or thinking. David Abram believes:
“The conceptual separation of time and space — the literate distinction between a linear, progressive time and a homogenous, featureless space — function to eclipse the enveloping earth from human awareness. As long as we structure our lives according to assumed parameters of static space and a rectilinear time, we will be able to ignore, or overlook, our thorough dependence upon the earth around us. Only when space and time are reconciled into a single, unified field of phenomena does the encompassing earth become evident, once again, in all its power and its depth, as the very ground and horizon of all our knowing.”27 — David Abram
I would like to stress the fundamental importance of direct sensory experience as the primary mode of entering into relationship and knowing the world. Although, as human beings, we predominantly live in a world that manifests itself through language and the mental concepts we express, language can only remain meaningful if it reflects our embodied, sensory experience of the world and allows us to express the direct, intuitive understanding of that world-whole, as it is reflected in us — the world-part — through direct experience. David Abram reminds us that:
“Language is … an evolving medium we collectively inhabit, a vast topological matrix in which speaking bodies are generative sites, vortices where the matrix itself is continually being spun out of the silence of sensorial experience…Merleau-Ponty comes in his final writings to affirm that it is first the sensuous, perceptual world that is relational and web-like in character, and hence that the organic, interconnected structure of any language is an extension or echo of the deeply interconnected matrix of sensorial reality itself. Ultimately, it is not human language that is primary, but rather the sensuous, perceptual life-world, whose wild, participatory logic ramifies and elaborates itself in language.”28 — David Abram
The distrust of the senses arose out of Descartes’ cogito ergo sum and is an expression of the resulting mind-body dualism. It has critically influenced Reductionist science, our culture and the way we experience the world. Phenomenologists, like Merleau-Ponty, do not follow the tradition of Reductionist science attempting to explain the world objectively, but aim to describe “as closely as possible the way the world makes itself evident to awareness, the way things arise in our direct sensorial experience.”29
By accepting one way of seeing and interpreting, that of Reductionist science and its dualistic perspective, as the only way of seeing, we have become epistemologically rigid. Not only have we locked ourselves into our bodies through a rigidly adhered-to, either/or-type boundary between the self and the world, between subject and object. We have retreated even further in denying our own subjective sensory experience as a legitimate way of entering into relation and understanding (presence-ing meaning in) the world.
Our reduced sense of self is hiding out in our minds, busily reinforcing the alienating prison of a mentally constructed objective reality that isolates us from the world of qualities and meaning in which we actually live and experience.
By asserting that only the measurable and quantifiable is real and doubting our own sensory, embodied experience of the world we have fallen victim to yet another rigid dualism — the separation of mind and body.
This same dualism finds expression in separating mind and matter, as well as energy and matter. Yet boundaries are never as rigid as they appear when viewed from within the dualist mindset. Seen more dynamically, nothing purely excludes its dualist opposite in the process of un/en-folding by which the whole transforms, as it manifests itself in diversity.
What one way of seeing labels as opposites, in another, more dynamical way of seeing, merely reflects the relationship of the whole and the part. Each one containing the other, but not as two, rather as potential manifestations of the one unity, depending on the complete but ever unfinished web of relationships through which temporary identities manifest and express themselves. Let me briefly exemplify the dissolution of these rigid either/or boundaries taking place in the history of science. I will dedicate more attention to this in the next chapter.
Almost one hundred years ago, quantum mechanics and the understanding of quantum entanglement provided a scientific basis for accepting the fundamental interconnectedness of all matter, and relativity theory established how mass and energy can transform into each other (Einstein’s famous E = mc²).
With regard to the split between mind and matter, Fritjof Capra believes that the Santiago theory of cognition is the first scientific theory that overcomes this division. He explains that by regarding mind not as a thing but as a process — the process of cognition — and by identifying this process as the process of life, “mind and matter no longer appear to belong to two separate categories, but can be seen as representing two complementary aspects of the phenomenon of life…”30
What is important to realize here is that “we are accustomed to thinking of mind as if it were inside us — ‘in our heads’. But it is the other way around. We live within a dimension of mind which is, for the most part, as invisible to us as the air we breathe.”31 There is no rigid, either/or boundary between mind and matter, or as David Abram put it:
“Clearly, a wholly immaterial mind could neither see nor touch things — indeed, could not experience anything at all. We can experience things — can touch, hear, and taste things — only because, as bodies, we are ourselves included in the sensible field, and have our own texture, sounds, and tastes. We can perceive things at all only because we ourselves are entirely a part of the sensible world that we perceive! We might as well say that we are organs of this world, flesh of its flesh, and that the world is perceiving itself through us.”32 — David Abram
In the predominant, dualistic way of seeing, “we consider knowledge to be a subjective state of the knower, a modification of consciousness which in no way affects the phenomenon that is known”, which we regard to be the same “whether it is known or not.” The dynamical way of seeing regards the knower not as “an onlooker but a participant in nature’s processes, which now act in consciousness to produce the phenomenon consciously as they act externally to produce it materially.”33
As participants in the complex and dynamic processes of nature, as parts reflecting the whole, we are transformed by and transform the world through our way of knowing, which expresses and guides our way of participating.
What we have to realize here is that what we become aware of depends on both the sensory and the non-sensory aspects of cognitive perception. While it is our direct, embodied, sensory experience that allows us to enter into relationship with the world in the first place, our way of seeing, the organizing ideas we employ, shapes what we become aware of and thus the world we bring forth. This has profound implications as it can help us to understand that “all scientific knowledge …is a correlation of what is seen with the way it is seen.”34
One of the most promising and daring attempts to provide a philosophical framework to integrate most of what I have discussed above was recently provided by the philosopher Christian de Quincey in his book Radical Nature — Rediscovering the Soul of Matter.
De Quincey offers both a new ontological basis and a new epistemology, which complements the relation-focused understanding of participation in the un/en-folding of the whole, which I discussed above. As I have mentioned, I believe we live in a fundamentally paradoxical universe, since we all need to come to terms with our daily experience of subjectivity in what we otherwise describe as an objective universe.
I agree with De Quincey that rather than attempting to remove the paradox, “our task will be to move into it, and know it in a new way.” He proposes a new postmodern “paradox paradigm” that “asserts the primacy of extrarational experience.”35 He argues for a participatory and intersubjective epistemology — “a way of knowing that takes us into the heart of mystery, and invites the paradox of consciousness into our very being.”36 De Quincey believes that:
“Epistemologically, we must engage the paradox. “Paradox” means, literally, “beyond” (para) “opinion or belief” (doxa). Paradox, then, takes us into the “space” that is beyond belief — into experience itself. Ontologically, it invites us into the ambiguity of being — an ambiguity of neither this-or-that nor this-and-that, neither either/or nor both/and, but all of these together.”37 — Christian de Quincey
De Quincey calls his philosophical framework Radical Naturalism. Central to his argument is the fundamental assumption: “It is inconceivable that sentience (subjectivity; consciousness) could ever emerge or evolve from wholly insentient (objective, physical) matter.” He therefore argues: “the assumption of consciousness and matter as coextensive and coeternal is the most adequate ‘postmodern’ solution to the question of consciousness in the physical world.
Where materialism, idealism, and dualism fall short as adequate ontologies for a science of consciousness, radical naturalism provides a coherent foundation. The central tenet of radical naturalism is that matter is intrinsically sentient — it is both subject and object.
Radical naturalism confronts head-on the essential paradox of consciousness: We exist as embodied subjects — as subjective objects or feeling matter.” 38 The proposed philosophical framework acknowledges “the ontological and epistemological primacy of embodied feeling.”39
De Quincey uses the ancient Greek concept of entelechy to describe mind as ‘a becoming of matter’. He regards mind as “neither outside nor inside of matter” but rather as “constituent of the very essence of matter — interior to its being.”40
De Quincey believes that “the Cartesian error was to identify consciousness as a kind of substance, and not to recognize it as a process or as dynamic form inherent in matter itself. Mind is the self-becoming of self-organization — the self-creation — of matter. Without this, matter could never produce consciousness.”41 De Quincey summarizes the implications of his proposed philosophical framework as follows:
“With this new perspective we can now embrace the actuality of consciousness and meaning in a self-organising cosmos. Elements of the new story will include: (1) complementarity rather than dualism, (2) organicism rather than mechanism, (3) holism complementing reductionism, (4) interconnectedness rather than separateness, (5) process rather than things, (6) synchronicity as well as causality, (7) creativity rather than certainty, (8) participation and entanglement rather than objectivity. But most of all, the new cosmology will emphasize (9) that matter is inherently sentient all the way down, and (10) that, therefore, nature, the cosmos — matter itself — is inherently and thoroughly meaningful, purposeful, and valuable in and for itself. Nature, we must see, is sacred.”42 — Christian de Quincey
It may now be more obvious why I found it necessary to set a philosophical context for this dissertation, why it is so important that we learn to employ multiple ways of seeing and to integrate the dualistic/Reductionist into a wider holistic, or non-dual context. It is important that we learn to embrace and live in the paradox. The rigid either/or logic of dualism is at the heart of our alienation from nature and the whole. It is also at the heart of the consequential environmental, social and cultural crisis. As David Loy explains:
“[In dualism] the self is understood to be the source of awareness and therefore of all meaning and value, which is to devalue the world/nature into merely that field of activity wherein the self labours to fulfil itself. … the alienated subject feels no responsibility for the objectified other and attempts to find satisfaction through projects that usually merely increase the sense of alienation. The meaning and purpose sought can be attained only in a relationship whereby nonduality with the objectified other is re-established.”43 — David Loy
The dualist distinction between the self and the world allowed for the development of a detached, Reductionist science in the first place. Bortoft argued that the existing ontological gulf between science and its object is a fundamental prerequisite of Reductionist science, as it allows for a science of measuring and experimenting, in which an “object appears to consciousness, it does not appear in consciousness.”44
I would add that it is precisely this ontological gulf which enables the moral and ethical detachment with which modern science continues to evade responsibility for its participation in bringing about the current environmental, social and economic crisis we observe worldwide.
Only if we understand our fundamental interconnectedness as participants in the process of life, in relationship and conviviality with the community of life, only then will meaning appear.
Only as we learn to participate appropriately in the whole, at the appropriate spatio-temporal scale, will we be able to sustain our participation in the process. This is the heart of sustainability.
If we truly understand our participation in the whole we don’t have to fear the future and can love the present. We will become aware of our responsibility to participate appropriately and meaningfully by focusing our attention on meaningful relation with the community of life.
Once we understand the meaning of the whole un/en-folding in relation with and through the parts, we have accomplished the task set by E.F. Schumacher at the beginning of this chapter “to look at the world and see it whole.” Then, as Erwin Schrödinger put it:
“You can throw yourself flat on the ground, stretched out upon Mother Earth, with the certain conviction that you are one with her and she with you. You are as firmly established, as invulnerable as she is, indeed a thousand times firmer and more invulnerable. As surely as she will engulf you tomorrow, so surely will she bring you forth anew to new striving and suffering. And not merely ‘some day’: now, today, every day she brings you forth, not once but a thousand times over. For eternally and always there is only now, one and the same now. The present is the only thing that has no end.”45 — Erwin Schrödinger
[Note: This is an excerpt from my 2002 masters dissertation in Holistic Science at Schumacher College. It addresses some of the root causes of our current crises of unsustainability. If you are interested in the references you can find them here. The research I did for my masters thesis directly informed my 2006 PhD thesis, ‘Design for Human and Planetary Health: A Holistic/Integral Approach to Complexity and Sustainability’.]
—
If you like the post, please clap AND remember that you can clap up to 50 times if you like it a lot ;-)!
Daniel Christian Wahl — Catalyzing transformative innovation in the face of converging crises, advising on regenerative whole systems design, regenerative leadership, and education for regenerative development and bioregional regeneration.
Author of the internationally acclaimed book Designing Regenerative Cultures | https://medium.com/age-of-awareness/a-philosophy-of-participation-in-dynamic-wholeness-b5923c063a99 | ['Daniel Christian Wahl'] | 2020-02-05 12:52:43.382000+00:00 | ['Consciousness', 'Sustainability', 'Culture', 'Philosophy', 'Holistic Science'] |
Prototyping with React VR | Why React
One of React’s biggest innovations is that it enables developers to describe a system, such as the UI of a web or mobile app, as a set of declarative components. The power of this declarative approach is that the description of the UI is decoupled from its implementation, allowing authors to build custom “renderers” that target more platforms than just web browsers, such as hardware, terminal applications, music synthesizers, and Sketch.app.
React as a paradigm is perfect for wrapping the complexity of underlying platform APIs and providing consistent and smooth tools to the developers using them. Jon Gold, “Painting with Code” http://airbnb.design/painting-with-code
React Native Under the Hood
Because React VR is built on top of React Native, let’s start with a look at how it works under the hood. React Native is built on a renderer that controls native UI on iOS and Android. The React application code runs in a JavaScript virtual machine in a background thread on the mobile device, leaving the main thread free to render the native UI. React Native provides a bridge for communication between the native layer and the JavaScript layer of the app. When the React components in your application are rendered, the React Native renderer serializes all UI changes that need to happen into a JSON-based format and sends this payload asynchronously across the bridge. The native layer receives and deserializes this payload, updating the native UI accordingly.
Diagram 1: React Native architecture.
Over the past year at Airbnb, we’ve invested heavily in React Native because we recognize the power of being able to share knowledge, engineers, and code across platforms. In November, we launched our new Experiences platform, which is largely written in React Native on our iOS and Android apps, and we formed a full-time React Native Infrastructure team to continue this investment.
Learn once, write anywhere. Tom Occhino, React Native: Bringing modern web techniques to mobile
React VR is Built on React Native
React VR’s architecture mirrors that of React Native, with the React application code running in a background thread — in this case, a Web Worker in the web browser. When the application’s React components are rendered, React VR utilizes the React Native bridge to serialize any necessary UI changes and pass them to the main thread, which in this case is the browser’s main JavaScript runtime. Here, React VR utilizes a library from Oculus called OVRUI to translate the payload of pending UI updates into Three.js commands, rendering a 3D scene using WebGL.
Diagram 2: React VR is built directly on top of React Native
Finally, React VR utilizes WebVR’s new navigator.getVRDisplays() API to send the 3D scene to the user’s head mounted display, such as an Oculus Rift, HTC Vive or Samsung Gear VR. WebVR, a new standard being spearheaded by Mozilla, is supported in recent builds of major web browsers. Check out webvr.info for the latest information on browser support.
Because React VR implements a lot of the same public APIs that React Native implements, we have access to the same vast ecosystem of patterns, libraries, and tools. It will feel familiar for any developer who has built React or React Native apps. We were able to get a VR prototype up and running quickly; in no time at all, we scaffolded a basic React application, set up Redux, and began hitting our production JSON API for sample data.
With hot module reloading and Chrome Dev tools debugging, we could iterate nearly as fast as in React web and React Native development, which allowed us to throw a bunch of UI ideas at the proverbial wall to see what would stick.
Using React (JavaScript) has turned out to be a bigger win for VR app development than I expected — UI dev is several x faster than Unity. John Carmack, Oculus CTO and original creator of Quake.
Flexbox in VR
React VR inherits React Native’s flexbox-based layout and styling engine, with a few tweaks to allow transforms in 3 dimensions.
Flexbox support for React Native is provided by Yoga, a cross-platform layout engine created by Facebook to simplify mobile development by translating flexbox directives into layout measurements. Because it’s written in C, Yoga (née css-layout) can be embedded natively in Objective-C and Java mobile apps. React VR also uses Yoga for flexbox layout. “But how?” you ask, “It’s written in C!” The React VR team has accomplished this by using Emscripten to cross-compile the Yoga C code into JavaScript. How cool is that?
This is a powerful feature of React VR: developers can use the same styling and layout system across web, React Native, and VR, which opens the doors to directly sharing layout styles across these platforms.
Sharing Primitives
Like React Native, React VR provides a set of basic primitives used to construct UI — <View>, <Text>, <Image>, StyleSheet — in addition to adding its own VR-specific primitives, such as <Pano> and <Box>, among others. This allowed us to drop some of our existing React Native components into VR, rendered on a 2D surface.
This is hugely exciting because we’ve built our UI component system upon react-primitives, a library we developed for sharing React components across platforms by providing the basic <View>, <Text>, <Image>, etc. primitives for a variety of platforms, including web, native, and Sketch.app (via our react-sketchapp project).
This means we can use the buttons, rows, icons and more directly in VR, keeping Airbnb design language consistent without having to rewrite it all from scratch.
Check out our engineer Leland Richardson’s talk at React Europe for a more in-depth look at the promise of react-primitives, below.
Airbnb engineer Leland Richardson’s talk at React Europe: “React as a Platform: A path towards a truly cross-platform UI”
<CylindricalLayer />
As you could imagine, placing 2D content onto a flat plane in 3D space often falls short of an optimal viewing experience. Currently, many VR apps solve this by rendering 2D UI onto a cylindrical plane curved in front of the viewer, giving it a “2.5D” feel.
A screenshot of Oculus home captured in Gear VR, showing off its use of a cylindrical layer for displaying 2D content for a “2.5D” feel | https://medium.com/airbnb-engineering/prototyping-with-react-vr-4d5ab91b6f5a | [] | 2018-05-03 05:59:44.401000+00:00 | ['Product', 'React', 'Reactvr', 'VR', 'Prototyping'] |
Top 10 Features in Azure Synapse Analytics Workspace | You also have the ability to Copy Data from multiple formats:
Image by Author
3. Data Flow
This is probably my favorite feature in Azure Synapse because it brings down the barrier to cleansing data. I’m a big proponent of making it easier to get things done (I think everyone should be :D). Data Flow brings SQL right to your doorstep with the ability to perform common tasks like JOINS, UNIONS, Lookups, SELECT, Filter, Sort, Alter, and much more, all with little to no code.
Image by Author
It also gives you a good visual of your data cleansing process. Take a look at the example below.
Image by Author
4. Pipeline
Once you’ve created a Copy Job or Data Flow, you can run it through a pipeline. This gives you the chance to automate the process by scheduling the job with a trigger.
Image by Author
or
adding other activities to your pipeline like a Spark Job, Azure Function, Stored Procedure, Machine Learning, or Conditionals (If, then, ForEach, etc.).
Image by Author
5. Write SQL Scripts
Under SQL Script you can write your familiar SQL statements. It has the flexibility to connect to external data sources outside of your Synapse workspace, e.g. Hadoop, Azure Data Lake Store, or Azure Blob Storage. You can also connect to public datasets. Check out the example below.
Visualize your SQL Output
In the results window of a SQL query, you have the option to visualize your results by changing your view menu from Table to Chart. This gives you the option to customize your results; for example, the query I ran below gives me the option to view my result as a Line Chart. I can also edit the legends as I please and give it a label. Once I’m done I can save the chart as an image for further use outside of Azure Synapse. | https://towardsdatascience.com/top-10-features-in-azure-synapse-analytics-workspace-ec4618a7fa69 | ['Dayo Bamikole'] | 2020-08-26 13:17:22.612000+00:00 | ['Spark', 'Big Data', 'Azure', 'Azure Synapse Analytics', 'Data Warehouse'] |
Partitioning in Apache Spark | First of some words about the most basic concept — a partition:
Partition — a logical chunk of a large data set.
Very often data we are processing can be separated into logical partitions (ie. payments from the same country, ads displayed for given cookie, etc). In Spark, they are distributed among nodes when shuffling occurs.
Spark can run 1 concurrent task for every partition of an RDD (up to the number of cores in the cluster). If your cluster has 20 cores, you should have at least 20 partitions (in practice 2–3x more). On the other hand, a single partition typically shouldn’t contain more than 128MB and a single shuffle block cannot be larger than 2GB (see SPARK-6235).
In general, more numerous partitions allow work to be distributed among more workers, but fewer partitions allow work to be done in larger chunks (and often quicker).
Spark’s partitioning feature is available on all RDDs of key/value pairs.
Why care?
For one, quite important reason — performance. By having all relevant data in one place (node) we reduce the overhead of shuffling (need for serialization and network traffic).
Also understanding how Spark deals with partitions allow us to control the application parallelism (which leads to better cluster utilization — fewer costs).
But keep in mind that partitioning will not be helpful in all applications. For example, if a given RDD is scanned only once, there is no point in partitioning it in advance. It’s useful only when a dataset is reused multiple times (in key-oriented situations using functions like join() ).
We will use the following list of numbers for investigating the behavior.
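A minimal sketch of the setup (the variable name nums is an assumption, carried through the later snippets):

nums = range(0, 10)
print(list(nums))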
Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Playing with partitions
Let’s start with creating a local context with allocated one thread only and parallelizing a collection with using all defaults. We are going to use glom() function that will expose the structure of created partitions.
From API: glom() - return an RDD created by coalescing all elements within each partition into a list.
Each RDD also possesses information about partitioning schema (you will see later that it can be invoked explicitly or derived via some transformations).
From API: partitioner - inspect partitioner information used for the RDD.
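A sketch that produces output of the shape below, assuming a fresh local SparkContext pinned to a single core:

from pyspark import SparkContext

# one local thread only
sc = SparkContext(master="local[1]")

rdd = sc.parallelize(nums)

print("Number of partitions: {}".format(rdd.getNumPartitions()))
print("Partitioner: {}".format(rdd.partitioner))
print("Partitions structure: {}".format(rdd.glom().collect()))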
Output
Number of partitions: 1
Partitioner: None
Partitions structure: [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
Ok, so what happened under the hood?
Spark uses different partitioning schemes for various types of RDDs and operations. In a case of using parallelize() data is evenly distributed between partitions using their indices (no partitioning scheme is used).
If there is no partitioner, the partitioning is not based upon characteristics of the data; the distribution is random and uniform across nodes. Different rules apply for various data sources and structures (i.e. when loading data using textFile() or using tuple objects). A good summary is provided here.
If you look inside parallelize() source code you will see that the number of partitions can be distinguished either by setting numSlice argument or by using spark.defaultParallelism property (which is reading context information).
Now let’s try to allow our driver to use two local cores.
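A sketch of the two-core variant (stop the previous context with sc.stop() first, since only one SparkContext can be active at a time):

sc = SparkContext(master="local[2]")

print("Default parallelism: {}".format(sc.defaultParallelism))

rdd = sc.parallelize(nums)
print("Number of partitions: {}".format(rdd.getNumPartitions()))
print("Partitioner: {}".format(rdd.partitioner))
print("Partitions structure: {}".format(rdd.glom().collect()))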
Output
Default parallelism: 2
Number of partitions: 2
Partitioner: None
Partitions structure: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
Ok, that worked as expected — the data was distributed across two partitions and each will be executed in a separate thread.
But what will happen when the number of partitions exceeds the number of data records?
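A sketch requesting more slices than there are elements:

rdd = sc.parallelize(nums, 15)

print("Number of partitions: {}".format(rdd.getNumPartitions()))
print("Partitioner: {}".format(rdd.partitioner))
print("Partitions structure: {}".format(rdd.glom().collect()))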
Output
Number of partitions: 15
Partitioner: None
Partitions structure: [[], [0], [1], [], [2], [3], [], [4], [5], [], [6], [7], [], [8], [9]]
You can see that Spark created requested a number of partitions but most of them are empty. This is bad because the time needed to prepare a new thread for processing data (one element) is significantly greater than processing time itself (you can analyze it in Spark UI).
Custom partitions with partitionBy()
partitionBy() transformation allows applying custom partitioning logic over the RDD.
Let’s try to partition the data further by taking advantage of domain-specific knowledge.
Warning — to use partitionBy() the RDD must consist of tuple (pair) objects. It’s a transformation, so a new RDD will be returned. It’s highly advisable to persist it for more optimal later usage.
Because partitionBy() requires data to be in key/value format we will need to transform the data.
In PySpark an object is considered valid for PairRDD operations if it can be unpacked as follows k, v = kv . You can read more about the requirements here.
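A sketch of the transformation chain (the name nums_rdd is an assumption; it is reused in the map()/mapValues() examples further down):

nums_rdd = sc.parallelize(nums) \
    .map(lambda el: (el, el)) \
    .partitionBy(2) \
    .persist()

print("Number of partitions: {}".format(nums_rdd.getNumPartitions()))
print("Partitioner: {}".format(nums_rdd.partitioner))
print("Partitions structure: {}".format(nums_rdd.glom().collect()))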
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56fabd0>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]
You can see that now the elements are distributed differently. A few interesting things happened:
parallelize(nums) - we are transforming a Python array into an RDD with no partitioning scheme,
map(lambda el: (el, el)) - transforming data into the form of a tuple,
partitionBy(2) - splitting data into 2 chunks using the default hash partitioner.
Spark used a partitioner function to distinguish which to which partition assign each record. It can be specified as the second argument to the partitionBy() . The partition number is then evaluated as follows partition = partitionFunc(key) % num_partitions .
By default, the PySpark implementation uses hash partitioning as the partitioning function.
Let’s perform an additional sanity check.
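A sketch replaying the partition = partitionFunc(key) % num_partitions formula by hand:

num_partitions = 2
for el in nums:
    print("Element: [{}]: {} % {} = partition {}".format(
        el, el, num_partitions, el % num_partitions))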
Output
Element: [0]: 0 % 2 = partition 0
Element: [1]: 1 % 2 = partition 1
Element: [2]: 2 % 2 = partition 0
Element: [3]: 3 % 2 = partition 1
Element: [4]: 4 % 2 = partition 0
Element: [5]: 5 % 2 = partition 1
Element: [6]: 6 % 2 = partition 0
Element: [7]: 7 % 2 = partition 1
Element: [8]: 8 % 2 = partition 0
Element: [9]: 9 % 2 = partition 1
But let’s get into a more realistic example. Imagine that our data consist of various dummy transactions made across different countries.
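A sketch of such a dataset, reconstructed from the outputs that follow:

transactions = [
    {'name': 'Bob', 'amount': 100, 'country': 'United Kingdom'},
    {'name': 'James', 'amount': 15, 'country': 'United Kingdom'},
    {'name': 'Marek', 'amount': 51, 'country': 'Poland'},
    {'name': 'Johannes', 'amount': 200, 'country': 'Germany'},
    {'name': 'Paul', 'amount': 75, 'country': 'Poland'},
]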
We know that further analysis will be performed analyzing many similar records within the same country. To optimize network traffic it seems to be a good idea to put records from one country in one node.
To meet this requirement, we will need a custom partitioner:
Custom partitioner — function returning an integer for given object (tuple key).
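A sketch of a hash-based country partitioner with a quick validation (the exact integers printed depend on Python's string hashing, so they can differ between runs and interpreter versions):

def country_partitioner(country):
    return hash(country)

# the same country must always map to the same number
for country in ["United Kingdom", "Poland", "Germany"]:
    print(country_partitioner(country) % 4)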
Output
1
1
4
By validating our partitioner we can see what partitions are assigned for each country.
Pay attention to potential data skew. If some keys are overrepresented in the dataset it can result in suboptimal resource usage and potential failure.
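A sketch wiring the custom partitioner into partitionBy():

by_country = sc.parallelize(transactions) \
    .map(lambda t: (t['country'], t)) \
    .partitionBy(4, country_partitioner)

print("Number of partitions: {}".format(by_country.getNumPartitions()))
print("Partitioner: {}".format(by_country.partitioner))
print("Partitions structure: {}".format(by_country.glom().collect()))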
Output
Number of partitions: 4
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7bd0>
Partitions structure: [[('United Kingdom', {'country': 'United Kingdom', 'amount': 100, 'name': 'Bob'}), ('United Kingdom', {'country': 'United Kingdom', 'amount': 15, 'name': 'James'}), ('Germany', {'country': 'Germany', 'amount': 200, 'name': 'Johannes'})], [], [('Poland', {'country': 'Poland', 'amount': 51, 'name': 'Marek'}), ('Poland', {'country': 'Poland', 'amount': 75, 'name': 'Paul'})], []]
It worked as expected: all records from a single country are within one partition. We can do some work directly on them without worrying about shuffling by using the mapPartitions() function.
From API: mapPartitions() converts each partition of the source RDD into multiple elements of the result (possibly none). One important usage can be some heavyweight initialization (that should be done once for many elements). Using mapPartitions() it can be done once per worker task/thread/partition instead of running map() for each RDD data element.
In the example below, we will calculate the sum of sales in each partition (in this case such operations make no sense, but the point is to show how to pass data into mapPartitions() function).
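A sketch of the per-partition aggregation (judging by the output below, the records are spread over three partitions here without the custom partitioner):

def sum_sales(iterator):
    yield sum(record[1]['amount'] for record in iterator)

by_country = sc.parallelize(transactions, 3) \
    .map(lambda t: (t['country'], t))

print("Partitions structure: {}".format(by_country.glom().collect()))
print("Total sales for each partition: {}".format(
    by_country.mapPartitions(sum_sales).collect()))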
Output
Partitions structure: [[('Poland', {'country': 'Poland', 'amount': 51, 'name': 'Marek'}), ('Germany', {'country': 'Germany', 'amount': 200, 'name': 'Johannes'}), ('Poland', {'country': 'Poland', 'amount': 75, 'name': 'Paul'})], [('United Kingdom', {'country': 'United Kingdom', 'amount': 100, 'name': 'Bob'}), ('United Kingdom', {'country': 'United Kingdom', 'amount': 15, 'name': 'James'})], []]
Total sales for each partition: [326, 115, 0]
Working with DataFrames
Nowadays we are all advised to abandon operations on raw RDDs and use structured DataFrames (or Datasets if using Java or Scala) from Spark SQL module. Creators made it very easy to create custom partitioners in this case.
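A sketch using the SparkSession API (the 50 partitions in the output suggest spark.sql.shuffle.partitions was set to 50 in this run):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(transactions)
print("Number of partitions: {}".format(df.rdd.getNumPartitions()))
print("Partitioner: {}".format(df.rdd.partitioner))
print("Partitions structure: {}".format(df.rdd.glom().collect()))

# repartition by column - the number of output partitions comes from
# the spark.sql.shuffle.partitions setting
df2 = df.repartition("country")
print("After 'repartition()'")
print("Number of partitions: {}".format(df2.rdd.getNumPartitions()))
print("Partitioner: {}".format(df2.rdd.partitioner))
print("Partitions structure: {}".format(df2.rdd.glom().collect()))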
Output
Number of partitions: 2
Partitioner: None
Partitions structure: [[Row(amount=100, country=u'United Kingdom', name=u'Bob'), Row(amount=15, country=u'United Kingdom', name=u'James')], [Row(amount=51, country=u'Poland', name=u'Marek'), Row(amount=200, country=u'Germany', name=u'Johannes'), Row(amount=75, country=u'Poland', name=u'Paul')]]

After 'repartition()'
Number of partitions: 50
Partitioner: None
Partitions structure: [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [Row(amount=200, country=u'Germany', name=u'Johannes')], [], [Row(amount=51, country=u'Poland', name=u'Marek'), Row(amount=75, country=u'Poland', name=u'Paul')], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [Row(amount=100, country=u'United Kingdom', name=u'Bob'), Row(amount=15, country=u'United Kingdom', name=u'James')], [], [], [], []]
You can see that DataFrames expose a modified repartition() method taking as an argument a column name. When not specifying number of partitions a default value is used (taken from the config parameter spark.sql.shuffle.partitions ).
Let’s take a closer look at this method in general.
coalesce() and repartition()
coalesce() and repartition() transformations are used for changing the number of partitions in the RDD.
repartition() is calling coalesce() with explicit shuffling.
The rules for using are as follows:
if you are increasing the number of partitions use repartition() (performing full shuffle),
if you are decreasing the number of partitions use coalesce() (minimizes shuffles)
Code below shows how repartitioning works (data is represented using DataFrames).
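A sketch, reusing nums from earlier:

from pyspark.sql import Row

nums_df = spark.createDataFrame([Row(num=n) for n in nums])
print("Number of partitions: {}".format(nums_df.rdd.getNumPartitions()))
print("Partitions structure: {}".format(nums_df.rdd.glom().collect()))

nums_df = nums_df.repartition(4)
print("Number of partitions: {}".format(nums_df.rdd.getNumPartitions()))
print("Partitions structure: {}".format(nums_df.rdd.glom().collect()))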
Output
Number of partitions: 2
Partitions structure: [[Row(num=0), Row(num=1), Row(num=2), Row(num=3), Row(num=4)], [Row(num=5), Row(num=6), Row(num=7), Row(num=8), Row(num=9)]]

Number of partitions: 4
Partitions structure: [[Row(num=1), Row(num=6)], [Row(num=2), Row(num=7)], [Row(num=3), Row(num=8)], [Row(num=0), Row(num=4), Row(num=5), Row(num=9)]]
Vanishing partitioning schema
Many available RDD operations will take advantage of underlying partitioning. On the other hand operations like map() cause the new RDD to forget the parent's partitioning information.
Operations that benefit from partitioning
All operations performing shuffling data by key will benefit from partitioning. Some examples are cogroup() , groupWith() , join() , leftOuterJoin() , rightOuterJoin() , groupByKey() , reduceByKey() , combineByKey() or lookup() .
Operations that affect partitioning
Spark knows internally how each of it’s operations affects partitioning, and automatically sets the partitioner on RDDs created by operations that partition that data.
But there are some transformations that cannot guarantee to produce known partitioning — for example calling map() could theoretically modify the key of each element.
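A sketch using the pair RDD from the partitionBy() example above; the key happens to be preserved here, but Spark cannot know that:

print("Number of partitions: {}".format(nums_rdd.getNumPartitions()))
print("Partitioner: {}".format(nums_rdd.partitioner))
print("Partitions structure: {}".format(nums_rdd.glom().collect()))

# map() gives no guarantee about keys, so the partitioner is dropped
mapped = nums_rdd.map(lambda el: (el[0], el[0] * 2))

print("Number of partitions: {}".format(mapped.getNumPartitions()))
print("Partitioner: {}".format(mapped.partitioner))
print("Partitions structure: {}".format(mapped.glom().collect()))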
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a5711310>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]

Number of partitions: 2
Partitioner: None
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
Spark does not analyze your functions to check whether they retain the key.
Instead, there are some functions provided that guarantee that each tuple’s key remains the same — mapValues(), flatMapValues() or filter() (if the parent has a partitioner).
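The same transformation expressed with mapValues(), as a sketch:

# mapValues() can only touch the value, so the partitioner survives
mapped = nums_rdd.mapValues(lambda v: v * 2)

print("Number of partitions: {}".format(nums_rdd.getNumPartitions()))
print("Partitioner: {}".format(nums_rdd.partitioner))
print("Partitions structure: {}".format(nums_rdd.glom().collect()))

print("Number of partitions: {}".format(mapped.getNumPartitions()))
print("Partitioner: {}".format(mapped.partitioner))
print("Partitions structure: {}".format(mapped.glom().collect()))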
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]

Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
Memory issues
Have you ever seen this mysterious piece of text — java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE ?
Looking into the stack trace it can be spotted that it’s not coming from within your app but from Spark internals. The reason is that in Spark you cannot have a shuffle block greater than 2GB.
Shuffle block — data transferred across stages between executors.
This happens because Spark uses ByteBuffer as an abstraction for storing blocks, and it’s limited by Integer.MAX_VALUE (2 GB).
It’s especially problematic for Spark SQL (various aggregation functions) because the default number of partitions to use when doing shuffle is set to 200 (it can lead to high shuffle block sizes that can sometimes exceed 2GB).
So what can be done:
Increase the number of partitions (thereby reducing the average partition size) by increasing the value of spark.sql.shuffle.partitions for Spark SQL or by calling repartition() or coalesce() on RDDs,
Get rid of the skew in your data
It’s good to know that Spark uses different logic for memory management when the number of partitions is greater than 2000 (uses high compression algorithm). So if you have ~2000 partitions it’s worth bumping it up to 2001 which will result in smaller memory footprint.
Take-aways
Spark partitioning is available on all RDDs of key/value pairs and causes the system to group elements based on a function of each key.
Features
tuples in the same partition are guaranteed to be on the same machine,
each node in the cluster can contain more than one partition,
the total number of partitions is configurable (by default set to the total number of cores on all executor nodes)
Performance tuning checklist
have the correct number of partitions (according to cluster specification) — check this and that for guidance,
consider using custom partitioners,
check if your transformations preserve partition schema,
check if memory could be optimized by bumping number of partitions to 2001
Settings
spark.default.parallelism - sets up the number of partitions to use for HashPartitioner (can be overridden when creating the SparkContext object),
spark.sql.shuffle.partitions - controls the number of partitions for operations on DataFrames (default is 200)
As the final thought note that the number of partitions also determine how many files will be generated by actions saving an RDD to files.
Sources | https://medium.com/parrot-prediction/partitioning-in-apache-spark-8134ad840b0 | ['Norbert Kozlowski'] | 2018-01-11 20:45:26.932000+00:00 | ['Data Science', 'Big Data', 'Apache Spark', 'Tutorial'] |
Clustering population in London to find a suitable location for ethnic restaurant | As a digital communication professional, data science is not new to me, but I have never systematically learned it, just picked up the necessary skills on the job. So one of my COVID-19 lockdown commitments was to look into data science and machine learning in a bit more structured way. Google must have guessed my intention as I got a few targeted ads — which led me to IBM’s Data Science Program on Coursera. This post is not a review, it’s about my capstone project.
The idea was to find a suitable location for a traditional ethnic restaurant in London using exploratory data analysis and machine learning. It made sense to look into London’s Chinese population — if you live in the city you must have heard of the Chinese district, possibly the worst place to open a new traditional restaurant, right? But then where else, are there other parts of the city where the concentration of Chinese people* is high and there aren’t many popular Chinese restaurants?
*in the census data people are classified according to their own perceived ethnic group and cultural background
To get things started we need quite a few datasets to work with. Most of the files can be found in my repo except the data files as those are quite big — but check the notebook for the links:
London’s census data (2011 is the latest) broken down to Middle Layer Super Output Areas (MSOAs) level
level Population weighted centroids for MSOAs
Land area of MSOAs
Shapefiles for MSOAs
Foursquare to get the most popular venue types
Before jumping in…
If you are using any APIs, database connections, or the like in your code, it’s a really good practice not to include the credentials in the files that you share, upload to GitHub, or to any other code sharing platform. Personally, I like to use dotenv for Python-related projects — it’s easy to set up: create a .env file in your root folder, store your credentials there, and then add the following to the beginning of your Jupyter Notebook:
%load_ext dotenv
%dotenv

# get your keys
client_id = %env CLIENT_ID
client_secret = %env CLIENT_SECRET
Initial exploratory data analysis
I won’t include here the data wrangling that I did for the combined dataframe of the census, centroids, and land areas datasets — please check the repo if you are interested. I ended up with the following table:
The MSOA name and total columns are in the table for reference only; we are not going to use those in our analysis. The Chinese population column contains the combined population of all Chinese-related ethnic groups, while the latitude and longitude columns are the population-weighted centroids of the MSOAs.
The idea is to find out which neighborhoods have a higher Chinese population, and then explore the most popular restaurant types in those areas. The Office of National Statistics works with different kinds of output areas: Middle Layer Super Output Areas (MSOAs) is a sensible choice as it provides the desired granularity to define our own ‘neighborhoods’. Using the MSOAs, folium’s choropleth map showing the Chinese population will look like the following. | https://nubianlachlan.medium.com/clustering-population-in-london-to-find-a-suitable-location-for-ethnic-restaurant-4b610b49673e | ['Balint Hudecz'] | 2020-10-31 13:38:37.615000+00:00 | ['Kmeans', 'Population', 'Data Science', 'Clustering', 'Data Visualization'] |
The Simple Two-Step Funnel I Use to Qualify Freelance Leads | Finding freelance leads can be a thankless task.
You send out 30+ pitches and land two clients and that’s considered a Very Good Outcome.
So, you sign a contract with those two clients, start the work, and everything is unicorns and rainbows; it’s all hunkydory.
Each project lasts for a heady two weeks, but then you hit the ground with a bone-crunching bump and a dawning realisation: you have to send out another 30+ pitches to get another two clients.
It’s relentless and can seem like a lot of work — particularly when you’re busy doing your actual job. Because of course you’re going to forget to pitch some weeks if you’re snowed under with your latest contract, right?
But pitching is the only way to get work… Isn’t it?
Or is it…?
In the early days of freelancing, spending 20+ hours a month identifying clients and sending out pitches can be incredibly rewarding. It can bolster your schedule for a good few months and get good work flowing in.
For long-term freelancers, 20+ hours a month is way too much time to spend landing new work. Those are potentially billable hours that could be spent honing your craft, building relationships with existing clients, or, you know, doing actual work that you get paid for.
Your Marketing Tactics Need to Change
Halfway through my freelance career, my marketing tactics took a sharp left turn.
At the start, pitching was the number one way I landed new work, whether that was via job boards or identifying and reaching out to leads myself.
Two years in, I didn’t have the time to send 10 pitches a day to potential clients, but I still needed a steady stream of work coming in for when my contracts ended or the leads dried up.
It was at this point I started creating content geared towards my target client (I should note here that this was also the point that my target client took a sharp left turn — I went from writing for travel and hospitality companies to working for marketing, SaaS, and ecommerce brands).
As a writer, creating content came naturally to me and it served two purposes:
It showed off my skill set to potential clients (you’ve got to practice what you preach, right?)
It created a presence that brought freelance leads directly to my door
This was content that I spent an hour or so creating, but that continued to work hard at bringing in clients long after I had pressed publish. It was a far cry from the 20+ hours I was spending pitching in the early days.
So which kind of content worked best?
Throughout the past 5 years, I’ve crafted a really simple lead generation process that doesn’t take up a lot of time but that is truly effective.
It works in two stages:
1. Create a downloadable “lead magnet” that gives your prospects extra value in exchange for their email address
2. Publish regular blog posts that touch on specific pain points your target audience has
The two stages work in tandem to, first of all, qualify potential leads by identifying pain points that prospects are willing to pay for and, secondly, grabbing their email address so you can nurture a potential relationship.
Let’s talk more about the two stages in isolation.
1. Create a Downloadable Lead Magnet
You see these everywhere.
“Grab your FREE checklist for writing blog posts”
“Download your FREE ebook and learn how to get 5,000 subscribers without spending a single penny at all. Nada. Nothing”
You get the gist.
While I’m always wary of “GET THIS AMAZING FREE THING” copy, there’s a reason so many brands are pushing out lead magnets to their readers.
It’s because they offer a win-win situation.
The reader gets access to even more value, while the site owner gets a new subscriber that they can nurture via email.
There’s a really simple formula I use for creating a Really Good Lead Magnet.
You take ONE pain point your target client has and you provide ONE solution that can be achieved in a day.
People want to know that you understand their needs and they want to be able to take quick action. Remember, your lead magnet should tie into the services you offer too, so if you’re a designer, it should be design related; if you’re a writer, it needs to have something to do with content.
Here are some lead magnet ideas that use this formula:
A guide to creating a one-page client proposal in 20 minutes
7 healthy but tasty recipe ideas for the next week
Tweaks you can make to your website UX today
A checklist to make your website mobile-ready right now
Let’s take my target client for a moment: startup SaaS companies that create software for ecommerce brands to use.
Their biggest pain point is attracting new users. I help them create content to attract new users, so I might play around with these lead magnet ideas:
3 Ways to Optimise Your Blog Posts and Get More Users
A Copy Checklist for Getting More App Users
How This Ecommerce Brand Gained 50 New Users Through Content Creation
You’ll notice that the latter example is more of a case study, and this can work particularly well if you’ve got a really great story from one of your clients.
I once sold out a client’s program in 10 minutes with an email campaign. If I was focusing on selling email copy, I might weave this into my lead magnet (e.g. How X sold out their program in 10 minutes with engaging emails).
2. Publish Regular Blog Posts
In order to attract people to your lead magnet in the first place, you have to publish Good Regular Content.
Usually this takes the form of blog posts, but it might also be video content or something else depending on your skills and services.
For me, writing was the obvious choice.
It’s not enough to churn out weak 300-word posts and hope for the best. If you want to attract High Quality Clients, your posts have to be good enough to attract people in the first place and good enough for them to stick around and read it.
To do this:
Write a list of pain points your target client has (think about the common questions your prospects will be searching for in order to land on your website)
Determine how your services tie into those pain points (take the ecommerce SaaS example I highlighted above)
Create content that combines the two together
For me, a list of potential blog posts might look something like this:
How Ecommerce Brands Can Use This Writing Technique to Attract More Users
5 Headline Formulas for Forward-Thinking SaaS Brands
The Biggest Copywriting Mistakes SaaS Brands Make on Their Websites
These are just off the top of my head, but you get the idea.
Your blog posts should be intriguing enough for your target clients to click into and then, somewhere within all of that good content, you want to offer them your lead magnet in exchange for their email address.
Try Out This Two-Step Funnel Yourself
Funnels often put the fear of god in people. They can be complex and overwhelming, but they really don’t have to be.
If you’re past the point where you can afford to spend 20+ hours a month pitching new prospects, funnels might be the answer for you. And, luckily, this one only involves two simple (but very effective) steps!
—
Mission: Get Better Clients
This post forms part of my mission for March: to help freelancers get better paying, higher quality clients.
I’m publishing 30 posts in 30 days aimed at helping freelancers like YOU build a better business (you can follow my story on Wanderful World, on Twitter, or on Instagram).
Here’s what you can do next: | https://lizziedavey.medium.com/the-simple-two-step-funnel-i-use-to-qualify-freelance-leads-3c44b200af6a | ['Lizzie Davey'] | 2020-03-03 11:48:21.741000+00:00 | ['Freelance', 'Freelancing', 'Business Strategy', 'Entrepreneurship', 'Lead Generation'] |
Swapping with the Monster | POETRY
Swapping with the Monster
A poem on understanding why I’m an asshole sometimes
You come and go as you please. You love the surprise
and the grimace on my face when I realize
I’m being mean and cruel for no reason.
I don’t always recognize you — you’re subversive and sly.
Disguised as a simple annoyance, a whim, a mood,
you make yourself at home and wait for a chance to strike:
when I’m with a loved one or at my happiest.
Let’s switch, beast. I’ll be the fiend that feeds the fear
you constantly carry in the back of your mind.
Maybe I’ll take up residence in a nook you haven’t decluttered in a while.
But don’t worry, I’ll pay rent for it — I’ll feed you
apathy, pride, envy, and regret.
I’ll then package it all in poetry; you’ll think you’re deep and shit
while I remorselessly fuel your pain-body.
But I’m not like you. The Light in me, the Presence, would try
to understand you, where you come from, and the suffering
you conceal. Serenely,
I’d ambush you with care and understanding.
I’d let you untangle your want for attention and revenge,
your need to act out when things don’t go your way,
when you don’t receive what you’re entitled to.
I’d humbly dilute your heartache with tenderness.
Does that sound good? Let’s swap.
Lola Sense © All Rights Reserved. | https://medium.com/scrittura/swapping-with-the-monster-757b64c4a261 | ['Lola Sense'] | 2020-12-14 15:11:56.638000+00:00 | ['Self-awareness', 'Self Improvement', 'Self Love', 'Self', 'Poetry'] |
How to Hash in Python | 3. Secure Hashing
Secure hashes and message digests have evolved over the years. From MD5 to SHA1 to SHA256 to SHA512.
Each method grows in size, improving security and reducing the risk of hash collisions. A collision is when two different arrays of data resolve to the same hash.
Hashing can take a large amount of arbitrary data and build a digest of the content. Open-source software builds digests of its packages to help users know that they can trust that files haven’t been tampered with. Small changes to the file will result in a much different hash.
Look at how different two MD5 hashes are after changing one character.
>>> import hashlib
>>> hashlib.md5(b"test1").hexdigest()
'5a105e8b9d40e1329780d62ea2265d8a'
>>> hashlib.md5(b"test2").hexdigest()
'ad0234829205b9033196ba818f7a872b'
Let’s look at some common secure hash algorithms.
MD5–16 bytes/128 bit
MD5 hashes are 16 bytes or 128 bits long. See the example below; note that a hex digest represents each byte as a hex string (i.e. the leading 09 is one byte). MD5 hashes are no longer commonly used.
>>> import hashlib
>>> hashlib.md5(b"test").hexdigest()
'098f6bcd4621d373cade4e832627b4f6'
>>> len(hashlib.md5(b"test").digest())
16
SHA1–20 bytes/160 bit
SHA1 hashes are 20 bytes or 160 bits long. SHA1 hashes are also no longer commonly used.
>>> import hashlib
>>> hashlib.sha1(b"test").hexdigest()
'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3'
>>> len(hashlib.sha1(b"test").digest())
20
SHA256–32 bytes/256 bit
SHA256 hashes are 32 bytes or 256 bits long. SHA256 hashes are commonly used.
>>> import hashlib
>>> hashlib.sha256(b"test").hexdigest()
'9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08'
>>> len(hashlib.sha256(b"test").digest())
32
SHA512–64 bytes/512 bit
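Following the same pattern as the sections above (the digest shown is hashlib's output for the same b"test" input):

>>> import hashlib
>>> hashlib.sha512(b"test").hexdigest()
'ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27ac185f8a0e1d5f84f88bc887fd67b143732c304cc5fa9ad8e6f57f50028a8ff'
>>> len(hashlib.sha512(b"test").digest())
64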
SHA512 hashes are 64 bytes or 512 bits long. SHA512 hashes are commonly used. | https://medium.com/better-programming/how-to-hash-in-python-8bf181806141 | ['David Mezzetti'] | 2020-01-23 00:34:25.222000+00:00 | ['Encryption', 'Programming', 'Cybersecurity', 'Python', 'Duplicate Detection'] |
The Latest: The Boston Globe is publishing fiction now (May 11, 2020) | The Latest: The Boston Globe is publishing fiction now (May 11, 2020)
Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews.
THE NEWS
The Boston Globe is publishing a serialized fictional “mystery thriller” set in Boston over the course of two weeks, with prominent placement on its website homepage and the A1 of its print edition.
SO WHAT
This is the latest example of how fiction — and other non-journalism content, like poetry — can serve readers in ways that journalism can’t: countering news fatigue, reaching new readers, and driving habit and conversions with new products.
Publishers have turned to fiction to help engage and retain readers experiencing news fatigue, an issue affecting about a third of those surveyed by Reuters in 2019. The Boston Globe framed its decision as giving readers a respite from COVID-19 coverage, a real need given that seven in 10 Americans want to take breaks from pandemic news according to Pew. Even outside of pandemic times, The Verge, for example, hoped that it would “refresh the feeds of followers who may be getting burnt out of Facebook data privacy explainers or the government shutdown” with Better Worlds, an initiative last year to publish original and adapted science fiction stories.
Different kinds of content can bring in new readers. These can be fans of the writers published (as The Verge banked on) or, more generally, consumers of fiction or poetry. A viral story can also entice even those who aren’t fiction readers: The New Yorker’s Cat Person, for example, was the second-most viewed story on its website in 2017, even though it was published in mid-December.
Fiction can also inspire new products that drive habit and subscriptions. The New Yorker, for example, spotlights its published fiction in a bi-weekly Books & Fiction newsletter. On its fiction podcast, recent New Yorker fiction contributors read other authors’ stories from the archive. The magazine also launched a poetry bot, which sent followers on Facebook Messenger and Twitter a poem a day from its archives. Monica Racic, The New Yorker’s director of production and multimedia, told us that the poetry bot’s audience was highly engaged and subscribed at a significantly higher rate than people who came to the site from other avenues.
LOOK FOR
More blurring of the line between fiction and journalism. It isn’t necessarily black-and-white: speculative journalism, which intertwines reporting with science fiction to imagine possible futures, has grown in popularity. Readers can find the form in the op-ed pages of The New York Times, which runs an “Op-Eds from the Future” series and publications like High Country News and McSweeney’s Quarterly, which devoted entire issues to speculative climate change writing.
Also look for forays into fiction in non-text forms. Publishers may build on their experiments with speculative journalism and enter the fiction podcast game. This could open the door to optioning as another business justification for fiction production down the line, as podcast-native companies like Gimlet Media have done. Gimlet’s Homecoming, for example, became an Amazon series, and the company has a whole division dedicated to making Hollywood deals (read our conversation with Chris Giliberti, Head of Gimlet Pictures, for more).
Serious and Easy Crypto With AES/GCM | First things first, here are some definitions:
Key: A byte array that both parties have, 128-bit (16 bytes)
Nonce: A number used once, 96-bit (12 bytes)
Authentication data (also called associated data): A public text from the sender, arbitrary size.
Authentication tag: A byte array created at the sender side, 128-bit (16 bytes)
Plain text: Data to be encrypted, can be text or binary, arbitrary size.
Cipher text: Encrypted data, same size as the plain text.
Four of these components should be transmitted and received by the communicating parties: the nonce, the cipher text, the authentication tag and the authentication data. Below is the structure for these four components, considered as the data package to be carried over e.g. a network.
public struct AESData
{
public string nonce;
public string cipher;
public string authTag;
public string authData;
}
Encrypted data is essentially a byte array that can contain arbitrary, non-ASCII bytes. We’ll use base64 encoding to convert byte arrays to strings, so the package can be sent and received over any text-based communication medium.
Encryption
AES/GCM, when encrypting, takes the key, nonce, authentication data and plain text as input, and gives cipher text and an authentication tag as output.
The plain text gets encrypted using the key and the nonce, creating the cipher text. Then, the authentication data is mixed with the cipher text using the key, resulting in a 128-bit authentication tag, which proves both the authenticity and the integrity of the message.
Below piece of code shows how encryption is done in C#.
// declare aes variables
byte[] key = new byte[16];
byte[] nonce = new byte[12];
byte[] authTag = new byte[16];
byte[] authData = utf8enc.GetBytes("Auth data");
// data to be transmitted and received in base64 encoding
AESData aesData = new AESData();
// assign plain text
string plainText = "Hello AES/GCM! Some non-standard chars: öçşığü";
// convert the plain text string to a byte array
byte[] plainBytes = utf8enc.GetBytes(plainText);
// allocate the cipher text byte array as the same size as the plain text byte array
byte[] cipher = new byte[plainBytes.Length];
// perform encryption
using (AesGcm aesgcm = new AesGcm(key)) aesgcm.Encrypt(nonce, plainBytes, cipher, authTag, authData);
// encode aes data to Base64 strings, which will be transmitted
aesData.nonce = Convert.ToBase64String(nonce);
aesData.cipher = Convert.ToBase64String(cipher);
aesData.authTag = Convert.ToBase64String(authTag);
aesData.authData = Convert.ToBase64String(authData);
Decryption
AES/GCM, when decrypting, takes the key, nonce, authentication data, authentication tag and cipher text as input, and gives plain text as output.
The authentication data is mixed with the cipher text using the key, resulting in a 128-bit authentication tag. If this calculated tag matches the received tag, then the message really was sent by the sender, since only a person with the correct key can create that authentication tag. Then, the cipher text gets decrypted using the key and the nonce, recovering the plain text.
Below piece of code shows how decryption is done in C#.
// decode received Base64 strings to aes data
nonce = Convert.FromBase64String(aesData.nonce);
cipher = Convert.FromBase64String(aesData.cipher);
authTag = Convert.FromBase64String(aesData.authTag);
authData = Convert.FromBase64String(aesData.authData);
// allocate the decrypted text byte array as the same size as the plain text byte array
byte[] decryptedBytes = new byte[cipher.Length];
// perform decryption
using (AesGcm aesgcm = new AesGcm(key)) aesgcm.Decrypt(nonce, cipher, authTag, decryptedBytes, authData);
// convert the byte array to the plain text string
string decryptedText = utf8enc.GetString(decryptedBytes);
What about the nonce?
So far we have data authenticity, integrity and confidentiality, without using the nonce value. But what if an eavesdropper saves a copy of a package and, despite being unable to see the plain content, sends it to the receiver at a later time? How can the receiver determine whether the package is from the real sender or from an eavesdropper? This is called a replay attack.
The nonce value can be used to protect against replay attacks. Both communicating parties may start with a recorded nonce value of zero. Every time one party prepares an AES package, it increments the nonce by one and uses that value. The receiver may check whether the nonce value in the received package is greater than the last nonce value on its record. If it’s not, then that package is a replayed package, so the receiver discards it.
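The piece of code below sketches one possible shape of that check in C#. It assumes, purely as a convention of this sketch, that the last 8 bytes of the 12-byte nonce hold a big-endian counter; AES/GCM itself doesn’t mandate any particular nonce encoding.
static ulong NonceToCounter(byte[] nonce)
{
    // interpret the last 8 bytes of the 12-byte nonce as a big-endian counter
    ulong counter = 0;
    for (int i = 4; i < 12; i++) counter = (counter << 8) | nonce[i];
    return counter;
}
static bool IsReplay(byte[] receivedNonce, ref ulong lastCounter)
{
    // an equal or smaller counter has been seen before: a replayed package
    ulong received = NonceToCounter(receivedNonce);
    if (received <= lastCounter) return true;
    // otherwise accept the package and advance the record
    lastCounter = received;
    return false;
}
| https://medium.com/swlh/serious-and-easy-crypto-with-aes-gcm-708e5176a198 | ['Yaşar Yücel Yeşilbağ'] | 2020-12-15 11:01:32.520000+00:00 | ['Aes Gcm', 'Cryptography', 'Dotnet Core', 'Csharp', 'Cross Platform']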
Germantown: Growing Food in a Pandemic | Jasmine Thompson starts getting the lot at Awbury Agricultural Village ready for growing. (Maleka Fruean for Germantown Info Hub)
Germantown is known for its mix of houses, apartments, and row homes, as well as for its variety of green spaces.
And in that green space, neighbors — from the novice grower to the experienced farmer — have been catalyzed by the Covid-19 outbreak to start growing food, for themselves and others.
“Honestly, I just really saw so many inspiring efforts, especially around food and the giving of it for free, and I just wanted to grow free food for people because of the crisis,” said Jasmine Thompson.
Thompson grew up in East Germantown near Stenton and Wister. She has a full time job working in community food systems at the Food Trust, but also created Philly Forests. Originally it was her personal landscaping business.
When the pandemic began, she adapted her plans.
“So many of my family members and friends lost their jobs because of this,” Thompson said. She knew she wanted to grow food and give it away, so she approached Awbury Agricultural Village, an area in the Awbury Arboretum that had some available land. Projects that were philanthropic in nature could apply to grow on the lots, so Philly Forests became a food growing project.
She was able to procure a lot, with access to a small greenhouse area. She paid a reduced fee for the entire growing season, and even with a late start into preparation, plans on growing a variety of vegetables.
Sid Bailey prepares his garden. (Photo: Valerie Peghini Bailey)
She also hopes to include a combination of harvest-your-own days and free boxes of fresh produce delivered in Germantown, with volunteers on board to help with the operation.
There are no eligibility guidelines. Thompson is still trying to figure out how to make sure boxes of food will get to the most vulnerable in the neighborhood — older folks who can’t leave home and the immunocompromised. Getting the word out about the program is also something she is thinking about. Right now, information is only available through social media.
But it’s not just experienced farmers that are growing food. Kristen O’Guin is a sexuality educator in Mt. Airy. Dwendolyn Lloyd is a pastry chef and Sid Bailey is an online translator, both from Germantown. All three of them decided this was a good time to start growing.
Bailey knew that it was important for him and his family to have access to fresh foods. He also felt the experience itself was valuable.
“I think in most of this neighborhood there’s enough space to have food production in the area,” Bailey said. “It’s not only about safety and resilience, but also growing food is a lot of fun.”
“It’s so nice to get food fresh from your garden or your neighbor’s garden,” he continued. “It makes you connected to your food, and the quality.”
Dwendolyn Lloyd does some type of gardening every year, but this year she was especially inspired to become more active. “Now I have a lot more time… I don’t know what is happening in the future so let’s start planting more fruits and veggies.” She helps out with her next door neighbors’ gardens and is beginning to share seeds, and trying to plan for what seems like an unpredictable future.
Most of Lloyd’s family from Germantown has grown food, especially her sister Amanda. “My sister grows a lot of food, she has an empty lot next to her house, “ said Lloyd. The lot isn’t owned by anybody as far as they know, and Amanda cleaned it up years ago and started planting. Amanda now gardens the lot every year, and is thinking about adding chickens this year.
Food insecurity and fear of our industrial food chains and food systems being interrupted or infected have prompted these kinds of responses before.
George Boudreau, a history and research fellow at the University of Pennsylvania, recalls what he refers to as the "second founding of the Germantown area." Boudreau says that during the yellow fever epidemic of 1793, many people fled from downtown and started camping in the area along the Schuylkill. Wealthier folks rented apartments and houses in the greener spaces of Germantown.
“People moved up here because it provided health and well being,” Boudreau continued. “People were up here growing food and taking it down to the city.”
Two Facebook groups connect those interested in Germantown gardening: Germantown Victory Gardens and Germantown Growing Together. | https://medium.com/germantown-info-hub/germantown-growing-food-in-a-pandemic-9bc1116af88 | ['Maleka Fruean'] | 2020-05-27 22:20:58.202000+00:00 | ['Gardening', 'Philadelphia', 'Coronavirus', 'Pandemic'] |
How I turned coffee into Bitcoin | I few months ago I had to lower my coffee consumption and was struggling to find the best way to “force” myself to do so. Nothing major prompted the change, but wanted a catalyst to really stop.
I have been following Bitcoin and other cryptocurrencies for a while, and thought I would use this as a way to get more.
In looking at all my options I figured out a way to turn coffee into Bitcoin…sort of.
The Digital Alchemist at Work
To really motivate myself, I figured I would lower my costs by no longer purchasing coffee every day (easy!) and then divert those funds into Bitcoin, dollar-cost averaging into the currency (hard!).
This prompted me to start looking for a way to make recurring purchases of a cryptocurrency that was liquid and growing, and I quickly settled on BTC.
They say that using Bitcoin is key to its success, and by purchasing more and forcing a use case, I can help the overall ecosystem. Having more will create usage, which will create value in the network, which will drive more use and hopefully strengthen the ecosystem as a whole.
Enter Coinbase (they have a referral program!), which is a great solution for setting up recurring purchases. Below is what I set up:
The trick to dollar-cost averaging into more Bitcoin was setting up a simple recurring transaction every week. This would prevent me from making a few coffee purchases (let’s be honest — this will get you just 1 cup in some NYC spots) and build my BTC balance.
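To put illustrative numbers on it (the actual amounts are beside the point): skipping three $5 coffees a week frees up $15, and a $15 weekly recurring buy works out to roughly $780 a year flowing into BTC, before fees.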
While not the most economical approach, as Coinbase has its fee structure, it really worked. I have been doing this for the past 6–9 months. Inadvertently, I have lowered my spending habits too: being in fewer stores and coffee shops evens out the fees a bit.
The question, of course, is where to store things, and I may save that for another post. In the interim, BTC/USD has been trending in the right direction — although technically I shouldn’t care, as I am now better equipped to weather a downturn.
While I am not quite there yet, I was partly inspired by my friend Steve.
While I think it is entirely possible that the value generated from these purchases will yield a return, it’s far more likely that I will spend it first, thus helping the ecosystem as a whole.
I have always been a tinkerer, and to really understand something you need to use it — this project has given me a great solution to coffee consumption and a BTC balance to spend on things. | https://medium.com/startup-grind/turning-coffee-into-bitcoin-706d3569afde | ['Eric Friedman'] | 2017-02-09 23:46:07.754000+00:00 | ['Bitcoin', 'Life Lessons', 'Productivity', 'Coffee', 'Finance'] |
An exciting journey begins for 40 young journalists in European newsrooms | In less than two months, over 800 students applied for the GNI Fellowship in Europe, which offers summer placements in a wide range of media organisations across 11 countries. The European Journalism Centre (EJC) was thrilled with the enthusiastic response. Today we are proud to announce the recipients of the 40 fellowships.
When we announced the launch of the GNI Fellowship programme earlier this year, we explained why we wanted to connect young talent with leading news organisations in Europe. In order to embrace change, news organisations must learn how to best use technology and integrate young professionals into multidisciplinary teams.
The GNI Fellowship provides fellows with the chance to step into the professional world in a highly competitive job market. All positions will be paid. Fellows will be embedded in some of the most innovative newsrooms in Europe and, at the same time, these newsrooms will benefit from their energy and new ideas.
The selection process carried out by each host organisation was highly competitive, and we were awed by the motivation and richness of experience of many candidates.
Meet the 40 GNI Fellows of 2019
The group is a diverse pool of aspiring journalists with backgrounds that range from design and engineering, to computer science and economics. They will soon join their selected newsrooms to explore the intersection of journalism and technology.
Here are the names of the 40 fellows and the areas of work they will be focusing on:
Data journalism and visualisation
“This is the first time I get the chance to be a valuable part of a journalistic project — instead of just being the young intern. Also, I am finally able to apply my knowledge and creative ideas to a ‘real-life’ environment.” Lea Weinmann
Design and product development
“What fascinates me most about journalism is its unstoppable speed. As a designer it is absolutely necessary to follow, and to be ahead. How to master this challenge is nowhere better to learn than in journalism.” Nicola Ritter
Audience engagement and digital storytelling
“Having a strong record in local journalism, I want to learn how to apply new means of researching, telling and displaying stories to this sector of news — because local stories tend to affect people more.” Anna Klein
Fact-checking, verification and investigative journalism | https://medium.com/we-are-the-european-journalism-centre/an-exciting-journey-begins-for-40-young-journalists-in-european-newsrooms-ee7553c7a61d | ['Paula Montañà Tor'] | 2019-05-17 13:06:52.257000+00:00 | ['Innovation', 'Newsroom', 'Journalism', 'Fellowship', 'Updates'] |
A Necessary Nihilism. “Natural science produces ancestral… | “Natural science produces ancestral statements, such as that the universe is roughly 13.7 billion years old, that the earth formed roughly 4.5 billion years ago, that life developed on earth approximately 3.5 billion years ago, and that the earliest ancestors of the genus Homo emerged about 2 million years ago. Yet it is also generating an ever-increasing number of ‘descendent’ statements, such as that the Milky Way will collide with the Andromeda galaxy in 3 billion years; that the earth will be incinerated by the sun 4 billions years hence; that all the stars in the universe will stop shining in 100 trillion years; and that eventually, one trillion, trillion, trillion years from now, all matter in the cosmos will disintegrate into unbound elementary particles. Philosophers should be more astonished by such statements than they seem to be, for they present a serious problem for post-Kantian philosophy.“ -Ray Brassier, Nihil Unbound: Enlightenment and Extinction (Palgrave-Macmillan, 2007), 49–50.
Ray Brassier may be the rightful heir to Friedrich Nietzsche’s wrestle with nihilism, in no small part because I think Brassier possesses a far more articulate and considered version of nihilism than most.
When many use the term, I think they usually mean pessimism, which is not necessarily nihilistic as pessimism assigns value to the universe; even if that value is in the negative, it is still a valuation, and not genuinely nihilistic. Brassier grasps this: if the universe persists as is, earth will die in the expansion of our sun into a red giant, then the sun itself will evaporate. The last stars, all red dwarfs in the end, will each blink out of existence. Then the universe will then expand into a thin nothingness and beyond.
I believe Brassier is correct that, as Western philosophy now stands, few if any have sufficiently addressed this conundrum: that, as things now stand, the universe is simply bound for extinction, evaporation, all our lives, cares, worries, values, meanings to vanish with it. I think Ray Kurzweil expresses a uniquely transhumanist hope that our distant descendants, more advanced than us as we are than our pre-human ancestors, will decide what to do when that time comes — -for now, it’s irrelevant. I largely agree with Kurzweil, if only in saying that this extinction only seems inevitable to humanity at present, but that this delusion plays into the naive belief that evolution stops with us, Homo sapiens. Evolution carries on and up, or it collapses; it never stops.
I’m still ambivalent, though: there’s some “pie in the sky” in assuming our descendants will fix things down the line, and maybe some futility in planning for an event I don’t plan to be around for. My extinction will come long before that of the last stars. What is it, then, about the prospect of genuinely ultimate cosmic extinction that captivates me? If it’s an inevitability, then, as Brassier says, in logical space-time everything is already dead. There’s no argument to be made against it. Maybe it’s how one wishes to respond, then.
Total cosmic extinction seems only depressing if one sticks to what the Bhagavad Gita calls karma, which despite its Western bastardizations is not “you get what you put out,” but the attachment to valuing action based on results. Depressing though it may (initially) seem, extinction only scalds the soul if one sees the value in his or her actions in their results. But there are actions which are valuable in themselves, the Gita insists, not because they leave a lasting effect but because they are.
The pessimism of extinction comes from mourning for lost possibilities, perhaps; but what alternatives are there to an inevitability? What’s to be said of a universe bound for nihil yet capable of producing affect so powerful one may end or begin life because of it? I’m not pompous enough to give a verdict on where the universe will be in trillions of years, or to suggest how one should respond, but I will say I’m unconvinced that depression or pessimism must go hand-in-hand with nihilism, or that nihilism cannot possess beauty, wonder. Brassier’s nihilism, at least, seems to me not antithetical to these things. He’s against attempts to re-instill the universe with meaning after our collective disillusionment, but I’m unsure what ‘meaning’ he wishes to eschew. At the very least, if absolute extinction is the total future, inevitable, perhaps the Gita and Eastern thought may have something to say here: don’t find value in ‘the winding-up scene’; even if you eschew ‘value’ and ‘meaning,’ act for action’s sake, be for being’s sake.
A scene from Nietzsche’s The Gay Science may be apropos: the madman coming to tell the laughing masses that God is dead and they’ve killed him. He’s a panicky, hopelessly nostalgic fool who can’t deal with the death of his old scheme of meaning, but the crowd is not innocent. The crowd scoffs, but only at the madman; they can’t quite laugh off his point. The Gay Science’s thesis seems to be not just that our old system of meaning is no longer viable, but that, with the gift of science, we’ve uncovered a world we no longer know how to live in.
Modern science since the nineteenth century has only exacerbated Nietzsche’s thesis. We’ve dislodged ourselves (for the most part) from inadequate systems of meaning, but in stumbling from the bind we’ve yet to catch our balance. | https://medium.com/interfaith-now/a-necessary-nihilism-facing-the-elephant-in-the-room-with-ray-brassier-41618150bcd | ['Nathan Smith'] | 2020-12-20 10:33:55.418000+00:00 | ['Science', 'Nihilism', 'Ray Brassier', 'Philosophy', 'Hinduism'] |
[Python][Selenium] Life is short: automating the tedious Chrome Browser Driver version mapping | Packages matching selenium
Chocolatey is software management automation for Windows that wraps installers, executables, zips, and scripts into… | https://medium.com/drunk-wis/python-selenium-chrome-browser-%E8%88%87-driver-%E6%83%B1%E4%BA%BA%E7%9A%84%E7%89%88%E6%9C%AC%E7%AE%A1%E7%90%86-cbaf1d1861ce | [] | 2020-07-19 14:08:49.253000+00:00 | ['Selenium', 'Windows', 'Chrome', 'Automation', 'Python'] |
7 Reasons Why Writers Make the Best Lovers | Have you ever flipped to the back cover of a book just to admire the author’s bio? Of course, you have.
Writers are some of the sexiest creatures to ever grace the surface of our planet. Not only are we incredibly talented, but we have the ability to capture your attention for hours on end, leaving you smiling, crying, or shaking your knees in anticipation.
So it should come as no surprise that writers also happen to make the best lovers.
1. We have a keen eye for detail
Unlike our fellow non-writer brothers and sisters, we have an acute sense for the smallest details. After all, we’re tasked with remembering every single feature, component, and element for all our stories and characters. We know exactly who “has luscious blond locks like the morning sun” or who “can flutter around the room like a delicate butterfly emerging from her cocoon.”
So obviously, we remember how you like to be fucked.
Gentle nibbles on your right earlobe? Tell us no more.
Subtle scratches down the spine of your back with our nails? We know that too.
A slice of pizza post-coitus? Baby, it’s already heating up in the oven.
2. We have passion flowing out of every pore
Boring just doesn’t suit a writer. We are passionate beings that seek out inspiration in everything we do.
No one wants to read a lifeless story, so why would anyone want to have sex with a lifeless lover? Our thoughts exactly.
Every single part of us is constantly on the lookout for new feelings, new emotions, and above all else, new experiences. And luckily for you, that carnal craving for passion always follows us into the bedroom.
If we can’t make your toes curl or your eyes roll back into your skull, then we aren’t doing our job right.
3. We’re good with our hands
Trust us — being able to type 90+ words a minute gives us the ability to diddle your genitals in ways you never thought possible.
4. We love adventure
Our imaginations are almost as uncontrollable as our sexual urges. And that’s why we don’t mind stepping outside of our comfort zone to try out that new vibrator or to give that new leather whip a few cracks. In fact, we love experimenting with new things in and out of the bedroom.
It’s inherently ingrained in a writer to play around with different topics, characters, and storylines, so we aren’t afraid of dipping our toes into the sexual unknown.
To some people, we can be a little too adventurous. But in our minds, if you aren’t living on the edge, then you don’t have a life that’s worth living.
5. We pay attention to our surroundings
When it comes to storytelling, setting up the plot is almost as important as the story itself. Writers have a firm grasp on what it takes to get those juices flowing and can easily set the scene to make their partners feel comfortable and relaxed.
From the flickering candles to the oiled back massages, writers are professionals at crafting the perfect atmosphere for a sexual rendezvous. And if you happen to be more of the back-alley-by-the-dumpster lover, don’t worry your little heart out. We’re equally prepared for that too.
6. We’re great communicators
Because duh — that’s our job.
To a writer, communicating with another person comes second nature. We aren’t shy about asking for what we want, whether it’s to be spanked on our booties or slapped on our titties.
Communication is one of our most powerful tools, and words are our most valuable assets.
If we can make a scene about a grain of sand stretch out to three pages, then we can easily share our feelings beyond a few simple head nods or shoulder shrugs.
7. We’re ridiculously attractive
It should go without saying, but yeah, writers are incredibly attractive. Not only do we wear those adorable thick-framed glasses, but we also have perfected the art of the messy hairdo.
We’re scrappy-chic — but in a way where we look completely flawless. You won’t know whether we woke up like this or spent three hours in front of the mirror primping for your visit. | https://medium.com/sex-and-satire/7-reasons-why-writers-make-the-best-lovers-f3df1918e359 | ['Ms. Part Time Wino'] | 2020-12-08 14:10:39.953000+00:00 | ['Writers On Writing', 'Humor', 'Sexuality', 'Writing', 'Dating'] |
Keeping Up with Deep Learning — 26 Nov 2020 | Keeping Up with Deep Learning — 26 Nov 2020
Deep learning papers, blog posts, Github repos, etc. that I liked this week
Photo by Sebastian Pena Lambarri on Unsplash
This is the second edition of my weekly update on deep learning. Every Thursday, I’ll release a new batch of research papers, blog posts, Github repos, etc. that I liked over the past week. Links are provided for each featured project, so you can dive in and learn about whatever catches your eye. If you missed last week’s edition, you can find it here. All thoughts and opinions are my own. Follow me or check back next week for more. Enjoy!
Very Deep VAEs
[ArXiv][Github]
OpenAI recently showed that variational autoencoders (VAEs) can outperform other likelihood-based generative models at creating realistic 2D images. Although the authors don’t compare their results against adversarial models (e.g. StyleGAN or PGAN), it’s clear from the generated images that VAEs can’t match the performance of GANs yet. But this is still really exciting research, because VAEs are much easier to train and understand than GANs. For that reason, I’m excited to see more research like this from OpenAI, because if VAEs ever match the performance of GANs, they will be highly preferred over adversarial models.
Images generated with VAEs. Source: https://github.com/openai/vdvae
End-to-End Object Detection with Adaptive Clustering Transformer
[ArXiv]
Adaptive Clustering Transformer (ACT) is one of the first notable improvements upon DETR. (DETR was a recent landmark paper for object detection using transformers. See here for more details.) DETR requires a large amount of training for good performance, but ACT reduces training time by about 30%. Personally, I’m surprised this paper hasn’t received more attention. DETR is an amazing development for computer vision, and we need research like this to advance the state of the art!
Obligatory photo of a Transformer… Photo by Arseny Togulev on Unsplash
Propagate Yourself
[ArXiv]
Propagate Yourself advances the state of the art for unsupervised learning on vision-related tasks. Unsupervised learning greatly reduces the amount of labeled data needed for deep learning, because it uses unlabeled samples to learn meaningful representations of the data. I’m a big believer in unsupervised and self-supervised learning, and I recently wrote an article on self-supervised learning with BYOL! Check it out for a thorough introduction to self-supervised training of neural networks.
Design Space for Graph Neural Networks
[ArXiv]
This is a survey paper of various architectures for graph neural networks. GNNs have grown tremendously in popularity over the past few years, because unlike other types of neural networks, they’re able to process irregular data types like 3D point clouds. They’re used in many 3D object detection applications, which in turn is used by most self-driving algorithms. The landscape for GNNs has changed a lot over the past couple of years, and I highly recommend this paper if you need a refresher.
NVIDIA DGX A100
[Webpage][YouTube]
NVIDIA recently released the DGX A100 — a small server with some serious GPU horsepower. Each DGX packs eight A100 chips (the fastest GPUs in the world at the time of writing) and up to 640 GB of GPU memory. Unfortunately, each DGX A100 costs a minimum of $200,000, which essentially guarantees that I’ll never use one. (Google Cloud now offers A100s in beta mode, which is much more appealing.) But it’s admittedly fun to marvel at the horsepower we’re able to pack into one server, compared to just 5 years ago.
Photo by Francesco Lo Giudice on Unsplash
Google Cloud MLE Certification
[Course][Blog Post]
I believe that online developer certifications are the way of the future. Too many ML Engineer positions currently require a graduate degree, plus multiple years of professional experience. In order to grow our field, we need an accessible, low-cost option to get qualified for ML-focused jobs. That’s exactly what the Google Cloud MLE Certification tries to accomplish. For the price of $200, you’ll learn many of the requisite skills to land a job in today’s ML Engineering landscape. (I have no affiliation with Google or their certification programs. This is purely a personal endorsement of online ML courses like this one.)
Photo by MD Duran on Unsplash
Conclusion
Research has slowed down slightly with Thanksgiving this week, but it’s still a very exciting time for deep learning. Expect to see a lot more papers involving transformers and self-supervised learning in the near future — both are extremely hot topics right now, and for good reason. (If you’re not familiar with those, check out my recent articles on Transformers from Scratch and Self-supervised Learning with BYOL.) If you enjoyed the article, follow me to get all future weekly updates and other technical articles. | https://medium.com/the-dl/keeping-up-with-deep-learning-26-oct-2020-6a5bedeb11b9 | ['Frank Odom'] | 2020-12-10 17:49:25.980000+00:00 | ['Programming', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning'] |
Blood Moving Fast | ©Shannon Mastromonico 2020
Me from yesterday. Pacific
Masking pain with too much
eager
puppy
Today I am feline
Keen, barbed wire boundaries
Breathing
Eyes on watch. Today
I am a tempest
Blood moving fast
Governing my own galaxies
and gravities
Electric
Vast | https://medium.com/scrittura/blood-moving-fast-b42f1893a821 | ['Shannon Mastromonico'] | 2020-02-27 15:15:43.670000+00:00 | ['Canadian Poet', 'Poetry Community', 'Writing', 'Writer', 'Poetry'] |
AWS CDK and Typescript: Using the New ApiGatewayV2 | Using the new API Gateway V2 is a three-step process:
Create an integration.
Define an HTTP API.
Add Routes.
Note that there’s still no easy way to use WebSockets.
Create an integration
The integration tells API Gateway where it should send incoming requests. In the simplest case, this would be your Lambda handler, but Load Balancers or an HTTP proxy are also possible. What might be strange is that this comes in its own package, @aws-cdk/aws-apigatewayv2-integrations.
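For reference, the snippets in this post assume imports along these lines (this is the CDK v1 package layout; module names may differ in other versions):
import * as cdk from "@aws-cdk/core";
import { HttpApi, HttpMethod } from "@aws-cdk/aws-apigatewayv2";
import { LambdaProxyIntegration } from "@aws-cdk/aws-apigatewayv2-integrations";
import { CloudFrontWebDistribution } from "@aws-cdk/aws-cloudfront";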
const httpApiIntegration = new LambdaProxyIntegration({
handler: fn,
});
Define an HTTP API
You have to create an HTTP API that will hold all of your routes. This instance is the equivalent of the LambdaRestApi from V1, but it has a different API.
const httpApi = new HttpApi(this, "MyApi");
There’s also a third parameter for options, such as a CORS configuration or the default integration.
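As a sketch, such a configuration could look like the following; treat the property names as illustrative of the v1 API rather than a definitive reference:
const httpApi = new HttpApi(this, "MyApi", {
  // illustrative CORS settings; tighten allowOrigins for production
  corsPreflight: {
    allowOrigins: ["*"],
    allowMethods: [HttpMethod.GET, HttpMethod.POST],
  },
  // requests that match no explicit route fall back to this integration
  defaultIntegration: httpApiIntegration,
});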
Add Routes
With routes, you can specify which integration will be triggered, depending on the requested path or the HTTP method that has been used.
httpApi.addRoutes({
path: "/",
methods: [HttpMethod.ANY],
integration: httpApiIntegration,
});
Bonus: Add CloudFront
Putting a CloudFront distribution in front of your API Gateway V2 is luckily not that different from V1.
const feCf = new CloudFrontWebDistribution(this, "MyCf", {
defaultRootObject: "/",
originConfigs: [{
customOriginSource: {
domainName: `${httpApi.httpApiId}.execute-api.${this.region}.${this.urlSuffix}`,
},
behaviors: [{
isDefaultBehavior: true,
}],
}],
enableIpV6: true,
});
Done. If you want to create an output to your console to get the CloudFront domain name, you will have to use distributionDomainName now, instead of domainName.
new cdk.CfnOutput(this, "myOut", {
value: feCf.distributionDomainName,
});
That’s it!
Thank you very much for your attention; I hope that this was any help for you. | https://medium.com/swlh/aws-cdk-and-typescript-using-the-new-apigatewayv2-f7ad06c560d3 | ['Enrico Gruner'] | 2020-11-24 18:10:46.134000+00:00 | ['Typescript', 'Aws Tutorial', 'Aws Cdk', 'AWS Lambda', 'Api Gateway'] |
What’s in a Name? | What’s in a Name?
Is there a better option than changing the names of our brands and institutions?
Photo by Nong Vang on Unsplash
Earlier this year, Quaker Oats announced the end of Aunt Jemima. The company said, “Aunt Jemima’s origins are based on a racial stereotype” and the removal of the name and logo represents a stride “toward progress on racial equality.”
Shortly thereafter, the Washington Redskins football team announced they would be dropping the word “Redskins” and their logo. Then, the Cleveland Indians baseball team said it would also consider changing its name.
Not to be outdone, Princeton University renamed a programme and building formerly named after Woodrow Wilson. The University of Southern California changed the name of one of its buildings that was previously named after a president who supported eugenics.
That these names and brands are mired in the racism and hatred of the past is without question. Today, we are striving toward something better, so, understandably, people want these names and brands stricken from history. What they represent are heart-wrenching and enraging facts, which still impact our societies to this day.
But, is the removal of these names and brands a gut-reaction that misses a crucial opportunity? In our efforts to right the wrongs of past and present injustices, could we be going overboard? Could there be a better way?
Lest we forget
Many countries observe what was originally called Armistice Day. This was first held on November 11, 1919, in honor of the ending of the First World War. Member states of the Commonwealth now refer to this day as Remembrance Day. In the United States, it’s called Veteran’s Day.
We use this day as a reminder of the sacrifices and hardships that past generations have endured due to war so that we could enjoy the freedoms of the present. The phrase “lest we forget” has been used as a plea to not forget the past and therefore not allow it to be repeated.
By changing the names of existing brands and institutions, are we purposefully striking out on the road to forgetfulness? In our efforts to make people feel safe and welcome, could our current solution be pushing us in a direction in which we are more likely to repeat the past rather than prevent it?
“Not all tears are an evil”
The history of the human race is rife with heartache. Unimaginable suffering has been endured and abided. And suffering continues to this day.
But not all suffering is bad. Not all emotional pain is negative. Sometimes it inspires us to be more and better.
When we look upon our shared history, when we examine it in a fair and unbiased way, we see our follies and our triumphs, our sins and our virtuous acts. We see the mistakes we have made and the pain we have caused. We see ourselves in everyone: the victims and the perpetrators.
What every human must learn from history is that we are all capable of incredible kindness and unspeakable malice. This is a painful lesson. No one wants to accept that within them lies the potential for evil. Yet, there it is. It’s a haunting, uncomfortable truth. One we forget at our peril. After all, November 11th is not only a day to feel grateful for what we’ve been given but a day for remembering the horrors of which humans are more than capable.
It is with this painful knowledge that we can move forward more appropriately, more carefully, and more thoughtfully. As J.R.R Tolkien wrote in The Return of the King:
“I will not say: do not weep; for not all tears are an evil.”
What can we do, instead?
We are at a crucial moment in history. The choices we make in the coming decades will determine the fate of our species, as well as this beautiful, unique planet we call home. What we do next is of the utmost importance.
When it comes to renaming our brands and institutions, let’s pause for a moment to consider our options. By removing these names and logos from public spaces, we may be creating an environment that has fewer emotional “triggers”, but at what cost?
Purging our public spaces of negative imagery may cause us to forget the hardships we’ve endured and thereby lead us to commit more evils in the future. This purge might cause us to repeat the past in unforeseeable ways.
Could we, instead, re-claim these names and logos for ourselves and re-purpose them toward our own ends?
What if the stories of these names and brands weren’t over? What if we could create new endings for them? Endings that don’t stop at racism or sexism or any other “-ism”, but ones that remind us of where we’ve come from and the better future we’re fighting for?
What if instead of demanding companies and institutions purge these names and logos from our public spaces, we pressure them to write these new stories? Stories that include the truth of the heartache and the pain, but also the promise of a better tomorrow? Could these names and brands be used to inspire positive change if only they were presented in a different light?
Yes, it would hurt. Yes, it would be uncomfortable. Yes, it would have to be done thoughtfully and carefully. But, it would paint the struggle of humanity’s pursuit of justice and fairness in a more accurate and, I think, more hopeful light.
After all, human progress is not made by forgetting our past follies, but by embracing them and using what we’ve learned to make life better. | https://jeff-valdivia.medium.com/whats-in-a-name-a066869a602a | ['Jeff Valdivia'] | 2020-12-30 21:22:03.209000+00:00 | ['Politics', 'Advertising', 'History', 'Psychology', 'Racism'] |
The Literally Literary Weekly Update #2 | One Last Note
We have 27,365 followers at Literally Literary. We have approximately 200 writers. Even with these tremendous numbers, most submitted works get fewer than 20 views. Why? Algorithms.
How do we combat this and support each other? Bookmark our homepage and, once a day, come here and see what you missed. The only way we can be the kind of community we all want to be is to support each other’s works by reading them.
Our homepage has a Top 25 that is updated every single day with new works published in the last month. Below that, you will find our latest works and then trending ones that you may have missed. Be a participant and read works from amazing writers that maybe you don’t follow yet, but might want to. | https://medium.com/literally-literary/the-literally-literary-weekly-update-2-a4b3deacbd20 | ['Jonathan Greene'] | 2019-12-18 15:20:15.380000+00:00 | ['Writing', 'Blogging', 'Ll Letters', 'Publication', 'Publishing'] |
Why (Nearly) Every Novelist I Know Has a Day Job | Why (Nearly) Every Novelist I Know Has a Day Job
It’s difficult to sustain a life by writing; there’s zero shame in working a 9-to-5, too.
These collected works earned the author $1.59 in royalties.
Whenever the topic of writers and pay comes up, I always enjoy dropping this little nugget: “Nearly every novelist I know has a day job.”
I don’t mean the novelists whose indie-press masterpiece sold a grand total of 15 copies (14 of them to family) before disappearing into the ether; I know mega-successful novelists, the kind whose books were optioned for movies and television shows on the way to the New York Times bestseller lists, who nonetheless hold down a 9-to-5.
Some do it for the healthcare. Others because they genuinely liked the jobs they were working before they hit it big, and have zero urge to quit now. But I also suspect there’s another element at work: fear.
Writing books doesn’t yield a consistent income, to put it mildly. According to a new study by the Authors Guild, the median pay for full-time writers was $20,300 in 2017; for those writing part-time, $6,080. Among those part-time writers, income has dropped noticeably, from $10,500 in 2009. To make matters worse, the number of magazine and newspaper venues has declined precipitously over the past few years, restricting the opportunities to supplement income via articles.
Unless you’re already a mega-selling author, or your publisher is willing to take a very expensive chance on your groundbreaking book, your advances will vary from project to project — and that’s before we talk royalties, which can fluctuate considerably. Hence the fear; you have zero idea how much you might be making a year or two from now.
Granted, I do know some folks for whom novel-writing is their one and only job. Some are retired, and writing is their second career; others have family money of some sort, or a rich spouse, or at least a spouse willing to foot most of the bills. The majority, however, work some other gig: PR, journalism, teaching, video editing, driving, and so on.
“The people who are able to practice the trade of authoring are people who have other sources of income,” a book editor is quoted as saying in the Times.
All that being said, there’s a discrepancy between the reality of the writer’s life, and the perception of the writer’s life by people not in the business; I blame Hollywood, which often portrays writers as living in enormous New York City apartments, enjoying an expensive lunch with their agents before driving up to their second home on the Hudson. Some writers do have that lifestyle (I’ve known some of them), but for the vast majority, writing is a side-hustle, even if they have several published books on their special Author’s Shelf.
This is why things like book piracy hurt; in many cases, those who illegally download novels or nonfiction tomes aren’t stealing from millionaires who won’t miss the extra $10 — they’re taking from people whose margins are already razor-thin or nonexistent. It’s also why many authors get really, really irritated when people ask for free copies of their books.
In other words, life for many writers is hard, and only getting harder — you have to be in it for the love. And find a job that can sustain you. | https://nkolakowski.medium.com/why-nearly-every-novelist-i-know-has-a-day-job-948e9c8e0753 | ['Nick Kolakowski'] | 2019-01-07 21:02:59.849000+00:00 | ['Publishing', 'Novel Writing', 'Authors', 'Writing', 'Publishing Industry'] |
Non-Linear Paths | Non-Linear Paths
Email Refrigerator :: 15
Dots in triangular pattern by unknown source
Hi there!
Big news in our house. This month, Golda went from two naps down to one. For most of you, this is not that exciting. Actually, even for me, it’s not that exciting. It means more activity planning, less downtime to rest and clean, and earlier bed times. It has also meant sleep regression.
For the non-parents, sleep regression is that thing where parents think that their kid is finally sleeping through the night. And then said kid decides to stay up from 11pm until 3am for a week. Just because. It happens about every 2 or 3 months.
A child’s brain develops asynchronously. Big growth in one area means a temporary regression in another. As they learn to walk, their language skills might lag slightly. While they learn about object permanence and their own independence (that’s where Golda is right now… lots of “no!”), sleep might be disrupted. This is normal. She’ll emerge with a more regular sleep pattern and show new signs of development.
Growth requires relapse.
It’s a great reminder for all of us, thinking about the state of the world and the state of our country. It often feels dire and hopeless, scary and gruesome. Daily. But maybe we’re going through a regression. The optimist in me is saying that just like a baby, in order for progress to happen, maybe we need a moment of backwards motion…
The paths that we’re on are not always direct. Sometimes we go backwards to go forwards. Sometimes our journeys are cyclical, returning where we started but with a new perspective. Sometimes, it’s pure chaos. Let’s talk about the non-linear paths in our lives.
Happy snacking.
Night Waves by Pi Slices
I. Cyclical Thinking
Learning to surf, I first believed that it was like riding a bike. And sure, the metaphor of not forgetting how to do it is true. But the biggest difference quickly became apparent: every ride ends in a fall, no matter how good I get. As the wave approached, I would paddle to match its speed; it would lift me up; I’d stand up and ride as it crashed and swallowed me.
Waves are cyclical. Increase, crest, decrease, trough. And repeat. Our world is full of cycles. Moon cycles. Hormonal cycles. Seasons. Sleep. Time.
But our culture is obsessed with linear thinking: step-by-step guides to better, more, bigger. We’re used to seeing progress as following a win with a bigger win. So we’re surprised when things in our lives don’t follow that linear progression:
• Relationships don’t always get better and better. There are times of unhappiness and resentment, growth and independence. Regressions aren’t reasons to leave, they’re indicators of a cycle.
• Creativity often comes in waves of inspiration and breakthroughs and then creative blocks.
• Sustaining hard work requires rest. (But because we follow cultural values that promote overworking and under-resting, the idea of slowing down, taking time off, or even sleeping becomes counter to the capitalistic pursuit of greatness through work.)
It’s in our nature to project and predict. And it can feel great to forecast our lives growing upward and ever-expanding on a linear path. Every year, more passport stamps. Every move, a bigger house. Each lease, a better car. New job, higher salary.
But that’s not how most things actually work. We don’t have to tie our happiness or definitions of success to linear growth. Cyclical thinking could be going back to a place we’ve already visited in order to go deeper. Choosing to downsize our house to live more comfortably within our means. Taking a pay cut for a job that will actually make us happier and give us more time.
Cyclical thinking can be comforting when things don’t go perfectly, or when we make a choice that might feel “backwards.” Backwards only exists in linear thinking. On a long enough timeline, there’s always an ebbing after expansion.
Our lives are the waves we ride. Sometimes we luck into good timing and drop into an epic wave before tumbling. And then resetting ourselves and getting up again. Sometimes the next set comes quickly and we ride another; sometimes we have to wait in the calm.
But, thinking cyclically, there is always another wave coming from the horizon to lift us.
Convergence 2 by Janusz Jurek
II. Chaos Theory in Career paths
This month, I reconnected with someone from high school. We weren’t friends then, but he posted on LinkedIn two weeks ago and it made me reach out to propose a business partnership.
It’s been said that most of us will meet upwards of 10,000 people in our lives. Likely more. Each of those people adds to the noise in an infinite sea of data points in our lives. It’s chaos. It’s impossible to know which one will lead to a job, a marriage, lifelong friendship, heartbreak, betrayal, mentorship…
I’m far from being an expert in chaos theory, but there are two principles I do understand. The first is that in a chaotic system, there are no simple cause-and-effect patterns; everything is a result of multiple, unpredictable forces. The weather is a great example of this.
The second principle is there are patterns amid the chaos. Order, while impossible to forecast, does emerge over time.
Our career trajectories can be better understood through chaos theory. No simple cause and effect. Over the course of our lifetime, we will meet people who might consider us for a job or introduce us to someone pivotal. And then, because we happened to email at the right time or recently posted on Instagram, the forces of the universe collide into a dream job offer. Or an investment. Or a business partner. There is no way to predict outcomes.
We’re constantly looking for patterns to emerge amid our career. Most people I know (myself included) look back on our last 5–10 years of work and re-edit our story. We try and summarize the patterns by (re)defining ourselves– either through a new bio, LinkedIn title, our website, or resume.
Our resume! What better example of shoehorning a chaotic system into a linear framework than our resume? Chronological. Bulleted accomplishments. The expectation that each role is more senior than the last, that salary has grown steadily, and that gaps between jobs are minimal or explainable.
The resumes of the future will be far from linear stories. They will chart the chaotic paths to our present, they will embrace the integration of work into the rest of our lives, they will weave the thread of work together with the threads of travel, relationships, learning, and creativity.
Our careers are unpredictable, non-linear, chaotic systems. Rather than try and plan the whole thing, what if we just chose what we want to learn next, what kind of environment we can focus in, and what kinds of people we work with best?
Because the rest is just noise.
III. The Swerve
In 2012, I took a week-long trip to Peru to see the ruins of Machu Picchu. Of course I made spreadsheets and custom maps and Google docs about the best restaurants and things to do from Lima to Cusco. After weeks and weeks of planning I finally arrive, and the worst possible thing happens… no wifi.
So here I am in Peru, a Google Drive full of things to do, and no way to access it. Luckily I wrote down my hotel information and printed out my train tickets. But for almost a full week, I’m relying on my memory to direct me.
The first day, I happen upon a really fun bar where I make conversation with the bartender who tells me about a great, but not so well-known restaurant. There, I try some of the most incredible dishes of corn and coconut and cuy (when in Lima…). Over the course of 3 days in Cusco, I run into the same woman 3 times at different coffee shops and restaurants and start up a conversation with her. I still keep in contact with Becky today and just saw her a couple months ago when she was visiting NYC.
More than the impact of seeing the ancient city or the awe-inspiring view after climbing the mountain Huayna Picchu, my biggest learning of the trip was a new life philosophy.
I call it The Swerve.
To swerve is to change direction suddenly. The premise of The Swerve is to pick a direction and be open to leaving the path. When traveling, research all possibilities and then under-plan the time, choosing one or two intended highlights of the day. While heading towards the destination, the intent is to be open to what’s around and willing to leave the path in service of our own feelings. Choosing to swerve down an interesting street might lead to an unknown cafe or hidden vintage store. That might open other possibilities. Alternatively, it might be a dead end, but the upside is that we can always go back to our original direction.
It’s a useful method of travel, but it’s also a philosophy that applies to other courses in life: our careers, dating, creative work, our health. We can set a goal, not as a destination but as a direction. Along the way, we learn what we can and stay open to letting our feelings pull us toward new and exciting divergent paths.
The world is an exciting place with too many side roads offering possibilities we might have never expected. We’ll never find them with our faces buried in a map (or phone). Look up.
Ready? Set. Swerve.
Kiss by Quibe
In finishing up this refrigerator, I have been hyper-aware of my own creative process. Sometimes, this document looks like a mess and I don’t know where to begin. Sometimes, I’m struck by an idea and it writes itself. The creative process is definitely a non-linear one. But in order to feel good about my work, I usually keep track of it linearly. Counting hours, or days in a row that I’ve showed up to work, checking a box when I feel good about my contribution for the day. Our emotions love linear progress. And even when our world and our work are chaotic or cyclical or backwards, we may feel the need for linearity. It’s ok to resist it.
Thanks for taking time to read this. I hope it’s made you see things a little differently or made whatever you’re going through right now a little clearer.
As always, I love hearing any thoughts this might have stirred. And if you feel like sharing it, there’s no greater compliment.
Enjoy the chaos out there,
-Jake | https://medium.com/email-refrigerator/non-linear-paths-57e4f79f5b9d | ['Jake Kahana'] | 2020-12-28 00:57:58.019000+00:00 | ['Life Lessons', 'Careers', 'Planning', 'Patterns', 'Cycles'] |
COVID-19 and the first war of data science | In the subtitle of his remarkable history about the race for the nuclear bomb, science writer and historian of science Jim Baggott referred to World War II as the “first war of physics”.
Today, the efforts waged to curb the COVID-19 pandemic may be the first example of a large-scale, global data-driven response to a worldwide crisis, and as such perhaps the first war of data science.
It is difficult to overstate just how much data has become available in an extremely short time, and open science and the networks for sharing clinical and epidemiological data have enabled an unprecedented depth of analytics. We have the data, we have the tools, and we have experts. They are working hard, but we’ve never done this before and haven’t trained for it. While much progress is being made, we still must overcome many challenges.
The Starschema COVID-19 dataset
One of the tactical challenges lies in making the data “analytics ready” — ensuring the data is accurate, readily available, constantly updated, and in a format that can be easily used by data scientists on the front lines.
The case count dataset collated by Johns Hopkins University’s Center for Systems Science and Engineering (JHU CSSE) is referenced and relied on by hundreds if not thousands of data scientists. This dataset, and the dashboard JHU CSSE provides, is very important and helpful, but it wasn’t constructed in a way that makes the data analytics-ready, and data scientists who want to use it often need to go through a time-consuming process to unpivot, union, and clean the data before they can use it in their own models and applications.
Based on the JHU CSSE data stream, Starschema has built — and will continue to update and improve — an analytics-ready dataset that draws on multiple data sources, eventually integrating high temporal resolution domestic data (e.g. the dataset provided by Italy’s Department of Civil Protection). The entire dataset is available for download via AWS S3, as well as via Snowflake‘s Data Exchange. This will enable data scientists, epidemiologists and analysts to access the most up-to-date data on COVID-19 cases through a cloud-based data warehouse, including datasets enriched with relevant information such as population densities and geolocation data. Users can leverage this information to build inferential models of disease propagation and provide their customers with unique insights into the behavior of this novel epidemic.
The Starschema COVID-19 data set on Snowflake’s Data Exchange
The Starschema COVID-19 dataset is in a classical “long” format: each combination of a point in time and a case type (confirmed, recovered, deceased, or active) is a row. In addition, many of the inconsistencies between county-level reporting (used prior to 10 March 2020) and state-level reporting have been resolved.
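As a sketch of what that shape looks like (the column names and values here are illustrative, not the dataset’s exact schema):

DATE        LOCATION  CASE_TYPE  CASES
2020-03-09  Example   Confirmed    100
2020-03-09  Example   Deceased       2
2020-03-10  Example   Confirmed    130

Each row is a single observation, which is exactly the shape most charting and modeling tools expect.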
We are in the process of collating all data pertaining to COVID-19 and making additional data available, such as information on population size, population density, and other metrics that enable contingency planning, forecasting, and visualization. As the data size expands, we are constructing an Airflow-based dynamic DAG in order to provide a reproducible workflow for rapid ingestion and transformation of data.
The outlook
As the COVID-19 pandemic progresses, we can expect data to play an increasing role in both public and private operations. Public health authorities are already using data-driven approaches to monitor the spread of SARS-CoV-2, and phylogenetic analyses are used to identify whether particular strains of SARS-CoV-2 carry a higher risk. In the private sector, information on the number of cases is used to support business contingency operations and analyze supply chains for possible vulnerabilities.
With the increasing importance of data, ensuring data quality and consistency is paramount. Through this dataset Starschema intends to provide a practical demonstration of best-of-breed data quality assurance and data management procedures to provide public health professionals, contingency planners, and enterprises with an outline of data management sound enough to stake lives on.
As we fight SARS-CoV-2, data-driven approaches may well be what gives humanity an edge. With the rapid expansion and democratization of data and advanced analytics, data science is in a unique position to bring the tools, techniques, and procedures that were developed in the analytics domain over the last decade to bear on this unprecedented challenge.
If you have any questions regarding this dataset or how to use it, please reach out. We are here to help. | https://medium.com/starschema-blog/covid-19-and-the-first-war-of-data-science-980798f075ef | ['Chris Von Csefalvay'] | 2020-03-19 00:10:27.044000+00:00 | ['Coronavirus', 'Co Vid 19', 'Data Science', 'Machine Learning', 'Dataset'] |
Black Holes — The Strangest Objects in The Universe Are Finally Acknowledged By The Nobel Committee | A Brief Explanation of Their Work
Roger Penrose wrote a ground-breaking paper with the late Stephen Hawking in 1970. They presented in that paper a new generalized theorem on spacetime singularities which revolutionized our understanding of spacetime. They proved that if our universe obeys Einstein’s general theory of relativity and Friedmann’s models about the universe, then there is essentially a singularity at the beginning. According to the Nobel Prize website:
“Penrose used ingenious mathematical methods in his proof that black holes are a direct consequence of Albert Einstein’s general theory of relativity.”
This time the Nobel Prize is awarded for both theoretical and observational discoveries. The theoretical work was done in the 1970s by the legends Stephen Hawking and Roger Penrose, and it was observationally confirmed in the 21st century. They used Einstein’s general theory of relativity to prove the singularity theorems, i.e., the existence of a point in space-time where the gravitational pull is so strong that even light cannot escape. These were just mathematical results at the time, but now the truth of them is in front of us.
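To get a quantitative feel for what “even light could not escape” means, a standard result of general relativity (background context, not part of the prize citation) is the Schwarzschild radius, the critical size for a mass $M$:

$$r_s = \frac{2GM}{c^2}$$

For the mass of the Sun, this works out to about 3 kilometers: squeeze the Sun inside that radius and not even light gets back out.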
Now it’s an obvious question to ask why the two didn’t receive the Nobel Prize earlier, when we have known of black holes for so long. The answer is that the Nobel committee likes a theory to be experimentally or observationally verified. For example, Einstein predicted the existence of gravitational waves in 1916, and the Nobel committee awarded the physicists behind their detection in 2017. This year the prize is awarded because, the previous year, we finally received the incredible image of the supermassive black hole at the center of M87. It was brought to light by the huge collaborative team of the EHT (Event Horizon Telescope).
Now one can wonder why the prize is not shared between the EHT team and Roger Penrose. Well, it is hard to swallow, but it is the Nobel Foundation’s policy that the prize can be shared by at most three people, not an entire collaboration.
Professor Genzel and Andrea Ghez have both done an incredible job answering the biggest mystery that lies inside our Milky Way galaxy. They were observing stars near the galactic center using high-resolution infrared telescopes.
Infrared light has a longer wavelength than visible light, which means it’s not blocked by tiny particles of interstellar dust in space. We can’t see the center of our galaxy in visible light, but we can in infrared. In this way, one can measure the positions and speeds of stars near the center of the Milky Way, work out the orbits of those stars and the influence of gravity around them, and then calculate the mass of the central object and estimate its size.
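The core of that calculation is essentially Kepler’s third law (a simplified Newtonian sketch; the teams’ actual analyses are far more careful and include relativistic corrections): for a star on an orbit with semi-major axis $a$ and period $T$ around a much heavier central mass $M$,

$$M \approx \frac{4\pi^2 a^3}{G T^2}$$

so tracking even one complete stellar orbit pins down the mass enclosed within it.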
In 1969, Donald Lynden-Bell and Martin Rees put forward the suggestion that our home galaxy might contain a dense, compact supermassive black hole at its core. But there was no way to verify that at the time, because the core of the galaxy was hidden behind gas and interstellar dust.
At that time, Genzel was working as a post-doctoral fellow at UC Berkeley with the late Charles Townes, himself a Nobel laureate. His collaborator asserted that he presented a “remarkable technique, in which he can measure very accurately and determine quite precisely the mass and behavior of stars circulating around the galactic center.”
Together with this technique and the ground-based telescopes at ESO (the European Southern Observatory), namely the NTT (New Technology Telescope) located in Chile, they observed the motion and position of ten stars for about four years, from 1992 to 1996. They collected data, modeled the orbits of those stars, and derived the mass of the central object as about 2.45 million times the mass of the Sun. But that was just one result and hence could not be generalized at the time.
Observations by Genzel’s research group of the stars orbiting the galactic center also confirmed a prediction of Einstein’s general theory of relativity: that the orbit of a star follows a flower-like pattern (a perihelion shift) while orbiting a massive object.
Andrea Ghez did the same with much more precision with her team, the Galactic Center Group at UCLA. With the higher-resolution telescopes of the W. M. Keck Observatory in Hawaii, they observed the interstellar medium and dust around the supermassive black hole Sagittarius A*.
They used adaptive optics, more precisely laser guide star technology, to correct for atmospheric turbulence, which distorts the light under observation. They removed all kinds of turbulence and noise from their data. They observed more than 3,000 stars for 12 years, from 1995 to 2007. Finally, they obtained a mass for the central object of about 4.5 million times the mass of the Sun. She also studied the dynamics and interactions between the stars. That observation again confirmed the previous result: a dense, compact supermassive black hole resides in the galactic center.
The amazing work done by both independent teams has been recognized by the Royal Swedish Academy of Sciences. They said it “has given us the most convincing evidence yet of a supermassive black hole at the center of the Milky Way.” Keck Observatory Director Hilton Lewis praised Ghez as “one of our most passionate and tenacious Keck users.”
Andrea has been acknowledged by many people from MIT in very kind words. Nergis Mavalvala said, | https://medium.com/mathphy-exclusive/black-holes-the-strangest-objects-in-the-universe-are-finally-acknowledged-by-the-nobel-community-ee87b8b056a6 | ['Areeba Merriam'] | 2020-10-15 04:02:56.319000+00:00 | ['Astronomy', 'Nobel Prize', 'Black Holes', 'Space', 'Science'] |
Constraints | Constraints by definition are not normally considered as something positive, they mean limitations or restrictions. For me, are a necessity.
Is not amazing how constraints can help to realise how much time you waste?
Cleaning the house just in time before your parents get home.
Packing the suitcase just in time to go straight to the airport.
That assignment that has been half-way done for three weeks, but is sharply completed on the final day, just a few minutes before the deadline.
Living on a tight budget. Taking care of others on a minimum wage.
Constraints are a powerful thing and we should expose ourselves more to them.
I’ve been preaching this for a while, and probably have it as a personal motto: “An excess of resources is also a problem.”
For businesses.
For people.
For start-ups.
And even for governments.
The more you have of x, the more you waste.
Time, money, you name it.
The trick, for me, is to realise it: not punishing yourself over how much has been wasted, but learning from it, aiming to maximise those constraints in your favour.
How to create those constraints that will allow you to work at max, with less, while getting (in most cases) the best possible outcome.
For me, it’s all about time: wasting it and managing it. I’m always impressed by how much I’m able to do in short periods of time when pressure is involved.
This post is just a clear example of me taking advantage of the circumstances when possible.
This has been written in about 5 minutes (aiming to edit later). I’m in a serious rush and very limited on time, having to leave the flat in 30 minutes, but not before completing two other tasks.
The idea just came to my mind and I immediately thought it was worthwhile to write about it. I had two choices.
1 — Leave it for later and miss the momentum and inspiration; meaning I would probably pick up the subject later, with more time to work on it, increasing the possibilities of abandoning it until my next bright moment.
2 — Embrace the limitations and just get it done. Write it down as a draft, but put it all out there, looking for the best possible outcome, like it’s going live right away.
This is a clear exercise in number two, and I have to admit I’m very pleased. Not necessarily my best piece, but certainly not my worst either. And it’s done.
What is your constraint and how does it work for you? | https://medium.com/thoughts-on-the-go-journal/constraints-1276515d2e12 | ['Joseph Emmi'] | 2016-10-24 22:58:06.226000+00:00 | ['Personal', 'Constraints', 'Self Improvement', 'Life', 'Productivity'] |
[Java-201a] The Hidden Dangers of Scanner | Don’t Close Scanner!
When we use Scanner and leave it open, our IDE may complain that we have a resource leak. Even if the IDE does not complain, usually it’s a good idea to close something if we don’t need it anymore. The intuition, then, is to simply close it… Right? Consider the following code snippet:
import java.util.Scanner;

public class Main {
    public static void getInput1() {
        Scanner scanner = new Scanner(System.in);
        scanner.nextLine(); // Ask for user input
        scanner.close();
    }

    public static void getInput2() {
        Scanner scanner = new Scanner(System.in);
        scanner.nextLine(); // Ask for user input
        scanner.close();
    }

    public static void main(String[] args) {
        getInput1();
        getInput2();
    }
}
At a glance, there is nothing wrong with the code. We create a new Scanner, read a line, and close it; then we create another one and do the same. However, if we try to run it, this is what we get:
ss
Exception in thread "main" java.util.NoSuchElementException: No line found
at java.util.Scanner.nextLine(Scanner.java:1540)
at com.usc.csci201x.Main.getInput2(Main.java:17)
at com.usc.csci201x.Main.main(Main.java:23)

Process finished with exit code 1
I typed ss.
What’s going on here, then? Observe the constructor of Scanner when we instantiated it: we gave it System.in. That’s an okay thing to do. However, it has an implication that may not be obvious: when we do scanner.close(), we are closing System.in as well.
What is System.in?
System.in is the standard input stream opened by the JVM. Normally it stays open throughout the lifetime of the program so we can capture user input. However, if we close it prematurely, we will not be able to capture user input anymore. Hence, when we try to read input with System.in again, it tells us No line found.
The Solution
Just don’t close Scanner.
If you must close Scanner, you can wrap System.in inside of a FilterInputStream and override the close() method:
Scanner scanner = new Scanner(new FilterInputStream(System.in) {
    @Override
    public void close() throws IOException {
        // don't close System.in!
    }
});
And now it is safe to call scanner.close(), as it will only close the FilterInputStream instead of System.in.
If your application is single-threaded, consider making Scanner static. Maybe like so (a minimal sketch, one possible version reusing the example from earlier):
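import java.util.Scanner;

public class Main {
    // One Scanner shared by the whole program. It is never closed,
    // so System.in stays open for the lifetime of the application.
    private static final Scanner SCANNER = new Scanner(System.in);

    public static void getInput1() {
        SCANNER.nextLine(); // Ask for user input
    }

    public static void getInput2() {
        SCANNER.nextLine(); // Works: System.in was never closed
    }

    public static void main(String[] args) {
        getInput1();
        getInput2();
    }
} | https://medium.com/swlh/java-201a-the-hidden-dangers-of-scanner-7c8d651a1943 | ['Jack Boyuan Xu'] | 2020-01-25 07:17:51.034000+00:00 | ['USC', 'Viterbi', 'Programming', 'Java']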
3 to read: The trouble with platforms & newsrooms | ‘Darts & Laurels’ goes monthly | Planning a… | 3 to read: The trouble with platforms & newsrooms | ‘Darts & Laurels’ goes monthly | Planning a data story
By Matt Carroll <@MattatMIT>
A good week: Platforms vs newsrooms; ‘Darts & Laurels’ goes monthly (yeah!); and some help for writing data stories.
Get notified via email: Send note to 3toread (at) gmail.com
“3 to read” online
Matt Carroll runs the Future of News initiative at the MIT Media Lab. | https://medium.com/3-to-read/3-to-read-the-trouble-with-platforms-newsrooms-darts-laurels-goes-monthly-planning-a-9000c065a08c | ['Matt Carroll'] | 2016-07-13 12:01:12.473000+00:00 | ['Journalism', 'Cuny', 'Media', 'Data', 'Data Visualization'] |
Exploring New York City Event Permits with Vega-Lite | By visualizing information, we turn it into a landscape that you can explore with your eyes, a sort of information map. And when you’re lost in information, an information map is kind of useful. - David McCandless (Journalist and Information Designer)
You’ve landed on one of the tens of thousands of datasets in Enigma Public, the world’s broadest repository of public data, and are interested in exploring it visually. Given the infinite range of visual forms that could represent any dataset, the options may feel overwhelming. A good tool helps to constrain this vast design space, and gives the user a solid set of principles to build upon.
Vega-Lite is a layer of abstraction on top of d3.js. Developed at the University of Washington Interactive Data Lab, it is a web-based “grammar of graphics” that gives users the power to rapidly experiment with different visual encodings for their data. It lets the user create both static and interactive data graphics, making it an excellent item to have in any data explorer’s toolbox.
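To give a feel for the grammar, here is a minimal Vega-Lite specification (a sketch with made-up values, not the actual permit data); swapping "bar" for "point" or "line" is all it takes to try a different encoding:

{
  "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
  "data": {
    "values": [
      {"event_type": "Parade", "permits": 28},
      {"event_type": "Film Shoot", "permits": 55},
      {"event_type": "Street Fair", "permits": 43}
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "event_type", "type": "nominal"},
    "y": {"field": "permits", "type": "quantitative"}
  }
}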
Click to read my interactive tutorial on exploring data with Vega-Lite!
Never heard of Observable Notebooks before? Read on!
Data scientists and journalists alike love using “notebook”-style tools such as Jupyter (in contrast to plain text editors) for many reasons, including
Ability to present text, code, and graphics side-by-side
Ability to run and iterate on code one section at a time through “cells”
A better overall coding experience
Observablehq is a free, web-based notebook for data science, founded by a team of folks with roots in the open-source data visualization community (Mike Bostock, Tom MacWright). Unlike with Jupyter, readers can view and run Observable notebooks without needing to install anything, making it an ideal tool for sharing reproducible and interactive analyses.
_______________________________________________________________
Interested in solving complex, unique problems? We’re hiring. | https://medium.com/enigma-engineering/exploring-new-york-city-event-permits-with-vega-lite-f83178ff9a8d | ['Cameron Yick'] | 2018-04-03 14:06:39.412000+00:00 | ['Open Data', 'D3js', 'Vega Lite', 'Engineering', 'Data Visualization'] |
The CIA’s War On WikiLeaks Founder Julian Assange | (Image: Lance Page / t r u t h o u t; Adapted: public domain / Wikimedia)
On behalf of the Central Intelligence Agency, a Spanish security company called Undercover Global spied on WikiLeaks founder Julian Assange while he was living in the Ecuador embassy in London.
The Spanish newspaper El Pais reported on September 25 that the company’s CEO David Morales repeatedly handed over audio and video. When cameras were installed in the embassy in December 2017, “Morales requested that his technicians install an external streaming access point in the same area so that all of the recordings could be accessed instantly by the United States.”
Technicians planted microphones in the embassy’s fire extinguishers, as well as in the women’s bathroom, where Assange held regular meetings with his lawyers — Melynda Taylor, Jennifer Robinson, and Baltasar Garzon.
Morales’ company was hired by Ecuador, but Ecuador apparently had no idea that Morales formed a relationship with the CIA.
The world laughed at Assange when it was reported in a book by David Leigh and Luke Harding that he once dressed as an old woman because he believed CIA agents were following him. It doesn’t seem as absurd now.
A Tremendous Coup for the CIA
WikiLeaks founder Julian Assange as he was expelled from Ecuador embassy. Screenshot of Ruptly coverage.
Julian Assange was expelled from the embassy and arrested by British authorities on April 11. It was subsequently revealed that the U.S. Justice Department indicted him on a conspiracy to commit a computer crime charge, and in May, a superseding indictment charged him with several violations of the Espionage Act.
He became the first journalist to be indicted under the 1917 law, which was passed to criminalize “seditious” conduct during World War I.
The WikiLeaks founder was incarcerated at Her Majesty’s Prison Belmarsh in London. A court found him guilty of violating bail conditions when he sought political asylum from Ecuador in 2012. He was sentenced to 50 weeks in prison. But following his sentence, authorities refused to release him. They decided Assange should remain in the facility until a February hearing, where the U.S. government will argue for his extradition.
The expulsion, arrest, and jailing of Assange represented a tremendous coup for the CIA, which views WikiLeaks as a “hostile intelligence service.”
“It is time to call out WikiLeaks for what it really is — a non-state hostile intelligence service often abetted by state actors like Russia,” Mike Pompeo declared in April 2017, when he was CIA director.
“Julian Assange and his kind are not the slightest bit interested in improving civil liberties or enhancing personal freedom. They have pretended that America’s First Amendment freedoms shield them from justice. They may have believed that, but they are wrong.”
Pompeo added, “Assange is a narcissist who has created nothing of value. He relies on the dirty work of others to make himself famous. He is a fraud — a coward hiding behind a screen. And in Kansas [Pompeo was a representative from Kansas], we know something about false wizards.”
Unwanted Scrutiny
The CIA’s loathing for Assange stems from the fact that the dissident media organization exposed the agency to unwanted scrutiny for its actions numerous times.
In 2010, WikiLeaks published two Red Cell memos from the CIA. One memo from March 2010 outlined “pressure points” the agency could focus upon to sustain western European support for the Afghanistan War. It brazenly suggested “public apathy enables leaders to ignore voters” because only a fraction of French and German respondents identified the war as “the most urgent issue facing their nation.”
The second memo from February 2010 examined what would happen if the U.S. was viewed as an incubator and “exporter of terrorism.” It warned, “Foreign partners may be less willing to cooperate with the United States on extrajudicial activities, including detention, transfer [rendition], and interrogation of suspects in third party countries.”
“If foreign regimes believe the U.S. position on rendition is too one-sided, favoring the U.S. but not them, they could obstruct U.S. efforts to detain terrorism suspects. For example, in 2005 Italy issued criminal arrest warrants for U.S. agents involved in the abduction of an Egyptian cleric and his rendition to Egypt. The proliferation of such cases would not only challenge U.S. bilateral relations with other countries but also damage global counterterrorism efforts,” the February memo added.
On these memos, which were disclosed by U.S. military whistleblower Chelsea Manning, she said, “The content of two of these documents upset me greatly. I had difficulty believing what this section was doing.”
CIA Renditions Further Exposed
More than 250,000 diplomatic cables from the U.S. State Department, largely from the period of 2003–2010, were provided by Manning to WikiLeaks. There were several that brought unwanted scrutiny to the CIA.
The CIA abducted Khaled el-Masri in 2003. He was beaten, stripped naked, violated by a suppository, chained spread-eagled on an aircraft, injected with drugs, and flown to a secret CIA prison in Kabul known as the “Salt Pit.” El-Masri was tortured and eventually went on hunger strike, which led to personnel force-feeding him. He was released in May 2004, after the CIA realized they had the wrong man.
Cables showed the pressure the U.S. government applied to German prosecutors and officials so that 13 CIA agents, who were allegedly involved in el-Masri’s abduction, would escape accountability. They were urged to “weigh carefully at every step of the way the implications for relations.”
Pressure was also applied to prosecutors and officials in Spain. U.S. officials feared that magistrate Baltasar Garzón, who is now one of Assange’s attorneys, would investigate CIA rendition flights.
The cache of documents brought attention to Sweden’s decision to curtail CIA rendition flights after Swedish authorities realized stopovers were made at Stockholm’s Arlanda International Airport.
During the “Arab Spring,” cables from Egypt showed that Omar Suleiman, the former intelligence chief whom Egyptian president Hosni Mubarak selected as his potential successor, had collaborated closely with the CIA. Suleiman oversaw the rendition and torture of dozens of detainees. Abu Omar, who was kidnapped by the CIA in Milan in 2003, was tortured while Suleiman was intelligence chief.
The world also learned that the CIA drew up a “spying wishlist” for diplomats at the United Nations. The list targeted UN Secretary General Ban Ki-moon and other senior members. The agency sought “foreign diplomats’ internet user account details and passwords,” as well as “biometric” details of “current emerging leaders and advisers.” It was quite an embarrassing revelation for the CIA.
As cables spread in the international media, the CIA launched the WikiLeaks Task Force to assess the impacts of the disclosures.
Documents revealed by NSA whistleblower Edward Snowden showed that, during this same period, the security agencies had a “Manhunting Timeline” for Assange. They pressured Australia, Britain, Germany, Iceland, and other Western governments to concoct a prosecution against him.
Several NSA analysts even wanted WikiLeaks to be designated a “malicious foreign actor” so the organization and its associates could be targeted with surveillance, an attitude likely supported by CIA personnel.
‘We Look Forward To Sharing Great Classified Info About You’
The CIA joined Twitter in June 2014. WikiLeaks welcomed the CIA by tweeting at the agency, “We look forward to sharing great classified info about you.” They shared links to the Red Cell memos and a link to a search for “CIA” documents in their website’s database.
By December, the media organization published a CIA report on the agency’s “high value target” assassination program. It assessed attacks on insurgent groups in Afghanistan, Algeria, Chechnya, Colombia, Iraq, Israel, Libya, Northern Ireland, Pakistan, Peru, Sri Lanka, and Thailand.
The review acknowledged such operations, which include drone strikes, “increase the level of insurgent support,” especially if the strikes “enhance insurgent leaders’ lore, if noncombatants are killed in the attacks, if legitimate or semilegitimate politicians aligned with the insurgents are targeted, or if the government is already seen as overly repressive or violent.”
WikiLeaks also released two internal CIA documents from 2011 and 2012 detailing how spies should elude secondary screenings at airports and maintain their cover. The CIA was concerned that the Schengen Area — ”a group of 26 European countries that have abolished passport control at shared borders” — would make things harder for operatives because its member countries planned to subject travelers to biometric security measures.
After CIA director John Brennan had his personal AOL account breached by hackers, the contents were provided to WikiLeaks for a series of publications that took place in October 2015.
Julian Assange. Photo by Ministerio de Cultura de la Nación Argentina (culturaargentina) on Flickr.
U.S. Intelligence Steps Up Effort To Discredit WikiLeaks
As Democratic presidential candidate Hillary Clinton campaigned against Republican nominee Donald Trump, WikiLeaks published emails from John Podesta, chairman of the Clinton campaign. The national security establishment alleged the publication was part of a Russian plot to interfere in the 2016 election.
Assange held a press conference in January 2017, where he countered, “Even if you accept that the Russian intelligence services hacked Democratic Party institutions, as it is normal for the major intelligence services to hack each others’ major political parties on a constant basis to obtain intelligence,” you have to ask, “what was the intent of those Russian hacks? And do they connect to our publications? Or is it simply incidental?”.
“The U.S. intelligence community is not aware of when WikiLeaks obtained its material or when the sequencing of our material was done or how we obtained our material directly. So there seems to be a great fog in the connection to WikiLeaks,” Assange contended.
He maintained, “As we have already stated, WikiLeaks sources in relation to the Podesta emails and the DNC leak are not members of any government. They are not state parties. They do not come from the Russian government.”
“The [Clinton campaign] emails that we released during the election dated up to March [2016]. U.S. intelligence services and consultants for the DNC say Russian intelligence services started hacking DNC in 2015. Now, Trump is clearly not on the horizon in any substantial manner in 2015,” Assange added.
Yet, in the information war between WikiLeaks and the U.S. government, Brennan responded during an appearance on PBS’ “NewsHour.” “[Assange is] not exactly a bastion of truth and integrity. And so therefore I wouldn’t ascribe to any of these individuals making comments that [they are] providing the whole unvarnished truth.”
Special Counsel Robert Mueller oversaw a wide-ranging investigation into alleged Russian interference in the 2016 election. The report, released in April 2019, did not confirm, without a doubt, that Russian intelligence agents or individuals tied to Russian intelligence agencies passed on the emails from the Clinton campaign to WikiLeaks.
CIA Loses Control Of Largest Batch Of Documents Ever
Mike Pompeo, CIA director from January 2017 to April 2018 (Photo: U.S. Government)
In February 2017, WikiLeaks published “CIA espionage orders” that called attention to how all of the major political parties in France were “targeted for infiltration” in the run-up to the 2012 presidential election.
The media organization followed that with the “Vault 7” materials — what they described as the “largest ever publication of confidential documents on the agency.” It was hugely embarrassing for the agency.
“The CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation,” WikiLeaks declared in a press release. “This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.”
“The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive,” WikiLeaks added.
Nearly 9,000 documents came from “an isolated, high-security network inside the CIA’s Center for Cyber Intelligence.” (WikiLeaks indicated the espionage orders published in February were from this cache of information.)
The publication brought scrutiny to the CIA’s “fleet of hackers,” who targeted smartphones and computers. It exposed a program called “Weeping Angel” that made it possible for the CIA to attack Samsung F8000 TVs and convert them into spying devices.
As CNBC reported, the CIA had 14 “zero-day exploits,” which were “software vulnerabilities” that had no fix yet. The agency used them to “hack Apple’s iOS devices such as iPads and iPhones.” Documents showed the “exploits were shared with other organizations including the National Security Agency (NSA) and GCHQ, another U.K. spy agency. The CIA did not tell Apple about these vulnerabilities.”
WikiLeaks additionally revealed that CIA targeted Microsoft Windows, as well as Signal and WhatsApp users, with malware.
The CIA responded, “The American public should be deeply troubled by any Wikileaks disclosure designed to damage the intelligence community’s ability to protect America against terrorists and other adversaries. Such disclosures not only jeopardize U.S. personnel and operations but also equip our adversaries with tools and information to do us harm.”
But the damage was done. The CIA was forced to engage with the allegations by insisting the agency’s activities are “subject to oversight to ensure that they comply fully with U.S. law and the Constitution.” Apple, Samsung, and Microsoft took the disclosures very seriously.
Assange attempted to force a public debate that high-ranking CIA officials did not want to have.
“There is an extreme proliferation risk in the development of cyber ‘weapons,’” Assange stated. “Comparisons can be drawn between the uncontrolled proliferation of such ‘weapons,’ which results from the inability to contain them combined with their high market value, and the global arms trade. But the significance of ‘Year Zero’ goes well beyond the choice between cyberwar and cyberpeace.”
(Note: Josh Schulte, a former CIA employee, was charged with violating the Espionage Act when he allegedly disclosed the files to WikiLeaks. He was jailed at the Metropolitan Correctional Center in New York.)
CIA Exploits New Leadership In Ecuador
Lenín Moreno became president of Ecuador in May 2017. At the time, the U.S. Justice Department had essentially abandoned its grand jury investigation into WikiLeaks, and President Barack Obama’s administration had declined to pursue charges against Assange. But officials in the national security apparatus recognized a political shift in Ecuador and exploited it.
By December, the CIA was able to fight back against Assange and WikiLeaks by installing spying devices in the Ecuador embassy.
Former CIA officer John Kiriakou contended, “The attitude at the CIA is that he really did commit espionage. This isn’t about freedom of speech or freedom of the press because they don’t care about freedom of speech or freedom of the press. All they care about is controlling the flow of information and so Julian was a threat to them.”
Recall, as the Senate intelligence committee compiled a study on the CIA’s rendition, detention, and interrogation program, the CIA flouted restrictions on domestic spying and targeted Senate staff. Personnel even hacked into Senate computers.
“The CIA likes nothing more than being able to operate unfettered,” Kiriakou further declared.
He also commented, “[Moreno] did the CIA’s bidding. I have no idea why he would do such a thing, but he was the perfect person to take over the leadership of Ecuador at exactly the time that the CIA needed a friend there.”
As 2018 progressed, the restrictions imposed by the Ecuadorian government on what Assange was allowed to do on the internet and in his daily work for WikiLeaks intensified.
A doctor named Sondra Crosby, who evaluated Assange’s health on February 23, described the embassy surveillance she experienced during her visit. She left the embassy at one point to pick up some food and returned to the room where they were meeting to find that her confidential medical notes had been taken. She found her notes “in a space utilized by embassy surveillance staff” and presumed they had been read, a violation of doctor-patient confidentiality.
Forcing the removal of Assange from the embassy was a major victory for the CIA, and if prosecutors win his extradition to the United States, the agency will have a hand in how the trial unfolds. | https://medium.com/discourse/the-cias-war-on-wikileaks-founder-julian-assange-4a26b78fa042 | ['Kevin Gosztola'] | 2019-10-07 12:44:03.624000+00:00 | ['Politics', 'Wikileaks', 'News', 'CIA', 'Journalism'] |
Marketing lessons from ‘The Jetsons’ | Marketing lessons from ‘The Jetsons’
What can a 58-year-old TV show teach us about modern sales?
Image via Mark Anderson on Flickr
In 2020, there’s no way to deny that technology is taking over — it’s cleaning your floors, monitoring your home for burglars, paying for your groceries, allowing you to eavesdrop on live-in guests, and controlling your television and computer viewing. Mostly it’s convenient and fun to use. Other times it can be creepy.
But in the ’60s (or ’80s) “The Jetsons,” the family managed to make their tech-savvy businesses and home life more cool than invasive. There were no Echo users who had recorded conversations sent to someone’s employer. Hackers weren’t breaking into Amazon Ring to harass small children. Alexa and Google Home weren’t targeting users for phishing out unauthorized password changes.
But if you look closely enough, the Jetsons had to do their fair share of marketing cleanup in a tech-crazed world too. Here’s how I saw them do it. | https://medium.com/we-need-to-talk/marketing-lessons-from-the-jetsons-b40ecfabe269 | ['Shamontiel L. Vaughn'] | 2020-10-12 02:35:37.146000+00:00 | ['Technology', 'Marketing', 'Advertising', 'Space', 'Inventions'] |
How to Write a Good Essay | So you need to learn how to write a good essay. This may seem like a pretty intimidating task, but it’s really not that bad when you take the time to know and understand what you’re doing.
A standard essay has a lot of working parts. There’s the formatting, thesis statement, writing structure, grammar and punctuation, and much more. It can seem overwhelming when you think about how many elements you need to remember. But it doesn’t have to be that hard. With the right advice, you can get ahead and make sure that you turn in a paper that will blow your professor’s mind and get you the grade you need to ace your class.
Ready to learn how to write a good essay? We’ll walk you through it, from beginning to end. With our help, you can learn and understand exactly what goes into an A+ essay. Let’s start at the beginning.
Types of Essays and Papers
First, it’s good to take a look at the different types of essays that you could be writing. Each type of essay will have different requirements or formats that you should follow in order to complete the best work possible.
Here are some of the more common essay assignments you may need to write during your time at school:
● Argumentative Essay: This type of essay will present an argument to the reader and provide solid evidence as to why they should agree with your stance.
● Research Essay: A research essay takes an in-depth look at a specific topic using lots of reliable and academic sources, facts, and other data. It’s similar to the expository essay below.
● Expository Essay: This type of essay is used to explain something without taking a particular stance. When writing this paper, assume that you are writing for an audience that knows nothing about the topic and provide them with facts and data.
● Compare/Contrast Essay: With a compare/contrast essay, you are taking two things and analyzing them to showcase their similarities and differences.
● Personal or Reflective Essay: Generally, this type of essay doesn’t always follow typical format and can make use of first-person voice to reflect on your thoughts and experiences about something specific.
● Literature Review: A literature review essentially provides an overview of the literature and research that has already been done about a particular topic.
● Book Review: A book review essay is done to provide a critical analysis about a book or other piece of literature. It generally includes a summary and assessment.
How to Start an Essay
If you’re not overly familiar with how to write a good essay, it can be tricky to know where to start. This is the point where most people sit down, stare at a blank document, and start to get stressed. Don’t let yourself get stressed out before you’ve even done anything.
Every good essay starts with a topic and a plan. Begin by determining which type of essay you’re going to write. This helps you pick the right topic. For example, if you’re writing an argumentative essay, you want to make sure that you choose a topic you have an opinion about and can argue one way or another. If you’re writing a research paper, you want to make sure you choose a topic that you can find a lot of academic research about.
So, with that being said, it’s time to choose your topic.
Choosing the Right Topic For Your Paper
Choose your topic wisely. A good topic makes a big difference when it comes to your paper. It’s what drives all of your research, defines your writing, and keeps people interested — including yourself. Do you really want to spend the next few weeks writing about some topic you couldn’t care less about? Probably not. Don’t make things harder on yourself. Put some thought into this portion of your paper, or you’ll really regret it when you sit down to write.
It Should Be Interesting to You
You’re going to be doing a lot of reading and writing about this topic, so you should always choose something you’re interested in wherever possible. Sometimes you’re given your topic and don’t have a choice, but you can still spin it so that it’s something that interests you. This is incredibly important. You’re going to be sifting through academic journals and dedicating a lot of your time becoming an expert in this topic. Make sure you’re not going to get bored.
Being interested in the topic also helps you write content that really engages your reader and hooks them right away. When you’re excited about something, you want to show all of the facts and present the best argument about that topic. If you aren’t interested in what you’re writing about, how can you sell that topic to your reader?
Do the Research First
Start with some research. Don’t make a decision until you’ve been able to take a look at what’s out there and how much research you’re actually going to find about it. Doing initial research often helps you notice trends in the topic and whether certain research questions come up more than others. For example, you may find that a certain question or issue keeps popping up as you read. If you keep seeing those patterns, they can guide you, because they may point to something you want to look into.
Start Broad, Then Narrow It Down
Your topic should be something that you can narrow down to one statement or argument. Start with a broad topic that you know you want to write about (or that you have to write about as per your teacher’s request). Then, think about smaller topics within that broad argument, and figure out how you want to get specific. Find your niche and go with it.
You can’t simply take a broad topic and write about it. This is not the best way to learn how to write a good essay. You’ll find way too much research to actually make a point about something, and your essay will just be filled with generic information. This makes it really hard to find the focus of your paper, which will score you a lower grade.
For example, a topic about World War II would be really broad for one essay. Instead, you could narrow that topic down to one specific topic about World War II. So, if you’re writing an argumentative essay, you could choose the topic “why aerial warfare during World War II changed modern warfare” or “contributions by women during World War II.”
However, be cautious about being too narrow with your topic. Make sure you can still find enough relevant information before you start writing. And don’t worry — you can always adjust your thesis statement after you start writing. In fact, this happens to the best of the best more often than you can imagine. It’s all part of the writing process.
Crafting the Perfect Thesis Statement
Your thesis statement is the most important part of your essay. It’s the argument or statement that will guide the rest of your paper. You will be using your thesis statement to structure your entire paper, guide your research and determine what points you should include, and to formulate your overall argument that indicates your knowledge and opinions on the subject.
A thesis statement is basically your answer to a research question. Think about what you want to answer within your paper. This question could be something basic, such as “why were William Shakespeare’s plays and sonnets important to the English language?” Once you have your question, think about your answer, and put it into a sentence. So, for this particular question, your thesis statement could look something like this:
William Shakespeare’s plays and sonnets were important to the English language because they developed many words and terms still used today, he was the first writer to use modern prose, and he set a precedent that today’s playwrights still follow.
Now, this is still a broad thesis statement because you could fill up pages and pages about each of those arguments. But you can see the idea of how we are trying to narrow down your thesis and formulate arguments that answer the research question you’ve selected. Don’t be afraid to continue narrowing down your thesis and refining it until you’ve hit something perfectly narrow.
A thesis statement should also act as an outline for your paper, which tells your readers what you’re going to present to them and how you will be organizing that argument. It is not uncommon to see thesis statements that state outright what the paper is aiming to do. For example, you could use a thesis statement that looks like this:
This research paper will examine the contributions William Shakespeare made to the English language by analyzing his use of modern prose in three of his plays: Richard III, Hamlet, and Titus Andronicus.
Generally, your thesis should be a maximum of one to two sentences. If you can’t explain your argument or the purpose of your paper within two sentences, you need to narrow it down further or find another way to describe what you’re thinking.
Decide On the Right Essay Format to Use, Then Make an Outline
Once you’ve decided on your perfect thesis statement, you can start to plan out how your essay will be structured in a nice outline. Some professors will ask you to provide your outline before you start the research paper as an initial assignment. However, even if your professor doesn’t ask for this, you should still make sure you always use an outline to help yourself as you write.
This is one of the biggest secrets when learning how to write a good essay. A good outline always gives you something to follow and helps you stay on track without getting sidetracked. Once you do a couple papers using an outline, you won’t want to write one without an outline again.
The Importance of an Essay Outline
Making an outline to follow for your essay can be a major help when it comes to your research and writing. It will help you stay on track, and guide you as you begin to write your paper, ensuring that you stay organized and follow your thesis statement. A structured essay outline also helps you understand what you need to write about and where you should look for sources and information. Then, you can stay on track and make sure you are only looking for information that helps your paper without getting distracted by unnecessary details that don’t matter to your paper.
Your outline should, of course, follow the specific format for your essay. The professor of your course will have likely provided you with essay assignment instructions, which sometimes include the format you should be using. Determining which essay format to follow comes down to two main factors: the type of essay you’re writing, and the referencing style you’re using. Sometimes your professor will tell you which style guide to follow, while others will give you the choice.
Standard Essay Format: Building a Tasty Burger
Most essays follow the standard format of an introduction, body paragraphs for each argument or statement, and a conclusion. You will often see this type of essay format being described as the Hamburger Outline. That’s because the meat, cheese, and toppings (your body paragraphs and the bulk of your argument) are in the middle, while the buns hold it together and round it out (your introduction and conclusion). This also goes for each individual paragraph: each point needs a topic sentence and a conclusion sentence to round it out, just like burger buns.
Here’s a basic outline you should follow according to the standard burger outline:
1. Introduction Paragraph
a. The first sentence should be catchy and attention-grabbing.
b. Then, introduce the topic and provide some basic background about what you’re going to be covering.
c. The last line should be your thesis statement.
2. Body Paragraph 1: First Argument or Point
a. Start with a topic sentence introducing the point you’ll be making in that paragraph.
b. Use evidence and sources to make your points.
c. Write a transition sentence that concludes your argument and leads into the next paragraph.
3. Body Paragraph 2: Second Argument or Point
a. Start with the topic sentence introducing your point and arguments.
b. Use evidence and sources to make your points.
c. Add the transition sentence to lead into the next paragraph.
4. Body Paragraph 3: Third Argument or Point
a. Start with your topic sentence.
b. Add your evidence.
c. Conclude with your transition sentence.
5. Conclusion Paragraph
a. Restate your thesis statement (not word for word, though).
b. Summarize your arguments and provide further questions/thoughts, or relate your arguments to a greater context.
Specific Essay Formats For Different Types of Papers
If you’re writing a specific type of essay, your paper structure might look slightly different than the standard burger format. However, they’re all going to follow the basic concept of the introduction, body paragraphs, and conclusion.
For example, argumentative essays look a little different. Argumentative essay format generally contains a section where objections or opposing viewpoints are expressed and rebutted. You want to make sure this comes after your main arguments and before your conclusion. Some argumentative essays also include a section for rebuttal after each main argument, showcasing that you have acknowledged both sides of the story.
How to Write a Good Essay Using the Proper Referencing Styles
It’s important that you properly use the specified referencing style in your paper. You could lose marks simply for not following these guidelines. These are lost marks that could easily be avoided if you check the online referencing guides and take the time to follow the right instructions set out by each style manual.
There are usually three main types of referencing styles used to write most academic papers. They are MLA, APA, and Chicago/Turabian. If your program is more specialized, you may find that you are required to use other types of citation, such as ASA or Harvard. However, these three are the most common styles you will encounter and you will likely use at least one of them throughout your time in school.
MLA Citation
Modern Language Association (MLA) citation is a general format typically used in the humanities. A typical in-text citation using MLA contains the author’s last name and the page number. Here is an example (with a completely fabricated fact):
Shakespeare’s Macbeth is commonly associated with the Gunpowder Plot of 1605 and the subsequent execution of Henry Garnet for crimes of treason (Hudson 22).
When using MLA, your sources will be listed at the end of the paper in a separate Works Cited page. For a full guide on MLA citations and references, visit our handy MLA citation guide. However, to give you some idea, a typical MLA Works Cited entry for a book looks like this:
Hudson, Mila. A Global Guide to Shakespeare. Philadelphia, PA: University of Pennsylvania Press, 2008.
Papers using MLA citation style do not require a title page and usually just have the student’s name, the professor’s name, class title, and date in the upper left corner, with the title centered on the next line. Page numbers are in the top right corner with the student’s last name and the page number.
APA Citation
American Psychological Association (APA) style is commonly used for papers within the social science and behavioral science fields. It’s a little trickier than MLA because there are some specifics you need to follow. In-text citations include the author’s last name, date of publication, and page number. They look like this:
One study found that one in four Americans are diagnosed with ADHD (Ingers, 2004, p. 324).
Sources are listed at the end of the paper on a separate References page. Generally, titles are written in sentence form (with capitals only for proper nouns and at the beginning). A typical reference for an academic journal would look like this:
Ingers, E. (2004). ADHD clinical trial studies in small town America: Finding solutions for young children.
The Journal of Social Science Research, 14(3), 296–340.
Your paper should include a title page with the name of the paper centered on the page, then the institution name and the student’s name on their own lines approximately two to three lines below the title. Page numbers are in the top right corner, with the title of the paper in all capitals in the top left of the page. The title page is structured slightly differently: there, the all-caps title in the header is preceded by the label “Running head:”.
Here is an in-depth guide on how to cite specific sources in APA, including some examples if you’re not sure about what you’re doing.
Chicago/Turabian Citation
Chicago/Turabian citation is a very common citation style for history papers, but it is also used for fine arts and business-related subjects. It uses the notes and bibliography format, which consists of footnotes at the bottom of each page with a short-form reference and a full bibliography at the end of the paper. Your first footnote from a specific source will be a full version, slightly modified from the bibliography entry, and any footnotes that follow are shortened.
Here is an example using a completely made up source from a peer-reviewed journal. The in-text citation would include the sentence followed by the footnote number.
First Footnote: John Hughes, “Kamikaze Fighters in World War II,” The Journal of War History 22, no. 1 (March 2002): 68.
Subsequent Footnotes: Hughes, “Kamikaze Fighters,” 68.
Bibliography Entry:
Hughes, John. “Kamikaze Fighters in World War II.” The Journal of War History 22, no. 1 (March 2002):
50–80.
Papers using Chicago-style citation generally include a title page, with the title of the paper centered in the middle, and then the student’s name, the professor’s name, class title, and date on their own lines a few spaces down from the title.
Don’t Overlook the Introduction
The introduction of your paper is extremely important. When learning how to write a good essay, think about it from the perspective of the reader. One of the first things you’ll notice is the introduction. This is where you’re going to hook your reader and write something catchy that makes them want to keep reading. You have to give your reader enough information to understand what you’re getting at, without spilling the arguments and evidence you’re going to use in the body of the paper. Essentially, you’re explaining to your reader why it’s worth it for them to read the rest of your paper.
Start with your first sentence. Think of something that will make someone become unable to resist reading to find out more. You should avoid using cliches when you’re trying to think of something catchy. This can be hard because we’re so used to seeing those cliches in other areas of our lives, but they really have no place in a paper and often professors will dock you for being unoriginal.
When writing the rest of the introduction, start broad and then narrow down until you come to your thesis statement. It’s best to write with the assumption that your audience doesn’t know much about the topic. Give your audience a bit of context as to what you’re going to talk about so that they have enough background information to understand the points you’re making. For example, if you’re writing a paper about one of the characters in a book, give the audience a small summary about the book and the author.
If you need to, leave your introduction and write it after you’ve written the rest of the paper, or at least some of the main body paragraphs. Sometimes you need a little bit of context from the rest of the paper to understand what you need to be telling your reader, so it can be helpful to do this afterward.
Body Paragraphs
All essays, regardless of format, should be separated into different body paragraphs for each main point you’re making. Each body paragraph should begin with a topic sentence that introduces the specific point you’ll be making in that paragraph. This is almost like a mini thesis statement introducing that specific detail. At the end of each body paragraph, you should have a concluding sentence that acts as a transition to the next paragraph, whether that’s a new topic point or your conclusion.
Basically, you want to follow the same structure you would use for your introduction. Start broad, and then narrow it down until you’ve included the details and evidence to argue your point. Use as many citations from sources as you need to prove your point, but always make sure that you explain yourself and justify why that information is relevant. You need to be able to contextualize your sources and show that you have a broader understanding of the subject at hand.
There are two main styles when incorporating research and sources into your body paragraphs: induction and deduction. When using induction, you are taking specific details and information and forming a general conclusion. With deduction, you’re doing the opposite. You take general information and details, and narrow down to a specific conclusion about those details. Induction is based on facts and observations, while deduction is based on reasoning.
So, for example, if you are using induction to show that Macbeth is not a qualified leader in Shakespeare’s Macbeth, you’d prove this by showcasing how many people died under his watch and how many enemies he created. On the other hand, if you are using deduction to prove that Macbeth is not a worthy leader, you could argue that good leaders don’t kill kings and show remorse for others. Therefore, since Macbeth does not show qualities of a good leader, he is not one himself.
Nailing Your Conclusion
The conclusion is where you’re going to sum up everything. This is where you take your paper, package all the information, and put a nice bow on top to present it.
All conclusions should begin with a sentence re-stating your thesis statement from the introduction. These should be the same points, but paraphrased in a new way. After that, restate some of the general information that takes you back to your original points. Don’t start introducing new ideas and concepts. If you haven’t already talked about it in the paper, don’t mention it now. This is a summary.
A good conclusion provides the reader with something to think about. Think of this like the “so what?” portion of the essay. Why should your reader even care about what you have to say? Why are you talking about this? This is where it’s a good idea to relate your information to the current day or explain why it’s a significant subject to talk about now. For example, if you’re writing that paper about aerial fighting in World War II, talk about why this is relevant for us to talk about today. You could do so by mentioning the way our modern wars are fought from the skies and that aerial warfare paved the way for nuclear weapons, which changed the game for everyone.
Lastly, your final sentence should leave an impression on your reader while concluding everything in your paper. Be sure to go out with a bang!
Reliable Research is Key
With most good essays, research will be key. Sometimes you’ll have a specific number of sources you need to use to hit minimum requirements for your paper. Other times, it’ll be up to you and what you find in your research.
You will have already done a little bit of initial research when deciding on your topic and thesis statement, so now you can expand on that. Don’t be afraid to broaden your horizons. Check books, browse academic journals, and even ask your local librarian if you need to.
If you really want to know how to write a good essay, pay attention to your sources. The strongest essays are backed up with a good variety of primary and secondary sources, with only reliable and credible information. Here is a breakdown of the main types of sources you may use when writing essays.
Use Academic, Peer Reviewed Sources
Preferably, unless your teacher has specified otherwise, you want to use reliable sources from your school’s library or online academic database. These should always be peer reviewed. You can find this in the journal’s guidelines, the specific article details, or by filtering for peer reviewed articles when searching your online library.
Stay away from Wikipedia and other online encyclopedias. Professors hate these because they aren’t peer reviewed and can often be edited by just about anyone.
Books are great too, but sometimes they can be risky because many people write them with bias about a certain subject or topic. When using books, you need to be sure that you are using something that is published by a historian, a professor, or another expert in the field. However, this depends on the subject of your paper. If you’re writing about a certain event in history and you’d like to use a book written by a firsthand witness, use quotes from their book sparingly to emphasize your point.
Primary Sources
There are two main types of academic sources that could be included in your paper: primary and secondary sources. Secondary sources are those mentioned above — anything that is peer reviewed or written from the perspective of someone providing an analysis of an event or other subject.
Primary sources are generally firsthand accounts or documents from a specific event or time. They are commonly used in history and the humanities, but could apply to many other types of essays.
Some common types of primary sources include:
● Letters
● Diary entries
● Reports
● Interviews
● Government documents (such as the U.S. Constitution)
● Newspaper articles or advertisements from the time period
● Manuscripts or plays
● Other correspondence, such as ship’s logs
● Journal articles that provide new research conclusions or results
Finding primary sources can be difficult, but many of these documents are available online. For history papers, try the Internet History Sourcebooks Project. If you’re looking for old government documents from a particular time period, you can try your country’s National Archives. Your school’s library should also have its own collection available for you to use.
How to Write a Good Essay Title For Your Paper
Of course, your paper will need a catchy and awesome title. It’s best to save this step for last, when you’re done writing your essay. If you have a working title at the beginning, that’s great. But go back to this at the end, when all of the details are fresh in your mind and you know exactly what the content of the essay includes.
A good title should be interesting, unique, original, and relate directly to your thesis. Yes, that seems like a lot for one title. But it’s an important part of getting your reader’s attention and telling them what you’re going to be talking about. It will establish the tone, the context, and the premise of the paper.
So, how do we decide this title? Don’t be afraid to get creative. Write out a bunch of options and see which one catches your eye. The more you draft, the easier it is to find something that works. You can even ask a friend or classmate to take a look and give you feedback on which title they like best. When in doubt, use a how or why question.
More Essay Writing Tips to Follow as You Go
As with many other things in life, writing an essay has its fair share of tips and tricks that many writers develop over the years. Some of these seem basic, but they’re easy to overlook when you’re worried about getting everything done right. Here are some additional essay writing tips to improve your writing that could help you learn how to write a good essay.
● Don’t use a first person voice, EVER, unless your teacher has specifically requested it or you are writing a personal/reflective essay.
● Avoid contractions or casual language.
● Always proofread and edit your work. Re-read the paper and check for clarity issues, smoothness, and flow.
● Be open to feedback from peers. This is how you learn and grow.
● Read all of the instructions carefully. Your professors are expecting you to follow directions and are grading you based on these expectations.
● Slow down, don’t rush, and give yourself time. It’s easy to miss details when you’re pushing yourself to go faster.
● Avoid run-on sentences.
● Keep it consistent. Make sure you’re using the same tense throughout the paper, and that you’re sticking to one style of spelling. For example, don’t start an essay in American spelling and then finish it in British spelling.
● Don’t stress yourself out! Take breaks and reward yourself for a job well done.
Still Can’t Figure Out How to Write an Essay? Get Essay Writing Help From Homework Help Global When You Need It
All of this seems like a whole lot of information to take in. When it comes to writing essays and getting ahead in school, it never hurts to ask for help. Sometimes you just don’t have time to balance a social life or a part time job and the amount of schoolwork that keeps piling up. If you’re buckling under stress and piles of work, don’t hesitate to reach out to a professional who can help.
For starters, take a look at Episode 57 of The Homework Help Show, where we go over how to write a good essay. Our host, Cath Anne, goes over some awesome tips and tricks that can help you feel more comfortable with your assignments. If you still feel a little lost, consider looking into a professional custom essay writing service.
Homework Help Global provides reliable essay writing services that can help you get your paper done to the highest possible standard. Our team of highly educated professional and academic writers are here to take a load off your shoulders and complete your assignments with utmost care and consideration to every detail. We take care of the hard work, so you don’t have to worry about trying to check all of the boxes and meet all of the requirements.
If this sounds like something that you could benefit from, get in touch with us for a quote today. Our team is on hand and ready to take on your assignments. | https://medium.com/the-homework-help-global-blog/how-to-write-a-good-essay-d9447d9aa5ee | ['Homework Help Global'] | 2019-07-31 20:53:54.425000+00:00 | ['Media', 'Writing', 'Essay', 'Education', 'Essay Writing'] |
The Hidden Impact of Product Messaging on Your Scaling Business | Time for the obvious statement of the day: the startup journey is chock full of challenges. From fundraising to selling to just keeping the lights on, founding teams have plenty to deal with. But there is one thing that should be at the top of a startup’s to-do list: positioning and messaging.
I know what you’re probably thinking. Of all the things on that to-do list, why is positioning and messaging so important?
It’s so important because having the wrong messaging impacts all aspects of your business. Whether you’re pitching, selling, or marketing, messaging is involved. And creating the right messaging is not a one-and-done process. It takes trial and error, feedback, validation, and revisiting the drawing board numerous times to finally get the story right. And even when you do get it right, the story will have to scale with your business, which means that your messaging journey is never really done.
This is one lesson that Eric Prugh learned along his journey co-founding PactSafe, a platform that empowers high-velocity contract acceptance through seamless clickwrap agreements. In the early days, the PactSafe team was constantly testing and adjusting to find the right story to tell. While PactSafe is scaling and growing, it wasn’t always that way.
As Eric explains, “It took us a year and some change to really get that first big deal across the line. And you know I think it was obviously a combination of the right message, right time, right product, but it’s also aligning to the right type of person that’s going to align to the product and understand the value.” Through the journey to finding the right message, Eric learned a lot of other lessons along the way, and he shared those during his guest appearance on the Better Product podcast. Here are a few of the things he learned.
Your lack of sales success can either be a product problem or a message problem.
Some sales meetings don’t convert, and that’s just the name of the game. But if you’re striking out repeatedly and don’t know where you’re going wrong, you could have a product problem or you could have a message problem. If it is a product problem, then there are things you can do to address it and make sure that it is solving real problems. But if you’re like Eric and you’re confident that your product is addressing real problems and providing value, then it may be time to check in on the story you’re telling.
Strong messaging is more than just explaining what you do. In order to craft a compelling story and resonate with your audience in a way that drives them to buy, you need a real understanding of the value you provide, the right stories to tell, and the personas of the people who can derive the most value from your product. That way, you can be confident that the story you’re telling is the one that your audience wants to hear and it is the one that will make your product irresistible.
“It was really challenging in the beginning to get people to care. And it was because we started out with a very horizontal focus where we said anybody with online terms and conditions can use this product. And that is such a classic startup mistake to think about it that way.”
Refining your message takes persistence (and a lot of questions).
Finding the right messaging doesn’t happen overnight, and you certainly can’t do it from a bubble. In order to find the message that resonates with customers, you need to talk to customers over and over again. You should be obsessively looking for chances to test out your ideas and making in-the-moment tweaks to be constantly honing your message for each audience.
Start it early, seek validation, and don’t be afraid to get real feedback. In the early days of PactSafe, Eric and team were constantly testing, and it was from those failed meetings that they learned the mistakes they were making. As Eric explained, “It was really challenging in the beginning to get people to care. And it was because we started out with a very horizontal focus where we said anybody with online terms and conditions can use this product. And that is such a classic startup mistake to think about it that way.”
After learning lessons, making changes, and collecting wins to cite in the process, they were able to align on the right messaging that spoke to the buyer’s pains and helped convert sales.
“If we didn’t have customers to talk about that were using the product, if we didn’t have the stories that related back to the real value that PactSafe provided… That’s what got us to the right messaging. That’s what got us to the right personas, and to think about things the right way.”
You need to scale your message alongside your product, and make sure that you have the right team to support it.
When you’re in startup mode, you’re likely extra responsive to your customers because you want to keep them around. But as you scale you’re going to have more differing opinions and it can be easy to lose sight of what you’re working towards. But just like you use a product roadmap to guide product growth, you need to align your message to the high-level vision and value proposition of your product, and understand when to say no. As Eric of PactSafe explains, “We definitely fell prey to being too responsive in the beginning and I think there’s a little bit of residue of that even today.”
Don’t get me wrong, it is important to listen to your customers and make sure you’re scaling in a way that aligns to their needs and goals. But resist the urge to be too accommodating and instead root your team in the foundational position and message that you want to communicate in the market.
The best way to do this is to hire someone to lead the effort. In the beginning when there is just a handful of people responsible for the message externally, it is pretty easy to keep it aligned. But as your team grows and your organization becomes more complex, product marketing can become a big blind spot for the business. If you have groups of people out there telling a different story, it will cause confusion, but bringing product marketing in house will control the message and align it to the product’s value. | https://medium.com/better-product/the-hidden-impact-of-product-messaging-on-your-scaling-business-15cd31504957 | [] | 2019-06-27 17:38:20.790000+00:00 | ['Scaleup', 'Startup', 'Product Marketing', 'Messaging', 'Positioning'] |
Stay The Course | Image courtesy of the Toronto Star
This is a chart from the fine folks at the Toronto Star, using information from Johns Hopkins University’s COVID-19 dashboard. This information represents the growth in a variety of countries around the world.
That green line that represents China finally starts to make the bend around the middle of February, when they finally ease the growth of COVID-19. This is approximately a month after the city of Wuhan was first quarantined and isolated from the rest of China.
It is also approximately a month before the easing of the physical distancing and quarantine restrictions in Wuhan.
It takes roughly two months of serious self-isolation, physical distancing, and quarantine measures to get this pandemic under control.
We aren’t out of the woods yet, folks.
Most of us in North America are only a couple of weeks into serious self-isolation and quarantine measures, and the numbers are incredibly different in the US and Canada.
Look at the bright orange line that represents US cases of COVID-19. It shows no sign of relenting in its ever-increasing numbers. It is a very ugly line that should be sparking fear in every American, especially those that don’t have access to adequate and affordable healthcare.
Now, look at the red line that represents Canada. While the US cases have increased on a sharper curve than any other country (including China), Canadian cases have increased on a much slower rate.
Here in Canada, we have taken physical distancing seriously, and enacted it early. The results are showing in our numbers and our much slower rate of increase. This isn’t to say that there aren’t Americans who are doing everything they can, just that it appears to have been more successful in Canada to this date.
The bigger story from this chart? We’ve got a long way to go. Until we start to see consistent change in the rate of daily increase, we can’t even begin to think about resuming daily life. The World Health Organization said in its March 30 briefing that the data being reported now represents the situation two weeks ago. We won’t know about today until two weeks from now.
So, it doesn’t matter how restless you are, nor how much you miss your loved ones. We have to stay the course and stay isolated until we are told otherwise.
We have done hard things before.
We will do hard things again.
This is just one more hard thing, and we can do it.
But we have to do it together. | https://matthewwoodall.medium.com/stay-the-course-2cb015a24582 | ['Matthew Woodall'] | 2020-03-31 13:23:43.632000+00:00 | ['Challenge', 'Health', 'Data', 'Life', 'Covid 19'] |
Singlism Is Officially in the Dictionary Now | Singlism Is Officially in the Dictionary Now
It defines the stereotyping and stigmatizing of people who are single.
Photo by Edgar Gomez on Unsplash
Coined by Dr. Bella DePaulo, the word singlism has just been officially added to the Cambridge English Dictionary. The definition in the dictionary reads as follows:
Singlism: unfair treatment of people who are single (= not married).
The word has appeared in Dr. DePaulo’s work since at least 2005. Much has been said about how singles are perceived in a society that values marriage and life-long partnerships so highly, but having a word like “singlism” in the dictionary highlights how important it has become to address the negative bias against people who are single and their lifestyle.
Author of Singled Out: How Singles Are Stereotyped, Stigmatized, and Ignored, and Still Live Happily Ever After, Dr. DePaulo has long been an advocate of singles. She defends that the single life isn’t devoid of happiness and fulfillment like so many tend to believe, but that it’s instead a valid lifestyle choice that fits some people even better than marriage.
Dr. DePaulo also highlights the importance of minding microaggressions against singles. She recognizes the problem isn’t as damaging as microaggressions rooted in racism or sexism, but she points out that the bias against singles can be just as pervasive — only we’re too used to ignoring it and thinking of it as “normal,” or “just the way things are.”
Many singles can definitely relate to examples of singlism in their daily lives. From being asked why are they still single to having people assuming their lives are easy and free from any complication due to not having to work things out with a partner on a daily basis.
While the impact of microaggressions is still the subject of debate, there’s no denying our culture at large often implies that there’s something wrong with people who are single and that the anxiety over the social pressure to partner up can be draining on a person’s mental health. Understanding singles anxiety through the lens of singlism can help find new solutions to fight it, including broader validation and social acceptance of the single lifestyle.
Having singlism included in the dictionary may not solve any issue of prejudice or negative bias by itself, but having a word that defines that issue is an important tool in discussing it. Despite people like Dr. DePaulo dedicating their life’s work to it, it’s a discussion that still has a lot to develop in the future. | https://medium.com/acid-sugar/singlism-is-officially-in-the-dictionary-now-830ec98d1d19 | ['Renata Gomes'] | 2020-12-19 14:25:17.720000+00:00 | ['Relationships', 'Love', 'Life Lessons', 'Psychology', 'Lifestyle'] |
Forms with Formik & Yup | Overview
Capturing data from users is important when developing mobile apps and websites. When we are building a React website or a React Native mobile app, one hurdle people always encounter is picking a reliable forms library to facilitate capturing and managing information — the UI components might be easy to build, but capturing the input, providing validated and meaningful feedback, and submitting the form is a lot to think about.
Formik is a library that has taken into consideration all the downsides of any previously built libraries and introduced an easy way to handle forms without any ‘magic’. There is a small learning curve, but once we pass this, the library makes form state management a breeze!
This article will focus on creating a simple login page using Formik with code examples to reference. By the end of this article, the mentioned learning curve should be addressed. For context, the structure will be:
1. General Setup
2. Validate inputs and provide meaningful feedback to the user
3. Understand how to submit a form
UI Set Up
React-Bootstrap comes with a lot of basic components to make a decent form — not super flashy, but the setup is as below; the following code is the UI setup only and has no logic implemented yet.
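Since the embedded snippet doesn't survive in this text version, below is a minimal sketch of what that UI-only setup might look like; the component and field names here are my own assumptions, not the article's original code:

import React from "react";
import { Form, Button } from "react-bootstrap";

// UI only: no Formik state, validation, or submit logic wired up yet
const LoginForm = () => (
  <Form>
    <Form.Group controlId="loginEmail">
      <Form.Label>Email</Form.Label>
      <Form.Control type="email" placeholder="Enter email" />
    </Form.Group>
    <Form.Group controlId="loginPassword">
      <Form.Label>Password</Form.Label>
      <Form.Control type="password" placeholder="Password" />
    </Form.Group>
    <Button variant="primary" type="submit">
      Log In
    </Button>
  </Form>
);

export default LoginForm;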
Note: You can use any UI library, some have special Formik plugins — with react-bootstrap we can use the Formik library directly. | https://medium.com/javascript-in-plain-english/forms-with-formik-yup-5ae384352f47 | ['Sufiyaan Mitha'] | 2020-11-19 08:49:29.776000+00:00 | ['React Native', 'Technology', 'Software Development', 'React', 'JavaScript'] |
Coping with Trauma During the Holidays | Post Traumatic Stress Disorder or PTSD is defined as a condition that includes flashbacks and memories of a traumatic event, avoidant behaviors, anxiety, and depression. The holidays can be filled with triggers for those who suffer from PTSD. Whether you are spending more time with family* or social distancing in your own home, the holidays can provide its own set of challenges. Your triggers do not define you and there are ways to cope with them when you experience them. Below is a list of ways you can have a safe and healthy holiday season.
Join a group therapy session or sign up for one on one therapy sessions
Group therapy can be helpful to show you that you are not alone. Many people can share their different experiences and coping skills that may be able to help you through this stressful time. One on one therapy is another option that can assist you in sorting through and coping with uncomfortable feelings and memories. There can be community programs in your area which may provide these services at little to no cost.
Find some “Me Time”
If you are spending a lot of time with your family, find a moment that you can spend doing an activity that will improve your mental health. Go for a walk, meditate, or work on a craft or hobby; make sure you find some time, especially if you are feeling anxious or overwhelmed. Exercise is another way to find that “me time,” and it has also been shown to help with the symptoms of PTSD. You can go for a jog, watch a yoga tutorial on YouTube, or ride your bike; there are many ways to relieve that stress.
Reach out to others
It isn’t unusual for someone who is suffering from PTSD to cut themselves off from their support system and isolate. Isolation, however, can worsen PTSD symptoms. Reach out to trusted friends or family members who may be able to help you through any depression or anxiety.
Set boundaries with others — and yourself
When spending time around family during the holidays, it is important to address any boundaries that you may have. Setting up boundaries can help you cope with trauma and sort through feelings in your own way. If it is safe to do so, talk to a trusted family member about what your boundaries are and what they look like. For example, if you need extra “me time,” communicating with a family member that you are living or staying with can help open up dialogue so that you can get what you need without any conflict.
You can also set up boundaries with yourself as well. You can give yourself permission to not engage in a discussion that could be harmful to your well-being, set up boundaries with yourself with stress eating and drinking, and also not allow yourself to feel pressured to buy a lot of gifts. Do what you can with what you have, and the sentiment will shine through.
Overall, remember to be gentle with yourself. At the end of the year, we may need to be even kinder to ourselves as we process all that we have been through. Just remember that you are worth feeling better.
*The CDC recommends only celebrating with those in your own household.* | https://medium.com/matthews-place/coping-with-trauma-during-the-holidays-e3d3a2038643 | [] | 2020-12-07 18:10:44.626000+00:00 | ['Mental Health', 'PTSD', 'Holidays', 'Trauma', 'Family'] |
Redis vs Memcached — Which one to pick? | Redis vs Memcached
When people talk about the Performance Improvement of an application, the one integral factor that everyone considers is server-side caching. Identifying the right cache provider that suits the requirement is an integral part of adopting the server-side caching.
Redis and Memcached are widely used open-source cache providers across the world. Most of the Cloud providers support Redis and Memcached out of the box.
In this article, I would like to share similarities and differences between the Redis and Memcached and when do we need to go for Redis or Memcached.
Similarities between Redis vs Memcached
Key-value pair data stores
Supports Data Partitioning
Sub-millisecond latency
NoSQL family
Open-source
Supported by the Majority of programming languages and Cloud providers
Redis vs Memcached — Feature Comparison
DataTypes Supported
Memcached: Supports only simple key-value pair structure
Redis: Supports data types like strings, lists, sets, sorted sets, hashes, bit arrays, geospatial, and hyper logs.
Redis allows you to access or change parts of a data object without having to load the entire object to the application level, modify it, and then re-store the updated version.
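To make that difference concrete, here is a minimal Python sketch, assuming a local Redis and Memcached with the redis-py and pymemcache client libraries (hosts, ports, and key names are illustrative):

import json
import redis
from pymemcache.client.base import Client

r = redis.Redis(host="localhost", port=6379)
mc = Client(("localhost", 11211))

# Redis: a hash lets the server update a single field in place
r.hset("user:1", mapping={"name": "Alice", "visits": 0})
r.hincrby("user:1", "visits", 1)  # increments only the counter

# Memcached: read the whole value, modify it, write it all back
mc.set("user:1", json.dumps({"name": "Alice", "visits": 0}))
user = json.loads(mc.get("user:1"))
user["visits"] += 1
mc.set("user:1", json.dumps(user))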
Memory Management
Memcached: Strictly in-memory, though it can be extended to spill key-value data to disk using the extstore extension
Redis: Primarily in-memory, with the ability to persist data to disk. When physical memory is fully occupied, Redis relies on its eviction policies (covered below) to decide which values, such as the least recently used, are removed to make room for new ones.
Data Size Limits
Memcached: Can only store the data of size up to 1 MB
Redis: can store the data of size up to 512 MB (string values)
Data Persistence
Memcached: Doesn’t support data persistence
Redis: Supports data persistence using RDB snapshot and AOF Log persistence policies
Cluster Mode (Distributed caching)
Memcached: Memcached doesn’t support the distributed mechanism out of the box. This can be achieved on the client-side using a consistent hashing strategy
Redis: Supports distributed cache (Clustering)
Multi-Threading
Memcached: Supports multithreading and hence can effectively use the multiple cores of the system
Redis: Executes commands on a single thread (newer versions add threaded I/O, but command execution remains single-threaded)
Scaling
Memcached: Can be scaled vertically. Horizontal scalability is achieved from the client side only (using a consistent hashing algorithm)
Redis: Can be horizontally scalable
Data replication
Memcached: Doesn’t support data replication
Redis: Supports data replication out of the box. Redis Cluster uses master and slave (replica) nodes to ensure data availability; each master can have one or more replicas for redundancy.
Supported Eviction Policies
Memcached:
Least Recently Used (LRU)
Redis (these map to the maxmemory-policy setting; a sketch of setting one follows this list):
No Eviction (Returns an error if the memory limit has been reached when trying to insert more data)
All keys LRU (Evicts the least recently used keys out of all keys)
All keys LFU (Evicts the least frequently used keys out of all keys)
All keys random (Randomly evicts keys out of all keys)
Volatile random (Randomly evicts keys with an “expire” field set)
Volatile TTL (Evicts the shortest time-to-live and least recently used keys out of all keys with an “expire” field set.)
volatile LRU (Evicts the least recently used keys out of all keys with an “expire” field set)
volatile LFU (Evicts the least frequently used keys out of all keys with an “expire” field set)
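Here is a minimal sketch of selecting one of these policies at runtime with redis-py (the memory cap and policy values are illustrative, and this assumes a local Redis):

import redis

r = redis.Redis(host="localhost", port=6379)
r.config_set("maxmemory", "100mb")  # cap memory so eviction can kick in
r.config_set("maxmemory-policy", "allkeys-lru")  # evict least recently used keys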
Transaction Management
Memcached: Doesn’t support transactions
Redis: Support transactions
When to go for Memcached?
Memcached is recommended when dealing with smaller, static data. When dealing with larger data sets, Memcached has to serialize and deserialize the data when saving to and retrieving from the cache, and requires more space to store it. For smaller projects, it is better to go with Memcached due to its multi-threaded nature and vertical scalability; clustering, by contrast, requires a considerable amount of effort to configure the infrastructure.
When to go for Redis?
Redis supports various data types to handle various types of data. Its clustering and data persistence features make it a good choice for large applications. Additional features like message queuing and transactions allow Redis to go beyond a plain cache store.
In addition to the above-mentioned features, Redis supports the below features as well
Message queuing support (Pub/sub; see the sketch after this list)
Snapshots for data archiving/restoring purposes
Lua scripting
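As a taste of the message queuing support, here is a minimal pub/sub sketch with redis-py; the channel name and message are illustrative, and in practice the subscriber would usually run in a separate process:

import time
import redis

r = redis.Redis(host="localhost", port=6379)

# subscriber side
p = r.pubsub()
p.subscribe("alerts")

# publisher side
r.publish("alerts", "cache invalidated")

time.sleep(0.1)  # give the message a moment to arrive
print(p.get_message())  # subscription confirmation
print(p.get_message())  # the published message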
Conclusion: Redis and Memcached can both perform very well as cache stores. Which one to pick varies from project to project.
It is wise to consider the pros and cons of the providers right from the inception phase to avoid changes and migrations during the project.
Hope you enjoyed the article. Please share your thoughts/ ideas in the comments box. Thank you for reading it.
| https://medium.com/techmonks/redis-vs-memcached-which-one-to-pick-401b0c3cbf94 | ['Anji'] | 2020-09-24 15:31:52.802000+00:00 | ['Cache', 'Microservices', 'Redis', 'Memcached']
Learning SQL the Hard Way | Joins in SQL
Till now, we have learned how we can work with single tables. But in reality, we need to work with multiple tables.
So, the next thing we would want to learn is how to do joins.
Joins are an integral part of a MySQL database, and understanding them is essential. The visual below covers most of the joins that exist in SQL. I usually end up using just LEFT JOIN and INNER JOIN, so I will start with LEFT JOIN.
The LEFT JOIN is used when you want to keep all the records in the left table(A) and merge B on the matching records. The records of A where B is not merged are kept as NULL in the resulting table. The MySQL Syntax is:
SELECT A.col1, A.col2, B.col3, B.col4
FROM A
LEFT JOIN B
ON A.col2=B.col3
Here we select col1 and col2 from table A and col3 and col4 from table B. We also specify which common columns to join on using the ON statement.
The INNER JOIN is used when you want to merge A and B and keep only the records common to both.
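The syntax mirrors the LEFT JOIN; here is a minimal sketch:

SELECT A.col1, A.col2, B.col3, B.col4
FROM A
INNER JOIN B
ON A.col2=B.col3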
Example:
To give you a use case lets go back to our Sakila database. Suppose we wanted to find out how many copies of each movie we have in our inventory. You can get that by using:
SELECT film_id,count(film_id) as num_copies
FROM sakila.inventory
GROUP BY film_id
ORDER BY num_copies DESC;
Does this result look interesting? Not really. IDs don’t make sense to us humans, and if we can get the names of the movies, we would be able to process the information better. So we snoop around and see that the table film has got film_id as well as the title of the film.
So we have all the data, but how do we get it in a single view?
Come Joins to the rescue. We need to add the title to our inventory table information. We can do this using —
SELECT A.*, B.title
FROM sakila.inventory A
LEFT JOIN sakila.film B
ON A.film_id = B.film_id
This will add another column to your inventory table information. As you might notice, some films are in the film table that we don’t have in the inventory. We used a left join since we wanted to keep whatever is in the inventory table and join it with its corresponding counterpart in the film table, and not everything in the film table.
So now we have got the title as another field in the data. This is just what we wanted, but we haven’t solved the whole puzzle yet. We want title and num_copies of the title in the inventory.
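As a preview of where this is heading, one way to sketch the combined result is to join first and then aggregate; the next section builds up to this more carefully with inner queries:

SELECT B.title, count(A.film_id) as num_copies
FROM sakila.inventory A
LEFT JOIN sakila.film B
ON A.film_id = B.film_id
GROUP BY B.title
ORDER BY num_copies DESC;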
But before we can go any further, we should understand the concept of inner queries first. | https://towardsdatascience.com/learning-sql-the-hard-way-4173f11b26f1 | ['Rahul Agarwal'] | 2020-09-28 11:21:58.749000+00:00 | ['Sql', 'Data Science', 'Programming', 'Machine Learning', 'Productivity'] |
7 (more) tips to quickly improve your UIs | Originally published at marcandrew.me on October 22nd, 2020.
Creating beautiful, but practical UIs takes time, with many design revisions along the way. I know. I’ve been there many times before myself.
But what I’ve discovered over the years is that by making some simple adjustments you can quickly improve the designs you’re trying to create.
In this follow up article (Here’s Parts 1, 2, and 3), I’ve once again put together a small, easy-to-put-into-practice selection of tips that can, with little effort, help improve both your designs (UI) and the overall user experience (UX).
In this part I’ve focused primarily on Typography because I know how beneficial a good understanding of type can be when it comes to producing beautiful, practical designs.
Let’s dive on in… | https://uxdesign.cc/7-more-tips-to-quickly-improve-your-uis-1c2e8f446777 | ['Marc Andrew'] | 2020-11-18 10:21:11.039000+00:00 | ['UI', 'Product Design', 'Design', 'UI Design', 'Visual Design'] |
GitHub Sponsors & Gitcoin Grants =❤️ 🤖 | GitHub Sponsors & Gitcoin Grants =❤️ 🤖
Support your OSS Work with Gitcoin Grants on Github’s new Sponsors feature
The Gitcoin team was thrilled to see today that GitHub formally launched Sponsors, a way for users of GitHub to financially support their favorite projects through a variety of platforms of their choice.
This is what it looks like in GitHub’s UI:
We’d like to invite users of Gitcoin Grants to set up GitHub Sponsors on their repos. Here’s how to do it:
1. Go to the repository associated with Gitcoin Grants.
2. Click ‘Settings’.
3. Scroll down to “Features’ and check “Sponsorships” and then “Set up Sponsor Button”
4. A text editor will appear on the next page that allows you to enter information about how to sponsor your repository. Paste the following text into the text editor:
custom: <gitcoin_grants_url>
and replace <gitcoin_grants_url> with your Gitcoin Grants URL.
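For example, a filled-in file might look like this (the grant URL below is hypothetical; use your own grant's URL):

custom: https://gitcoin.co/grants/12/my-project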
Success!
Users will now see a “Sponsor” button on your repo:
Grow Open Source
GitHub Sponsors shows that open source sustainability is a real priority for GitHub and for Microsoft, and that’s something we’re very excited to see. We’re also excited that we’re able to engage with them on this initiative and look forward to seeing how their work on open source sustainability evolves moving forward. Here’s to a brighter future for open source software.
To learn more about Gitcoin, click below. We welcome you on our journey to grow open source while changing the way we work. | https://medium.com/gitcoin/gitcoin-grants-github-sponsors-b516192c048 | ['Kevin Owocki'] | 2019-05-23 19:08:25.228000+00:00 | ['Open Source', 'Ethereum', 'Gitcoin', 'Sustainability'] |
The History of the Eames Chair | The History of the Eames Chair
The history of the Eames Chair must start with Charles and Ray Eames and their partnership with the Herman Miller Furniture Company. The chair was released in 1956 with the official name Eames Lounge (670) and Ottoman (671). The chair was the first one designed by the Eameses for the high-end market and has become one of the most famous chair designs ever created.
The exact reasons why the Eames Chair design became such a powerful and lasting design throughout the years are hard to pinpoint. However, as with all fine art, whether it be music, sculpture, painting, or design, there is something within the pieces that resonates with the viewers of the time and it continues to resonate throughout the ages. The Eames Chair has this je ne sais quoi. Now, the design is admired and desired for more than just its artistry, but also for its place in modern design history.
An Introduction to Charles and Ray Eames
Let’s step back from the chair for a moment to look deeper into the two people who created it. This husband and wife team created many stylish designs to be mass-produced at an affordable price. Their goal was to bring a fine interior design option to the average consumer in America.
However, the Eames Chair was something new for the talented duo. In fact, when the chair debuted on the Arlene Francis Home Show in 1956, it was described as “quite a departure” compared to their earlier designs.
At first, the Eames Chair was just a concept and a project taken on by the design team. They wanted to create a high-end, luxury chair. It needed to be stylish, chic, modern, and, most of all, comfortable. Charles and Ray Eames wanted the design to make the user feel as snug as if they were a baseball in a worn leather mitt.
Employees at the Herman Miller factory polish the molded plywood shells in the seventies.
Photo courtesy of Herman Miller and Dwell.com
Since the design couple was well-versed in mass-production pieces, they worked with both plastic and plywood when designing this chair. Both of these materials are common today, but in the 1950s, the Eameses were some of the first designers to use plywood and plastic in their designs.
Three pieces of molded plywood were used to create the Eames Chair: a base, a separate headrest, and a backrest. All three pieces were also covered in a rosewood veneer with later versions using walnut, cherry, and other finishes. Black or brown leather cushions were used to complete the design and a matching ottoman completed the set.
A 1959 advertisement for the Lounge set emphasizes its comfort. Another ad from the era reads “A good chair, nowadays, is hard to find,” and suggests that it’s “the only modern chair designed to relax you in the tradition of the good old club chair.” Charles took on the project because he was “fed up with the complaints that modern isn’t comfortable.”
Photo courtesy of Herman Miller and Dwell.com
Herman Miller Company sold the Eames Lounge Chair and it hit the market for the first time in 1956. This modern, stylish, and functional chair offered a beautiful look at a high-end furniture piece using plywood in a new way. It quickly rose in popularity and is one of the first chairs to be mass produced at a luxury price point.
The Eames Chair has become so popular it has been featured in many ways throughout the years. The TV show Frasier featured the chair and ottoman in the apartment of Frasier Crane. It’s referred to in the show as “the best-engineered chair in the world.”
Shark Tank, another TV show, replaced all the chairs on set with Eames Lounge Chairs after eight seasons. They wanted to create a more modern-looking set.
A version of the historic Eames Chair has also been featured in the Museum of Modern Art in New York City, the Henry Ford Museum in Dearborn, Michigan, and the Museum of Fine Arts Boston. Many mid-century modern homes also feature the Eames Chair and it’s a very popular choice for many celebrities choosing MCM as their design style.
To celebrate the 50th anniversary of the Eames Chair in 2006, Herman Miller released new models of the chair using a sustainable Palisander rosewood veneer. The chair is still in production today and remains one of the most influential furniture designs in American history. | https://medium.com/360modern/the-history-of-the-eames-chair-ddee6b92aee2 | [] | 2020-01-10 23:43:58.671000+00:00 | ['Design', 'Mid Century Modern', 'Eames Chairs', 'Modernism'] |
Medium’s New Icon Has A Hidden Meaning? | Humor
Medium’s New Icon Has A Hidden Meaning?
It’s time for a change, again.
Photo by Devin Avery on Unsplash
I was shocked when I opened my Medium page this afternoon (564 times so far today!) The old comforting, solid, and stolid M logo/icon had gone!
Was this a prank or a joke? Well, it’s not April 1st. It must be something else. Is it related to a conspiracy theory, deep state, or the swamp?
My partner asked me why I was so upset. She was on the phone trying to comfort me.
“As far as I can see, it’s three circles. Starting with the biggest one, then a Medium one and finally a small one which seems to be hiding behind the corner. It’s weird, disconcerting and downright upsetting.” “Maybe it just means Large, Medium and Small to cater for all sizes”, my partner said. “You know, the big fish in the pond, then the medium ones like you and finally the small one for beginners who are just dipping their toes in the water.”
We chatted away for a while and I told her that this was just one of the major changes at Medium. The claps had all gone. They were still there but it was like clapping in an empty room. One clap was the same as 50. They did not count at all, so why not just put a Like as on Facebook and be done with it.
I must say my forefinger on my right hand will enjoy the break because holding it down for 50 claps was very tiring. I bet many readers will be glad too and they will stop being criticized as being too mean and miserly.
I feel sorry for Roz Warren who used to have great fun adding the right number of claps to make a nice round number. I mean 500 is so much more impressive than 481. I hope she can find a new hobby soon.
Curation is gone. You will be lucky now if you get “distributed.” That word is so cold and impersonal. There is no indication either where your article is being sent and under what topic. It sounds like a courier delivery.
I know that thousands and thousands of writers have been released from curation jail and are celebrating — all keeping their distance, of course. Must be such a relief after being in jail all that time cooped up with cranky and frustrated writers.
Then all the earning algorithms have been changed. It’s all based on being “relational.” I don’t like that word at all. I believe it means scratching each other’s backs (reading each other’s stories).
Medium delights in changing their featured articles every 5 minutes or so. This is to make sure that you have more than enough to read and can dedicate your entire life to Medium. Makes sense, I suppose. I mean they are paying us a pittance.
We have to forge relationships. Read and write basically. That’s all it takes. Sounds like elementary school.
I bet there will be new rules on responses soon. How many claps to give? How many words?
My partner logged into the Medium app on her phone and I heard her shout with joy.
“The new logo is so pretty — I just love it!” she whooped.
If you need more laughs check out my new book The Brits Are Bonkers | https://medium.com/the-haven/mediums-new-icon-has-a-hidden-meaning-2d34ccfdf820 | ['Robert W. Locke'] | 2020-10-17 14:29:31.245000+00:00 | ['Writers On Writing', 'Medium Icon', 'Humor', 'Writing', 'Medium'] |
How to Simulate a Pandemic in Python | Introduction
What’s a better time to simulate the spread of a disease than during a global pandemic? I don’t have much more to say — let’s jump right into programming a simple disease simulation.
In real life, there are hundreds of factors that affect how fast a contagion spreads, both from person to person and on a broader population-wide scale. I’m no epidemiologist but I’ve done my best to set up a fairly basic simulation that can mimic how a virus can infect people and spread throughout a population.
In my program, I will be using object-based programming. With this method, we could theoretically customize individual people and add in more events and factors — such as more complicated social dynamics.
Keep in mind that this is an introduction and serves as the most basic model that can be built on top of.
Variables/Explanation
Fundamentally, our program will function around a single concept: any given person who is infected by our simulation’s disease has the potential to spread it to whoever they meet. Each person in our “peopleDictionary” will have a set number of friends (Gaussian randomization for accuracy) and they may meet any one or more of these friends on a day-to-day basis.
For our starting round of simulations, we won’t implement face masks or lockdowns — we’ll just let the virus spread when people meet their friends and see if we can get that iconic pandemic “curve” which the news always talks about flattening.
So, we’ll use a Person() class and add a few characteristics. Firstly, we’ll assume that some very tiny percentage of characters simulated will already have immunity to our disease from the get-go, for whatever reason. I’m setting that at 1% (in reality, it’d be far lower but because our simulation runs so fast, a large portion like this makes a bit more sense). At the start of the simulation, the user will be prompted to enter this percentage.
Next, we have contagiousness, the all-important factor. When a person is not infected, this remains at 0. It also returns to 0 once a person ceases to be contagious and gains immunity. However, when a person is infected, this contagious value is somewhere between 0 and 100%, and it massively changes their chance of infecting a friend.
Before we implement this factor, we need to understand the Gaussian distribution. This mathematical function lets us generate random values between 1 and 100 more realistically. Rather than the values being distributed uniformly across the spectrum, most of them cluster around the mean, making for a more realistic output:
As you can see, this bell-shaped function will be a lot better for our random characteristic variables because most people will have an average level of contagiousness, rather than a purely random percentage. I’ll show you how to implement this later.
We then have the variables “mask” and “lockdown” which are both boolean variables. These will be used to add a little bit of variety to our simulation after it is running.
Lastly, we have the “friends” variable for any given person. Just like contagiousness, this is a Gaussian Distribution that ends up with most people having about 5 friends that they regularly see. In our simulation, everyone lives in a super social society where on average a person meets with 2 people face to face every day. In real life, this is probably not as realistic but we’re using it because we don’t want a super slow simulation. Of course, you can make any modifications to the code that you like.
There are also a couple of other variables that will be used actively in the simulation and I’ll get to those as we go on!
Step-by-Step Walkthrough
So let’s get coding this simulation! First, there are three imports we have to do:
from scipy.stats import norm
import random
import time
SciPy will allow us to calculate values within the Gaussian Distribution we talked about. The random library will be for any variables we need that should be purely random, and the time library is just for convenience if we want to run the simulation slowly and watch the spread of the disease.
Next, we create our Person() class:
# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0,100)<startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        #use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0))

    def wearMask(self):
        self.contagiousness /= 2
Why are we passing the variable startingImmunity to this class exactly? Remember how we could enter what percentage of the population would have natural immunity from day 1? When the user gives this percentage, for every person “spawned” into our simulation we’ll use random to find out if they’re one of those lucky few to already be immune — in which case the self.immunity boolean is set to True, protecting them from all infection down the line.
The remaining class variables are self-explanatory, except self.friends, which uses the Gaussian distribution we talked about. It’s definitely worth reading the documentation to get a better idea of how this works!
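If you want to see what that call produces before wiring it into the class, here is a tiny standalone sketch of my own, using the same scipy call as above:

from scipy.stats import norm

# draw 10 friend counts; loc=0.5 and scale=0.15, scaled by 10, center them near 5
samples = [int((norm.rvs(size=1, loc=0.5, scale=0.15)[0] * 10).round(0)) for _ in range(10)]
print(samples)  # mostly 4s, 5s, and 6s, with the occasional outlier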
def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0,numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0,startingInfecters):
        peopleDictionary[random.randint(0,len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay
After setting up our class, we need a function to initiate the simulation. I’m calling this initiateSim() and it’ll prompt the user for four core inputs — population, the percentage with natural immunity, contagious people at day 0, and how many days a person will stay contagious for (the lockdown and mask prompts you can see above are added later in this walkthrough). This daysContagious variable should actually be random — or even better, dependent on any number of personal health conditions, such as a compromised immune system — but let’s keep it like this for a basic simulation. I found from testing that it is most interesting to run the simulation with a 4–9 day contagious period.
We spawn the inputted number of people into the simulation. To start the disease, we pick people at random to be our “startingInfecters”. As you can see, we’re assigning a Gaussian variable to each one for their level of contagiousness! (Any time a person is made contagious in the simulation we’ll repeat this process.)
We return the number of days someone will stay contagious for, as mentioned, along with the lockdown and mask days used later.
Now, this simulation will be done day by day, so let’s set up a function:
def runDay(daysContagious, lockdown):
    #this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness>0 and person.friends>0]:
        peopleCouldMeetToday = int(person.friends/2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0,peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0,peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0,len(peopleDictionary)-1)]
            if random.randint(0,100)<person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity==False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
The runDay function takes daysContagious for reasons explained later. In our first for loop, we’re using a list comprehension to find the people who are capable of spreading the disease — that is, they are contagious and have friends. We’re then calculating the number of people they could meet on that day. The maximum is 50% of their friends, and then we’re using a standard random.randint() to generate how many they actually do meet on that day.
Then we use another embedded for loop to randomly select each friend that was met from the peopleDictionary[]. For the friend to have a chance of being infected, they can’t be immune to the disease. They also have to have a contagiousness of 0 — if they’re already infected, the encounter won’t influence them. We then use the infecter’s contagiousness percentage in a random function to find out if the friendInQuestion will be infected. Finally, if they do get infected, we go ahead and assign them a Gaussian Distribution variable for their contagiousness!
I added in a simple print statement as a marker which will allow us to follow the simulation in the console as it is running. At the end of our program, we’ll add functionality to save the results to a text file anyway, but it’s cool to see little tags that tell you who is infecting who.
Next part of our runDay() function:
    for person in [person for person in peopleDictionary if person.contagiousness>0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")
Basically, all we’re doing here is finding all the people who are contagious and incrementing their contagiousDays variable by 1. If they’ve been contagious for more days than the daysContagious time the user selected, they will become immune and hence their contagiousness drops to 0. (Again, another print marker to show that the given person has gained immunity.)
I know I could have put this in the previous for loop but not to make my programming too dense, I separated it. Sue me.
Finally, to tie it all together, we need to do a bit of admin:
peopleDictionary = []  # despite the name, a plain list of Person objects
lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0,100):
    if x==lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious,lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness>0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness>0]), " people are contagious on this day.")
saveFile.close()
This is pretty self-explanatory. We define the peopleDictionary list, get the daysContagious, lockdownDay, and maskDay values by initiating the simulation, open our save file, then cycle through the days up to day 100. Each day we use a list comprehension to get the number of contagious people and write it to our save file. I also added one final print statement so we can track the disease’s progression in the console.
And that’s it! I only explained the basics of the code, but let’s talk about the extra variables that you may have noticed…
Lockdown variable
Adding a lockdown variable is quite simple. First, add this in before the section where we cycle through each of the friends a person meets (see code above):
if lockdown == True:
    peopleMetToday = 0

for x in range(0, peopleMetToday):
Now, you want to select when the lockdown is enforced? No problem. Add a user prompt right inside your initiateSim() function.
lockdownDay = int(input("Day for lockdown to be enforced: "))
return daysContagious, lockdownDay
Return it, and update the function call. Then, we need to define our lockdown boolean, and set it to true when we reach the correct date:
lockdown = False
daysContagious, lockdownDay = initiateSim()
saveFile = open("pandemicsave2.txt", "a")
for x in range(0,100):
    if x == lockdownDay:
        lockdown = True
    print("DAY ", x)
You can see that I just added 3 more lines where we manage the simulation. Simple and easy. Then you will want to pass the lockdown boolean to your runDay() function and make sure the runDay() function can accept it:
runDay(daysContagious, lockdown)
And:
def runDay(daysContagious, lockdown):
That’s the lockdown added. See the results section to find out how the implementation of a lockdown affected the spread of the disease!
Facemasks
Finally, we want to add facemasks. I could add all sorts of ways that this changes how a disease spreads, but for us, we’ll just use it to decrease each person’s contagiousness. All we have to do is give the Person() class a function that tells them to wear a face mask:
def wearMask(self):
    self.contagiousness /= 2
Yep, we just halve their contagiousness if they wear a mask. Update initiateSim() so we can ask the user for the date the masks should come into use:
maskDay = int(input("Day for masks to be used: "))
return daysContagious, lockdownDay, maskDay
And update our call:
daysContagious, lockdownDay, maskDay = initiateSim()
Finally, we’ll edit the section where we cycle through the days so that if the day reaches maskDay, then we tell every person to run their wearMask() function:
if x == maskDay:
    for person in peopleDictionary:
        person.wearMask()
If only it was this easy in real life, right?
Well what do you know, we’ve created a simple pandemic simulation with the ability to simulate each individual person, change attributes of the virus, enforce lockdowns, and make people wear face masks. Let’s look at our results:
Results
I’m putting all the data gathered from my text save files into Excel.
5000 people, 1 starting infecter, 1% starting immunity, 7 days contagious, no lockdown or masks:
As expected, a nice smooth curve — almost mathematically perfect. By the end of the simulation, everyone has gained immunity and the cases drop to 0, which continues until all the days have completed.
Now let’s see what happens to the previous result when you implement some countermeasures:
Now what we have here is really interesting. Take the blue line. This is the simulation without any countermeasures, just like our previous result. However, when we implement a lockdown on day 15, it has a huge effect on the orange line; the spread of the disease is curbed before it can really take off, and look at that gradual curve back down again — that’s where there are no new cases and people are gradually becoming immune!
We can then compare that to the gray line, where we implement lockdown just 5 days later than orange. It has a drastically lower effect because that five-day delay really made a difference to the number of cases.
Finally, take a look at the yellow line. This is where we implement face masks, and it’s probably the most interesting simulation of all. You can see at day 15, there is a sudden change in the gradient of the line which affects how fast the disease spreads. It probably would have increased much more rapidly without the face masks! About day 21, there is a peak, and thanks to the masks, it is substantially less than the blue line, where there were no countermeasures! There is also a tiny secondary peak, and the overall summit of the curve lasts longer than any other simulation. Can you figure out why?
Next Steps
Just to clarify, this was supposed to be a simple simulation. It is, of course, very basic with very limited parameters and functionality. However, it is incredible to see how much we can learn from a simulation that takes up barely a hundred lines of code. It really puts into perspective the impact lockdowns and face masks had.
I encourage anyone reading this with a programming mindset to go out and improve my code. I’d recommend the following features:
Face masks randomly (Gaussian?) affect contagiousness
Not everyone obeys lockdown, and even for those who do, there is a chance of an infection happening, say, during a grocery shopping trip (see the sketch after this list)
A certain percentage of people wear face masks, and this varies on a day-to-day basis
More social dynamics, or parameters in general.
The idea of communities.
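To give a taste of how little code the imperfect-lockdown idea needs, here is a minimal sketch of how the lockdown branch in runDay() could change. The 90% compliance rate and the single grocery-trip contact are assumptions to tune, not values from the simulation above:

OBEY_CHANCE = 90  #assumption: 90% of people obey the lockdown
if lockdown == True:
    if random.randint(0,100) < OBEY_CHANCE:
        #compliant people still risk one contact, e.g. a grocery trip
        peopleMetToday = random.randint(0,1)
    #non-compliant people keep their original peopleMetToday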
If anyone does take on the challenge of upgrading this code, I’d love to see what results you get from playing around with the factors. Thanks for reading!
Full code:
from scipy.stats import norm
import random
import time

peopleDictionary = []

#simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0,100)<startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        #use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0))

    def wearMask(self):
        self.contagiousness /= 2

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0,numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0,startingInfecters):
        peopleDictionary[random.randint(0,len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

def runDay(daysContagious, lockdown):
    #this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness>0 and person.friends>0]:
        peopleCouldMeetToday = int(person.friends/2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0,peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0,peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0,len(peopleDictionary)-1)]
            if random.randint(0,100)<person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity==False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1,loc=0.5,scale=0.15)[0]*10).round(0)*10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
    for person in [person for person in peopleDictionary if person.contagiousness>0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0,100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness>0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness>0]), " people are contagious on this day.")
saveFile.close()
Thanks for Reading!
I hope you found this entertaining and possibly inspiring! There are so many ways that you can improve this model, so I encourage you to see what you can build and see if you can simulate real-life even closer.
As always, I wish you the best in your endeavors!
Not sure what to read next? I’ve picked another article for you:
Terence Shin | https://towardsdatascience.com/simulating-the-pandemic-in-python-2aa8f7383b55 | ['Terence Shin'] | 2020-12-21 03:35:16.322000+00:00 | ['Data Science', 'Programming', 'Simulation', 'Python', 'Pandemic'] |
Stop Worrying and Create your Deep Learning Server in 30 minutes | Setting up Amazon EC2 Machine
I am assuming that you have an AWS account, and you have access to the AWS Console. If not, you might need to sign up for an Amazon AWS account.
First of all, we need to go to the Services tab to access the EC2 dashboard.
2. On the EC2 Dashboard, you can start by creating your instance.
3. Amazon provides Community AMIs(Amazon Machine Image) with Deep Learning software preinstalled. To access these AMIs, you need to look in the community AMIs and search for “Ubuntu Deep Learning” in the Search Tab. You can choose any other Linux flavor, but I have found Ubuntu to be most useful for my Deep Learning needs. In the present setup, I will use The Deep Learning AMI (Ubuntu 18.04) Version 27.0
4. Once you select an AMI, you can select the Instance Type. It is here you specify the number of CPUs, Memory, and GPUs you will require in your system. Amazon provides a lot of options to choose from based on one’s individual needs. You can filter for GPU instances using the “Filter by” filter.
In this tutorial, I have gone with p2.xlarge instance, which provides NVIDIA K80 GPU with 2,496 parallel processing cores and 12GiB of GPU memory. To know about different instance types, you can look at the documentation here and the pricing here.
5. You can change the storage that is attached to the machine in the 4th step. It is okay if you don’t add storage upfront, as you can also do this later. I changed the storage from 90 GB to 500 GB, as most deep learning work requires substantial storage.
6. That’s all, and you can Launch the Instance after going to the Final Review instance settings Screen. Once you click on Launch, you will see this screen. Just type in any key name in the Key Pair Name and click on “Download key pair”. Your key will be downloaded to your machine by the name you provided. For me, it got saved as “aws_key.pem”. Once you do that, you can click on “Launch Instances”.
Keep this key pair safe as this will be required whenever you want to login to your instance.
7. You can now click on “View Instances” on the next page to see your instance. This is what your instance will look like:
8. To connect to your instance, just open a terminal window on your local machine, browse to the folder where you have kept your key pair file, and modify some permissions.
chmod 400 aws_key.pem
Once you do that, you will be able to connect to your instance by SSHing. The SSH command will be of the form:
ssh -i "aws_key.pem" ubuntu@<Your PublicDNS(IPv4)>
For me, the command was:
ssh -i "aws_key.pem" ubuntu@ec2-54-202-223-197.us-west-2.compute.amazonaws.com
Also, keep in mind that the Public DNS might change once you shut down your instance.
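If you reconnect often, an SSH config entry saves retyping the whole command. This is just a sketch: the dl-server alias is arbitrary, and you should substitute your own Public DNS and key path.

# ~/.ssh/config
Host dl-server
    HostName ec2-54-202-223-197.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/aws_key.pem

After that, ssh dl-server is all you need, until the Public DNS changes, at which point you update the HostName.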
9. You have already got your machine up and ready. This machine contains different environments that have various libraries you might need. This particular machine has MXNet, Tensorflow, and Pytorch with different versions of python. And the best thing is that we get all this preinstalled, so it just works out of the box. | https://towardsdatascience.com/stop-worrying-and-create-your-deep-learning-server-in-30-minutes-bb5bd956b8de | ['Rahul Agarwal'] | 2020-04-30 07:33:30.234000+00:00 | ['Programming', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning'] |
The Magic Trick of Machine Learning — The Kernel Trick | Welcome to our machine learning magic show!
You must have heard that magicians never reveal their secrets. Machine learning has several tricks that seem like magic. In this blog, we hope to reveal one secret of machine learning called the Kernel Trick. If you have encountered the problem of datasets too big to use your models on, models too complex to use your data on, or nonlinear data, then this trick will be useful for you. When used wisely, this trick can speed up algorithms, reduce memory usage, and uncover hidden patterns.
Sounds exciting and want to know more? Focus, otherwise the magic trick might elude you.
Intuition: Why is it needed?
Let’s start with a summary of the notation used and the concepts we already know.
Summary of Notations
L2 Regularized Regression
Consider the least-squares regularized objective function where we want to learn weights that minimize the following expression:
Cost (objective) function of L2 regularized least squares
Equivalently, this can be written in matrix form as:
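In symbols, the objective is f(w) = ‖Xw − y‖² + λ‖w‖² = Σᵢ (xᵢᵀw − yᵢ)² + λ Σⱼ wⱼ².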
Recalling from calculus I: to minimize an expression f(x), we need to set its derivative to zero, f′(x) = 0, and solve for x. The equivalent in the multidimensional case is to calculate the gradient of the function, set it to the zero vector, ∇f(x) = 0, and solve the resulting linear system of equations. For the least-squares objective function (1), the solution in matrix form would be:
The gradient of cost function and normal equations
which is known as the “normal equations”. The solution to the normal equations is:
The solution to normal equations
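In symbols, setting the gradient to zero gives the normal equations (XᵀX + λI)w = Xᵀy, whose solution is w = (XᵀX + λI)⁻¹Xᵀy. This is exactly the expression whose cost we analyze below.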
Once the optimal weights (w) are obtained, to make predictions on the test data Xtest, we need to calculate y = Xtest · w.
If you are not convinced, check the dimensions of the matrix operations to see that it is a sound derivation.
Check the dimensions of the matrix operations
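A minimal NumPy sketch of this closed-form solution; the function and variable names are mine, not from any library:

import numpy as np

def ridge_weights(X, y, lam):
    #solve (X^T X + lam*I) w = X^T y, the normal equations
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X_train = np.random.randn(100, 5)   #n=100 examples, d=5 features
y_train = np.random.randn(100)
w = ridge_weights(X_train, y_train, lam=0.1)
y_pred = X_train @ w                #predictions: y = X . w

Note that np.linalg.solve factorizes the system instead of explicitly forming the inverse; both are O(d³) in general, but solving is numerically safer.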
What is the cost of computing linear regression?
Let us analyze the cost of solving the normal equations and obtaining w. For this, let us first consider the cost of the dot product of two vectors u and v. The dot product of two vectors is defined as:
dot products
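In symbols: u · v = u₁v₁ + u₂v₂ + … + uₙvₙ = Σᵢ uᵢvᵢ.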
As we can see, the pattern is to multiply each element of the vector and then add them. So, a dot product of an n-dimensional vector requires n multiplications and n-1 additions. Assuming the cost of multiplication and addition are constant, the total number of operations in computing the dot product is n+n-1. In big-O notation, this is O(n).
Training (learning weights w)
To learn weights w, we need to perform matrix operations. Below is the breakdown of each matrix operation involved in computing w.
–– Multiplication: Xᵀ X
The result of the matrix multiplication Xᵀ X is a d×d matrix. There will be a total of d² elements in this matrix. Each element of the resulting matrix is a dot product of two n-dimensional vectors. So, the cost of forming Xᵀ X is O(d²·n).
–– Addition: Xᵀ X+λI = B
Matrix addition does not change the size of the matrix; thus, the result will be d×d matrix. One element of the first matrix will be added to one element of the second matrix, a total of d² summations. So, the cost of adding two matrices is O(d²).
–– Matrix inversion: (Xᵀ X +λI)⁻ ¹ = B⁻ ¹ = C
The basic matrix inversion algorithm is Gauss-Jordan elimination (row reduction). Solving a d×d system of equations via Gauss-Jordan elimination is known to be O(d³).
–– Multiplication : (Xᵀ X +λI)⁻ ¹ Xᵀ = C Xᵀ = D
Similar to multiplying Xᵀ X, calculating C Xᵀ is the result of (d×d)(d×n) matrices which yields a d×n matrix whose complexity is O(dn·d) or O(d²n).
–– Multiplication : (Xᵀ X +λI)⁻ ¹ Xᵀy = Dy = w
Similar to multiplying Xᵀ X, calculating Dy is the result of (d×n)(n×1) matrices which yields a d×1 vector whose complexity is O(d·n).
–– Adding it all up, the cost of all the operations yields,
Which simplifies to,
The complexity of regularized linear regression
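In symbols: O(d²·n) + O(d²) + O(d³) + O(d²·n) + O(d·n), which simplifies to O(n·d² + d³).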
As can be observed, the complexity depends on n and d. Let’s analyze the two scenarios.
If n>>d, this means that our data (matrix) has lots of examples (rows). In this case, we have lots of data points but few features. The number of examples dominates; therefore, the cost of performing least-squares is linear in the number of examples.
Not bad if we have tens or hundreds of examples. But what if we have 100,000? What about millions? What about billions? It’s not bad either since the computation cost is O(n), which is about the same as if we were to read n data points.
If d>>n, this means that our data (matrix) has lots of features (columns). In this case, we have lots of features but few examples. The number of features dominates; therefore, the cost of performing least-squares is cubic in the number of features.
Even with a few features, the computation cost is high. What if we have 1000 features? What about 100,000? What about millions? It would be highly infeasible with a cost of O(d³)!
But that’s not all folks! In computing the weights w, we have to form XᵀX, which will result in a (d×d) matrix. Can we even store in memory such a large matrix if d is very, very large? | https://medium.com/sfu-cspmp/the-magic-trick-of-machine-learning-the-kernel-trick-b4b21787805a | ['Sachin Kumar'] | 2020-04-01 02:17:17.189000+00:00 | ['Kernel Trick', 'Big Data', 'Blog Post', 'Data Science', 'Linear Algebra'] |
Christian Counselors (The Blind Shepherds) | Aesop’s Fables: The Wolf and The Shepherd
The healing journey brings with it many moments of contemplation, where you ponder what went wrong or why certain relationships didn’t last. My disenchantment with the Christian counseling I sought out along the way has led to some of the most profound soul searching of my life. That isn’t because I had a massive shift in my belief system. My disappointment was due to the asinine approach taken, the judgmental tone, and the refusal to change course when the original method proved unsuccessful. It also wasn’t because they were all fundamentalists who took every word of the Bible literally. They were all fairly liberal evangelicals, who prided themselves on their open-mindedness.
Many people go to therapy, hoping to change, but not change too much. They don’t want someone to upset their sense of normalcy entirely, which is what I was doing when I pursued a Christian counselor. The last thing I wanted was to have some smug secular humanist that tried to pick at my religious traditions. I envisioned a Christopher Hitchens type, sitting in the armchair, shooting me a bemused look if my faith was to come up. I’ve matured enough to know that’s not how therapy goes, but in my young 20s, I was vulnerable.
Modern Christianity, and liberal evangelicalism and Catholic social justice dogma in particular, has an unhealthy tendency to inadvertently enable and encourage co-dependency. I recall the many times I tried to put up boundaries with people in my life and was made to feel as if I was bull-headed while they attempted to talk me out of it. The scriptures often cited when I was trying to cut cords of dependency were “Forgive them, Lord, for they know not what they do” (Luke 23:34) or “Forgive as you have been forgiven” (Colossians 3:13).
Dr. DeLauro, the most consequential of the several Christian-minded therapists I worked with, came from a background of pastoral counseling. I could tell he was a product of Catholic school right away, for I often felt as if I was in the principal’s office. There was always an air of self-righteous judgment when I admitted to a rather ugly feeling. Perhaps his reaction was rooted in the theological view that good deeds get you into heaven, or their understanding of the human body, which Catholics have unfortunately been told is something sinful and shameful. It was as if he was instructing me that, even if you resent these notions, still follow them anyway, regardless if the paradox between your mind and body is tearing you up on the inside.
I felt as if I had been exposed to the chronic virus of Catholic guilt, which I now realize defined much of his worldview. As the Bible says, “Love the Lord your God with all of your heart, all of your soul, and all of your mind.” We can’t only focus on our thoughts as they do in CBT and interpersonal psychotherapy. I do not doubt that Dr. DeLauro had been through his share of heartbreak and tribulations in life, but it seems to me that he never really accepted himself as a child of God. He accepted his circumstances, his flaws, his tribulations, but he never accepted himself in spite of those things. All he was focused on was how to redirect negative thought patterns when those issues presented themselves. The sentiment of feeling like “I’m worthless, it’s all my fault, I’m unlovable,” is not just an issue of thinking patterns; they are an absence of self-love. The Catholic mentality turns the act of suffering into a virtue, but when does this become an exercise in masochism? It’s as if they don’t think the layperson is worthy of the answers required to end their suffering.
I was always struck by how strangely Dr. DeLauro reacted whenever I brought up issues regarding abuse of power. Whether it was in my own life or just a casual observation of the culture, there was a dogmatic insistence on forgiveness, acceptance, and just simply moving on. While I am not suggesting that those concepts are wrong in any way, for a psychologist to push those so incessantly runs the danger of equating any attempt to seek accountability, as akin to being unforgiving or bitter. By that logic, the efforts to uncover the Catholic Church abuse scandal or the #MeToo movement were negative, because those seeking accountability hadn’t truly forgiven their perpetrators.
Tales of Accountability: The story of the Boston Globe’s investigation of child abuse by Catholic priests.
My guess is that his passive attitude towards such movements seeking accountability is because the older generations thought consequences were out of the question. All you could do was accept what happened and move on, the perpetrators were never going to be made to answer for their crimes. There was no Human Resources department, no investigative team of journalists wanting to interview you, no internet hashtag to show solidarity with survivors, they were alone. While many victims are having their voices heard for the first time, I imagine many are agitated at being reminded of old wounds, they’ve learned to cope. That doesn’t mean they’ve healed, but healing is a scary endeavor that takes immense courage, and some would rather let sleeping dogs lie. There is no judgment, but those who do want to find peace and resolve shouldn’t be shamed for their choices either. | https://medium.com/invisible-illness/christian-counselors-aka-the-blind-shepards-cc2928bc8d91 | ['Quinton Heisler'] | 2020-02-04 03:43:39.483000+00:00 | ['Christianity', 'Religion', 'Therapy', 'Mental Health'] |
Decision Tree Algorithm In Machine Learning | A decision tree is a non-parametric supervised machine learning algorithm. It is extremely useful for classifying or labeling objects. It works for both categorical and continuous datasets. It has a tree-like structure with a root node and child nodes: internal nodes denote features of the dataset, and predictions are made at the leaf (terminal) nodes.
Recursive Greedy Algorithm
A recursive greedy algorithm is a very simple, intuitive algorithm that is used in optimization problems.
At every step you have a choice. Instead of evaluating all choices recursively and picking the best one, a recursive greedy algorithm goes with one choice, recurses, and does the same thing again. So basically it picks the locally optimal choice at each step, hoping to reach the best globally optimal solution.
Greedy algorithms are very powerful for some problems, such as Huffman encoding or Dijkstra’s algorithm, which you may know from data structures and algorithms. We will be using this approach for the formation of the tree.
Step for learning the decision tree:
step 1: Start with an empty tree
step 2: Select a feature to split the data
For each split of the tree :
step 3: If there is nothing more to do, predict with the leaf (terminal) node
step 4: Otherwise, go to step 2 & continue (recurse) to split
decision tree example
For example, let’s say I start with an empty tree and pick a feature to split on. In our case, we split on credit. So we take the data and split it into which data points have excellent credit, which ones have fair credit, and which ones have poor credit, and then for each subset of data (excellent, fair, poor) I continue thinking about what to do next. In the case of excellent credit, there was nothing else to do, so I stop. But in the other two cases there was more to do, and what I do is what’s called recursion: I go back to step two, but only look at the subset of the data that has fair credit, and then only at the subset of data that has poor credit. Now, this algorithm so far sounds a little abstract, but there are a few points that we need to make more concrete. We have to decide how to pick the feature to split on. We split on credit in our example, but we could have split on something else, like the term of the loan or my income. And since we have recursion at the end, we have to figure out when to stop recursing, that is, when not to go on and expand another node in the tree.
Problem 1: Feature split selection
Given a subset of the dataset M (a node in the tree)
For each feature h(x):
Split the data of M according to feature h(x)
Compute the classification error of the split
Choose the feature h*(x) with the lowest classification error
The classification error is the number of mistakes in a node divided by the number of data points in that node. For example, in the diagram above, the root node labels 18 loans as risky that would be considered mistakes, and the total number of data points in the root node is 40, so we can calculate the classification error of the root node as 18/40. We then calculate the classification error of the other candidate splits and see which feature gives the lowest error; that would be our best split.
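As a quick sanity check in code:

mistakes, total = 18, 40
print(mistakes / total)   #classification error of the root node: 0.45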
Problem 2: When do we stop splitting?
The first stopping condition is to stop splitting when all the data points in a node agree on the value of y.
The second is when we have already split on all the features; then there is nothing left in our dataset to split on.
The Common Parameters of the Decision Tree
criterion: gini or entropy (default = gini)
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. This decides how candidate splits are scored when the tree chooses where to split.
max_depth: int or None, (default = None)
The first hyperparameter to tune in a decision tree is max_depth.
max_depth is, as the name suggests, the maximum depth that you allow the tree to grow to.
The deeper the tree, the more splits it has & it captures more information about the data.
However, in general, a decision tree overfits for large depth values: the tree perfectly predicts all of the training data but fails to capture the pattern in new data.
So you have to find the right max_depth using hyperparameter tuning, either grid search or random search, to arrive at the best possible value of max_depth.
min_samples_split: int or float, (default = 2)
An internal node will have further splits (also called children)
min_samples_split specifies the minimum number of sample required to split an internal node.
We can either specify a number to denote the minimum count or a fraction to denote the percentage of samples in an internal node.
min_samples_leaf: int or float (default = 1)
A leaf node is a node without any children(without any further splits).
min_samples_leaf is the minimum number of samples required to be at a leaf node.
This parameter is similar to min_samples_split; however, this describes the minimum number of samples at the leaf, the base of the tree.
This hyperparameter can also avoid overfitting.
max_features: int, float, string (default = None)
max_features represents the number of features to consider when looking for the best split.
We can either specify a number to denote the max_features at each split or a fraction to denote the percentage of features to consider while making a split.
We also have options such as “sqrt”, “log2”, and None.
This method is used to control overfitting. In fact, it is similar to the technique used in a random forest, except that in a random forest we also start by sampling from the data and we generate multiple trees.
Code For Decision Tree Algorithms:
We will be using a dataset from the LendingClub. A parsed and clean form of the dataset is available here. Make sure you download the dataset before running the following command.
The train and validation datasets can be found here.
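Here is a minimal sketch of fitting a decision tree with the hyperparameters covered above. The file name and column names are assumptions; adjust them to the LendingClub CSV you downloaded:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

#hypothetical file and target column; adjust to your copy of the data
loans = pd.read_csv("lending_club_clean.csv")
X = pd.get_dummies(loans.drop(columns=["safe_loans"]))
y = loans["safe_loans"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = DecisionTreeClassifier(
    criterion="gini",        #or "entropy"
    max_depth=6,             #tune with grid/random search
    min_samples_split=2,
    min_samples_leaf=1,
    max_features=None,
)
model.fit(X_train, y_train)
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))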
Thanks for reading…..
Recommended Article | https://medium.com/ai-in-plain-english/decision-tree-algorithm-in-machine-learning-8aecef85ae6d | ['Bhanwar Saini'] | 2020-09-30 14:26:53.296000+00:00 | ['Python3', 'Programming', 'Artificial Intelligence', 'Data Science', 'Machine Learning'] |
Introducing Kubeflow to Zeals | Hi there, this is Allen from Zeals Japan. I work as an SRE / gopher, mainly responsible for microservices development.
Background
Story
Nowadays machine learning is everywhere and we do believe it will still be trending in the next few years. Data scientists are working on large datasets on a daily basis to develop models that help the business in different areas.
What’s wrong?
We are not an exception, as our machine learning team works on different datasets across multiple areas, including deep learning (DL), natural language processing (NLP), and behaviour prediction, to improve our product. But as we handle massive amounts of data, they soon realized that working locally or using cloud-provider notebooks like Colab or Kaggle was dragging down their productivity significantly:
Unable to scale and secure more resources when handling heavier workloads
Limited access to GPU
If you are using a cloud notebook, results will not persist automatically but will reset once you idle or exit
Hard to share notebooks with your co-workers
No way to use a custom image, so you need to set up the environment every time
Hard to share common datasets among the team
Researching
Current Implementation
Originally we were using a helm chart to install a JupyterHub on our Kubernetes cluster, and we had a hard time managing the resources and shared datasets using shared volumes.
As a part of the infrastructure team, we need to adjust the resources frequently for the ML team which is not so ideal and obviously dragging down both team’s productivity.
Tools?
There are multiple solutions available in the community: kubespawner and the helm chart we originally used.
kubespawner
Stars: 328 (2020–08–12)
Pros
Able to spawn multiple notebook deployment separated by namespace
Extremely customizable configuration based on python API
Ability to mount different volumes to different notebook deployment
Cons
Community is small
Lacking support for cloud-native features; things such as setting up the network, the Kubernetes cluster, and permissions still need to be handled manually
Lacking authorization support
zero-to-jupyterhub-k8s
Stars: 740 (2020–08–12)
Pros
Official support: a helm chart published by JupyterHub
Easy to setup and manage by helm
Good authorization support, such as GitHub and Google OAuth
Cons
Limited support on individual namespace
Hard to declare and mount volumes based on notebook usage
Lacking support for cloud-native features; things such as setting up the network, the Kubernetes cluster, and permissions still need to be handled manually
Kubeflow
Stars: 9.2k (2020–08–12)
Pros
Good support from both author and community
Good support for different cloud platforms
Not limited to notebooks; it also has other tools that help with the machine learning process, such as pipelines and hyperparameter tuning
Able to easily separate namespaces between different users without changing any code
Can easily mount multiple volumes based on the notebook usage
Dynamic GPU support
Cons
Very large stack that is hard to understand and customize
Needs to run in its own cluster, so the running cost is higher
Steep learning curve: compared to a plain notebook, using Kubeflow also requires knowledge of Kubernetes when you use tools like pipelines and hyperparameter tuning
What We Chose?
Kubeflow is our pick. From the above comparison, we can easily see that Kubeflow has many features that I think we will need in the future. The entire solution also comes in a box, so it looks like it can be set up quite easily.
I quickly found that this might be the one I am looking for and I can’t wait to try it.
Recently they released the first stable version 1.0 back in March, and I think it’s a good time for us to try it.
Installation
Try it out first!
At this stage, I haven’t decided to proceed with Kubeflow, but as an infrastructure person, We always need to test the tool before we introduce it to others.
The installation is quite simple for Kubeflow if you are running on the cloud. They have an out-of-the-box installation script for each cloud provider, and you just need to run it.
Setting up the project
Since we are running on GCP, I’ll use that as an example, but you can find the page for whichever cloud provider you are using, or even for an on-premise cluster, in the same docs.
It’s good for you to create a new GCP project when you trying on something so it will be isolated from other environment.
Installing CLI
Follow the steps here to set up OAuth so the Kubeflow CLI can get access to the GCP resources.
First, we need to install the Kubeflow CLI; you can find the latest binary on the GitHub releases page.
Setup the GCP bases
After that, it’s just some standard gcloud configuration:
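The commands were along these lines; the project ID is a placeholder for your own:

gcloud auth login
gcloud config set project <your-project-id>
gcloud config set compute/zone asia-east1-a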
Note that multi-zone is not yet supported if you want to use GPUs. We’re using asia-east1 here since it is the only region that has K80 GPU support right now (2020 July).
Spinning up the cluster
Spinning up the cluster is simply a matter of running:
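In the Kubeflow 1.0 flow for GCP, this meant exporting a few variables and applying the published config. The config URI below is the 1.0-era IAP config and may have moved since, so treat it as an assumption and check the current docs:

export KF_NAME=<your-kubeflow-deployment-name>
export BASE_DIR=<path-to-a-base-directory>
export KF_DIR=${BASE_DIR}/${KF_NAME}
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_gcp_iap.v1.0.0.yaml"
mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}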
Customizing the deployment
For simplicity we are directly applying here, but if you want to customize the manifests, it’s also possible by running:
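kfctl build generates the manifests locally so you can edit them before applying. Again, a sketch of the 1.0-era workflow:

export CONFIG_FILE=${KF_DIR}/kfctl_gcp_iap.v1.0.0.yaml
kfctl build -V -f ${CONFIG_URI}
# edit the generated kustomize configs under ${KF_DIR}, then:
kfctl apply -V -f ${CONFIG_FILE}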
Verify the installation
Get the kube context
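The standard gcloud command does it; the cluster name defaults to your KF_NAME, and the zone and project are your own values:

gcloud container clusters get-credentials ${KF_NAME} --zone ${ZONE} --project ${PROJECT}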
Accessing the UI
Kubeflow will automatically generate an endpoint of the form https://<KF_NAME>.endpoints.<project-id>.cloud.goog/; it can take a few minutes before it is accessible.
That’s all, pretty easy! Now you can access the link and check on UI
Setting up the notebook server
Creating the notebook server
Navigate to Notebook servers -> New server
You can see there are tons of configurations we can make!
Settings for the notebook server
Breaking down a bit
Image
Able to use prebuilt tensorflow notebook server or custom notebook server image
You can prebuild images with common dependencies installed, and everyone then has access to the same setup!
CPU / RAM
Workspace volume
Each notebook creates a new workspace volume by default. This ensures you won’t lose your work if you are away or the pod accidentally shuts down
You can even share a workspace volume with your team if you configure it as ReadWriteMany
Data volumes
Now it’s super easy to share dataset by just using data volumes, scientists just need to choose which dataset they want to use
You can even mount multiple dataset in a same notebook server
Configurations
This is used for store credentials or secrets, if you are using GCP, the list will default with Google credentials so you can access gcloud command or SQL dataset / big query to access even more data.
GPUs
Now it’s dynamic! but don’t forget to turn it off one you finished using, otherwise it may blow up your bill!
Running our first experiment
Setup
I created a notebook server with all default parameters, running on the tensorflow-2.1.0-notebook-cpu:1.0.0 image.
I want to build a simple salary prediction model following the fastai tutorial.
Since the image doesn’t come with fastai, simply install it
Training the model
We simply copy the code from the tutorial
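For reference, the fastai v1 tabular example (salary prediction on the ADULT_SAMPLE dataset) looks roughly like this; if the tutorial you follow has moved to fastai v2, the API will differ:

from fastai.tabular import *

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')

dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [FillMissing, Categorify, Normalize]

data = (TabularList.from_df(df, path=path, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_idx(list(range(800,1000)))
        .label_from_df(cols=dep_var)
        .databunch())

learn = tabular_learner(data, layers=[200,100], metrics=accuracy)
learn.fit(1, 1e-2)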
We successfully trained a model!
We can simply save the current checkpoint in the workspace and retrain it next time!
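With fastai that is a one-liner, and because it writes to the workspace volume, the checkpoint survives pod restarts:

learn.save('salary-model')   #persists under the workspace volume
#later: learn.load('salary-model')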
Hyperparameters Tuning
Katib
Not limited to notebook servers, Kubeflow also has tons of other modules that are very convenient to data scientists, katib is one of the modules that you can use.
Katib provides both Hyperparameter Tuning and Neural Architecture Search; we will try out hyperparameter tuning here.
Writing the job
Using Katib is extremely easy; if you are familiar with Kubernetes manifests it will be even easier for you. Katib uses Kubernetes Jobs and repeatedly runs your job until it hits the target value or the maximum number of runs.
We will use the same salary prediction model, but this time we want to tune those input values.
Training script
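The training entrypoint that Katib runs looked roughly like this. The train() helper is a hypothetical stand-in for the fastai training code above:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--num_layers", type=int, default=50)
parser.add_argument("--emb_szs", type=int, default=10)
args = parser.parse_args()

accuracy = train(lr=args.lr, num_layers=args.num_layers, emb_szs=args.emb_szs)  #hypothetical helper

#Katib's stdout metrics collector parses lines of the form "<name>=<value>"
print("accuracy=" + str(accuracy))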
Collecting metrics
Katib will automatically collect the train metrics from stdout, so we only need to print it out.
In the args we pass lr, num_layers, and emb_szs as hyperparameters.
Job definition
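A trimmed sketch of the Experiment manifest, using the Katib v1alpha3 API that shipped with Kubeflow 1.0 (the trialTemplate, which wraps the training Job, is omitted for brevity):

apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  name: salary-hp-tuning
spec:
  objective:
    type: maximize
    goal: 0.9
    objectiveMetricName: accuracy
  algorithm:
    algorithmName: random
  maxTrialCount: 12
  parameters:
    - name: --lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.03"
    - name: --num_layers
      parameterType: int
      feasibleSpace:
        min: "50"
        max: "100"
    - name: --emb_szs
      parameterType: int
      feasibleSpace:
        min: "10"
        max: "50"
  # trialTemplate: the Job spec that runs the training script goes here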
Explanation:
We use objectiveMetricName: accuracy as the target metric, and the target value is goal: 0.9
lr: random from 0.01 to 0.03
num_layers: random from 50 to 100
emb_szs: random from 10 to 50
We also configure the maximum number of trials with maxTrialCount: 12
Result
The job will automatically start once you submit it.
The results update as each job finishes, and you can see them in HP -> Monitor
Previously we didn’t even conduct HP tuning using only jupyter notebook, either you write a very huge loop that makes it run for a decade, or simply use your 6th sense to decide the HP.
Conclusion
Kubeflow is a very good out-of-the-box tool that allows you to set up an analysis environment without any pain. It provides several powerful modules, like notebook servers and Katib.
Also there are more features that we haven’t touched in this article as well, such as:
Namespaced permission management
Sharing notebook among the team or outsiders
Continuous training and deployment for machine learning model
Continuous ETL integration with cloud storage or data warehouse
All of those are very common requirements from data scientists and it fit for most of the company as well.
We are still in the middle of transition so didn’t manage to cover all features on Kubeflow, will definitely want to write more about it after we explore more on it.
We are hiring!
We are the industry leader of chatbot commerce in Japan. Our company is based in Tokyo. If you are a talented engineer and you are interested in our company, simply drop an application here and we can start with some casual talk first.
Opening Roles
(Sorry for that it’s still japanese right now, we are working on translating it to English right now) | https://medium.com/zeals-tech-blog/introducing-kubeflow-to-zeals-c41b6199d2b9 | ['Allen Ng'] | 2020-09-15 11:24:59.270000+00:00 | ['Machine Learning', 'Deep Learning', 'Kubernetes', 'Kubeflow', 'Pytorch'] |
How I Make Steady Money Daily On Medium | 1. Publish regularly.
Duh! As I said, none of my articles has hit it big. The most views I have is 305 with a 60% read ratio. I have to publish regularly to make sure I keep up income.
I publish 4 times a week and do my best to avoid weekends, but sometimes it can’t be helped. There’s a common belief that publishing on weekends is not as effective, and I have to tell you, I have noticed that this is true. Publishing between Monday and Thursday is probably best.
2. Spread your work on socials.
This is not the key to getting a good amount of views/reads, but it does help. There’s a lot of Facebook groups out there that I am part of that are extremely interactive and you can publish links to your work on there;
Medium Writers Lounge
Medium Magic
Medium Writers Boost
There’s also Quora, Reddit, LinkedIn, Slack, Twitter and whatever else you use to promote your work.
As well as promotion, being interactive in general will only do you well. Clapping for other writers and reading about their successes does bring me joy.
3. Write about writing.
I have noticed that my articles focusing on writing are the most successful. This is probably the niche I should follow. But this isn’t me as a writer. I like the talk about everything and anything, so if I have an idea, I’ll write about it.
You do not have to follow one particular niche. I most certainly do not, but I have noticed that writing about writing does draw people in.
4. Submit to publications
Submitting to publications means that many others will naturally come across your work on the publication homepage, generating more views and reads thus making you some dollars/cents.
Saying this, there have been cases where people have self-published and editors have reached out asking if they can publish it.
At the end of the day, a good piece of writing is a good piece of writing.
5. Practice your writing.
I haven’t really taken time to focus on bettering my writing. I tend to just write, run it through Grammarly and Microsoft Word and if nothing is red or blue, Thunderbird is go.
Being six weeks into Medium, I can see the improvements from when I first started to where I am now. This does not mean that improvements cannot be made and I know that sooner or later I will have to focus on this.
I want to make my content engaging. I want to make it a good, beneficial read for someone. I want someone to just enjoy my work. The only way to do this is to improve.
6. Check your stats everyday.
Many people probably advise against this, but I don’t. Every morning, check your stats even if you haven’t published. That’s ONCE a day, by the way. Becoming obsessed with stats isn’t good and it doesn’t make the numbers increase.
You want the dollar signs to go up, not down or stay stagnant. My advice is to make a note per day of how much you have and just work out the difference a day, then work out the average (divide by how many days in that month) of how much you earned that month. | https://medium.com/illumination/how-i-make-steady-money-daily-on-medium-4754420a27d3 | ['Shamar M'] | 2020-12-22 09:24:58.093000+00:00 | ['Advice', 'Money Management', 'Writing', 'Money', 'Writing Tips'] |
Local Architecture in a Globalized World
Today, architects around the world face an unprecedented series of challenges. Issues such as the rapid growth of populations, societal and political instability, and climate change present those working on the built environment with new levels of complexity. This impacts the myriad decisions that go into any architectural assignment: selecting materials and organizing labor, the use of space, and the interaction between a building and its surroundings.
Globalization brings faster and more streamlined knowledge-sharing and integrative solutions, but also the risk of standardization and homogenization. A particular truth in ever-expanding cities, where skyscrapers and high-tech construction systems define our image of what cities “should” be, dominating not only the skyline but the headlines and prevailing academic discourse. In this context, the progression of architecture continues to be shaped by an overtly Western perspective. Size, technological advancement, territorial dominance, material innovation, and engineering prowess are often privileged in the professional conversation, according to a Western idea of what constitutes progress. Even beyond these borders, in regions where colonization has marked the architectural landscape, systemization, and mass production-the pillars of our integrated international culture-threaten to neuter local knowledge and tradition.
Fernando and Humberto Campana, the siblings behind design studio Campana Brothers, are committed to using native materials and reviving traditional handicrafts in their practice. For a family home in São Paulo, they blurred the distinction between outside and inside using materials and techniques that maintain a continuous dialogue with nature, space, and natural light. (Photo: Leonardo Finotti, Beyond the West)
Architectural practitioners are, then, presented with a unique opportunity. Beyond satisfying the basic human need for shelter, the profession now has the potential to profoundly reshape the way we live in the 21st-century. The tools of the global economy-the powers of communication and production-equip designers and builders to make decisions that will affect our future livelihood and that of the planet. Our latest release Beyond the West explores the diversity of global architectural cultures and, in doing so, proves that this approach is possible, and indeed will flourish.
The book investigates architecture that challenges our current grasp of the discipline, looking beyond the Western world to discover alternative solutions to globally relevant issues such as sustainability, transport and migration, material innovation, and even wellness. It aims to uncover the architecture of regional cultures and unpack what localization can mean in a global context by investigating how regional social, cultural, and economic conditions can produce intuitive and original architectural strategies. Beyond the West looks to thriving practices and projects that are scarcely recognized in this sphere of architecture, we highlight what progress looks like in Asia, Africa, and the Americas, with a focus on vernacular applications. We discover how everyday needs are better met when approached from a place of authenticity-one that serves the requirements of a specific situation at a particular time.
Architecture must respond closely to its environment to resonate with the landscape around it and the people who use it. It benefits humans when attention is paid to local surroundings-to weather patterns, economic restrictions, and cultural traditions. That is not to say that Western ideas have no place outside of their borders. Our featured projects and architects do not work in blind isolation; they understand the benefits of a globally integrated world and utilize knowledge gleaned in other parts of the globe.
For Kuala Lumpur’s Chempenai House, WHBC Architects created a green living approach in a concrete jungle. Mimicking the local ecosystem, the structure is designed to self-cool and enhance vegetation growth. (Photo: Ben Hosking, Beyond the West)
We also recognize and applaud the many projects in the West that adhere to principles of regionality, sustainability, and respect for context and environment. We have chosen, however, to place our focus outside the West, uncovering projects with a sensitivity to local strictures in countries such as Brazil, Burkina Faso, Vietnam, and elsewhere. A deep consideration for the local climate, resources, and cultures can initiate tremendous advances in the built environment.
The projects featured in the book-from a low-impact mountainside bungalow modeled on Sri Lankan watch huts to an isolated Namibian desert retreat inspired by the nest of a local bird-are examples of a careful, research-driven, and localized approach. In many cases, tradition and intuition are the driving forces behind the overall concept. Working with available materials and in harmony with the surrounding terrain, architects find inspiration in traditional knowledge and skills. Bricks, stones, or bamboo from the surrounding landscapes are made or cut by local hands, creating local employment, and instilling local pride. These are appropriate responses when challenges include transport logistics, limitations, and the availability of materials.
Art, nature, local materials, and traditional techniques coexist in this unconventional structure that resists the definition of “gallery” because it is entirely devoid of straight lines. SFER IK Museion by Roth Architecture is an interdisciplinary creative space in Tulum, Mexico. (Photo: Fernando Artigas, Beyond the West)
Many of the practices producing these trailblazing projects model new ways of thinking and working by their very make up. They often have younger and more gender-balanced workforces, and they employ open-ended, democratic decision-making processes with a distinctly contemporary and thoughtful working culture. This facilitates innovative and original problem-solving, and may, in time, contribute to a shift in mindset within the discipline as a whole.
Diversity and attentiveness to new voices are crucial for the development of contemporary architectural practices, just as localism and consideration for the environment are for individual projects. This book does not present a comprehensive list of localized architecture; instead, it offers an intriguing glimpse into this design category beyond Western borders, and we hope that it becomes a starting point for further exploration of these regions and the individual architects affecting change. Our goal is for the book to spark curiosity and encourage readers to explore the immense opportunities that arise when we cast our vision beyond the architectural cultures of Europe and North America.
The “New Andean Architecture” of Freddy Mamani has shaped the identity of El Alto in Bolivia, where the former bricklayer has fashioned over 60 buildings. Mamani’s “cholets” (a name combining “chalet” with “cholo,” a derogatory term for an indigenous person) are vibrantly colored edifices decorated in geometric acrylic paneling, glossy chrome, and reflective glass. (Photo: Tatewaki Nio, Beyond the West)
The featured projects offer solutions to some of the challenges facing our planet, and together they represent the possibility of a better future. The architects showcased, often working in response to rapid urban growth, climate change, and political and economic instability, have doggedly drawn on their training, the knowledge of their peers, and their intuition to develop unique local solutions. This grounded, curious approach should inspire other members of the profession. It is a local call to arms for an international industry.
Tracy Lynn Chemaly and Faye Robinson with an introduction to Beyond the West, our latest release exploring a global architecture movement linked to locality. | https://medium.com/gestalten/local-architecture-in-a-globalized-world-d3b093085cb | [] | 2020-09-25 14:22:52.485000+00:00 | ['Architecture', 'Globalization', 'Design', 'Building', 'Architects'] |
Drug Deal with God | Creative Nonfiction Contest Finalists
Drug Deal with God
Mixing LSD and Weed
Photo by Matt Flores on Unsplash
I waved my hand in front of my face. Tracers followed across the space, smudging the air in an arc. I’d dropped acid before, but this seemed more intense.
“Whoa. Are you seeing what I’m seeing?” I asked.
Ed, ever the philosopher, replied, “Can anyone ever see what someone else is seeing?”
With LSD, I thought I could experience a higher consciousness or tap into dormant parts of my brain. I was taking Intro to Philosophy, and we’d just studied Descartes — the whole idea of not being able to trust our senses. I fixated on the concept of everything coming to us via our senses and our senses could be deceived, combining it with I think, therefore I am. I had proof I existed, but could not prove the outside world existed.
The LSD melded the two concepts. I wondered if my senses were being deceived at that time, in that room. Was I really sitting on Ed’s weight bench in his apartment? Was Ed even real?
Photo by Jr Korpa on Unsplash
We’d met when I worked at Farell’s Ice Cream Parlour — Ed the assistant manager. Blond, smart, funny, with a George Washington nose — too big to be devastatingly handsome by American standards — but cute enough. I was in love. I took him to be my boyfriend. He took my virginity. I was disappointed. I thought there would be a bigger orgasm with intercourse. I’d been doing that on my own for years.
He moved to Tucson, and I followed, thinking I would marry him, and we’d have big-nosed babies. But he wanted to see other people — we could still be friends and sometimes lovers.
Our friend Traci passed the bong.
“Hey, what if none of you are really here?” I took a tiny bong hit. My head swirled. My mouth dried into a desert. I exhaled and watched a trail of faeries dance into the room.
Ed took a hit. “Does it really matter?”
“Yeah.” Green stairs appeared in my vision, climbing on Ed, on Traci, on the wall. My horizon moved, rocking back and forth with nothing to hold on to. “Let’s get out of here. I’m creeping out.”
“Okay, okay. Let’s go to a movie? Pile in the jeep. We’ll go see what’s playing.”
I wanted Ed to hold me. Kiss me. Save me from this eerie feeling. But he wouldn’t, and I wouldn’t ask. A few months earlier, we’d painted Ed’s Jeep with what was supposed to look like camouflage, but ended up looking like cowhide. Every time I saw his jeep on campus, I knew he was fucking Hazel. It hurt. Why didn’t he want me anymore?
Photo by GoaShape on Unsplash
My consciousness drifted in and out of my body. I thought it was morning. I thought it was night. I thought I was going crazy. Time distorted and mixed events nonlinearly. Camel colored mud slithered over our skin, cool then itchy. We ate pizza. We knocked on a trailer door, needing a phone. We were at Ed’s again? Or before? A man wore a yellow hardhat. Jäger shots out of paper cups. We were too dirty to come inside. A backyard with a garden hose, a dog barked — his yap like scraping metal. A winch pulled the Jeep out of the muck as I rubbed tan mud on my arms. The horn was stuck upside down. Water droplets on the grass turned to emeralds and diamonds. The Jeep wailed like a beached whale. We four-wheeled in a muddy construction site. Fun until we flipped.
“Get out of the Jeep, Tammy.” Ed parked in front of my cottage.
“There’s something wrong with me. I think I’m going crazy.” Fear swirled inside me. My blood felt prickly under my skin. I was hot and cold intermittently. “Please don’t leave me. The acid hasn’t worn off. It should have worn off by now.” I hated my desperation.
“There’s nothing wrong with you. We’ve been up for twenty-four hours. Go sleep.”
He didn’t understand. Something was wrong with my brain, and if I lost control, if I slept and gave my brain over to my subconscious, I might not come back. I clung to the Jeep. I would not budge.
“Fine.” He peeled out, issuing a golden dust cloud.
He drove for what seemed like miles from my cottage, turned off the Jeep, pocketed the keys, and left. Jogged away. I cried. I shivered. I leaned over and threw up pink pizza guts. I recognized the grocery across the street and then the gas station. Neither were open. Time didn’t make sense. I walked and walked and walked. Found my tiny house.
I laid on my couch waiting to die. I made a deal with God, if he let me live, I would never do drugs again. | https://medium.com/inspired-writer/drug-deal-with-god-8314f9479499 | ['Tam Francis'] | 2020-12-21 13:02:21.480000+00:00 | ['Drugs', 'Perception', 'Philosophy', 'Nonfiction', 'Weed'] |
A day spent at the Singapore Cloud and Datacenter Convention 2019 | According to forbes.com, more data has been created in the past two years than in the entire previous history of the human race. And by 2020, one third of all data will pass through the cloud.
Data centers are mushrooming, cloud services are flourishing.
On the 11th of July, I attended the Cloud and Datacenter Convention at the Sand Expo Convention Centre in Singapore, where around 1000 professionals and 30 exhibitors gathered to share and discuss future thinking and lessons learned in cloud and data innovation. Here’s my summary of some key ideas I took away pre-, during and post-convention.
Sustainability was a key topic where aspects of cost, talent and technology were discussed.
The Natural Resources Defense Council (NRDC) estimates that data centers consume up to 3% of all global electricity production. Powering and cooling the massive amounts of equipment in a data center accounts for around 40% of its total operational cost. To keep the cost of running a data center down, an efficient cooling system is required and is one of the most important factors. Liquid cooling systems and other innovative wiring layouts that promote better airflow were among the methods discussed. Cooling systems are especially important for countries near the equator like ours.
The cost of building the data centers is another big concern. More and more businesses are depending on data centers and cloud services and no one can afford downtime. Modern data centers are built to withstand winds of 200 km/h and 9.0 magnitude of earthquakes.
The rapid growth of data centers leads to higher demand for talent, not only to build and maintain the data centers but also to manage them. It is a segment that has not previously been emphasised. Almost half of the talent in the field now will be retiring in 8 years, and most of the jobs available 5 years down the road may not exist yet. Schools are not currently offering courses that would supply the industry with sufficient and suitable talent, with only a few courses available in the United States and Europe. As a speaker pointed out, these tend to be either too US-centric or EU-centric.
There is a real talent crunch. Students might not see the needs of the future so reaching out to students before they embark on their study might lure in more future talent.
This leads to handing over some managing tasks to artificial intelligence (AI) with China taking the lead in conducting research in this field.
Data is just 1s and 0s if users do not perform meaningful analysis on it. Data is widely available now, and using it responsibly is a virtue that we need to instill in the younger generation.
Regulation and data sovereignty were topics I found very interesting. Most speakers on one of the panels agreed that data sovereignty does not guarantee data security. Also, regulation in one industry should enable other industries rather than hinder their development. For example, data collected in the transportation industry should enable the development of road infrastructure.
As for the future of cloud services and data centers, more and more applications are cloud native. Edge computing and hyperscale data centers are gaining attention too. Data processing for the Internet of Things (IoT) needs more frequent connections between devices and the cloud, and devices that sync data with the cloud database more frequently will achieve better performance.
My conclusion to the vast amount of information I received in 6 hours?
The talent crunch is real, and the Blockchain world faces a similar one. Education and training are important. As a trainer, I am glad to be actively participating in the development of this segment through both online and offline training. In my opinion, it is a good move for NEM to develop its online training portal to help train others in using our blockchain technology.
Peer-to-peer networks like Blockchain need cloud services. How would data centers affect Blockchain? How could AI, IoT and Blockchain be effectively integrated through services by data centers? When technologies converge, they are more powerful. | https://medium.com/nemofficial/a-day-spent-in-singapore-cloud-and-datacenter-convention-2019-5b20359cf6ad | ['Nem Official', 'Editors'] | 2019-09-24 12:44:18.687000+00:00 | ['Event', 'Cloud Computing', 'Nem Blockchain', 'Nem Foundation', 'Technology'] |
Why Do More Buying Choices Cause Unhappiness? | Have you ever felt unhappy about a purchase you made despite spending hours reading product descriptions and reviews, comparing dozens of options, and finally choosing what you perceived to be the best deal? I faced the same problem many years ago, when I still lacked the knowledge of effective shopping techniques around buying choices.
We make shopping mistakes because of how our brain is wired and because retailers use human psychology to their advantage by manipulating the shopping process, particularly in digital contexts. Amazon and other retailers want us to spend as much time as possible on their websites to tempt us with a variety of add-ons and options and to cause us FOMO (fear of missing out) on the best possible deal. This drains our time and wallets — and even our happiness. It’s a good thing you can learn more about the psychological dangers of shopping through cutting-edge research in behavioral economics and cognitive neuroscience.
Choose + Buy = Happiness?
Tom’s wife usually does the grocery shopping in the family, but she had the flu so Tom went instead. Selecting the fruits and veggies went fine, but he hit a wall when he got to the bread section. There were over 60 varieties to choose from. Tom examined the ingredients and made comparisons; he wanted to get it right, after all. After 10 minutes of deliberation, he picked one that seemed like the perfect choice.
However, he had to repeat the process for the rest of the packaged goods. Different brands offered a host of choices, and his wife’s usual shopping list that said “bread” or “cheese” didn’t help. By the time he was finished shopping and paid for everything, he was tired and miserable.
Why did Tom have this kind of experience? Shouldn’t he be happy that there were many choices in the supermarket? After all, mass media presents the narrative that abundance of choice equates with happiness.
According to neuroscience and behavioral economics research, the real story is more complicated than that. While having some options makes us feel good, once we get beyond that small number, the more choices we get, the less happy we feel.
For example, in one study, shoppers at a food market saw a display table with free samples of 24 different types of gourmet jam. On another day in the same market, the display table had 6 different types of jam. The larger display attracted more interest, but people who saw the smaller selection were 10 times more likely to purchase the jam, and they felt better doing so compared with those who had to select among the larger display.
This phenomenon was later named “choice paralysis”, referring to the fact that beyond a certain minimal number of choices, additional options cause us to feel worse about making a decision and also make us less likely to decide in the first place. This applies to both major and minor decisions in life, such as choosing a retirement plan or something as simple as an ice cream flavor.
Loss Aversion & Post-Purchase Rationalization of Buying Choices
Why do more choices cause unhappiness? Well, one typical judgment error we make because of the wiring in our brains is called loss aversion. Our gut reactions prefer avoiding losses to making gains. This is probably because of our evolutionary background; our minds evolved for the savanna environment, not for our modern shopping context. Due to this, when we have lots of options, we feel anxious about making the wrong choice and losing out on the best one.
Even having the opportunity to change your mind can be problematic. As it turns out, the benefit of having the option to exchange a product or get a refund is a myth. Another counterintuitive behavioral economics finding shows that people prefer to have the option to refund their purchases but feel more satisfied if the shopping decision is irreversible.
This is due to a phenomenon called post-purchase rationalization, which is also called choice-supportive bias. Research finds that after making a final decision, we try to justify it. We focus on the positives and brush off the negative aspects. After all, if you’re a smart person, you would not make a bad purchase, right? However, if the choice can be reversed, this post-purchase rationalization doesn’t turn on, and we’ll keep thinking about whether it was the right choice.
In-Person vs. Online Shopping in Buying Choices
Online shopping in many ways facilitates a more unhappy shopping experience. Let’s start with choices. According to research, most consumers have the same process of online decision making. The shopping process divides into two stages: first, lightly screen a large set of products to come up with a smaller subset of potential options; second, perform an in-depth screening of the items in this subset.
The much wider selection of products online, compared to a brick-and-mortar store, gives online shoppers the opportunity to examine a greater number of potential options. We know we tend to like more options, believing (wrongly) that the more options we examine, the happier we will feel about our final choice. As a result, we make ourselves less happy by examining even more products online, without even seeing the damage we’re doing to our happiness.
Another counterintuitive problem: it’s easier to return items you purchased online than in a store. For a store, you have to drive back there, wait in line to explain what went wrong, and then head back home. By contrast, most large online retailers will ask you to print a return label from their website and then ship the item back to them. Often, they will also pay the shipping fee. This process is easier and takes much less time, so many shoppers see their online shopping decisions as tentative, thereby making themselves unhappy.
Another challenging aspect of online shopping concerns data privacy and security. Shoppers feel unhappy about the extensive tracking of their data online. Smart consumers know about and feel concerned about the risks involved in online shopping because of how online retailers store and sell their information.
How Can Your Buying Choices Promote Happiness?
Digging into research on the factors that made my shopping an unhappy experience years ago helped me improve my buying decisions. When choosing what to buy, the number one technique is satisficing as opposed to maximizing. This is backed up by extensive research involving both in-person and online shopping.
Maximizing behavior refers to finding the perfect option when shopping. Maximizers exhaust all available options to make sure that they get the best deal in terms of performance, price, and so on. They have high expectations, and they anticipate that the product will fulfill this promise.
It’s the opposite for satisficers. They set certain minimal criteria that need to be met, then search for the first available product that meets those criteria. They look for products that are “good enough” and can get the job done, even without the bells and whistles or savings they might have found in an extended search.
Research shows that maximizing behavior results in less happiness, less satisfaction, and more regret than satisficing. This finding applies especially in societies that value individual choice highly, such as Western Europe and the United States. In societies that place less focus on individual choice, such as China, maximizing has only a slight correlation with unhappiness, yet it still contributes to it.
To be happier, satisfice and limit your choices! Make a shortlist that compares a reasonable number of options and doesn’t include every product available. There’s no such thing as the perfect deal. Buying something that gets the job done, without excessive searching, is going to make you happier in the long run.
When you’re shopping in person, avoid Tom’s problems by skipping big supermarkets with a gazillion options of every product. Instead, go to grocery stores with a small selection of acceptable products, whether Aldi for cheaper prices or Trader Joe’s for higher quality. You don’t need 40 types of butter, do you? Just 4 will do. If you really need to go to the supermarket, save yourself the hassle of choosing from so many varieties by going for the store brand every time.
When shopping in-person and especially online, it helps to get objective information in advance to limit your options. Make sure to use credible product reviews and media sources for these.
You will also probably feel happier about a purchase by ignoring free return or refund offers, unless the product is defective. Treat each shopping decision as final and irreversible, and get post-purchase rationalization working for you. Combine this with satisficing to get great results, because when you focus on “good enough”, your brain automatically highlights the positives, downplays the negatives, and lowers your expectations.
Key Takeaway
More buying choices lead to less happiness. To make better shopping decisions, satisfice and limit your options. There is no such thing as the perfect deal, so look for products that are good enough.
Questions to Consider (please share your answers below)
When was the last time you were dissatisfied with a purchase despite spending hours comparing products and reading online reviews?
Is there anything in the article that will help you become a satisficer?
Which next steps will you take based on reading this article?
Adapted version of an article originally published in Top10.com
Image credit: Pixabay/StockSnap
Bio: Dr. Gleb Tsipursky is an internationally-recognized thought leader on a mission to protect leaders from dangerous judgment errors known as cognitive biases by developing the most effective decision-making strategies. A best-selling author, he is best known for Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters (Career Press, 2019), The Blindspots Between Us: How to Overcome Unconscious Cognitive Bias and Build Better Relationships (New Harbinger, 2020), and Resilience: Adapt and Plan for the New Abnormal of the COVID-19 Coronavirus Pandemic (Changemakers Books, 2020). He has over 550 articles and 450 interviews in Inc. Magazine, Entrepreneur, CBS News, Time, Business Insider, Government Executive, The Chronicle of Philanthropy, Fast Company, and elsewhere. His expertise comes from over 20 years of consulting, coaching, and speaking and training as the CEO of Disaster Avoidance Experts, and over 15 years in academia as a behavioral economist and cognitive neuroscientist. Contact him at Gleb[at]DisasterAvoidanceExperts[dot]com, Twitter @gleb_tsipursky, Instagram @dr_gleb_tsipursky, LinkedIn, and register for his free Wise Decision Maker Course.
| https://medium.com/datadriveninvestor/why-do-more-buying-choices-cause-unhappiness-ede92ffb262f | ['Dr. Gleb Tsipursky'] | 2020-11-03 17:22:41.862000+00:00 | ['Behavioral Economics', 'Psychology', 'Shopping', 'Cognitive Bias'] |
New Tools for Funders: Supporting DEI in Journalism | By Angelica Das, Democracy Fund, and Katie Donnelly and Michelle Polyak, Dot Connector Studio
As part of Democracy Fund’s efforts to address diversity, equity, and inclusion (DEI) in journalism, Dot Connector Studio has developed two tools — the Journalism DEI Tracker and the Journalism DEI Wheel — to help funders and journalists understand the complete landscape of the field, including resources and strategies for advancing DEI within journalism.
Our recent report, Advancing Diversity, Equity, and Inclusion in Journalism: What Funders Can Do, revealed that DEI within journalism is an under-funded area, and recommended that funders share more resources on this topic across a diverse pool of grantees. These two tools are designed to help funders do just that. The Journalism DEI Tracker catalogs information and resources on DEI in journalism, and the Journalism DEI Wheel allows funders and stakeholders to focus on particular solutions for advancing DEI within journalism by demonstrating the range of strategies and focus areas to consider.
To put it simply, the Journalism DEI Tracker tracks the who and the what of the field; the Journalism DEI Wheel captures the how.
1. The Journalism DEI Tracker
The Journalism DEI Tracker is a regularly-updated online database that identifies organizations, news outlets and projects, and educational institutions working to support DEI in journalism across the country. It also collects resources related to diversity, equity, and inclusion in journalism. Foundations can use the Journalism DEI Tracker as a first-step guide for identifying prospective grantees, as well as to find useful resources to share with current grantees. Journalism organizations and other stakeholders can use it to find opportunities for professional development, recruitment, collaboration, and resources to improve their coverage.
The Journalism DEI Tracker includes:
Professional organizations that support women journalists and journalists of color
News outlets and projects led by and serving women journalists and journalists of color
Professional development and training opportunities for women journalists and journalists of color (grants, scholarships, fellowships, and leadership training)
Academic institutions with journalism and communications programs to include in recruitment efforts to ensure a more diverse pipeline (Historically Black Colleges and Universities, Hispanic Serving Institutions, and Tribal Colleges)
Resources for journalism organizations to promote respectful and inclusive coverage (industry reports, diversity style guides, curricula, and toolkits)
2. The Journalism DEI Wheel
Designed to be complementary to the Journalism DEI Tracker, the Journalism DEI Wheel is meant to help funders in particular inform grantmaking by seeing the bigger picture on a higher level, with useful examples and resources for further illumination. Funders can explore the spokes of the Journalism DEI Wheel to see how DEI in journalism is currently being addressed across key areas: education and training; organizational culture; news coverage; engagement; distribution; innovation; evaluation; the larger journalism industry; and funding.
Each area is divided into smaller points of intervention. For example, if you click on “Education/Training,” you will see opportunities to advance DEI in journalism through high school programs, college programs, scholarships, internships, fellowships, mid-career programs, and executive training. Click on any one of these to learn more and find specific examples, including lists of relevant initiatives on the Journalism DEI Tracker.
The Journalism DEI Wheel demonstrates that there are many areas for addressing DEI in journalism. A funder may be focused on one aspect — say, improving news coverage — but not considering other aspects that may be related, such as improving newsroom culture. Of course, no single funder can — or should! — address every possible point of intervention, but viewing the range of possibilities can help illuminate gaps in current portfolios and identify new opportunities.
Not all areas are equally resourced. For example, there is a dearth of publicly-available resources available for journalism organizations when it comes to DEI in hiring, leadership, and general organizational culture. This is particularly disconcerting when we know that there are well-documented leadership gaps in the broader nonprofit field for people of color, women, and LGBTQ individuals. There is a clear need for leaders of DEI-focused journalism organizations to have up-to-date information on not just legal requirements, but also best practices in hiring, evaluation, and promotion. And, as our recent report shows, there is a clear need for funders to support such efforts.
We hope you will use these tools to inform your work, spark conversations among colleagues, and continue to promote this critically important work. We welcome your feedback: let us know how the tools are working for you, and how we can continue to improve them.
Email us at EJlab@democracyfund.org with any additions, corrections, or suggestions for improvement. | https://medium.com/the-engaged-journalism-lab/new-tools-for-media-funders-supporting-dei-in-journalism-47f3e9e4a202 | ['Angelica Das'] | 2019-10-24 15:39:16.220000+00:00 | ['Diversity', 'Tools', 'Equity', 'Journalism', 'Inclusion'] |
Review: ParseNet — Looking Wider to See Better (Semantic Segmentation) | 1. ParseNet Module
ParseNet Module
Actually, ParseNet is as simple as the figure above shows.
Normalization Using l2 Norm for each channel
At the lower path, at a given conv layer, normalization using the l2 norm is performed for each channel.
At the upper path, at the same conv layer, we perform global average pooling of the feature maps and then normalization using the l2 norm. Unpooling simply replicates the values of the globally average-pooled vector to match the size of the lower path so that the two can be concatenated.
Features are at different scales at different layers
The reason for having the L2 norm is that earlier layers usually have larger values than later layers.
The above example shows that features at different layers have different scales of values. After normalization, all features have the same value range, and they are all concatenated together.
And a learnable scaling factor γ for each channel is also introduced after normalization:
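Following the formulation in the ParseNet paper, for a d-dimensional feature vector x at each pixel, the l2 normalization and the per-channel scaling can be written as:

```latex
\hat{x} = \frac{x}{\lVert x \rVert_2}, \qquad
\lVert x \rVert_2 = \left( \sum_{i=1}^{d} \lvert x_i \rvert^2 \right)^{1/2}, \qquad
y_i = \gamma_i \, \hat{x}_i
```

The scaling factor γ is learned jointly with the rest of the network by backpropagation, letting each channel recover a useful dynamic range after being normalized to unit length.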
3. Results
3.1. SiftFlow
SiftFlow Dataset
By adding and normalizing pool6 + fc7 + conv5 + conv4 using the ParseNet module on FCN-32s, 40.4% mean IOU is obtained, which is better than FCN-16s.
3.2. PASCAL Context
PASCAL Context Dataset
By adding and normalizing pool6 + fc7 + conv5 + conv4 + conv3 using the ParseNet module on FCN-32s, 40.4% IOU is obtained, which is better than FCN-8s.
We can also see that, without normalization, the ParseNet module does not work well.
3.3. PASCAL VOC 2012 | https://medium.com/datadriveninvestor/review-parsenet-looking-wider-to-see-better-semantic-segmentation-aa6b6a380990 | ['Sik-Ho Tsang'] | 2019-03-20 15:57:51.438000+00:00 | ['Convolutional Network', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning'] |
How to Get Back in the Habit of Reading Books After Graduating from University | How to Get Back in the Habit of Reading Books After Graduating from University
Since leaving university, I’ve read 883 books
Image by StockSnap from Pixabay
School may have killed your passion for the printed word, but it can be brought back to life.
You have not been condemned to some kind of a bookless desert-future where you’ll never read anything longer than 140 characters ever again.
There’s a spark left where your love of books used to be, and I’m going to help you find it, and re-ignite it into a Guy Montag-sized flame.
Since leaving university, I’ve read 883 books (I counted lol), and my, shall we say, “Passion for Proust,” my “Hankering for Hemingway,” my “Desire for DeLillo” (sorry), remains undiminished.
Now, if you haven’t even looked at a physical book since you last closed your final exam booklet; if you used to love reading, but you can’t get excited about books any more because of all the required reading you’ve had to do for school; or if it’s been so long since you’ve had your life changed by a book that you weren’t forced to read that you’ve resigned yourself to your bookless fate — then this article’s for you.
If you’re a really hard case, we may have to resort to some extreme measures, but as Kafka said, reading great books is like taking an axe to the frozen sea inside us. So what we’re going to do is thaw you out a little bit, and welcome you back into the warm, inviting world of books and literature.
But first, some really good news:
You’re way ahead of most people already.
See, a lot of would-be readers struggle with some pretty nasty confidence issues resulting from a lifetime of non-reading. Books have never really been a big part of their lives, and thus the so-called Great Books take on this sort of mythical quality that can be quite intimidating.
I’ve even felt it before, and I’m no stranger to the Classics. I mean, I swear I’ve never even heard of James Patterson (I’m kidding; I’m not actually a snob, I just play one on the internet).
But people like you and me probably got into university because we loved to read.
We were pretty damn good at it, ahead of most of our classmates probably, and I bet that, in the beginning, no one had to force you to pick up a book instead of a remote.
You were (are) a reader. Reading was just what you did.
Half the battle is simply realizing that your reading instincts have never really left you in the first place. They may have been dormant for a little while, but they’re still there. Books can become a part of your life again. Here’s how. | https://medium.com/the-innovation/how-to-get-back-in-the-habit-of-reading-books-after-graduation-125b94611789 | ['Matt Karamazov'] | 2020-12-22 16:32:46.471000+00:00 | ['Life Lessons', 'Self Improvement', 'Reading', 'Education', 'Books'] |
Huellas del Coronavirus | | https://medium.com/conexo-vc-es/huellas-del-coronavirus-3b4bd8ee39a1 | ['Isaac De La Peña'] | 2020-09-09 01:45:12.954000+00:00 | ['Covid 19', 'España', 'Venture Capital', 'Portugal', 'Coronavirus'] |
Why What React? & Its basics… | React from Facebook
HELLO FRIENDS!!!
In this article, we are going to learn and answer the below-mentioned questions…
1. What is React?
2. Why React?
After that, we will be taking a look at the basics of React.
What is React
React.js is an open-source JavaScript library that is used for building user interfaces specifically for single-page applications.
React allows us to create reusable UI components. React was first created by Jordan Walke, a software engineer working for Facebook. It was first deployed on Facebook’s newsfeed in 2011 and on Instagram.com in 2012.
React allows developers to create large web applications that can change data, without reloading the page.
This corresponds to the view in the MVC pattern. It can be used in combination with other JavaScript libraries or frameworks, such as AngularJS.
React JS is also simply called React or React.js.
In simple words,
React is a library to build UIs which are fast, scalable and simple.
It uses different components to build a single full-fledged app.
This whole process can be understood as a puzzle: we need to put the correct component in the correct place, and once every component is in the right place, the app is complete.
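To make the component idea concrete, here is a minimal, hypothetical sketch (the component names are illustrative):

```javascript
import React from 'react';

// A component is one reusable "puzzle piece" of the UI.
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}

// Pieces compose into bigger pieces until the whole app is assembled.
function App() {
  return (
    <div>
      <Greeting name="React" />
      <Greeting name="World" />
    </div>
  );
}

export default App;
```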
But the catch is that the components are to be built by the developers. | https://medium.com/quick-code/why-what-react-its-basics-5abf9a6caa2f | ['Bhavishya Negi'] | 2020-05-03 00:53:53.012000+00:00 | ['React', 'Reactjs', 'Web Development', 'Programming', 'India'] |
Creative Destruction or Just Destruction? An Analysis of Fortune 100 Companies in 1955 and 2020 | The small number of companies that appear on the Fortune 500 or Fortune 100 lists in both 1955 and 2020 is not due to creative destruction, and it does not symbolize the strength of the U.S. economy, as some claim. For instance, the American Enterprise Institute’s (AEI) analysis found that only 52 companies on the Fortune 500 in 1955 were still on the list in 2020, a conclusion I do not dispute. It claims that “The fact that nearly nine of every 10 Fortune 500 companies in 1955 are gone, merged, reorganized, or contracted demonstrates that there’s been a lot of market disruption, churning, and Schumpeterian creative destruction over the last six decades.” It continues: “The constant turnover in the Fortune 500 is a positive sign of the dynamism and innovation that characterizes a vibrant consumer-oriented market economy, and that dynamic turnover is speeding up in today’s hyper-competitive global economy[1].”
But is creative destruction the reason for the small number of companies remaining on the Fortune 500, creative destruction that is led by young vibrant American firms introducing highly productive new technologies? I admit that these are my words and not the words of the AEI. Nevertheless, behind the AEI’s claim that the small number of remaining companies is due to “market disruption, churning, and Schumpeterian creative destruction” is the notion that American companies are doing the disrupting and thus Americans are benefiting from this creative destruction through higher productivity, incomes and standard of living.
We know that the last part of the last sentence is not true. Productivity data demonstrates a clear and persistent growth slowdown over the last 80 years, as many readers, particularly those who are followers of Robert Gordon, will know[2]. This trend has continued, and a judging panel noted that “the 2010s were the worst decade for productivity growth since the early 19th century”[3], despite the positive impact of globalization on productivity. The continued slowdown suggests there was also a slowdown in technological and innovative output in recent decades, an issue we can also address using the change in companies in the Fortune 100 between 1955 and 2020.
I categorized each company in the Fortune 100 into sectors, and in some cases industries, in order to see how the list of companies has changed between 1955 and 2020[4]. The reasons for the rise and fall of these industries are then discussed, drawing from many historical sources. From this analysis, this article concludes that the main reason for the small number of remaining companies is the evolution of the American economy away from manufacturing to services, an evolution driven more by foreign competition and financial engineering than by creative destruction from small American companies.
As shown in Table 1, the number of manufacturing and oil companies fell from 74 and 14 to 20 and 9 respectively, while the number of financial/insurance, information and communication technology, health care, and retail companies rose from 3 in total to 22, 17, 12, and 11 respectively (a total of 62). The ICT companies represent the biggest disruption by new American startups, innovation, venture capital, and IPOs[5], while the manufacturing companies declined mostly because of foreign competition; they were replaced by old banks and insurance companies, many of which were founded in the 19th century. In fact, the average age of companies increased from 63 years in 1955 to 100 years in 2020, not a sign of newly founded startups disrupting old-line companies. M&A were also a big driver of change with the rise and fall of conglomerates, hostile takeovers, and other financial engineering, which reduced the number of oil and manufacturing companies.
Delving into these trends in more detail, the number of oil, tire, auto, steel, food, and chemical companies also fell due to foreign competition, consolidation, and some technological change. The number of oil companies dropped from 16 to 6 through consolidation, not technological change, despite the rise of fracking. Exxon and Mobil merged, as did Chevron and Texaco, with the latter two firms acquiring Union Oil, Unocal, and Pure Oil along the way. Sinclair Oil and ARCO were acquired by British Petroleum, and BP America was no longer considered an entity for the Fortune 500 in 2020. There were no fracking companies in the Fortune 100 in 2020.
The number of auto and tire companies fell from 9 to 2 from both consolidation and foreign competition, but again not from technological change. Electric vehicles had less than 2% of the market in 2019 and Tesla is still far from being a Fortune 500 much less a Fortune 100 company. Instead, component and tire suppliers were either acquired or driven out of business by Japanese and other competition. Firestone was acquired by Bridgestone, Uniroyal and Goodrich by Michelin, and Goodyear still exists at a much smaller scale. Although there was some innovation by Japanese companies in terms of manufacturing techniques, there were no large product innovations and overall, it was Japanese companies doing the innovation and not American ones.
Steel was also decimated by foreign competition, initially from Japan and later from China, causing the number of steel companies in the Fortune 100 to fall from six to zero. Bethlehem and National Steel went bankrupt and the others (Armco, Youngstown Sheet & Tube) still exist as much smaller companies; Republic Steel and Jones & Laughlin merged to form LTV. Unlike the auto, rubber, and oil industries, however, the basic oxygen furnace and continuous casting were examples of creative destruction, which resulted in big productivity advantages for the Japanese and European producers.
Other metals such as aluminum and lead were also impacted by foreign competition, particularly from China, and by creative destruction from plastics. For instance, plastic bottles have replaced a significant fraction of aluminum cans and glass bottles (and plastics have also reduced demand for steel in many assembled products). The result is that the number of other metal producers dropped from four to zero, with Alcoa and Reynolds Aluminum leaving the list. As an aside, several glass bottle and canning companies also fell off the list between 1955 and 2020. Despite the growth in plastic usage, however, the number of chemical companies fell from eight to one. Dow and DuPont merged and acquired Union Carbide along the way. Monsanto was acquired by Bayer and thus is no longer an American company.
The number of food companies also dramatically decreased, falling from 20 to four, probably because more meals are eaten outside or are delivered to homes. Mergers impacted on General Foods, Kraft, Standard Brands, and Pillsbury, yet one resulting entity, Kraft Heinz, is not in the Fortune 100. The two food companies in the Fortune 100, Tyson Food and Archer Daniel Midland, might be considered disruptors because of Tyson’s innovations in chicken processing using assembly lines and ADM’s emphasis on intermediate food products.
Other new companies include retail, healthcare, finance/insurance, drug, and information & communication technology. The last two are certainly the case of new technologies disrupting old ones, and retail, might also be considered an example of disruption. The number of drug companies rose from zero to four and the number of ICT companies rose from 6 to 17. Their stories are told elsewhere so there is no need to tell them here. Nevertheless, the increases from 6 to 17 does not tell the whole story because the 6 on the 1955 list were analog telephone (AT&T), radio/TV (RCA, CBS), and early computer (IBM, Sperry) companies, nothing like the semiconductor, personal computer, software and Internet companies that were to follow.
Retail companies might also be considered examples of technology disruption because they used information technology to increase product variety and manage increasingly complicated supply chains. Companies such as Walmart, Costco, Walgreens, Kroger, Home Depot, Target, Lowe’s Albertsons, and Best Buy sell us a remarkable number of different products at low prices, courtesy of computers, software and other devices. Retailers such as Albertsons might also be considered food disruptors because they have brought a greater variety of food to consumers.
The healthcare, finance, and insurance companies have been the biggest replacements for the manufacturing companies that dominated the 1955 list. They represented more than half the companies on the 2020 list, but most are old companies. Most of the finance and insurance companies can trace their roots back more than 100 years, with some going back to before America’s Civil War. These companies have slowly grown over time, benefiting from the deregulation that allowed them to cross state lines and thus become huge national banks, investment companies, and insurance providers. Just as healthcare now represents about 18% of GDP[6], and finance and insurance about 8%[7], companies from these industries represent 34% of the Fortune 100.
In summary, the American Enterprise Institute and many others have misinterpreted the reasons for the small number of 1955 companies that still remain on the 2020 list. The evolution of the Fortune 100 is not a symbol of creative destruction, it merely reflects the evolution of the American economy, from manufacturing to services, driven mostly by foreign competition, hostile takeovers, and the rise and fall of conglomerates. Although some of this was driven by innovation, particularly in the ICT and drug sectors, innovation played a small role in the decline in the number of steel, auto, tire, chemical, and oil companies on the list.
Does this tell us something about the future? It probably tells us that we can expect continued changes in the companies, but little changes in products and processes and thus few improvements in productivity. Analyses of startups lead to the same conclusions[8]. If we want a better future, we need to rethink how R&D and innovation are done.
[1] https://fee.org/articles/comparing-1955s-fortune-500-to-2019s-fortune-500/
[2] The Rise and Fall of American Growth, Robert Gordon, 2016, Princeton University Press
[3] https://www.ft.com/content/8d7ef9b2-24b4-11ea-9a4f-963f0ec7e134
[4] Here are the companies for 1955 (https://archive.fortune.com/magazines/fortune/fortune500_archive/full/1955/) and those for 2020 (https://fortune.com/fortune500/2020/search/). I added Ford to the 1955 list because for some reason Fortune failed to put it on the list.
[5] https://medium.com/@jeffreyleefunk/the-most-valuable-startups-founded-since-1975-none-have-been-founded-since-2004-8bc142b67051
[6] https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical#:~:text=U.S.%20health%20care%20spending%20grew,spending%20accounted%20for%2017.7%20percent.
[7] https://fred.stlouisfed.org/series/VAPGDPFI
[8] https://medium.com/@jeffreyleefunk/what-will-happen-to-todays-privately-held-unicorns-valued-at-1-4-trillion-13f507797487
https://medium.com/@jeffreyleefunk/why-are-todays-startup-unicorns-doing-worse-than-those-of-the-past-1c8ece718ab0 https://medium.com/@jeffreyleefunk/are-there-any-industries-in-which-ex-unicorns-are-profitable-747eca652170
https://medium.com/@jeffreyleefunk/how-successful-are-todays-startup-unicorns-893043f32d24 | https://medium.com/swlh/creative-destruction-or-just-destruction-an-analysis-of-fortune-100-companies-in-1955-and-2020-91a36f60287d | ['Jeffrey Lee Funk'] | 2020-10-28 05:58:16.849000+00:00 | ['Disruption', 'Innovation', 'Startup', 'Venture Capital', 'Technology'] |
The Top Ten Most Profitable Fitness Apps Markets | Phones are no longer simply a means of communication, like they were five years ago. Now, they help us pay for purchases. They can be used as a ticket on public transport or used as a door key, car alarm, video recorder, or portable PC. They even let us talk to someone named Siri.
Mobile apps have significantly changed our way of life, and they have also helped some to get rich.
In our second research project (after the mobile games market), we aimed to assess which markets are the most promising for developers. We took the market of mobile fitness apps as an example.
76% of revenues in the fitness app market are generated by the first 10 countries:
It’s interesting to compare the top mobile app markets with the sports activity rating (this rating is based on information from each country’s sporting events, which have their own “weight” according to special criteria: the scale of the event, its impact on the world of sport and society, participation of representatives of different countries and continents, etc.).
KEY FACTS ABOUT THE FITNESS APP MARKET
According to Newzoo, the revenue of the mobile app market reached $44.8bn in 2016 (61.8bn according to App Annie) and is projected to grow to $80.6bn in 2020 (to 139.1bn in 2021, according to App Annie). 81–82% of the revenue was gained by games.
Categories of non-gaming apps in order of decreasing revenue (Newzoo):
social networks (usually dating sites),
entertainment (videos),
music,
books,
education,
productivity,
photography,
medicine,
health and fitness.
According to Statista, the volume of the mobile fitness app market was $1.778bn in 2016, and it is expected to grow to $4.1bn in 2021. Thus, based on the App Annie figures, fitness apps accounted for about 15% of non-gaming app revenue in 2016, and the share will be approximately 12% in 2021, which is not insignificant.
The market of mobile fitness apps is evolving in large part because more and more people seek a healthy lifestyle. As a result, new devices helping people stay in shape have appeared and the Internet of Things is growing. In 2015, according to the Globe-Go company, apps in the Health and Fitness category ranked second by growth in time spent in-app, giving way only to music apps.
KEY PLAYERS IN THE ONLINE-FITNESS MARKET
The main players in the online fitness market (according to Statista) are shown below. All of them produce sportswear and sports footwear, fitness bracelets, and other gadgets, which generate their main revenue. In addition, all of these companies have their own apps on Google Play and the App Store, which also produce additional proceeds.
Despite the fact that the market for fitness bracelets, trackers, smart clothes and watches is larger in volume than the fitness apps market, the latter is growing faster and is projected to catch up with the physical devices market in 2021.
FOUR KEY MOBILE FITNESS APPS
We compared the data of four apps which regularly hold one of the top revenue positions in the Health and Fitness category in the App Store and Google Play.
1. Sweat: Kayla Itsines Fitness by The Bikini Body Training Company is owned by Australian fitness trainer and entrepreneur Kayla Itsines, number 51 in the list of the richest young entrepreneurs of Australia.
Despite the fact that Kayla didn’t launch the Android version of her app until April 2016, she managed to earn $14.56m in 2016 (more than 30% of the company’s $46m total revenue). According to Bloomberg, the Sweat app earned in 2016 more than any other app in the Health & Fitness category.
The app provides nutrition recommendations and a series of aerobic exercises in the Bikini Body Guide (BBG) to practice at home.
According to the Facebook Audience tool (data for June 2017), 99% of Kayla’s page followers in Facebook are women.
2. Calorie Counter & Diet Tracker by MyFitnessPal is the revenue leader in the mobile app stores among this company’s vast list of apps. The MyFitnessPal platform is owned by Under Armour, an American manufacturer of sportswear and footwear.
The company’s revenue, according to its annual report, amounted to $4.83bn in 2016, and 1.6% of this revenue came from the Connected Fitness Platform ($80.447m, an increase of 66% compared to the previous year), which includes all of the company’s apps. $8.8m came from the Calorie Counter & Diet Tracker app.
The app makes it easier to count the calorie content of dishes and make a plan to reduce calories for weight loss.
According to the Facebook Audience tool, 77% of followers on the app’s Facebook page are women.
3. Headspace: Guided Meditation and Mindfulness by the US company Headspace. The company was founded in 2010 in the UK by Rich Pierson and Andy Puddicombe, a former Buddhist monk and meditation expert.
The Android version of the app wasn’t launched until 2016, but the company managed to earn $5.78m by the end of the year.
The app contains auxiliary materials for home meditation, nutrition recommendations and materials for dealing with depression and anxiety (especially during pregnancy).
According to the Facebook Audience tool, 70% of followers on the company’s Facebook page are women.
4. Runtastic Results: Workout & Strength Training by the Austrian company Runtastic (bought by Adidas in 2015 for $239m). The company has 20 different mobile apps, and Runtastic Results is the leader among them in terms of revenue. In 2015, the company earned €11m (according to the annual Adidas report). The revenue of the entire Adidas company was €19.291bn in 2016.
In 2016, the Runtastic Results Workout app made Adidas $3.6m.
The app contains a set of exercises for working out at home without special equipment and recommendations for nutrition.
According to the Facebook Audience tool, women constitute only 45% of the followers of the official Runtastic company page on Facebook.
LOCALIZATION AND REGION-SPECIFIC FEATURES
A few interesting facts about these countries in the context of mobile apps:
In China, a sleep tracker is one of the most profitable fitness apps.
Fitness apps for meditation are popular in the India market.
The successful launch of an Android app in China is only possible if you reach agreements with at least ten local app markets, or even better, twenty.
Localization into the languages of the 10 countries with the largest mobile app revenues will allow coverage of not only these countries, but also others (e.g., Bengali is the official language of the People’s Republic of Bangladesh). We have included Spanish in the list of languages. After English, it is the second most popular language in the USA, and it is included in the traditional EFIGS + CJK localization list for mobile platforms.
Allcorrect is one of the leading localizers of mobile apps. We have localized more than 400 mobile apps and will be happy to help you with the localization of yours. If you have any research-related questions or suggestions, please write to us at order@allcorrect.com; we will be happy to answer. | https://medium.com/software-and-games-localization/the-top-ten-most-profitable-fitness-apps-markets-448dbbdded6c | ['Allcorrect Blog'] | 2017-07-11 12:41:08.536000+00:00 | ['Apps', 'Mobile Apps', 'Mobile App Development', 'Fitness', 'Mobile Marketing'] |
Epic App Update! Send crypto anywhere in the world in seconds | You have probably heard the news that Crypterium is now running the fastest crypto transactions in the world.
Crypto transfers in 1 second to people who don’t even have crypto wallets? It’s a reality now.
Send crypto to your mom, your friend, your ex-girlfriend from Iceland. You can even give crypto as a present for someone’s birthday.
Download Crypterium App in Apple Store or Google Play!
About Crypterium
Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash. Shop around the world and pay with your coins and tokens at any NFC terminal, or via scanning the QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny.
Join our Telegram news channel or other social media to stay updated!
Website ๏ Telegram ๏ Facebook ๏ Twitter ๏ BitcoinTalk ๏ Reddit ๏ YouTube ๏ LinkedIn | https://medium.com/crypterium/send-crypto-anywhere-in-the-world-in-seconds-1cf9f04febbb | [] | 2018-11-10 09:56:12.508000+00:00 | ['Cryptocurrency', 'Bitcoin', 'Finance', 'Technology', 'Mobile App Development'] |
3 Lessons on Customer Empathy: A Recap of Unbounce’s Call to Action Conference | 1. Use SEO to Solve People’s Problems, and Google will Reward You.
Though marketers use Google daily, it’s easy to forget that people turn to search to find an answer in a time of need, not for sport. When we try to use SEO to force searchers into a funnel, we aren’t solving their problem, and Google can see their disappointment when this happens (thanks to machine learning!)
As Wil Reynolds, Founder of SEER Interactive, puts it, “When I do my job well, I’m solving problems that people are searching for.”
To illustrate the difference between forcing searchers into a funnel and solving their problems, I did a search for “compare SEO companies” and came up with some surprising results:
Notice the paid results at the top are all SEO companies — not comparisons of SEO companies. These ads don’t answer my question, and what’s worse, the first two companies aren’t even local results. “Searchberg.com” is based in New York, and “Zebratechies.com” is based in India — how is this relevant when I live in Vancouver?
Below the paid results are the organically ranked results, and thankfully, they actually provide an answer to my question. These results are a great indicator of the kind of content you need to provide searchers to rank organically for this search term, and this content is worth imitating.
To recap: If you want to rank organically, you have to solve people’s problems.
2. Focus on “Jobs to Be Done” Vs. Traditional Personas.
“Personas can’t tell us what was happening in a person’s life that led to a decision.”
— Claire Suellentrop, Founder, Love Your Customers
Customer personas are the standard tool used to understand your target market and what motivates them to buy. The only problem is, they often focus on shallow demographic details that paint a limited picture of a customer’s motivations for buying. I’ve created a sample customer persona based on me:
Sample Sam:
Career: Marketing Coordinator
Age: 20-Something
Marital Status: Single (no kids)
Location: Urban
Online Behaviours: Browses Reddit religiously. Prefers Instagram over other social media platforms. Relies on online ratings for online shopping and refers to them for in-store purchases.
Goals and Challenges: Become a better marketer. Balance work, friends, family, and career development.
While this might tell you a little bit about what my motivations are and what’s important to me, it can’t truly inform you about my buying habits.
Specifically, from the persona above, can you infer why I bought these shorts?
I’ll save you the effort: you can’t.
Instead of solely focusing on creating customer personas, identify a customer’s “job to be done” — the task they’re trying to accomplish by buying a particular product or service.
In my case, I’m going to Europe in a month, and in an effort to tan my legs, remain comfortable while walking/hiking all day, and possibly look not dumpy when out for dinner, these shorts satisfied my needs.
When you focus on what job a customer is trying to accomplish, you learn a lot more about what motivates anybody to buy your product. Claire Suellentrop, my newfound hero for imparting this wisdom, has conveniently outlined how to do this with a fill-in-the-blank template: "When [situation], help me [get this job done] so I can [desired outcome]."
To further personalize this, "help me" can be replaced with any other relevant term; in the example below, it becomes "equip me."
In my case, my motivation for buying shorts looks like this:
“When I’m switching between hiking in the countryside and exploring the city, equip me so I can feel appropriately dressed no matter where my travels take me.”
Now doesn’t that paint a better picture?
3. Use Positioning to Provide the Context in Which Your Value is the Most Obvious to Customers.
Positioning is one of the most important aspects of marketing strategy, and, as April Dunford notes, “bad positioning can kill even a great product.” At this stage, it is imperative you understand your customers’ pain points inside out and show them exactly how your product is the right solution.
So, how is this done? Well, it can be simpler than you think:
1. Listen to what your customers say about how your product addresses their pains.
2. Integrate this feedback into your value propositions, positioning statement, and content. Forever.
Following this method establishes your product in the context of a solution to a specific problem they already know they have, making your product relevant to their lives.
So, what does this look like in practice? Speaker, Amy Harrison, Founder of Harrison Amy Copywriting, provides an example with Corsodyl, a mouthwash for gum disease. Albeit unconventional, Corsodyl does a perfect job of telling customers how it addresses their pains:
Customers immediately know what Corsodyl does. Furthermore, the language used is plain enough that it looks exactly like a searcher’s input into Google, which means customers will easily relate to the messaging. In this way, Corsodyl has provided the context in which the value of their product is immediately obvious to the customer. | https://medium.com/insights-from-the-incubator/3-lessons-on-customer-empathy-a-recap-of-unbounces-call-to-action-conference-d2dbef5d843e | ['Samantha Grandinetti'] | 2017-08-30 20:47:19.504000+00:00 | ['Marketing', 'Digital Marketing', 'Digital Marketing Agency', 'Digital Marketing Tips', 'Customer Focus'] |
React Native at Airbnb: The Technology | This is the second in a series of blog posts in which we outline our experience with React Native and what is next for mobile at Airbnb.
React Native itself is a relatively new and fast-moving platform in the cross-section of Android, iOS, web, and cross-platform frameworks. After two years, we can safely say that React Native is revolutionary in many ways. It is a paradigm shift for mobile and we were able to reap the benefits of many of its goals. However, its benefits didn’t come without significant pain points.
What Worked Well
Cross-Platform
The primary benefit of React Native is the fact that code you write runs natively on Android and iOS. Most features that used React Native were able to achieve 95–100% shared code and 0.2% of files were platform-specific (*.android.js/*.ios.js).
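As a hypothetical illustration of how those platform-specific files work (the file and component names are mine, not from the codebase), the packager resolves the extension automatically, so shared call sites stay identical on both platforms:

```javascript
// RowSeparator.ios.js -- picked automatically when bundling for iOS
import React from 'react';
import { View } from 'react-native';

export default () => <View style={{ height: 1, backgroundColor: '#e4e4e4' }} />;

// RowSeparator.android.js -- renders nothing, matching Android conventions
export default () => null;

// ListingRow.js -- shared code; no platform check needed at the call site
import RowSeparator from './RowSeparator';
```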
Unified Design Language System (DLS)
We developed a cross-platform design language called DLS. We have Android, iOS, React Native, and web versions of every component. Having a unified design language was amenable to writing cross-platform features because it meant that designs, component names, and screens were consistent across platforms. However, we were still able to make platform-appropriate decisions where applicable. For example, we use the native Toolbar on Android and UINavigationBar on iOS and we chose to hide disclosure indicators on Android because they don’t adhere to the Android platform design guidelines.
We opted to rewrite components instead of wrapping native ones because it was more reliable to make platform-appropriate APIs individually for each platform and reduced the maintenance overhead for Android and iOS engineers who may not know how to properly test changes in React Native. However, it did cause fragmentation between the platforms in which native and React Native versions of the same component would get out of sync.
React
There is a reason that React is the most-loved web framework. It is simple yet powerful and scales well to large codebases. Some of the things we particularly like are:
Components: React Components enforce separation of concerns with well-defined props and state. This is a major contributor to React’s scalability.
Simplified Lifecycles: Android and, to a slightly lesser extent, iOS lifecycles are notoriously complex. Functional reactive React components fundamentally solve this problem and made learning React Native dramatically simpler than learning Android or iOS.
Declarative: The declarative nature of React helped keep our UI in sync with the underlying state, as the sketch below illustrates.
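A generic sketch (not code from our app): the rendered UI is derived entirely from state, so the two cannot drift apart:

```javascript
import React from 'react';
import { Text, TouchableOpacity } from 'react-native';

export default class LikeButton extends React.Component {
  state = { liked: false };

  toggle = () => this.setState(prev => ({ liked: !prev.liked }));

  render() {
    // Declarative: describe what the UI should look like for the current
    // state; React applies the difference -- no imperative view mutation.
    return (
      <TouchableOpacity onPress={this.toggle}>
        <Text>{this.state.liked ? 'Liked' : 'Like'}</Text>
      </TouchableOpacity>
    );
  }
}
```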
Iteration Speed
While developing in React Native, we were able to reliably use hot reloading to test our changes on Android and iOS in just a second or two. Even though build performance is a top priority for our native apps, it has never come close to the iteration speed we achieved with React Native. At best, native compilation times are 15 seconds but can be as high as 20 minutes for full builds.
Investing in Infrastructure
We developed extensive integrations into our native infrastructure. All core pieces such as networking, i18n, experimentation, shared element transitions, device info, account info, and many others were wrapped in a single React Native API. These bridges were some of the more complex pieces because we wanted to wrap the existing Android and iOS APIs into something that was consistent and canonical for React. While keeping these bridges up to date with the rapid iteration and development of new infrastructure was a constant game of catch up, the investment by the infrastructure team made product work much easier.
Without this heavy investment in infrastructure, React Native would have led to a subpar developer and user experiences. As a result, we don’t believe React Native can be simply tacked on to an existing app without a significant and continuous investment.
Performance
One of the largest concerns around React Native was its performance. However, in practice, this was rarely a problem. Most of our React Native screens feel as fluid as our native ones. Performance is often thought of in a single dimension. We frequently saw mobile engineers look at JS and think “slower than Java”. However, moving business logic and layout off of the main thread actually improves render performance in many cases.
When we did see performance issues, they were usually caused by excessive rendering and were mitigated by effectively using shouldComponentUpdate, removeClippedSubviews, and better use of Redux.
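For instance, a shallow check like the hypothetical one below stops a row from re-rendering when an unrelated part of the store changes:

```javascript
import React from 'react';
import { Text, View } from 'react-native';

class ListingRow extends React.Component {
  // Only re-render when the props this row actually displays have changed.
  shouldComponentUpdate(nextProps) {
    return (
      nextProps.title !== this.props.title ||
      nextProps.price !== this.props.price
    );
  }

  render() {
    return (
      <View>
        <Text>{this.props.title}</Text>
        <Text>{this.props.price}</Text>
      </View>
    );
  }
}

export default ListingRow;
```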
However, the initialization and first-render time (outlined below) made React Native perform poorly for launch screens, deeplinks, and increased the TTI time while navigating between screens. In addition, screens that dropped frames were difficult to debug because Yoga translates between React Native components and native views.
Redux
We used Redux for state management, which we found effective: it prevented the UI from ever getting out of sync with state and enabled easy data sharing across screens. However, Redux is notorious for its boilerplate and has a relatively steep learning curve. We provided generators for some common templates, but it was still one of the most challenging pieces and a source of confusion while working with React Native. It is worth noting that these challenges were not React Native specific.
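The boilerplate is easiest to see in a minimal sketch (the action and state names here are hypothetical): a single field of state requires an action type, an action creator, and a reducer case:

```javascript
// Action type and action creator for one field of app state.
const SET_CHECKIN_DATE = 'SET_CHECKIN_DATE';
const setCheckinDate = date => ({ type: SET_CHECKIN_DATE, date });

// Reducer: a pure function mapping (state, action) -> next state.
const initialState = { checkinDate: null };

function searchReducer(state = initialState, action) {
  switch (action.type) {
    case SET_CHECKIN_DATE:
      return { ...state, checkinDate: action.date };
    default:
      return state;
  }
}
```

Multiply this by every field on every screen and the ceremony adds up, which is what the generators for common templates were meant to reduce.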
Backed by Native
Because everything in React Native can be bridged by native code, we were ultimately able to build many things we weren’t sure were possible at the beginning such as:
Shared element transitions: We built a <SharedElement> component that is backed by native shared element code on Android and iOS. This even works between native and React Native screens.
Lottie: We were able to get Lottie working in React Native by wrapping the existing libraries on Android and iOS.
Native networking stack: React Native uses our existing native networking stack and cache on both platforms.
Other core infra: Just like networking, we wrapped the rest of our existing native infrastructure such as i18n, experimentation, etc. so that it worked seamlessly in React Native.
Static Analysis
We have a strong history of using eslint on web, which we were able to leverage. We were also the first platform at Airbnb to pioneer prettier. We found it to be effective at reducing nits and bikeshedding on PRs. Prettier is now being actively investigated by our web infrastructure team.
We also used analytics to measure render times and performance to figure out which screens were the top priority to investigate for performance issues.
Because React Native was smaller and newer than our web infrastructure, it proved to be a good testbed for new ideas. Many of the tools and ideas we created for React Native are being adopted by web now.
Animations
Thanks to the React Native Animated library, we were able to achieve jank-free animations and even interaction-driven animations such as scrolling parallax.
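A minimal sketch of a scroll-driven parallax with the Animated library (illustrative values, written with the modern hooks API for brevity):

```typescript
import React, { useRef } from 'react';
import { Animated } from 'react-native';

const ParallaxHeader = () => {
  const scrollY = useRef(new Animated.Value(0)).current;

  return (
    <Animated.ScrollView
      // Feed scroll offsets into scrollY on the native side so the
      // animation stays jank-free even if the JS thread is busy.
      onScroll={Animated.event(
        [{ nativeEvent: { contentOffset: { y: scrollY } } }],
        { useNativeDriver: true }
      )}
      scrollEventThrottle={16}
    >
      <Animated.Image
        source={{ uri: 'https://example.com/header.jpg' }}
        style={{
          height: 240,
          // Translate at half the scroll speed for the parallax effect.
          transform: [
            {
              translateY: scrollY.interpolate({
                inputRange: [0, 240],
                outputRange: [0, 120],
              }),
            },
          ],
        }}
      />
    </Animated.ScrollView>
  );
};
```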
JS/React Open Source
Because React Native truly runs React and JavaScript, we were able to leverage the vast array of JavaScript projects such as redux, reselect, jest, etc.
Flexbox
React Native handles layout with Yoga, a cross-platform C library that handles layout calculations via the flexbox API. Early on, we were hit with Yoga limitations such as the lack of aspect ratio support, but it has been added in subsequent updates. Plus, fun tutorials such as Flexbox Froggy made onboarding more enjoyable.
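For example, the aspect-ratio support mentioned above is exposed as a style prop; a small sketch (illustrative values):

```typescript
import React from 'react';
import { Image, View } from 'react-native';

const Card = ({ uri }: { uri: string }) => (
  <View style={{ flexDirection: 'row', justifyContent: 'center' }}>
    <Image
      source={{ uri }}
      // Width is resolved by flexbox; Yoga derives the height
      // from the declared aspect ratio.
      style={{ flex: 1, aspectRatio: 16 / 9 }}
    />
  </View>
);
```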
Collaboration with Web
Late in the React Native exploration, we began building for web, iOS, and Android at once. Given that web also uses Redux, we found large swaths of code that could be shared across web and native platforms with no alterations.
What didn’t work well
React Native Immaturity
React Native is less mature than Android or iOS. It is newer, highly ambitious, and moving extremely quickly. While React Native works well in most situations, there are instances in which its immaturity shows through and makes something that would be trivial in native very difficult. Unfortunately, these instances are hard to predict and can take anywhere from hours to many days to work around.
Maintaining a Fork of React Native
Due to React Native’s immaturity, there were times in which we needed to patch the React Native source. In addition to contributing back to React Native, we had to maintain a fork in which we could quickly merge changes and bump our version. Over the two years, we had to add roughly 50 commits on top of React Native. This made the process of upgrading React Native extremely painful.
JavaScript Tooling
JavaScript is an untyped language. The lack of type safety both made scaling difficult and became a point of contention for mobile engineers used to typed languages who might otherwise have been interested in learning React Native. We explored adopting Flow, but cryptic error messages led to a frustrating developer experience. We also explored TypeScript, but integrating it into our existing infrastructure, such as babel and metro bundler, proved to be problematic. However, we are continuing to actively investigate TypeScript on web.
Refactoring
A side effect of JavaScript being untyped is that refactoring was extremely difficult and error-prone. Renaming props, especially props with a common name like onClick or props passed through multiple components, was a nightmare to refactor accurately. To make matters worse, the refactors broke in production instead of at compile time, and it was hard to add proper static analysis for them.
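For contrast, a sketch of how typed props would have caught such a rename at compile time (TypeScript shown for illustration):

```typescript
import React from 'react';
import { Text, TouchableOpacity } from 'react-native';

interface ButtonProps {
  label: string;
  onPress: () => void; // renamed from onClick
}

const Button = ({ label, onPress }: ButtonProps) => (
  <TouchableOpacity onPress={onPress}>
    <Text>{label}</Text>
  </TouchableOpacity>
);

// Any call site still passing onClick now fails to compile instead
// of silently breaking in production:
// <Button label="Book" onClick={handleBook} />
```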
JavaScriptCore inconsistencies
One subtle and tricky aspect of React Native is that it executes in a JavaScriptCore environment. The following are consequences we encountered as a result:
iOS ships with its own JavaScriptCore out of the box. This meant that iOS was mostly consistent and not problematic for us.
Android doesn’t ship its own JavaScriptCore so React Native bundles its own. However, the one you get by default is ancient. As a result, we had to go out of our way to bundle a newer one.
While debugging, React Native attaches to a Chrome Developer Tools instance. This is great because it is a powerful debugger. However, once the debugger is attached, all JavaScript runs within Chrome’s V8 engine. This is fine 99.9% of the time. However, in one instance, we got bitten when toLocaleString worked on iOS but, on Android, only worked while debugging. It turns out that the Android JSC doesn’t include it, so it was silently failing in production; while debugging, the code ran on V8, which does include it. Without knowing technical details like this, product engineers can lose days to painful debugging.
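A defensive sketch of the kind of guard this pitfall forces on you (illustrative only):

```typescript
// The bundled Android JSC may lack full toLocaleString support, while
// the V8 engine used during Chrome debugging has it, so a naive call
// can silently fall back to plain toString() in production.
function formatCount(n: number): string {
  const formatted = n.toLocaleString('en-US');
  // Detect the silent fallback: grouping separators are missing.
  if (n >= 1000 && !formatted.includes(',')) {
    return groupThousands(n);
  }
  return formatted;
}

// Minimal manual fallback with comma grouping (integers only).
function groupThousands(n: number): string {
  return String(n).replace(/\B(?=(\d{3})+(?!\d))/g, ',');
}
```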
React Native Open Source Libraries
Learning a platform is difficult and time-consuming. Most people only know one or two platforms well. React Native libraries that have native bridges, such as maps, video, etc., require equal knowledge of all three platforms to be successful. We found that most React Native open source projects were written by people who had experience with only one or two. This led to inconsistencies or unexpected bugs on Android or iOS.
On Android, many React Native libraries also require you to use a relative path to node_modules rather than publishing Maven artifacts, which is inconsistent with what the community expects.
Parallel Infrastructure and Feature Work
We have accumulated many years of native infrastructure on Android and iOS. However, in React Native, we started with a blank slate and had to write or create bridges for all existing infrastructure. This meant that there were times in which a product engineer needed some functionality that didn’t yet exist. At that point, they either had to build it themselves, in a platform they were unfamiliar with and outside the scope of their project, or be blocked until it could be created.
Crash Monitoring
We use Bugsnag for crash reporting on Android and iOS. While we were able to get Bugsnag generally working on both platforms, it was less reliable and required more work than it did on our other platforms. Because React Native is relatively new and rare in the industry, we had to build a significant amount of infrastructure such as uploading source maps in-house and had to work with Bugsnag to be able to do things like filter crashes by just those that occurred in React Native.
Due to the amount of custom infrastructure around React Native, we would occasionally have serious issues in which crashes weren’t reported or source maps weren’t properly uploaded.
Finally, debugging React Native crashes was often more challenging if the issue spanned React Native and native code, since stack traces don’t jump between React Native and native.
Native Bridge
React Native has a bridge API to communicate between native and React Native. While it works as expected, it is extremely cumbersome to write. Firstly, it requires all three development environments to be properly set up. We also experienced many issues in which the types coming from JavaScript were unexpected. For example, integers were often wrapped by strings, an issue that isn’t realized until it is passed over a bridge. To make matters worse, sometimes iOS will fail silently while Android will crash. We began to investigate automatically generating bridge code from TypeScript definitions towards the end of 2017 but it was too little too late.
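One way to defend against the type surprises described above is to normalize every value as it comes off the bridge; a sketch:

```typescript
// Numbers sometimes arrive wrapped in strings, so coerce and fail
// loudly on this side of the bridge rather than crashing deeper in.
function asNumber(value: unknown, field: string): number {
  if (typeof value === 'number') return value;
  if (typeof value === 'string') {
    const parsed = Number(value);
    if (!Number.isNaN(parsed)) return parsed;
  }
  throw new TypeError(
    `Bridge payload field "${field}" is not numeric: ${String(value)}`
  );
}

// Usage: const listingId = asNumber(payload.listingId, 'listingId');
```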
Initialization Time
Before React Native can render for the first time, you must initialize its runtime. Unfortunately, this takes several seconds for an app of our size, even on a high-end device. This made using React Native for launch screens nearly impossible. We minimized the first-render time for React Native by initializing it at app launch.
Initial Render Time
Unlike with native screens, rendering React Native requires at least one full main thread -> js -> yoga layout thread -> main thread round trip before there is enough information to render a screen for the first time. We saw an average initial p90 render of 280ms on iOS and 440ms on Android. On Android, we used the postponeEnterTransition API which is normally used for shared element transitions to delay showing the screen until it has rendered. On iOS, we had issues setting the navbar configuration from React Native fast enough. As a result, we added an artificial delay of 50ms to all React Native screen transitions to prevent the navbar from flickering once the configuration was loaded.
App Size
React Native also has a non-negligible impact on app size. On Android, the total size of React Native (Java + JS + native libraries such as Yoga + the JavaScript runtime) was 8 MB per ABI. With both x86 and arm (32-bit only) in one APK, it would have been closer to 12 MB.
64-bit
We still can’t ship a 64-bit APK on Android because of this issue.
Gestures
We avoided using React Native for screens that involved complex gestures because the touch subsystems on Android and iOS are different enough that coming up with a unified API has been challenging for the entire React Native community. However, work continues to progress, and react-native-gesture-handler just hit 1.0.
Long Lists
React Native has made some progress in this area with libraries like FlatList. However, they are nowhere near the maturity and flexibility of RecyclerView on Android or UICollectionView on iOS. Many of the limitations are difficult to overcome because of the threading. Adapter data can’t be accessed synchronously so it is possible to see views flash in as they get asynchronously rendered while scrolling quickly. Text also can’t be measured synchronously so iOS can’t make certain optimizations with pre-computed cell heights.
Upgrading React Native
Although most React Native upgrades were trivial, there were a few that wound up being painful. In particular, it was nearly impossible to use React Native 0.43 (April 2017) to 0.49 (October 2017) because it used React 16 alpha and beta. This was hugely problematic because most React libraries that are designed for web use don’t support pre-release React versions. The process of wrangling the proper dependencies for this upgrade was a major detriment to other React Native infrastructure work in mid-2017.
Accessibility
In 2017, we did a major accessibility overhaul in which we invested significant efforts to ensure that people with disabilities can use Airbnb to book a listing that can accommodate their needs. However, there were many holes in the React Native accessibility APIs. In order to meet even a minimum acceptable accessibility bar, we had to maintain our own fork of React Native where we could merge fixes. For these cases, a one-line fix on Android or iOS wound up taking days: figuring out how to add it to React Native, cherry-picking it, then filing an issue on React Native core and following up on it over the coming weeks.
Troublesome Crashes
We have had to deal with a few very bizarre crashes that are hard to fix. For example, we are currently experiencing this crash on the @ReactProp annotation and have been unable to reproduce it on any device, even those with identical hardware and software to ones that are crashing in the wild.
SavedInstanceState Across Processes on Android
Android frequently cleans up background processes but gives them a chance to synchronously save their state in a bundle. However, on React Native, all state is only accessible in the js thread so this can’t be done synchronously. Even if this weren’t the case, redux as a state store is not compatible with this approach because it contains a mix of serializable and non-serializable data and may contain more data than can fit within the savedInstanceState bundle which would lead to crashes in production. | https://medium.com/airbnb-engineering/react-native-at-airbnb-the-technology-dafd0b43838 | ['Gabriel Peal'] | 2018-06-25 15:57:00.044000+00:00 | ['Android', 'React', 'React Native', 'iOS', 'Mobile'] |
Tips to advance towards a career in Machine Learning | Machine Learning is a branch of Artificial Intelligence that gives a machine the ability to automatically learn and improve from an experience, rather than being explicitly programmed. ML is amongst the hottest career choices in 2020. Companies like Google, Amazon, Microsoft, Oracle, and many others are making a transition towards Machine Learning and Artificial Intelligence.
There are various career options in the field of Machine Learning such as Machine Learning Engineer, Machine Learning Analyst, NLP Data Scientist, Data Sciences Lead, and Machine Learning Scientist.
To help the aspiring Machine Learning enthusiasts, here are some tips which will help you build a career in Machine Learning. So, without any further ado, let us jump on to the prerequisites of Machine Learning.
Understanding the Prerequisites
The realm of machine learning comes with its own sets of requirements as well as qualifications. A career in machine learning is a steep one and requires you to have basic skill sets to start your journey into the industry.
To advance your career in Machine Learning, you must have exposure to the following skills:
Stats and probability
Algorithms
Applied mathematics
Hands-on experience in programming languages like Python, C++, Java or R.
And distributed computing
To kick start your career, start working on the skills you lack. Get a book on probability, brush up your coding skills, and work on your weaker areas. Along with that, you can also take up a substantial course on artificial intelligence and machine learning.
Take online courses
To advance your career in machine learning, you have to broaden your skill set as much as possible. As a heads-up, you can start with online courses; then, to hone your skills and refresh your knowledge, you can participate in various competitions in this field, such as those on Kaggle and Analytics Vidhya.
Another popular approach is to accelerate your learning with the help of bootcamps.
Practice as much as possible
If you want to master Machine Learning skills, you need to practice on datasets. You must know the process of working on a problem end-to-end. Learn to map that process onto a tool and practice the process on a dataset in a targeted way.
Work on real-world machine learning problems to see how it can actually be used in fields like education, science, technology, and medicine. You can take the machine learning problems from Kaggle.com right now to build your machine learning model.
Working on different projects will give you the required experience for a job and will make your resume stand out. The certificate from your institution and the number of projects you have worked on matter a lot. Now that you have worked on building basic knowledge and are able to understand what you are learning, it is time to start working on some machine learning projects that can help you polish your skills.
Try collaborating with your peers on projects to constantly upskill yourself; it will give you practical exposure and help you meet industry requirements.
Build a machine learning portfolio
Designers and artists use a portfolio to showcase examples of prior work to prospective clients and employers. Your machine learning portfolio should be a compilation of your completed independent projects, each of which uses machine learning in some way. The portfolio should be accessible, small, completed, independent, and most importantly, understandable.
A set of your finished projects will offer a knowledge base for you to reflect on and leverage as you push into projects further from your comfort zones. You can have a code repository such as GitHub or BitBucket as well to list your projects.
Such sites encourage you to provide a readme file to clearly describe the purpose and findings for each project. You can even add images, graphs, videos, and links in your code repositories. Give instructions to download the project and recreate the results as well. Want people to re-run your work? Try making it as accessible and easy as possible.
Bottom line
Now that you have an idea of how to build a career in Machine Learning, it is time for you to pull up your socks. Know your strengths and work towards the weaknesses to stay ahead of your competitors. Enroll yourself in a Machine Learning course and build a strong foundation for a rewarding career in Machine Learning. | https://medium.com/codingninjas-blog/machine-learning-is-a-branch-of-artificial-intelligence-that-gives-a-machine-the-ability-to-352370bb53ef | ['Coding Ninjas'] | 2020-02-03 12:52:03.070000+00:00 | ['Data Science', 'Artificial Intelligence', 'Machine Learning'] |
Designing for accessibility is not that hard | Designing for accessibility is not that hard
Seven easy-to-implement guidelines to design a more accessible web ❤️
Digital accessibility refers to the practice of building digital content and applications that can be used by a wide range of people, including individuals who have visual, motor, auditory, speech, or cognitive disabilities.
There’s a myth that making a website accessible is difficult and expensive, but it doesn’t have to be. Designing a product from scratch that meets the requirements for accessibility doesn’t add extra features or content; therefore, there shouldn’t be additional cost and effort.
Fixing a site that is already inaccessible may require some effort, though. When I used to work at Carbon Health, we checked the accessibility of our site using the AXE Chrome Extension. We found 28 violations that we needed to solve on the home page alone. It sounded complicated, but we discovered that these problems were not that hard to correct; it was just a matter of investing time and research to solve them. We were able to get to zero errors in a couple of days.
I want to share with you some of the simple steps we took so you can also make your sites more accessible. These principles focus on web and mobile accessibility.
But before we get started, let’s talk about why that’s important.
Why designing for accessibility? 🤔
As designers, we have the power and responsibility to make sure that everyone has access to what we create regardless of ability, context, or situation. The great thing about making our work accessible is that it brings a better experience to everyone.
There are over 56 million people in the United States (nearly 1 in 5) and over 1 billion people worldwide who have a disability. In 2017, there were 814 website accessibility lawsuits filed in federal and state courts. These two pieces of data alone should convince us of the importance of designing for accessibility.
There is also a strong business case for accessibility: studies show that accessible websites have better search results, they reach a bigger audience, they’re SEO friendly, have faster download times, they encourage good coding practices, and they always have better usability.
These seven guidelines are relatively easy to implement and can help your products get closer to meeting Level AA of the Web Content Accessibility Guidelines (WCAG 2.0), and work with the most commonly used assistive technologies — including screen readers, screen magnifiers, and speech recognition tools.
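As a concrete taste of the first guideline below (color contrast), here is a sketch of the contrast-ratio math WCAG 2.0 defines; AA requires at least 4.5:1 for normal text:

```typescript
// Relative luminance of an sRGB color, per the WCAG 2.0 definition.
function relativeLuminance(r: number, g: number, b: number): number {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between foreground and background colors, 1 to 21.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// contrastRatio([255, 255, 255], [0, 0, 0]) === 21, the maximum.
```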
1. Add enough color contrast 🖍 | https://uxdesign.cc/designing-for-accessibility-is-not-that-hard-c04cc4779d94 | ['Pablo Stanley'] | 2018-06-29 21:14:50.301000+00:00 | ['User Experience', 'Accessibility', 'UX', 'Design', 'Design Thinking'] |
This Is How to Declutter Your Brain so You Can Achieve Higher-Level Thinking | This Is How to Declutter Your Brain so You Can Achieve Higher-Level Thinking
And produce results you didn’t think you were capable of.
Photo by lifecoachcode
Your brain is like your home. If there’s garbage everywhere, you will feel stressed and function at a drastically lower level.
Higher-level thinking looks like this:
Your brain comes up with ideas. A few of them are good.
You can access effortless levels of creativity — helpful if you’re a writer, an entrepreneur, musician, comedian, or actor.
You don’t walk around with huge levels of brain fog that clouds your sense of judgment.
When your brain is cluttered, you blow up at every tiny thing. You need a clear mind to achieve extraordinary results.
First, let’s toss out the mess from your brain:
1. What you can’t control
There’s a lot in your life you can’t control. The fastest way to clutter your brain with B.S. is to think a lot about what you can’t control. A question to change your life:
“What are you going to do about it?”
This question acts as a pattern interrupt. You break up the story of that which you can’t control when you force yourself to take personal responsibility. What you can’t control directs you when you let it.
2. Junk food for your brain
Anger and frustration are junk food for your brain. They don’t get you anywhere. Turn anger and frustration into action. Don’t worry about revenge — focus on results. Results make anger look stupid, rather than you.
3. People-centric thinking
Thinking about people is low-level thinking. Your brain is smarter when you focus on ideas. Humans are a messed up species and critiquing them is hilarious when you think deeply about it.
You really want people to act the right way, be kind, respect your views, be generous towards you, believe the same as you, get on with everybody? Seriously, this is an immature view of the world. Don’t worry about judging people. This is brain clutter.
Change yourself to change the world.
4. Money driven obsessions
Money is an obsession. When you think money will solve all your problems you mess up your potential. Money solves nothing. America has lots of money. Do they have many problems these days?
The time you spend thinking about money can be better utilized.
5. Your grandmother’s inheritance
Yep, throw the old girl’s stuff away (joking). The junk you hoard takes up space in your physical world.
Try to think clearly while you’re standing in the middle of a rubbish dump that smells like a public toilet after a hundred beer guts have taken a half-time moment of liquid relief. You can’t. It’s hard to put your finger on it.
Inspirational thoughts that drive you towards action aren’t going to happen amongst a pile of junk. Your brain is too busy focusing on where all your physical possessions belong. Other people’s stuff is the hardest to throw away.
There’s a broken sense of connection, or you feel like you owe it to them to hold onto it all. Meanwhile, upstairs in your brain, your thinking becomes cluttered the same way your physical world is.
Putting stuff away declutters your mind. | https://medium.com/the-ascent/this-is-how-to-declutter-your-brain-so-you-can-achieve-higher-level-thinking-67de03bc9116 | ['Tim Denning'] | 2020-12-13 17:03:29.796000+00:00 | ['Psychology', 'Self Improvement', 'Inspiration', 'Learning', 'Work'] |
The Toughest Essay | Photo by Green Chameleon on Unsplash
Write an essay about the one thing you love,
Was the toughest assignment I ever got.
To compartmentalise you into
Introduction, body, and conclusion
Was not possible —
You needed no introduction,
The largest part of the essay
Was too small to describe you,
And I’d hate our love to have a conclusion.
I would require a lifetime
To write a narrative essay
That would tell our story
Without missing the minutest of details.
Our love was too vast
To paint a picture through
A descriptive essay;
It simply wouldn’t fit the canvas.
To write an expository essay
Was an onerous task.
Facts, statistics, and numbers
Wouldn’t be proof enough of the way I felt.
Only a fool would need sound reasoning
In the form of a persuasive essay
To make our love look convincing;
Our love was never debatable.
I could write a thousand words about you,
But only three would matter the most. | https://krishnabetai.medium.com/the-toughest-essay-b41567e78ed0 | ['Krishna Betai'] | 2019-04-27 18:28:48.153000+00:00 | ['Love', 'Writing', 'Poetry', 'NaPoWriMo', 'National Poetry Month'] |
Take Out — Out Take. * Extra-long comic!* When you’re afraid… | * Extra-long comic!*
When you’re afraid ordering in will lead to your last supper. | https://backgroundnoisecomic.medium.com/take-out-out-take-e3af0b4efefd | ['Background Noise Comics'] | 2020-04-05 17:51:26.612000+00:00 | ['Humor', 'Anxiety', 'Comics', 'Quarantine', 'Coronavirus'] |
The Human Genome Is Full of Viruses | Viruses are powerful, ancient, and vital to our existence, but they are extremely simple constructions. They tend to be nothing more than a few pieces: a protein capsid, which is a simplistic and protective shell; a protein called a polymerase, which carries out most of the functions related to replicating the viral genome; and a sequence of nucleotides — either RNA or DNA — that encode for the previously mentioned viral proteins. The image below shows one of the ways that these viral components can be assembled into a unified whole. Unlike a human genome, a viral genome can be thought of as a self-contained model of the entire viral form. Within its RNA or DNA, a virus contains all the instructions necessary to create an entirely new body for itself and to replicate those same instructions. The simplicity and self-contained nature of viruses makes them phenomenal tools for biological engineering and medicine.
Viruses (specifically bacteriophages) as imaged with an electron microscope. Image Credit: Wikimedia Commons
Viruses are so simple that they don’t always need their own body to survive; they have circadian rhythms like all living things. We experience these rhythms through cycles of sleep and wakefulness, whereas viral rhythms occur as periods of dormancy between rounds of infection. Viruses don’t technically have a body during their dormant phase — they are nothing more than a string of letters in the book of the genome. But, as soon as something disturbs their sleep (like a mutation or a new virus invading the host) viruses can awaken and rebuild their physical bodies from a purely genetic form. When the wrong (or right, depending on your perspective) protein manages to leak out of a dormant viral gene, it is like the virus is suddenly awake again. A new physical body means that it has all the tools necessary to replicate.
Even beyond these rhythmic cycles, certain kinds of viruses don’t need a physical form at all. These disembodied viruses are called transposable elements, or transposons. True viruses have a body made from proteins, but transposons are mobile genetic elements — sequences of DNA that physically move in and out of genomes. For this reason, they are often referred to as “jumping genes.” Transposons do very much the same thing as true viruses, i.e. they copy and paste themselves throughout genomes. They are so similar to true viruses that some endogenous retroviruses (ERVs) are themselves transposons. As stated above, ~8% of the human genome is made up of ERVs, but nearly 50% of the human genome is made of transposons! Humans are basically just big piles of viral-like sequences.
Transposable elements (transposons) are sequences of DNA that literally jump in and out of the genome. Image Credit: Harvard University
Transposons have a disturbing capacity to disrupt important genes by inserting themselves into the DNA sequences. It’s like if a series of words in a book could physically move around from page to page — these words would have a high likelihood of jumping into the middle of a sentence, thereby making it nonsensical. Amazingly, transposons preferentially insert themselves into important and functional genes — as if those jumping words wanted to disrupt the most interesting parts of the book rather than the index or bibliography. This is a powerful evolutionary strategy, since transposons are much more likely to get “read” by a cell if they jump into the middle of an important (and therefore, active) gene.
Transposons can very easily mess up important genes that we need to survive, so it has been theorized that epigenetic mechanisms evolved to stop transposons from moving around the genome. Furthermore, since transposons can rapidly alter DNA sequences, they are thought to play a major role in the processes of evolution and speciation (how a species evolves into a new form). In plants, transposons become highly active in response to stressful conditions, and this could act as a rapid source of short-term mutation when the environment starts pressuring you to survive or die. In addition, an animal’s genome changes when they are domesticated (like going from a wolf to a dog, or from an aurochs to a cow), and a majority of these changes occur in transposon sequences. No one is really sure why or how this happens, but it is clear that viruses play a very important role in rapid genetic change. | https://medium.com/medical-myths-and-models/the-human-genome-is-full-of-viruses-c18ba52ac195 | ['Ben L. Callif'] | 2020-05-22 15:31:58.341000+00:00 | ['Viral', 'Epigenetics', 'Genetics', 'Biology', 'Science'] |
Why Being Angry In The Age of Trump Is Deadly | Image: what’s up.co.nz
Why Being Angry In The Age of Trump Is Deadly
I need to save myself, fast!
Yes, it’s about Donald Trump, and the trappings that come with managing a hostile climate that breeds ill-will towards assigned enemies, representing the utter devastation of Black lives.
And of course it didn’t take a maniacal president with a penchant for radicalizing White terrorists for anyone of us to be fully aware of the real and present danger that police brutality exacts, or to comprehend the traitorous levy of a woefully biased judicial system.
But when your existence is an alarm clock that goes off every hour on the hour without any reprieve, due to the mechanisms of high-priced platforms that were built for this season of chaos, on the scale that enables the validation of evilness from the highest office in the land — there’s the threat of slowly but surely losing your shit.
Aside from the growing and unreasonable habit of tracking Twitter accounts belonging to enemies of the state with roaring clap backs that weirdly lighten the burden of discontent, there’s also the overall feeling of emotional and physical fatigue.
Getting older is a blessing but it’s also the brutal lesson in how some things will never change. You can only control so much. And even when you give all you’ve got, the outcome won’t match that investment. You keep trying to embrace daily reminders of how your best days are still ahead, but that’s a challenging task when the demise of another year is at hand.
You do the best you can to keep your head up.
As a writer, I find solace in the words that I create with the discipline that has helped to inspire other genres of expression, thereby opening up a whole new world that needed arousal.
But the mental disease of unrelenting despair that’s briefly alleviated with the Buddhist chants and CBD tinctures is a personal pain that has to be addressed accordingly sooner rather than later. And while I was planning on seriously exploring options for therapy in the new year, the awfulness of today frightens me into going with the “sooner” route.
The tense conversation that morning with my mother has become a frequent interaction that transpires from accumulated years of eventfulness. The good, the bad, the ugly and uglier start demanding the attention that was delayed. And since the next decade looms with the danger of what unfinished business can deliver, it’s healthy to clean house when you can.
All I wanted was to head to the gym before the brisk walk to the shopping plaza to pick up necessary items. Visiting my parents in their neck of the woods is daunting when you’re not driving. As a longtime resident of New York City, I was accustomed to trekking or jogging everywhere with the least amount of obstacles.
But those privileges aren’t transferable to the rest of the country because drivers always have the right of way, which explains why it takes forever to cross the street. And when the light does change, you must run to make that small window and avoid getting hit.
The lack of sidewalks is also a bummer, and the apartment complex where my parents live tried to rectify that issue by carving out a slab that can barely fit a full-sized adult.
I noticed that as I made my way out of the entrance and towards the busy street to wait for the light. As I stood watching the cars go by, the feelings of irritation were settled in my sight. You think a lot when you’re at that age where everything is just as big a deal as you fear.
As I made my way down the hilly part after crossing the street, I was at least able to note that it was a gorgeous fall day. The two Black girls with their Chick-fil-A bags had joined my procession and walked ahead of me, and because their casual pace didn’t gel with my speed, I decided to step down from the fashioned sidewalk and come to street level.
After living in cities that traditionally devalue the safety of anything walking, I’ve been trained not to assume that drivers give a damn. This means keeping close to the curb when I’m near the street. And so I made sure to stay within the confines of the painted slab that indicated I was far enough away from the slow-moving vehicles.
Suddenly there was a startlingly loud horn from behind and the car drove forward with the irate driver yelling that I was on the sidewalk.
The two Black girls immediately commented on the driver’s rudeness, and their reaction keyed into my shock and resentment, especially when I knew that he was being an asshole for the fuck of it. I definitely wasn’t the moving object purposely blocking his view.
The burning anger was intensified by the fact that he was a White male, and coupled with the Black girls echoing and confirming the obvious hostility, I was inspired to flip the bastard off.
His advantage was being able to drive off, while I kept walking in the same direction.
I knew I was on a mission even before I could stop myself.
The sweltering inferno had devoured me in seconds, and the only way to cool off was to let it all out. He parked where I could get to him, and as he got out of his car and I got a glimpse of his gym attire, for some reason there was even more fury breathing out of me.
I stood in the middle of the parking area and gave him the finger again, which encouraged his curses, and motivated me to say something that I’ve never said before, at least not on a Saturday morning, in a family friendly shopping center.
I won’t say exactly what it was because I’m too mortified, but it was in reference to his race.
As soon as the words fell out, I found the cold chill that I thought would make me feel better.
By the time I returned to myself, my whole body was covered in sweat and I was shaking uncontrollably. Getting that angry has happened before, but it was usually relegated to the discipline of saving my energy for those who deserve it, instead of bona fide idiots.
The fact that I wasn’t willing to just keep it moving, and insisted on hunting my prey, for the duty of letting him know why he was a piece of shit, which had more to do with the color of his skin, was an unnerving realization that shook me to the core.
First off, I could’ve gotten blasted away by this dude, and the authorities would blame me for my murder based on bystanders, who would readily attest to how the enraged Black woman forced the burly White guy to defend himself.
And then there’s the glaring evidence of how the internalized data from the “summer of hate” when White folks were calling the cops on Black folks, occupying the spaces that belong to every human, must’ve stuck around for the manifestations that you don’t see coming.
We think we’re okay.
As long as the daily rituals are fulfilled without disruptions, there’s no incentive for check-ins, just to re-affirm that we’re not stealthily falling apart at the seams.
I’m not regretful for the first “fuck you,” but I hate that it escalated to the point where I became unrecognizable.
At the same time, I’m willing to give myself a break, as a Black woman who can’t afford to act out like her White counterparts without the strong likelihood that I could be arrested or worse.
But most importantly, it’s apparent that I have to proactively seek the self-care that will keep me centered and protected in the knowledge of how I’m too precious to resort to street fights with grown adult males, who have won even before the gauntlet opens.
I am most certainly angry, and this disposition retains the heat that explodes before I can duck for cover.
I don’t want to be the Black woman who downplayed how those temperatures can rise just in time for the encounters that will turn a mid-morning stroll into the journey that won’t bring me back, for reasons that weren’t worth the gamble.
Everything is about race, it always has been.
I was convinced the White guy who sternly blew that horn at me, did so as a way to embarrass and demean, and since it was blatantly unprovoked, it instinctively felt like a racist attack.
Maybe I was wrong to rush to that conclusion, but either way, I shouldn’t have invited more shit to the pile with that confrontation. And I absolutely shouldn’t have said what I said because I know better.
There are things I have to change and probably scrub away from my calendar of activities in order to prepare for the bigger assignment of tending to the exposed nerves, that are so much more entangled than I could’ve imagined.
These times are not ordinary or livable, and while we do our best to navigate through the terrain of lawlessness with the tools of disengagement, the falsehood of our mental state only buckles when real life drives through — fast and furious.
What happened makes it clear that drastic adjustments must be made.
Once I re-entered the complex after the mess, and made my way down the driveway to the apartments, paying extra attention to the painted stain of a sidewalk, I walked past a lovely young Black woman, who gave me the most amazing smile, which I happily reciprocated.
Only then did the hot tears stream down.
Yeah, I have to save myself, fast! | https://nilegirl.medium.com/why-being-angry-in-the-age-of-trump-is-deadly-4aee5f7c887d | ['Ezinne Ukoha'] | 2019-10-21 16:42:38.739000+00:00 | ['Mental Health', 'Therapy', 'Race', 'BlackLivesMatter', 'Social Media'] |
How to Reimagine Your Boring Brand | We live in a time of change, where financial markets, capital investments, and entire industries can shift rapidly. Where startups disrupt old ways of doing things. And where the consumer is in charge, expecting consistent content delivered across devices.
So it’s no surprise that businesses can fall behind or fail to distinguish themselves in such a fluid environment. If you’re looking to reinvent your brand, you want to start the process by asking lots of questions — leaving your assumptions in your wake.
One question your company will need to ask is: Do we truly understand how customers are using our products? Even an innovative company might be surprised at that answer.
SurveyMonkey began their brand update with a survey that revealed a big surprise: people weren’t using their product as a survey tool. Rather, they were using it to unleash creative thinking. Such lightbulb moments led them to a new mission: Power the Curious.
Reimagining a brand isn’t something to be taken lightly. Here are some pointers I’ve gleaned from working on branding projects for Element Three clients.
Understand Your Starting Point
Whether you’re revamping a venerable brand or rebranding after a wave of acquisitions, you need to find your starting point by researching the industry. This should be an objective process of discovering what’s new in the space, where things are now, and where things are headed. You can read my approach to industry research here.
Gather Internal Perspectives
Another important aspect of a brand overhaul is understanding the perspectives of your employees and key internal stakeholders. A marketing agency can serve as an objective third party in this area by developing an employee survey or conducting interviews with key internal stakeholders.
Employees are in a position to contribute valuable ideas that can help you overcome the status quo. As this Fast Company article says, “bad ideas are the seeds of great ideas.”
You might find an internal consensus that your company needs a new name. If that’s the case, keep in mind that naming is a beast of a process in and of itself. Call off the committee and read this post.
Know Your Audience
Customer research can take a number of different forms: from online surveys to phone interviews to in-person observations of them using your product. Interviews can be a powerful means of gathering information because you can ask follow-up questions to intriguing or unexpected responses. Let’s look at a couple of companies that made the most of insights gained from interacting with their consumers.
Pabst Blue Ribbon might not come to mind when you think of great beer, but you’ve got to hand it to them for their comeback. When their sales bottomed out in 2001, they turned to an Atlanta agency called Fizz for help, according to the HubSpot blog. There wasn’t any money for traditional advertising, so Fizz went straight to the customer to find out why they drank PBR. As it turned out, the PBR fans were “early hipsters” who avoid mainstream brands. So with this information, the agency spurred PBR’s comeback with a strategy of sponsoring events like skating parties and art gallery openings.
Lego experienced an amazing comeback of its own — one that’s been hailed as the greatest turnaround in corporate history, according to The Guardian newspaper. This revival stemmed from really understanding their audience: kids and their families. The Danish company is said to conduct the world’s largest ethnographic study of children, something they refer to as “camping with consumers.” Lego’s Global Insights group travels the world to observe how kids play with Lego and learn why some sets are more popular than others. Here are two takeaways:
Understand Your Audience Segments
Lego learned that boys tend to be more interested in the battle of good versus evil, while girls tend to want figures with greater detail and realism. Such insights helped Lego, whose customer base was largely boys, break into the girls’ market with Lego Friends, a different type of product that appeals to what they learned girls want.
Stick to Your Strengths
Lego boss Vig Knudstorp inherited theme parks, clothes, jewelry, and video games when he took over in 2001, the Guardian article said. Knudstorp sold assets — like the theme parks — for which Lego had no expertise. And under his leadership Lego has also decided to find partnerships for new projects such as movies, rather than trying to create everything themselves.
Know Your Adversaries
Another aspect of updating your brand involves clearly understanding how your competitors are positioned, what they’re saying to the marketplace, and where they’re saying it. You likely have strong feelings about certain competitors and how you stack up. But do you realize that the companies you compete with online are often a different group than the ones you compete with offline?
When we conduct a competitive audit at E3, we research how the client compares to key competitors in terms of:
Clarity of message: How easy is it to follow and understand what they’re saying?
Design and UX: What is the visual identity?
Channel efficacy: How effective is the marketing across channels?
Reputation: What’s the sentiment among customers, employees, and the industry?
Differentiators: What sets them apart?
Synthesize Your Findings
After all your research is done, it’s time to synthesize your findings. You’ll need to assess what your customers have told you they care about, what your key stakeholders think you should be communicating, and what you’re actually communicating to the marketplace.
You’ll want to consider the main themes that emerged in the research. Ask yourself questions like:
Which themes keep coming up?
Were there surprising bits of information?
How do emerging trends relate to the customer? My company? Competitors?
What are the areas of greatest opportunity?
Where are the potential threats?
How are competitors positioned? What about us?
How are we different from everyone else?
What’s a positioning we can truly own?
Answering these questions and identifying what’s lacking in your current messaging can help you craft a story that’s unique, authentic, and bold — one that can grab attention across online and offline channels.
Find Your Bold Story
Your story is what sets you apart, what pulls people in. It must be relatable. That means it’s okay to admit your failures and give people a window into the struggles you’ve experienced as a company; such transparency can help you connect with consumers on an emotional level.
Some stories take companies back to their roots. Others, like Target, find a new position that they can truly own. Target, which was going head-to-head with big-box stores like Wal-Mart, differentiated itself by becoming a fashion-forward brand that maintained its low prices. The style revamp extended beyond the products to the design of the stores.
Wherever your story takes you, it’s crucial to get key internal stakeholders on board with your direction early on. An effective internal launch can serve as a rallying cry that gets employees excited before your new story is broadcast to the world; that’s why it’s important to get the internal launch right, especially in a large company.
Element Three has significant experience in working on branding projects with clients in a variety of industries. Check out the work we did to modernize Indiana’s municipal advocacy organization.
Perspective from Derek Smith, Senior Writer and Element Three | https://medium.com/element-three/how-to-reimagine-your-boring-brand-beed650ed0c6 | ['Element Three'] | 2018-08-03 15:17:02.889000+00:00 | ['Brand Strategy', 'Marketing', 'Brands', 'Branding'] |
It’s not just a snap | It’s not just a snap
How photographs convey multiple layers of stories
Photo by Marco Xu on Unsplash
Up until 2 years ago, photographing just meant clicking a shutter to me. Then, I fell madly in love with it.
I found myself alone with an old mirrorless camera in my hand, wandering the streets of Bologna with no clue about what it meant to take a picture, except for a few basic notions about the exposure triangle: at that time, I had no idea that these cold numbers said very little about a photograph.
There is a whole world behind it, an entire narrative arc beginning with the choice of a subject and ending with an image, which of course is the sum of the story it’s meant to convey and the story of how it was conceived. Now that I look back, it’s safe to say that someone so much into verbal and written storytelling as I am could not help falling in love with photography too.
I can’t say I’m an expert right now, but if these two years of practice have taught me anything (aside from the fact that you never stop learning something you’re passionate about) it is that the technical aspect is just a small part of the whole story. It’s just an instrument to tell a story, not the story itself. I see it as the pen and paper (or the screen and keyboard) for a writer.
Because that single image, captured in a fraction of a second, tells a lot about who took it and about who looks at it as well.
First layer: planning
There is no photo without planning. Before a portrait photography session begins, the photographer sets up the scene and places the subject inside, choosing each setting in accordance with the theme of the shooting session.
In landscape photography, it’s not possible to build the entire scene from scratch as in the case of an indoor portrait or a still-life picture, but planning still plays a strong role. For starters, several apps and websites allow the photographer to get a glimpse at the spots around them, and about the best day and hour to capture them. A landscape photographer may get on the spot in advance (even a few hours earlier) and begin to experiment with the composition: this way, everything will be in place by the time the picture is meant to be taken.
Believe it or not, even street photography is not a completely instantaneous process: while this discipline might look like it’s based on instinct and luck, a good street photographer is so experienced he can kind of anticipate what has the potential to become a good setting for a photo.
If I have to compare it with a step in the writing process, it corresponds to the “What if” moment, when we begin to explore the possibility for a story to be told. From that point on, the writer/photographer will take the steps allowing them to show the world the story they painted inside their head.
Second layer: shooting
It’s the moment when photographers look at the scene in front of them and measure it against their expectations. It might meet them, exceed them, or it can even fall short, and this is where the on-site adjustments take place. In the case of a writer, it’s when they sit down and put words on paper and have a chance to see how the words feel in the outside world.
The way the light enters the scene might be slightly different from what we expected, maybe it’s brighter or dimmer. If it’s an outside shot, weather plays quite an important role, and it can change quickly. These small adjustments are the story here, together with the choices about composition.
“Composition” is the process through which the photographer decides what to include in the picture and what to leave out, and how to arrange the elements “making the cut” so that the observer’s eyes are directed where they need to be.
It’s the reason why so many times, after we’ve released the shutter, we feel disappointed by the result. What we saw with our eyes was so good, then why doesn’t the image in the camera preview live up to the real scene? To keep up with the metaphor with the writing process, this happens because at a certain point we have to decide what to sacrifice: we can’t fit the entire scene inside the picture or inside a book, not even with the widest-angle lenses in the world or with an endless supply of ink and paper. Some digression will add something to the story, some others will spoil it and produce a confusing soup.
Third layer: postprocessing
Here comes what can be regarded as the style we use to tell a story, even if talking about postprocessing is kinda like walking on ice: everyone has got their opinion about it, and it usually lies at either end of the spectrum. For some people, post-processing is a way of enhancing the picture so that it reveals its true potential, for others it’s cheating.
I tend to prefer the former point of view: the digital negative is the raw story as if it was told by a machine, while post-processing expresses how I put it on paper, the words I choose to make it mine and to present it to the world.
The amount and nature of post-processing deemed reasonable for an image is something quite personal, and it could potentially turn a photograph upside down. A general rule of thumb is that if the composition is good there will be no need for sensitive tuning such as boosting saturation and contrast or enhancing the colors in an unnatural way to have the picture convey something. A vocabulary too polished can spoil a story, in the same way as an excess of post-processing can spoil a picture. It will make it stand out for sure, but it’s not necessarily a good thing.
Spot-on and discrete post-processing, on the other hand, can help the spatial arrangement of elements in the scene in enhancing what the photographer wants the viewer to see while setting up a certain mood, like an accurate choice of words will add a personal touch to the story.
Fourth layer: the viewer’s eye
As much as the photographer can try to convey a message with the choices made when planning, shooting, or editing, everyone who’ll ever see the picture will have their own personal impression about it.
Photo by Paul Skorupskas on Unsplash
If the photographer has made a good job with the composition, the viewer will see exactly what they wanted them to see, but everyone will have their own takeaway about what they observe because there are no clear words to express it. This, in my opinion, is the biggest difference with respect to the writing process, and it is fascinating to think about how our brain can come up with a story (or many stories) just looking at an image.
Here is an example:
This picture is far from being technically perfect, but it is an attempt at telling a story. Even a short piece of flash fiction would have given us a setting and a precise story development, while in the case of this picture this is up to us. We can make up our own hypotheses just looking at it.
What is the figure on the left saying to the one on the right? Are they fighting, or just deciding where to go from there? Where will that road take them?
(Spoiler: in a place full of chestnuts). | https://medium.com/photo-dojo/its-not-just-a-snap-7a86090e8d04 | ['Giulia Picciau'] | 2020-09-12 05:50:04.350000+00:00 | ['Stories', 'Storytelling', 'Photography'] |
Sprouting Nightmares Sometimes Grow into Magnificent Treasures | She woke up in a puddle of warm liquid. Trisha realized she’d lost control of her bladder. But it was too early. How could her water break twenty-four weeks before she was due? The nurses and doctor at the emergency room rushed her into a hospital bed and didn’t say much.
After an ultrasound, a pelvic exam, and a blood test, the doctor advised Trisha,
“Ma’am, we have to share unfortunate news with you. We’re going to have to induce a miscarriage unless you wish to go home and wait for it to happen. Your amniotic fluid is extremely low, and your baby's health is in danger.”
Trisha glared at the doctor and said, “I’m not going anywhere. That’s why I pay all that money for insurance. I’m having my baby.”
The cold empty hospital room felt like an isolated desert with death all around her, but Trisha remembered her roots. Strength and resolve flooded her heart and veins. The Mwezi family stayed firm through any storm, and the love in her heart for her unborn child remained unshakable.
She knew the answer but asked anyway, “Doctor, will my baby live? How can I be in labor this early?”
“We’ve never seen a baby, at this stage, survive a preterm premature rupture.”
The pressure of the following silence exploded in Trisha’s ears. She would not give up on her baby or herself. The doctor recommended termination, but Trisha opted for a high-risk treatment.
GenoMedz Pharmaceuticals was running a research trial on a steroid replacement: a drug meant to promote organ growth, bone growth, and overall health without the harmful effects of steroids. They called the drug a SARM (selective androgen receptor modulator). SARMs are more selective and easily absorbed than steroids. They do not increase testosterone but can be targeted for more specific uses.
Her doctor said, “Trisha, I think this could work. Tests have shown this SARM, Lava, is very safe. Many other SARMS have been developed and approved for a wide variety of medical uses.”
“What does Lava stand for?”
“It’s a long scientific name even I have trouble with. Sign here and we can begin the treatment.” She extended her clipboard to Trisha. The doctor wasted no time.
Trisha went home the next morning. Dr. Andrea Jefferson instructed her to drink a gallon of water a day to get her amniotic fluid up. The SARM, Lava, would help speed up the baby’s development, and if she could make it to 23 weeks, she could return to the hospital for bed rest until delivering her baby.
Week 30 of her pregnancy arrived much more quickly than she anticipated. Resting in that hospital bed for the last 7 weeks was the hardest thing she’d ever done. Fortunately for her, AmSend (We Send Your Money Fast Service Inc.) gave her medical leave; executives have their perks. Trisha knew it was time to deliver Robyn.
She felt the baby’s need to get out. Her lower back felt like a giant was crushing her spine between its huge hands, and every joint in her body was on fire. Her water broke again, but this time it was only a trickle. Then came the contractions. Whoever dreamed up the idea that deep breathing was an effective way to deal with contractions was a sadist, Trisha thought.
Taming data inconsistencies | Vamsi Ponnekanti | Pinterest engineer, Infrastructure
On every Pinner’s profile there’s a count for the number of Pins they’ve both saved and liked. Similarly, each board shows the number of Pins saved to it. At times, Pinners would report that counts were incorrect, and so we built an internal tool to correct the issue by recomputing the count. The Pinner Operations team used the tool to fix inconsistencies when they were reported, but, over time, such reports were growing. It wasn’t only a burden on that team, but it could have caused Pinners to perceive that Pinterest wasn’t reliable. We needed to determine the root of the problem and substantially reduce such reports.
Here I’ll detail why some of those counts were wrong, the possible solutions, the approach we took and the results we’ve seen.
Addressing the problem
After digging into the issue, we found a number of reasons counts appeared wrong, including:
Pinner deactivations: If user A liked a Pin saved by user B, and user B deactivated their account, the Pin was not shown to user A, but the Pin was still included in the count. (Note: there’s a very small number of Pinners who deactivate and reactivate their accounts multiple times.)
Spam/Porn filtering: If a Pin was suspected to have spam or porn content, it wasn’t shown, but was still reflected in the count. The scheme to identify spam/porn content is continually evolving, and domains are added or removed from the suspect list almost every day.
Non-transactional updates: Some counts were updated asynchronously in a separate transaction to optimize the latency of operations such as Pin creation. In some rare failure scenarios, it was possible the count wasn’t updated.
Possible solutions
Option 1: Fix the root causes
We first looked at ideas for fixing the root causes. Whenever a Pinner deactivated/reactivated their account, we could update the counts in the profiles of all Pinners who may have liked their Pins.
Likewise, whenever there’s a change in porn/spam filtering schemes, we could update the counts in the profiles of the owners of those impacted Pins, and in the profiles and boards of the Pinners who’ve liked and saved such Pins.
To address the problem caused by non-transactional updates, we would need to update the count in the same transaction, which may increase the latency of operations such as Pin creation.
This solution wouldn’t only be expensive, but it also wouldn’t fix the inconsistencies that already exist.
Option 2: Offline analysis and repair
Another approach commonly used in similar situations is offline analysis and repair, where we could dump data from our online databases to Hadoop daily, making it queryable from Hive. We could have an offline job (or jobs) that finds the inconsistencies, and another background job that fixes those inconsistencies in the online databases. Since online data could’ve changed after the offline analysis was performed, it would need to be revalidated before updating the count in the online database.
We found this to be a good solution and used a similar method to fix inconsistencies in our HBase store. However, based on that past experience, we knew the effort to build it was no small task.
Option 3: Online detection and quick repair
We also thought about online detection and quick repair. If a Pinner detects an inconsistency in the data on their profile, our system should also be able to detect it. For example, if a Pinner scrolls through all of the Pins they’ve liked, the system could check whether the count of liked Pins shown matches the displayed count in their profile. This method of detection is simpler than the previous solutions and adds only small overhead.
Once an incorrect count is detected, we could queue a job to fix the count so we could display the correct count the next time. We already had a framework, PinLater, that could queue arbitrary jobs for later execution. Typically, such jobs run within seconds of queuing.
The job to fix the count would have information about which counter to fix (such as user ID or board ID), as well as information about the stored count (i.e. the displayed count) and the actual count. This job could use the same logic as our internal tool to recompute and fix the counts.
The count-fixing job would check whether both the stored count and the actual count are still the same as they were at the time of queuing. If so, it would update the stored count to the actual count. If either count is different from what it was at the time of queuing, the count isn’t updated, as this indicates some change of state, such as new Pins that may have come in.
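To make this concrete, here is a minimal Python sketch of the detection and check-and-set repair. PinLater’s actual API and all helper names here (pinlater_enqueue, read_stored_count, recompute_count, write_stored_count) are assumptions for illustration, not Pinterest’s production code.

def on_full_scroll(counter_id, displayed_count, actual_count):
    # Online detection: the Pinner scrolled through the whole list, so the
    # count we served should equal the number of items we actually returned.
    if displayed_count != actual_count:
        pinlater_enqueue('fix_count', {
            'counter_id': counter_id,
            'stored_count': displayed_count,  # snapshot at queue time
            'actual_count': actual_count,     # snapshot at queue time
        })

def fix_count_job(counter_id, stored_count, actual_count):
    # Quick repair with a check-and-set guard: re-read both counts and only
    # update if neither changed since queuing; otherwise some state change
    # (e.g. new Pins coming in) invalidated the snapshot, so do nothing.
    if (read_stored_count(counter_id) == stored_count and
            recompute_count(counter_id) == actual_count):
        write_stored_count(counter_id, actual_count)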
However, this solution doesn’t detect all inconsistencies. For example, if a Pinner doesn’t click on ‘likes’ or ‘Pins’ in their profile, or if they don’t scroll through their entire list of Pins, it can’t detect inconsistencies. This solution also isn’t appropriate for inconsistencies that are expensive to detect during online reads.
The chosen solution
In the short term, the third solution (online detection and quick repair) was preferred since it fixes existing inconsistencies in the count in near real-time, and is simple to implement. In the longer term, we may still build the second solution.
Results
After deploying the solution and gradually ramping it up from repairing counts for one percent of users to 100 percent, nearly all Pinner reports about count inconsistencies vanished. The table below shows the number of reports over a period of about 12 weeks, including a few weeks before the launch, the ramp-up period and the weeks following launch. | https://medium.com/pinterest-engineering/taming-data-inconsistencies-96ae43ced0ce | ['Pinterest Engineering'] | 2017-02-21 18:58:25.403000+00:00 | ['Pinterest', 'Infrastructure', 'Engineering', 'Data'] |
Hints and Tips for Coursera Google Cloud Platform Fundamentals: Core Infrastructure | I recommend Architecting with Google Cloud Platform (GCP) or Developing applications with GCP Courseraspecializations. Going through these specializations will significantly help you obtain the Google Cloud Architect certification or Google Cloud Developer certification. More importantly, the hands-on labs enable you to build GCPapplications.
The course, Coursera Google Cloud Platform Fundamentals: Core Infrastructure, is the first of five courses in the Cloud Engineering with Google Cloud specialization and the first of six courses in the Cloud Architecture with Google Cloud Professional Certificate specialization.
The Google Cloud Associate Cloud Engineer Certificate and the Cloud Architecture with Google Cloud Professional Certificate share the first three classes.
The Coursera Google Cloud Platform Fundamentals: Core Infrastructure class consists of five essential services:
1. Virtual Machines: Compute Engine (IaaS);
2. User Profiles and Network: IAM and VPC;
3. Storage: Cloud Storage, Bigtable, SQL, Spanner;
4. Containers and Load Balancing: Kubernetes Engine;
5. Supply only the code, all services provided: App Engine (PaaS).
The blog assumes an advanced level of prior knowledge. Below I list terms that are outside the scope of the blog.
The summary includes a broader list of Linux/Unix commands, applications, and GCP command-line applications.
Links are given for some general terms you need:
Anthos;
AWS EC2;
Cloud Computing;
Cluster;
data encryption and encryption key management [2];
Docker [4, 5, 6, 7];
key, key code, key management [8];
security key part or key pair [9].
Creating a free account
You get $300 in credit for 90 days per account.
Using your free account
You will not use your new free account for the labs.
Hint: Anti-spoiler alert. If you think some of the hints are spoilers or cheats, they are not. Please read all the hints.
Tip: Not really a tip but a feature of GCP: Google wants you to learn how to use GCP.
Tip: Every lab is in Qwiklabs. At the beginning of a Qwiklab, a new account is created without requiring a credit card. You are given full access to and use of the GCP Console, gcloud, and gsutil.
Tip: A Qwiklab is 30 minutes long. Lab sessions can be restarted, whether you fail or succeed. However, services and resources created are deleted at the end of the timed session. The time length is usually long enough for short experiments.
Tip: You can use Google Colab with a GPU or a TPU for free. You do not have to pay Coursera $50 per month to use the “free” Qwiklabs.
Google Cloud Platform networking and operational tools and services
Compute Engine Service — The base compute component.
The GCP Compute Engine service is for launching anything from a single virtual machine to a fleet of clusters of virtual machines.
Tip: You can use other browsers, but all lab videos use Chrome. Tip: Know how to use an incognito window in the Chrome browser. Tip: Using ssh from your local terminal to reach an AWS EC2 instance requires you to:
1. Log in to the AWS console.
2. Open up the specific EC2 key pair, note the key pair, and note the external IP address (considered as one step if you are neuroatypical).
3. Launch a local terminal and log in using ssh, the key pair, and the IP address.
Opening up an ssh session in GCP reduces to one step.
Log in to the GCP console, open up the specific GCP Compute Engine instance dashboard, and click on the ssh icon (considered as one step if you are neurotypical).
Tip: Select the tool that starts the lab. The icon you click is at the bottom of the page, after Coursera’s Honor Code box.
Figure 1.
Google Cloud Platform Fundamentals: Core Infrastructure>Week 1>Lab 3: Getting Started with Cloud Storage and Cloud SQL.
Tip: Memorize the table of GCP Cloud storage types and how they differ from each other.
Tip: You need to know the basics of SQL to perform Lab 3 and understand what you are doing.
Tip: Lab 3 is longer and more complex. The timed Lab 3 gives you up to fifty minutes; you will need all of it to complete all six steps.
Tip: The last two of the six steps are optional.
Tip: If you run out of time but show 15/15 in the orange rectangle in the top right-hand corner, you will receive 100% on the quiz.
Summary
Tip: All labs are clicks and cut-and-paste.
Hint: You spend a considerable amount of time in a Linux/Unix shell. You need to know the Linux/Unix commands: >>, bash, cd, chmod, cp, curl, dd, export, grep, kill, ls, nano, sed, sudo, touch.
Hint: You should know the following network and internet terms: DNS, http, https, IP, TCP, UDP, URL, virtual private network.
Hint: You should know what the GCP shell commands are and when to use them: gcloud, gsutil, kubectl.
Hint: You should be familiar with the following Linux/Unix applications: apt-get, git clone, gzip, Nginx, pip install, python, virtualenv, yaml.
Yeah!!
I hope this blog is of help in going through Coursera Google Cloud Platform Fundamentals: Core Infrastructure. | https://medium.com/analytics-vidhya/hints-and-tips-for-coursera-google-cloud-platform-fundamentals-core-infrastructure-42b781bbc311 | ['Bruce H. Cottman'] | 2020-11-08 17:14:45.180000+00:00 | ['Cloud Computing', 'Learning', 'Certification', 'Course', 'Google Cloud Platform'] |
We Need to Destigmatise Taking Antidepressants | I met up with my mom the other day. I haven’t seen her in years, and it was nice to spend some time with her. I was a little worried about whether I’d be able to make the trip, but luckily I was having a good day.
While eating, she asked me about my mental health. For some background, I’ve had issues my whole life, but it was only this year that I opened up about my struggles. Mental illness is stigmatised, but even more so in the Black community. Many of us are children of immigrants who rose from nothing, so for us to be depressed “over nothing” is seen as an insult.
To my surprise, she was very understanding and supportive. She understood my hesitation to tell her and promised that when it came to university, she would ensure everything was taken care of. She would also arrange a therapist.
She is great; however, one thing she has been a little iffy over is me taking antidepressants. She mentioned on the phone how she wished I didn’t need to take them. Then over our meal she suggested I stay off them and see how I do. | https://medium.com/an-injustice/we-need-to-destigmatise-taking-antidepressants-7a6598177dd9 | [] | 2019-12-08 12:46:33.092000+00:00 | ['Borderline Personality', 'Life Lessons', 'Mental Health', 'Culture', 'Zuva'] |
The Sacred Heart of Mormonism. “I am not a God afar off, I am a… | “I am not a God afar off, I am a brother and friend;
Within your bosoms I reside, and you reside in me:
Lo! we are One; forgiving all Evil; Not seeking recompense!” — William Blake, Jerusalem: The Emanation of the Giant Albion (c. 1803–1820), ch. 1, plate 4, lines 18–28
The Latter-day Saint temple experience being what it is, I’ve had to learn how to write about “something” without actually writing about “something.”
There are very particular portions of the temple endowment ordinance which participants promise never to disclose, except in a highly particular ritual context (nothing fun, spooky, or sexy, I assure you), but the aura surrounding those things is tremendously taboo for many Mormons. Consider it something like the Pharisaic philosophy of hedging in the Law with additional requirements, not as a means of disrespecting the Law or becoming overly anal about it, but as ensuring that one never violates a tenet of the Law by putting further distance between oneself and the very possibility. I believe a similar impulse, often with the same ad hoc theological rationalization, can be found among many Latter-day Saints. The sensation of “sacred anxiety,” of experiencing a buzzing angst the closer one comes to the “Sacred” (whatever it may be; and there are, of course, varying answers), is prevalent in religious communities with defined organizations, hierarchies, or systematic theologies.
In short: there’s something special at the center, cascading outward in various contingent forms (rituals, texts, organizations, etc.), which derives much of its value from being held beyond the tedium and typical experiences of the everyday community, while yet exerting an oddly demanding influence upon that community. It’s the tension of transcendence and immanence, we could say. But much of what’s special about the Mormon temple seems to lie just beyond that threshold where many (though certainly not all; perhaps not even most) Latter-day Saints are unwilling to go. I can respect that, and not only because I’ve experienced the endowment numerous times myself. I too hold to my promise to not disclose particular symbols used in the ordinance (they’re not hard to find, but I’m uninterested in giving them to you). I think many Latter-day Saints reinforce this sacred boundary with anxiety because it’s easier to believe something awful will happen if you transgress that boundary than to waste away your minutes, hours, and days contemplating in the celestial room at the center of a temple “what it all means” (a practice that is unfortunately discouraged, though for understandable reasons; some temples are quite busy). So many Latter-day Saints may experience not only indignation (understandably) when they happen upon some hidden-cam exposé of their most sacred rituals and symbols on YouTube or the like, but anxiety as well — a kind of palpable (albeit often undefined) fear that something bad could happen. I’ve felt it before, the sense that you’re dealing with something “heavier” than what you typically encounter — a sign that you’re inching toward the Sacred at the heart of a community.
I appreciate Hugh Nibley’s view on this matter of inadvertently brushing the sacred boundary:
“Even though everyone may discover what goes on in the temple, and many have already revealed it, the important thing is that I do not reveal these things; they must remain sacred to me. I must preserve a zone of sanctity which cannot be violated whether or not anyone else in the room has the remotest idea what the situation really is. … No matter what happens, it will, then, always remain secret; only I know exactly the weight and force of the covenants I have made — I and the Lord with whom I have made them — unless I choose to reveal them. If I do not, then they are secret and sacred no matter what others may say or do. Anyone who would reveal these things has not understood them, and therefore that person has not given them away. You cannot reveal what you do not know!” — Hugh Nibley, Temple and Cosmos: Beyond This Ignorant Present (Deseret Book, 1992), 64
While I think sacred anxiety is natural to many religious people with distinct sacred boundaries, it needn’t be such a visceral trauma every time one accidentally brushes the border. Perhaps, to build on Nibley’s idea, sacred anxiety may be somewhat misdirected anyway. Rather than fearing that something terrible may happen were one to disclose these things, perhaps we may simply say that to do so is to deaden them, in a way similar to how removing a light bulb from its socket turns it into nothing more than a cumbersome, useless object. Put another way: perhaps these symbols, to borrow a Buddhist aphorism, are fingers pointing to the moon; to this one may add the characteristically Buddhist warning to not conflate the former with the latter.
Some are rather uncomfortable with the idea of Mormons having “secrets,” assuming that “secret” must mean “sinister,” but I think that’s overly presumptuous already. One popular response many Latter-day Saints have been tempted to give is that the temple is not “secret” but “sacred,” but this gives into the same false dichotomy that something is either “secret/evil” or “open/sacred/good.” That’s a longer conversation, but perhaps we can re-contextualize sacred silence. Rather than silence before an otherwise touchy God who “will not be mocked,” perhaps to simply show the symbols off is to demean them into what we typically demean everything else ostensibly religious into: a quick, one-and-done kind of tech manual you can read quickly, understand, then throw away. Instead, rather than merely didactic manuals (or tedious articles), these symbols are instead meant to trigger experiences of the sacred at once aloof from yet demanding upon a community. To share them openly is to pretend that that is all they are, self-explanatory and at face value; to hold sacred silence is to refuse to attempt the futile task of putting the sacred into the tedious and often problematic language of the everyday — to pretend that we’re dealing with just another piece of trivia we can hear, sigh at, then forget altogether.
It’s an odd bind to be in, because, as I said above, it puts me in the position of being essentially unable to write to other Latter-day Saints (let alone to non-Mormons) about what I believe is essentially the center and foundation of Mormonism (at least within the LDS Church). On the other hand, far from an after-the-fact rationalization haphazardly cobbled together to explain why I can’t talk about this or that, experience has taught me that the Sacred is notoriously difficult to put into words (at least for me!), an experience I read back onto the sacred silence surrounding the Mormon temple experience (rather than the other way around). I’ve dropped tid-bits here and there in my writing, but much of my writing has been only around the topic, if not eventually deferring to conclusions not as answers but as terminus points where I can’t speak much further anyway.
It seems to me that “religious” communities (an odd and frankly problematic category, but we’ll use it for now) typically construct or at least hold onto and hand down symbols — rituals, texts, temples, gods, heroes, stories, etc. — because they encapsulate something of the Sacred of that community, around which they orient themselves on a fundamental level while yet keeping that sacred something far enough away as to not let it be battered by the uncertainty of otherwise transient everyday life. It’s a feeling I get when I hear about human tribes near the end of the Ice Age worshiping gods that look a lot like the animals they hunted or interacted with the most, of Egyptians contemplating the dogs of the deadly desert in the dog-headed god of death Anubis, of the Dogon calling upon Amma who holds all things together, Buddhists carving statues and building temples to the one person or few people to ever achieve the enlightenment they’re after, and so on and so on. These aren’t just passing ideas or places to visit on occasion to “pay respects,” but principles which serve as the “strange attractors” which render the typical chaos of their lives into some intelligible and thus manageable pattern. We build temples to the things we value the very most in our lives, regardless of what others may think, or whether they may strike every member of the community as equally important.
In this vein, I’d wonder what sits at the center of the Latter-day Saint temple and thus at the center of Mormonism (at least in one interpretation of Mormonism within the LDS Church). I don’t think it must be something unique to the LDS Church, unknown to all others, in order to be meaningful; but it’s a question that comes to my mind often. I’m tempted at times to say that at the center of the temple, we could say (at the risk of slipping into too much short-hand), is “Elohim.”
By and large in Latter-day Saint discourse, “Elohim” is defined as a heterosexual, cisgender, monogamous couple (the LDS Church currently acts out that interpretation in sealings, for instance); but while that could work for me in my particular life (as a straight, cisgender, monogamous male), I also know it’s exceptionally exclusionary to so many other good, beautiful, and true lives that take shapes other than my own. To do so is to cast off an entire portion of the body of Elohim. However, I think that, to borrow a cue from Dialogue’s editor Taylor Petrey, Mormonism has a “post-heterosexual” potential. I think Elohim even presumes it, in a way. In any event, Mormonism’s understanding of its central symbols has indeed evolved dramatically since 1830; perhaps this is another direction in which Mormon culture’s interpretations could develop.
While Joseph Smith seems to have largely thought of temple sealings as more or less between men and women (albeit in a polygamous context), he seems to have derived sealings from his larger view of Elohim (or “Eloheim”), a term which he always used in the plural for a council or family of beings. Working from there, perhaps we may say that, rather than an exclusively heteronormative couple, Elohim is instead all human relations as such, containing all and thus not limited to any one of its components. Elohim is that Plural One who sends, generates, or emanates Michael, who is Adam and Eve, and thus all humanity. In other words, Elohim is the most fundamental singularity rendered plural in that it brings each and every one of us into existence from out of itself. Sealings could invite us, then, in a ritual context, to see the Other as a splinter of the infinite Elohim — as a piece of God — and to become woven back together in such a way as to not dilute the authentic uniqueness of either the other or oneself.
As I said, I don’t think this is self-explanatory. Even so, I don’t think we need to limit Elohim purely to human relations. Our lives are composed of so much more — as is Elohim, it would seem. The endowment doesn’t just recollect the interactions of various beings with one another, but of those beings’ interactions with the worlds around them in creation, degradation, temptation, negotiation, redemption, and on and on. “Elohim” names our connections to one another as well as to all else — the earth and all of nature, to the universe. In that view, everything is a fragment of “Elohim,” which in this sense would be, therefore, not just another name for existence or the universe, but for all things simultaneously in their unimaginable and inexhaustible diversity and evolutionary complexity as well as their fundamental unity — always already at one, even if prone to forget that.
Rather than an exclusive focus on individuation — the emergence and articulation of an individual person — or collectivity — the overall life of the community — Elohim may instead inch closer to the concept of transindividuation. As one commentator defines the term:
“…it seems to me that thinking in terms of transindividuation is different than simply another description of the process of individuation; in the sense that transindividuation is always about what exceeds this process, both in the sense of the formation of a collective individuation, or the individuation of collectives, but also the preindividual relations that exceed any individuation. These relations are the basis of transformations both individual and political. This seems to me to be the ultimate test, or gamble, of thinking transindividuation — that it makes it possible to rethink and rearticulate the relationship between individual crises and collective transformation, the constitution of new types of organization, in an age of utter fragmentation and isolation.” — “Talkin’ Transindividuation and Collectivity: A Dialogue Between Jason Read and Jeremy Gilbert,” Capacious: Journal for Emerging Affect Inquiry 1 (4), 2019. (PDF)
Conceiving Elohim as transindividuation (or perhaps we may say intersubjectivity) — as opposed to only either individuality or collectivity, or even the tension between them — invites participants in the LDS temple to contemplate the ways in which they are always already nested in the same ground as all others. Rather than merely defining oneself through negation, in opposition to others, or by uncritically absorbing one’s surroundings, we may instead see ourselves much like Michael and Jehovah: not Gods creating the world from nothing, but persons working with pre-existing materials they themselves did not create and of which they themselves are composed.
In a somewhat Hegelian twist, whatever one may be in particular is only ever because of “all the rest” that one is not them — the true infinity against which one defines their finite self, “you” foregrounded against “everything else.” Elohim is a symbol which calls one to reconcile with that dialectic tension, rather than to answer it with antagonism, recognizing that were it not for that tension one would not exist at all.
“…infinite, in the sense in which it is taken by this reflection (namely, as opposed to the finite), has in it its Other just because it is opposed to it; that, therefore, it is limited and itself finite. Therefore, if it is asked how the infinite becomes finite, the answer is that there is no infinite which first is infinite and then must become finite or pass on to finitude, but that for itself it is already finite as much as infinite. … Or, rather, this should be said, that the infinite has ever passed out to finitude; that absolutely, it does not exist, by itself and without having its Other in itself …” — G. W. F. Hegel, Science of Logic, tr. by W. H. Johnston and L. G. Struthers (The Macmillan Co., New York, 1929), I, 166 f
Of course, this isn’t an interpretation likely to take hold in Mormon culture (and perhaps it doesn’t need to), but it’s been a fundamental part of my own spirituality for some time now, albeit in varying terminologies and conceptual frameworks. In highly particular terms: Elohim is the totality of existence, that Whole of which everything is a part. If you will, Jesus is the one who lived faithful to that reality in his life of unconditional love. And, by extension, the endowment, as a “putting on” of the identity of Christ, is a ritualized invitation to seek that same experience as Jesus — to realize down to the marrow of your bones that all things and people, just as they are, belong and go together. | https://medium.com/interfaith-now/the-sacred-heart-of-mormonism-notes-on-endowment-elohim-and-entanglement-f882a920c7b9 | ['Nathan Smith'] | 2020-01-04 19:10:03.086000+00:00 | ['Spirituality', 'Philosophy', 'Mormon', 'Religion', 'God'] |
Breasts and Eggs | The right to question what is not right for us
I was sorting the notes I had taken throughout the reading. A pattern is obvious: the right to be who we are. As much as the title hinted at the main issues to be explored, it was the stance of Natsuko compared to her sister’s (Makiko’s) and her niece’s (Midoriko’s) that challenged my own assumptions about being a woman. Why do we accept what was cast onto us without questioning its legitimacy? On top of that, the unchallenged rules and norms were conceived by a different gender with the power to make them last for centuries.
Reproductive rights from the lens of a child
I was drawn to the thoughts of Midoriko. As a child, she was quick to judge her mother (Makiko), but each judgment always came with a reason. As International Safe Abortion Day (28 September) is around the corner, it is timely to look at her angst in the context of reproductive rights. Do children have no right to dictate whether they should be born? While her personal experience in school drove her into more internal conflicts, we ought to reflect on how many burdens a child or youth needs to put up with due to some unconscious decision made by the parents. While many would say that having a baby is part of the life process and a big part of a family, have parents thought about why this is considered normal? If a father or mother says they want to have babies, is the desire justified on its own? | https://medium.com/fourth-wave/breasts-and-eggs-a35ee30b8e75 | ['Yong Yee Chong'] | 2020-10-06 09:28:58.904000+00:00 | ['Gender Equality', 'Women', 'Feminism', 'Japan', 'Books'] |
Haptics for enhanced UX: designing for all | Both the masters play their game in their unique style. Here are a few takeaways:
1. Haptic is always complementary
Google and Apple both believe that haptic feedback should be complementary. In most cases, haptics should not be the primary means of communication, but they can be paired with audio and visual elements.
2. Use haptics to enhance the usability
Haptics should be used to enhance feedback and convey useful information to your users. They should not be leveraged to decorate interactions or provide feedback that is not necessary for the experience. For example, using haptic feedback to indicate a layered or complex gesture, not for normal button press interactions.
3. Minimize battery consumption
We’ve heard that enabling vibrations can drain the battery, but here’s something you need to know: adjusting the vibration’s intensity and sharpness can help reduce the energy consumed in producing vibrations. The less the motor vibrates, the less energy is consumed.
4. Custom Vibration Patterns
Apple allows you to define your own vibration patterns for incoming alerts, alarms, etc. For me, it’s a kind of playground where I can define new vibration patterns for my projects and see how they feel in my hands. This may come in handy for you if you’re an iPhone user.
5. 3D Touch Gestures
Again, this is an Apple-only feature. 3D Touch being more sensitive than Force Touch, it has been developed to work using capacitive sensors integrated into the display. Using 3D touch people can access additional functionality by applying varying levels of pressure to the touchscreen.
Does vibration alone enhance the user experience?
Maybe. In most cases, haptics should not be the primary means of communication, but they can be paired with audio and visual elements.
In the few cases where haptics are the only mode of communication, we should be more deliberate about how and where we use them: for example, when a device has its sound turned off or cannot produce sounds, or for accessibility features like TalkBack.
It is a good idea to avoid…
1. Trying to fit where it won’t
It is true that haptics will enhance the user experience, but when they make the user uncomfortable or irritated, check whether you can fix it by adding a visual cue or sound to complement the feedback. If that doesn’t work as expected, it is better to drop the haptic there.
2. Avoid conflict with system patterns
Operating systems like iOS and Android have already defined a set of vibration patterns in their OS interface. As designers/developers, we are always allowed to create haptic patterns of our own, but we must ensure our patterns won’t conflict with the system patterns, as they may serve different intentions.
3. Skipping the Usability Test
It is always a must to test your haptic designs before letting them out to real users. Testing helps us identify the places where the triggered vibrations disrupt other experiences of the product. The earlier you test, the more efficient the product will be. | https://uxdesign.cc/haptics-for-enhanced-ux-9f7e2f3c0e | ['Avinash Bussa'] | 2020-10-02 14:51:46.743000+00:00 | ['Design', 'Interaction Design', 'Vibration', 'User Experience', 'Haptics'] |
Messenger Bot using Flask and API.AI | Setting up API.AI
API.AI is a conversational experiences platform. Register and create an account at API.AI, then create an agent to access the test console. Navigate to the agent’s settings and get the Client Access Token.
Creating an intent
An intent is a mapping between a user expression and the desired response. We can train which user input fits which intent. We can also use built-in parameters within the user input and set context. This helps us categorise the user inputs.
Setting up the Flask Server
Add the client access token, page access token, and verify token. A Facebook page has to be created for the bot; its page access token is needed. The verify token can be any string of the developer’s choice.
CLIENT_ACCESS_TOKEN = 'your_client_access_token_from_api_ai'
PAGE_ACCESS_TOKEN = 'your_facebook_page_access_token'
VERIFY_TOKEN = 'verification_token_for_facebook_chatbot'
Webhooks
Facebook’s webhook sends a verification request to the application; if the verification is complete, i.e. the token returned by the application matches the token entered in the Facebook app, requests are delivered to the application server.
All messages sent through Facebook Messenger are received on the application server and handled there.
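A minimal sketch of both webhook routes in Flask: the GET route for Facebook’s one-time verification and the POST route for incoming messages. The route path and the handle_text helper are illustrative assumptions, not the article’s exact code; handle_text is sketched in the next sections.

from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['GET'])
def verify():
    # Facebook sends hub.verify_token and hub.challenge; echo the challenge
    # back only if the token matches the VERIFY_TOKEN defined above.
    if request.args.get('hub.verify_token') == VERIFY_TOKEN:
        return request.args.get('hub.challenge', '')
    return 'Invalid verification token', 403

@app.route('/webhook', methods=['POST'])
def receive_message():
    # Every Messenger event for the subscribed page lands here.
    payload = request.get_json()
    for entry in payload.get('entry', []):
        for event in entry.get('messaging', []):
            if 'message' in event and 'text' in event['message']:
                # handle_text is a placeholder wired up in the sections below
                handle_text(event['sender']['id'], event['message']['text'])
    return 'ok', 200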
Parse message
The application server receives the message, but the intent is yet to be recognised. So we use the Python library apiai to get the intent. To use apiai, the client access token is needed.
The message sent by the user is forwarded to API.AI, which parses it; the application then receives the bot response from API.AI, containing the intent along with other information.
If everything is alright, response[‘status’][‘code’] should be 200. response[‘result’][‘metadata’][‘intentName’] contains the intent.
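A hedged sketch of the parsing step using the apiai library; the function name and session-id handling are illustrative, while the response fields match those described above.

import apiai
import json

def get_intent(message_text, session_id):
    ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
    req = ai.text_request()
    req.session_id = session_id   # groups requests into one conversation
    req.query = message_text
    response = json.loads(req.getresponse().read().decode('utf-8'))
    if response['status']['code'] == 200:
        return response['result']['metadata']['intentName']
    return None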
After knowing the intent, we can do actions on it according to our requirements.
We can not only parse texts but can also parse events. We can also set context and assign session id to the request.
Sending back message to Facebook user
To send a message back to the Facebook user, call the Facebook Graph API using the Python requests library.
sender_id is present in the request body.
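A minimal sketch of the send step; the Graph API version shown here is an assumption based on what was current at the time, so adjust as needed.

import requests

def send_message(sender_id, text):
    # Call the Messenger Send API with the page access token.
    r = requests.post(
        'https://graph.facebook.com/v2.6/me/messages',
        params={'access_token': PAGE_ACCESS_TOKEN},
        json={
            'recipient': {'id': sender_id},
            'message': {'text': text},
        },
    )
    r.raise_for_status()   # surface errors such as an invalid token

With get_intent and send_message in place, the handle_text helper from the webhook sketch can simply map the returned intent to a reply string and pass it to send_message.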
Make a Flask app which contains the above routes (webhooks) and methods.
Create and activate a virtual environment, install all dependencies, and run the Flask app.
$ virtualenv .env
$ source .env/bin/activate
$ pip install -r requirements.txt
$ python flask_app.py
Set up tunnelling to localhost
We can use ngrok to expose a local web server to the internet so that it can be used for the callback verification required to use a webhook with a Facebook app.
$ wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (for Linux)
$ wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-darwin-amd64.zip (for Mac OS)
$ unzip /path/to/ngrok.zip
$ ./ngrok http <port>
The ngrok port should be the same as the port on which the Flask app is listening.
Please note that a secure callback URL (https) is needed for verification.
Set up facebook messenger
Create a Facebook page and a Facebook app. Add the Webhooks product to the app.
Add Messenger to the app and generate a token for the page which is to be used for chat.
This page access token is used in the flask application to send messages back to the facebook user.
Enable webhook integration with callback URL and verify token
The callback URL should be the ngrok URL, and the verify token should be the same as the one defined in the Flask app.
Select events for page subscription | https://medium.com/ymedialabs-innovation/messenger-bot-using-flask-and-api-ai-f34f6e2eb6e6 | ['Rahul Nayak'] | 2018-01-26 04:26:54.326000+00:00 | ['Flask', 'Backend', 'Python', 'Apiai', 'Bots'] |
Multinomial Logistic Regression In a Nutshell | Now that we understand multinomial logistic regression, let’s apply our knowledge. We’ll be building the MLR by following the MLR in the graph above (Figure 1).
Data
Our data will be the Fashion MNIST dataset from Kaggle. The dataset is stored as a DataFrame with 60,000 rows, where each row represents an image. The DataFrame also has 785 columns: the first column represents the label of the image (Figure 3.2), and the remaining 784 columns contain the pixel values of each training image (Figure 3.1). Each pixel value ranges from 0 to 255, representing its grayscale intensity.
Figure 3.1. Sample image of a shirt from the training set.
Figure 3.2. Labels
Task:
Split the DataFrame into DataFrame X and DataFrame Y
Convert DataFrame X to an array
One-hot encoding Y values and convert DataFrame Y to an array
We use a one-hot encoder to transform the original Y values into one-hot encoded Y values because our predicted values will be probabilities. I will explain this in the next step.
Figure 4. Given X- and Y-values and desired X- and Y-values
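A minimal sketch of these three preprocessing steps, assuming the Kaggle CSV has been loaded into a pandas DataFrame named df; the file and variable names are illustrative.

import numpy as np
import pandas as pd

df = pd.read_csv('fashion-mnist_train.csv')   # 60,000 rows x 785 columns

X = df.iloc[:, 1:].to_numpy()     # 784 pixel columns -> shape (60000, 784)
y_raw = df.iloc[:, 0].to_numpy()  # first column holds the labels

# One-hot encode: label k becomes a length-10 vector with a 1 at index k
num_classes = 10
Y = np.eye(num_classes)[y_raw]    # shape (60000, 10)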
Score & Softmax
Task:
Compute the score values
Define an activation function
Run the activation function to compute errors
Looking at Figure 1, the next step is computing the dot product between the vectors containing features and weights. Our initial weight vector will be an array of 0s because we do not have any better values. Don’t worry: the weights will be constantly updated as the loss function is minimized. The dot product is called the score. This score is the deciding factor that predicts whether our image is a T-shirt/top, a dress, or a coat.
Figure 5. Softmax function. Photo credit to Wiki Commons
Before we utilize the score to predict the label, we have two problems: the scores can be negative, and they do not sum to 1, so they are not probabilities. Remember that we one-hot encoded our Y values because our predicted values are probabilities? We need to apply the Softmax function to normalize the scores. This exponential normalization converts our scores into positive values that sum to 1, turning them into probabilities (Figure 5).
In an array of probability values for each possible class, the argmax (the index of the highest probability) gives the predicted Y value. For example, in an array of 10 probabilities, if the 5th element has the highest probability, then the image label is a coat, since the 5th element in the Y values is the coat (Figure 3.2).
Figure 6. Score and Softmax functions in Python
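Since the figure’s code doesn’t reproduce here, the following is a reconstruction of the score and Softmax steps (a numerically stable variant, which may differ in detail from the original):

def score(X, W):
    # Dot product of features and weights: one score per class
    return X @ W   # (n, 784) @ (784, 10) -> (n, 10)

def softmax(scores):
    # Subtract the row-wise max before exponentiating for numerical stability
    shifted = scores - scores.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)   # each row sums to 1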
Gradient Descent & Loss Function
Task:
Define a gradient function
Define a loss function
Optimize the loss function
After the Softmax function computes the probability values in the initial iteration, it is not guaranteed that the argmax matches the correct Y value. We need to iterate multiple times until we are confident about our argmax. To validate our argmax, we need to set up a loss function. We will use cross-entropy loss.
Figure 7. Cross-entropy loss in Python
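A reconstruction of the loss, continuing the sketch above; the small epsilon guard is my addition to avoid log(0).

def cross_entropy_loss(Y_true, probs):
    # Average negative log-likelihood of the correct class; because Y_true is
    # one-hot, the elementwise product picks out the correct class probability.
    eps = 1e-12
    return -np.sum(Y_true * np.log(probs + eps)) / Y_true.shape[0]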
The way to maximize correctness is to minimize the cross-entropy loss. To do that, we will apply gradient descent; specifically, stochastic gradient descent. Stochastic gradient descent is no different from regular gradient descent in principle. The term “stochastic” means random: the gradient is computed on a randomly selected sample of training examples. Then, instead of taking the gradient over the entire training set, we calculate it only for the sampled examples. The purpose of stochastic gradient descent is to decrease the computation per iteration and save time.
Figure 8. Gradient descent and stochastic gradient descent formulas
In order to achieve randomness, we shuffle the order of the X array (a permutation). Each time we sample an image from the X array, we compute the stochastic gradient and update the weights. The updated weights are then used to further reduce the loss function. Each full pass over the training data is known as an epoch. Typically, more epochs lead to better results since there is more training involved. However, too many epochs lead to overfitting. Choosing a good number of epochs depends on the loss values; there is an article that talks about how to choose a good number of epochs here.
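Putting the pieces together, a hedged sketch of the training loop. The learning rate and epoch count are illustrative; the article samples one image at a time, which corresponds to batch_size=1 here.

def train(X, Y, epochs=100, lr=0.0005, batch_size=64):
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))   # initial weights: all zeros
    losses = []
    for epoch in range(epochs):
        perm = np.random.permutation(n)   # shuffle X for stochasticity
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            probs = softmax(score(X[idx], W))
            grad = X[idx].T @ (probs - Y[idx]) / len(idx)  # cross-entropy gradient
            W -= lr * grad   # one stochastic gradient descent step
        losses.append(cross_entropy_loss(Y, softmax(score(X, W))))
    return W, losses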
Train & Test
Task:
Define a training set and a test set
Train our samples
Visualize our loss values
Now that we have optimized the loss function, let’s test our model on our data. Our training sample has 60,000 images; we will use 80% of them as the train set and the other 20% as the test set, based on the Pareto principle. While we fit the model, let’s keep track of the loss values to see if the model is working correctly.
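A sketch of the split and fit, continuing from the earlier code; scaling the pixels by 255 is my assumption to keep the optimization well behaved.

X = X / 255.0   # assumed preprocessing: scale pixels to [0, 1]

split = int(0.8 * len(X))   # 80/20 train/test split
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y[:split], Y[split:]

W, losses = train(X_train, Y_train)   # `losses` backs the plot in Figure 9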
Figure 9. Losses after iterations
We can clearly see that the value of the loss function is decreasing substantially at first, and that’s because the predicted probabilities are nowhere close to the target value. That means our loss values are far from the minimum. As we are getting close to the minimum, the error is getting smaller because the predicted probabilities are getting more and more accurate.
Accuracy
After fitting the model on the training set, let’s see the result for our test set predictions.
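Continuing the sketch, the test-set evaluation:

preds = np.argmax(softmax(score(X_test, W)), axis=1)
accuracy = (preds == np.argmax(Y_test, axis=1)).mean()
print(f"Test accuracy: {accuracy:.2%}")   # roughly in line with Figure 11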
Figure 10. Prediction result
It looks like our accuracy is about 85% (Figure 11), which is not so bad.
Figure 11. Accuracy Score
Challenge
Now that we have our initial predictions, see if you can improve the accuracy by adjusting the parameters in the model or adding more features. Tuning parameters like the learning rate and the number of epochs is a good place to start. I’ve attached the code and datasets for you to play around with. Enjoy! | https://medium.com/ds3ucsd/multinomial-logistic-regression-in-a-nutshell-53c94b30448f | ['Wilson Xie'] | 2020-12-11 09:30:25.988000+00:00 | ['Neural Networks', 'Python', 'Logistic Regression', 'Data Science', 'Machine Learning'] |