| title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|
Setting up your Full Stack Application: Rails API and React/Redux | Setting up your Rails API
To reiterate, the following steps are what I often do when I set up my Rails API. That being said, there may be some steps (i.e. gems) that you will not want/need for your project. Also, I often create my Rails API with Postgres as the database. Whatever database you choose to work with isn’t as important as some of the other steps, but I will emphasize what each part is doing so that you can be conscious of your own project.
1. Go to the project directory that is holding your Rails API.
2. Change into the Rails API folder with:
cd <rails_api_folder_name>
3. In your Gemfile uncomment:
gem 'bcrypt'
gem 'rack-cors'
Bcrypt is what we will be using to hash our users’ passwords. The rack-cors gem provides support for Cross-Origin Resource Sharing (CORS), which allows your API to accept HTTP requests from a different origin (i.e. domain) than its own.
4. In your Gemfile you can choose to add any or all of the following gems:
gem 'jwt'
gem 'active_model_serializers'
gem 'activerecord-reset-pk-sequence'
The jwt gem is for setting up JSON Web Tokens for user authentication. The active_model_serializers gem allows us to use serializers, which give us control over what is being sent out to the frontend. The last gem (activerecord-reset-pk-sequence) is for your seed data: it resets the primary-key sequence of an Active Record table so that newly seeded records start counting from id 1 again.
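As a quick illustration of the jwt gem, encoding and decoding a token looks roughly like this (a sketch: the secret, payload, and algorithm here are placeholder choices, not from the original article):

```ruby
require 'jwt'

secret = 'replace-with-your-app-secret' # e.g. Rails.application.secret_key_base
payload = { user_id: 42 }

# Encode a token at login, decode it on later authenticated requests.
token = JWT.encode(payload, secret, 'HS256')
decoded_payload, _header = JWT.decode(token, secret, true, algorithm: 'HS256')
decoded_payload['user_id'] # => 42
```

In a real app you would typically send the token back to the frontend after login and decode it in a before_action on protected controllers.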
5. Uncomment the following code in config/initializers/cors.rb and change “example.com” after origins to ‘*’:
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'

    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end
Changing the origins to ‘*’ allows any origin to make a request. If you’re interested in learning more about CORS, check out this in-depth explanation by Chris.
6. In your terminal run:
bundle install
spring stop
Only if you included gem 'activerecord-reset-pk-sequence' in your Gemfile do you need to run spring stop.
7. If you want to include multiple serializers and have it show more than one association deep (in other words, if you are serializing deeply nested associations), create a new file in config/initializers. Inside the new file, add the following line of code:
ActiveModelSerializers.config.default_includes = '**'
You can name the file anything, but I suggest naming it “active_model_serializers.rb”. Also, even though this allows you to use your serializers with nested associations, the associations can only go in one direction.
8. Create your models, migrations, controllers, and routes files with:
rails g resource <model_name> <attribute:datatype>
rails g resource Book title:string pages:integer author:belongs_to
Don’t forget to add your associations! You can use the belongs_to macro to set up your foreign keys and belongs_to association. However, that means you need to generate the has_many side first; simply put, create the models that are independent and don’t rely on others for information first. Think about it: you can’t create a book without an author first.
Fun fact: Normally you would also get your views folder, but if you generated the app with the --api flag, it configures the generators to skip the views folder.
Bonus: If you are using Bcrypt for your user’s password, instead of password being the attribute (password:string), you would be using “password_digest” as the column in the table. However, it is only in your schema and migration that you will be using “password_digest”. Everywhere else, including when you create a user instance, you will be using “password”. For example:
rails g resource User name:string age:integer password_digest:string
User.create!(name: "bob", age: 15, password: "pw321")
9. Go into each model’s file (app/models) to add validations and/or the rest of your association macros for the models that need it. If we continue with the author and book example, you could add to the author model:
has_many :books
validates :first_name, presence: true
If you are using Bcrypt for the user’s password, you can use a macro in your user model called has_secure_password.
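Putting steps 8 and 9 together, the model files for the running example might look like this (a sketch; the validation shown is just one possibility):

```ruby
# app/models/author.rb
class Author < ApplicationRecord
  has_many :books
  validates :first_name, presence: true
end

# app/models/book.rb
class Book < ApplicationRecord
  belongs_to :author
end

# app/models/user.rb
class User < ApplicationRecord
  has_secure_password # pairs with the password_digest column and the bcrypt gem
end
```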
10. If you are using serializers, be specific about the association macros you use in each serializer. The association macros here are separate from your model association macros, and you should think about which association is most important to send out to your frontend.
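For example, serializers for the running Book/Author example might look like this (a sketch; which direction you serialize depends on what your frontend needs):

```ruby
# app/serializers/book_serializer.rb
class BookSerializer < ActiveModel::Serializer
  attributes :id, :title, :pages
  belongs_to :author
end

# app/serializers/author_serializer.rb
class AuthorSerializer < ActiveModel::Serializer
  attributes :id, :first_name
  # Deliberately no has_many :books here: keep the serialized
  # association going in only one direction, as noted in step 7.
end
```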
11. Create your Postgres database with:
rails db:create
12. Run your migrations and create your schema with:
rails db:migrate
13. Specify the routes you will need in routes.rb. You will get all of the RESTful routes automatically because of rails g resource (based on the earlier example, it would look like resources :books), but if you know you won’t need all of them you can individually write out what you need.
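For instance, limiting the generated routes to only the ones you need might look like this in config/routes.rb (a sketch using the earlier Book example; the exact actions you keep are up to your app):

```ruby
Rails.application.routes.draw do
  resources :books, only: [:index, :show, :create]
  resources :authors, only: [:index, :show]
end
```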
14. Create some seed data in your db/seeds.rb and seed it with:
rails db:seed
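A seeds file for the running Author/Book example might look like this (a sketch; the destroy_all and reset_pk_sequence calls assume the activerecord-reset-pk-sequence gem from step 4, clearing the dependent belongs_to models first):

```ruby
# db/seeds.rb
Book.destroy_all # clear the belongs_to side first
Author.destroy_all
Book.reset_pk_sequence
Author.reset_pk_sequence

tolkien = Author.create!(first_name: "J.R.R.")
Book.create!(title: "The Hobbit", pages: 310, author: tolkien)
```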
Before creating seed data you might want to test in your console (rails c) if you can create instances of each model. To be more specific, you should check if your validations and associations are working. In addition, at the top of this file, for each class you can include class_name.destroy_all and class_name.reset_pk_sequence starting from the classes that are dependent/belongs_to. | https://medium.com/dev-genius/setting-up-your-full-stack-application-rails-api-and-react-redux-428ff7a7a7c0 | ['Waverley Leung'] | 2020-12-06 20:50:15.516000+00:00 | ['Guides And Tutorials', 'React', 'Code Newbie', 'Rails', 'Redux'] |
The Risk of Total Knowledge in the 21st Century | In 1941, a short story by an Argentine writer named Jorge Luis Borges was published in a book titled The Garden of Forking Paths. The story would mesmerize linguists, mathematicians, philosophers, and eventually computer scientists. The name of the story was The Library of Babel, and its thesis and mathematical depth made it a profound read.
I came across the story almost a year ago and have been fascinated with the concepts presented in it. The story presents an infinite library that contains every book that has ever been written and every book that will ever be written. The librarians make this claim because the books include every possible ordering of 25 basic characters and are limited to 410 pages. The only problem is that the majority of the books are complete gibberish. The librarians search endlessly for texts that contain all useful information, including predictions of the future, biographies of any person, decrypted messages from army communications, a letter written to a lover in 1621 — everything exists within the library. The issue is that the search for anything meaningful takes infinite time. The librarians searched for a book of secrets that they couldn’t find because they didn’t have enough time.
Borges wrote this story before the computer age, but now that it is here, we are reaching a point where this limitation, while still mathematically valid, will cease to be a problem. Let me explain how. From my interpretation of the story, mankind itself is who applies meaning to the books within the library. The search of the librarians is a map from their lives’ experience to text written in one of the books. There was a limit to how much human experience could be converted into knowledge. That limit was the number of humans alive and the information and experiences that each individual had access to.
I spent the last three and a half years of my life working with artificial intelligence, and upon reading this story, an epiphany of grand proportions hit me. Computationally, it is impossible to generate all permutations of books. And even if it were possible, sorting through the books after they had been generated would take an equally infinite time. What artificial intelligence, deployed on quantum computing hardware, will allow is the ability to synthesize human experience, develop experiential maps, and find useful information at record speeds.
Synthesizing human experience is something that is happening today on a grand scale. All around the world, data is mined like a gold rush. Storage is cheap, and our human experience is free for the taking because we don’t care — everything we do online is tracked, sold, and modeled. We say, “I have nothing to hide,” “I’m getting useful services from Google and Facebook,” and then shrug and say, “the government is doing it anyway.” What I don’t think we realize is that once there is enough data, enough experience, then governments and corporations do not need to compute the entire library. Instead, they will have the most robust map of intelligence that has ever been achieved.
The map of meaning, mined from our experience, will allow any who hold such power to find those hidden books of the library to see what the future will hold. You can claim that the mining of experience will have to be continually updated and explored by humans for the intelligence to find more knowledge as time progresses — but that may be a false assumption. We already have situations in which self-driving cars are trained using simulators or even video games like Grand Theft Auto 5. Knowledge and experience can be trained into artificial intelligence even from a simulation.
The more pertinent observation is that given a robust enough simulation, run on a network of quantum computers and explored by the experiential maps trained into artificial intelligence, total (useful) knowledge could happen. In fact, it could happen sooner than we think. Every data point we give, every shrug toward privacy that we make, brings us one step closer to this inevitability. The concern for me is who gets there first. Is it a corporation? Google? Amazon? China? The US? With total knowledge comes total power. What are the implications of that power? We need to start thinking about this now. We need to start fighting for our data and privacy. Because it is no longer “I have nothing to hide”; it becomes a matter of humanity’s freedom in the future.
I have faith in humanity, so much faith, but I have less faith in governments and corporations attaining total power over information. This is one of the biggest existential threats we will face this century.
If Borges were alive today and saw the data acquisition, artificial intelligence, and leaps in computational power, how would his story be different? The librarians who endlessly search the library only had their individual or communicated experiences as a map to traverse the infinite expanse of information. What if those librarians were machines and had a map of all humanity’s experiences?
I challenge you to think about this and read The Library of Babel. This short story from 1941 may be more relevant to our modern era than we think. | https://medium.com/beyond-the-river/the-risk-of-total-knowledge-in-the-21st-century-50c509459611 | ['Drunk Plato'] | 2020-06-26 13:40:34.024000+00:00 | ['Literature', 'Philosophy', 'Artificial Intelligence', 'Articles', 'Technology'] |
Send Emails Serverlessly With Node.js, Lambda, and AWS SES | Step 2: Creating a Lambda Function
Once your email is verified, it is now time to create a Lambda. If you have read this article, you already know how to create one, but let’s go through it quickly here too. Let’s head over to the “Lambda” service in the AWS dashboard. Once here, click on the orange “Create function” button. Give it a name, keep all other settings the same, and click on “Create function.”
Lambda function creation
After creating the function, if you scroll down, you will see the “Function code” section with some boilerplate code. Let’s replace that with something useful. I wrote this basic piece of code that responds to an event by sending an email:
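The embedded code snippet from the original post isn’t reproduced in this excerpt. A minimal sketch consistent with the description that follows (assuming the aws-sdk v2 API that Lambda preinstalls, a body.html file bundled with the function, and a JSON request body carrying to and subject; the sender address is a placeholder) might look like:

```javascript
// index.js -- a sketch, not the author's exact code
const AWS = require('aws-sdk'); // preinstalled in the Lambda Node environment
const fs = require('fs');

const ses = new AWS.SES();

exports.handler = async (event) => {
  // Pull the recipient and subject out of the incoming request body.
  const { to, subject } = JSON.parse(event.body);
  const html = fs.readFileSync('body.html', 'utf8');

  const params = {
    Source: 'you@your-verified-domain.com', // must be an SES-verified address
    Destination: { ToAddresses: [to] },
    Message: {
      Subject: { Charset: 'UTF-8', Data: subject },
      Body: { Html: { Charset: 'UTF-8', Data: html } },
    },
  };

  try {
    await ses.sendEmail(params).promise();
    return { statusCode: 200, body: JSON.stringify({ sent: true }) };
  } catch (err) {
    // Errors logged here show up in CloudWatch.
    console.error(err);
    return { statusCode: 500, body: JSON.stringify({ error: err.message }) };
  }
};
```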
Note: It may seem weird that we have not installed aws-sdk by running npm install and then uploaded a compressed project, but that is because AWS makes this SDK available for us when we are in their Node environment along with the core Node packages like fs and https .
So what does this code do? It extracts the to email address and subject from the body of the incoming event request. It then uses fs to read the HTML template for our email, which we have not written yet. Next it attempts to send an email with the relevant parameters and catches any errors, outputting them to the console so that, if anything goes wrong, we can analyse them in another AWS service called CloudWatch.

Now let’s write up some HTML for our email template. You can customise your email to your requirements and use CSS to make it look beautiful. However, for the purpose of saving time, I simply used an email template that I found online. Feel free to use the same one, a different one, or write one up by yourself. To do this, right-click on the folder on the left and click on “New File.” As we decided to name our email template body.html in our JavaScript code, we must make sure that we name our HTML file accordingly and then paste/write our HTML code there.
Lambda code editor
Once done, click on the “Save” orange button so all our code is saved. | https://medium.com/better-programming/send-emails-serverlessly-with-node-js-lambda-and-aws-ses-186cba40d695 | ['Angad Singh'] | 2020-07-06 15:15:00.657000+00:00 | ['Programming', 'Nodejs', 'JavaScript', 'Serverless', 'AWS'] |
FAQs for Black, Queer, Female Software Engineers | FAQs for Black, Queer, Female Software Engineers
And why you might think twice before you ask them
A recent National Center for Women & Information Technology “By the Numbers” report puts Black women in computing in 2018 at 3% — and this statistic does not make a distinction between technical and nontechnical women in computing. Of the overall U.S. population, roughly 4.5% identify as LGBT. Since it’s incredibly hard to find industry-wide statistics on LGBT folks in tech, let’s take an extremely optimistic view and assume 4.5% of people in computing are LGBT. Combining these two percentages tells us that roughly 0.135% of the computing workforce identifies as Black, queer, and female.
And remember, this is an extremely back-of-the-napkin calculation that does not distinguish between technical and nontechnical women, and uses an almost certainly incorrectly optimistic estimation of the percentage of LGBT folks in tech.
But beyond these numbers, all of this matters because the lived experience of being such an extreme unicorn can teach all of us what not to do around the buzzword topics of “diversity” and “inclusion.”
Three questions I’m tired of answering
1. What’s it like to be a woman on this all-male team?
2. Did this happen because you are Black or is it just that you’re Black and this happened?
3. How do we improve diversity statistics at <insert your tech company here>?
Let’s start with the most obvious problem here. All of these questions ask us, the individuals holding a marginalized identity, to explain and usually defend our lived experience. There are so many reasons this shouldn’t happen, but the most pressing is that this doubles the amount of work it takes for us to even exist in this space. Queer black women in tech already experience countless daily microaggressions that come from being visibly different in the workplace. Ask us any of the questions above and then we also have to argue that our differences are valid and perform an incredible amount of emotional labor of responding to these questions or pushing back against the fact that we’re being asked these questions at all.
These questions are not word-for-word, but they capture the essence of questions that I have received from folks in and out of the tech industry, from friends to co-workers, to managers, and far beyond.
1. What’s it like to be a woman on an all-male team?
My experience as a woman on an all-male team lasted about 15 months. It wasn’t all that bad at first. Folks were excited to listen to my technical ideas, hear my personal stories, teach me how we do things here.
Then I passed the six-month mark, the point at this particular tech company where you’re expected to know roughly what you’re doing and be able to contribute at the same level as your co-workers. And suddenly I began having to repeat myself in planning meetings for anyone to notice I had spoken. At lunch, it was blindingly clear by that point that I didn’t have anything to contribute to conversations about racing cars and destination weddings, so my co-workers stopped inviting me to meals. I resorted entirely to online methods of communicating about technical problems because every word I said out loud would be hyperscrutinized by every individual within earshot.
At some point, my manager asked me how things were going, and I let him know the difficulties I was having. I pointed out that I was the only woman, and one of the youngest people on the team. I explained how my co-workers did not trust my knowledge, despite extensive proof that I knew exactly what I was doing. And he asked, “Are you speaking up enough? I notice you are often quiet in group conversations; maybe you need to insert yourself more so folks know they can have conversations with you as well.”
All of these questions ask us, the individuals holding a marginalized identity, to explain and usually defend our lived experience.
The person who had been in nearly every group conversation where I had been spoken over, whose job it was to pay attention to my experience and make sure I could contribute at the fullest capacity possible, who had time and time again said he understood the struggles of being other (because he was a foreign nonwhite person living in the U.S.): his first question put the onus of change squarely on my shoulders.
Perhaps all of this was because I am a woman; perhaps none of it is. My point is that it is an experience that is not at all unique to me. Rather, it is one that many (if not most) folks in tech have when they are not cis men.
2. Did this happen because you are Black or is it just that you’re Black and this happened?
Over the course of my life — slightly shorter than a quarter-century — I’ve asked myself countless times:
Did this happen because I’m Black? Mixed-race? Darker than white folks? Lighter than Black folks? Was I just treated differently because I’m with a group of white folks? Would this have happened differently if I wasn’t alone in my Blackness?
Tech has been no different. And in bringing up The Race Question with co-workers and managers in several companies — folks who are, themselves, Black and white and several other races — I’ve been surprised at the responses.
Almost universally, regardless of the race of the person I’m speaking with, I’m asked, “Are you sure? Are you sure this has to do with race?” Even other Black folks, other Black women in and out of tech, have flinched away from my presentation of race as part of the reason I struggle in ways my non-Black co-workers do not.
Since it is challenging to identify the exact reason I may have been criticized at a higher rate than my co-workers, let’s not even focus on that. Rather, let’s focus on why folks push back on my assertion of my racial identity affecting how I’m treated in the workplace. Of course, my race affects my work — because humans are primarily visual creatures, our visual presentation (race frequently being one of the most immediately obvious) is one of the primary ways we are categorized. Why pretend otherwise?
The only answer I can think of is that pretending that Blackness doesn’t factor into my experience in the workplace means that my co-workers, managers, and beyond don’t have to consider how to change a pervasive, difficult-to-tackle culture that allows for racially-motivated differences in treatment. Race-based mistreatment or inequality is, in my opinion, one of the hardest cultural problems to admit to and then change. It’s the easiest to ignore when you’re not the target, the most uncomfortable to accept on a personal level, especially if this acceptance would hold you personally accountable to change. It usually requires such a drastic cultural change that the easiest solution is to pretend race simply doesn’t affect anything.
Queer black women in tech already experience countless daily microaggressions that come from being visibly different in the workplace.
But the question itself — are you sure it’s about race? — and with it, the clear hope that my Blackness doesn’t have to do with the issue at hand, lets me know right away that I will have to fight to have my concerns even acknowledged as legitimate. It tells me that I do not have an ally who believes in the words I say.
Again, whether or not I am right in the end is not really the issue. The issue is more that I have to fight to have my experience believed in a way that other folks simply don’t — and that this is a part of my daily experience in the workplace as much as are any of the engineering problems I face.
3. How do we improve diversity statistics at <insert your tech company here>?
First, let me ask this:
Are you, midlevel manager or tech giant CEO, going to your white male employees and asking them how we improve diversity statistics?
No? Well, then, don’t ask me how we can improve diversity unless you are going to pay me to answer this question. That’s unpaid labor, and you can learn more about that by reading “The Techies Project And Why Unpaid Labor for Diversity in Tech Needs to Stop” and “Whose Job Is It to D&I Anyway?” If you’re asking me for this work simply because I hold identities similar to the ones you hope to hire, that’s just a bad reason. It also makes me more susceptible to burnout, and less inclined to help in any way.
If I know I’m going to be bringing other black folks (or queer folks, or women, or any mix of the three) into an environment where the many invaluable nontechnical contributions they will be expected to make will be unpaid and unappreciated, I’m not going to tell them to come. There are many reasons Black Twitter exists. One of them is that folks holding often-exploited identities know how to band together and support one another — and we will never willingly lead each other into harm’s way.
As a side note, that solidarity is why I will never again work on a team that does not have a single Black woman, queer person, or QTPOC-identifying individual — I have been the first and often only in a toxic environment one too many times already.
My identities do not exist in a silo
For me, it is not about how my identities as a Black, queer, and female software engineer exist separately. It’s about how they exist together. It’s about the exhaustion that I experience on a daily basis as I try to figure out if it is my femininity or my Blackness that contributes more to my non-Black male co-workers speaking over me; if it’s the visible queerness or the slang I slip into with QTPOC friends that makes my co-workers avoid my eyes in the cafe; if it’s my ideas or my image that need improvement in the eyes of my teammates who don’t quite trust me.
In a world where my co-workers had no implicit biases, I could trust their words without a second thought. They could give me critical feedback and I could take it at face value. But when I have to wade through a syrup of social and political confusion before I can even get to the technical questions at hand, it decreases my ability to be fully present and effective at work and increases my stress level from all sides.
So why does it matter?
Why is this collection of thoughts on womanhood, Blackness, and diversity important?
If you have ever had even a passing thought about the experiences of women in tech, racial minorities in white male-dominated spaces, or the “general diversity issues” that seem to plague every company in America, it can only help for you to be exposed to the viewpoints of more folks whose voices you don’t traditionally hear.
You rarely hear from Black queer women who are software engineers. (And if I’m wrong, please correct me — and then put me in touch with them.) So I am giving you the viewpoint of one Black queer woman currently adventuring as a software engineer.
Hopefully, you also get a little more compassion for experiences like mine. Hopefully, you understand a little better that being a Black queer female software engineer is not just about the interesting and challenging technical problems tech companies are solving, but also about countless hours of struggle over social and political issues that are rarely, if ever, recognized within tech companies themselves.
Hopefully, you learned something you’ll remember. And then you’ll use it to help change the world. | https://onezero.medium.com/faqs-for-black-queer-female-software-engineers-38b2d2b9450e | ['Naomi Day'] | 2019-09-25 14:21:26.602000+00:00 | ['Black', 'Software Engineering', 'Diversity In Tech', 'Industry', 'Queer'] |
Black Nature Writing, by Christopher Brown | This newsletter is about walking alone in nature. Mostly with an inclusive idea of what constitutes “nature”-the degraded urban wilderness of empty lots, traffic islands, and the trash-strewn floodplains of rivers that flow through big cities. But it’s not as inclusive as I like to think.
I go into the woods or onto the river to feel like I am stepping away from society, into a different realm, tuning in to the non-human world. I see the signs of human impact in the landscape, but the connections I make are mostly with other species, and with the earth, air, sun and water. But recent events in the national news like the murder of Ahmaud Arbery for jogging while black and Central Park birder Christian Cooper’s experience having a white woman call 911 when he asked her to leash her dog have helped me appreciate more clearly the extent to which the outdoors is not neutral territory, but maybe the most segregated part of America.
The kind of urban eco-exploration I document here is a product of privilege. The privilege of being able to walk alone through the negative space of the city, often traversing private property, sometimes marked, sometimes not, through a landscape populated by men with guns and a religious fidelity to the idea embodied in the fence, without ever really worrying that something bad could happen to me. The reflexive confidence, accumulated over an adult life as a white male establishment lawyer, that my very status is the real passport this society issues, one that lets you mostly go where you want.
My walks often involve encounters with other solitary walkers. And in the fifteen years I have been exploring the zone along the Colorado River in East Austin, many of those encounters have been with people of color, usually enjoying the wild parts of their own neighborhood, like the guys pictured below digging nightcrawlers from Boggy Creek. But in a life spent outside, from the Big Bend to Alaska, from the Maine coast to the California coast, from the Boundary Waters to the Rocky Mountains, almost all of the other hikers and paddlers I have encountered have been white. It’s not because they are the only ones who enjoy life outside.
This week I looked at my shelf of nature writing and realized what a monoculture it really is. Classics of the genre, stories of canoe trips, guides to all manner of flora and fauna, books of cloud watching and wind collecting, romantic poetry and paper planetariums. A few Native American voices in there, but as soon as you ask the question you already know the answer: nature and the outdoors are a field of writing dominated by white authors celebrating one kind of diversity while mostly oblivious to another.
A few weeks ago a family member sent me The Norton Book of Nature Writing, a canonical anthology of exceptional pieces by more than 125 writers, most of them American, intended as a teaching text. Only seven of the contributions are by writers of color. That after an effort by the editors to expand from the original 1990 edition, noting in their introduction “the growing consciousness that there can be no fundamental distinction between environmental preservation and social justice, that human compassion and environmental sustainability are branches of the same tree, and that cultural diversity is one of the primary resources we have for ensuring biological diversity.”
I’ve written a fair bit about those connections, on my own terms. I’ve learned a lot in the past decade from neighbors who fight for justice at the nexus of race and environmentalism, and from my dear uncle, who in his last few years traded stories of our shared hometown with me that helped me understand the extent to which my childhood freedom of movement was something he never had as a black kid for whom even a paper route was a trip into the danger zone. But to my embarrassment, I had never really looked with clear focus at what colonized space the American outdoors really is.
This week I began seeking out nature writing by black authors, and found an array of powerful works that immediately sharpened my understanding of the very different ways in which we experience access to nature. I found writers who bring a much deeper and more intense sense of the history visible in the land, whose expressions of the joy to be found in those moments of satori-like connection to the non-human world transcend the usual reveries of this peculiar genre. And writers whose walks in the woods are haunted by a fear of more than snakes and bears. I have only just begun to wade into this body of work, but I thought I would use this week’s newsletter to share a few examples of the enlightening literature that’s out there, and encourage you to go read them and undertake your own explorations.
Evelyn White
Evelyn White writes about a lot more than nature, but her short essay “ Black Women and the Wilderness “ is one of the most powerful and succinct discussions I found of that difference of experience. She shares her intense fear at even the idea of going outside the city, through a recollection of summers spent in a genteel and bucolic setting, the kind most writers covet, teaching at a mountainside writers workshop in Oregon’s Cascade Mountains. Showing through personal history why she was unable to join the students and other teachers in their daily excursions out into the woods and onto the river, even though she may have had even stronger yearnings than them to connect with what’s out there.
“My genetic memory of ancestors hunted down and preyed upon in rural settings counters my fervent hopes of finding peace in the wilderness. Instead of the solace and comfort I seek, I imagine myself in the country as my forebears were-exposed, vulnerable, and unprotected-a target of cruelty and hate.”
When she finally gets up the nerve to go on a river trip with her colleagues, her presence results in the boatman turning them down, claiming there are no more boats. So they go on foot, into the woods.
Lauret Savoy
Lauret Savoy is a geologist who teaches at Mt. Holyoke, and writes about landscape through a prism of deep history. Her book is memoir through the mirror of environment, tracing her lifelong connection with the study of the land in parallel with her coming of age as one of its inhabitants, a woman of mixed African, indigenous and European ancestry whose childhood cognizance of racial identity she describes as being imposed on her long after she had looked at her skin in the California sun as a young girl, and connected her soul to the American rock through a family trip to Grand Canyon’s Point Sublime. Savoy writes with an easy grandeur grounded in a compelling personal story, suggesting the draw of natural history as an escape from racism: “nature wasn’t something that would hate me.”
“Human experience and the history of the American land itself have, in fragmented tellings, artificially pulled apart what cannot be disentangled: nature and ‘race.’ It’s important to make connections often unrecognized, to trespass supposed borders to counter some of our oldest and most damaging public silences. There are so many poorly known links between place and race, including the siting of the nation’s capital and the economic motives of slavery. None of these links is coincidental. Few appear in public history. Many touch me-and you.”
Described by one reviewer as “John McPhee meets James Baldwin,” Trace won the American Book Award and the ASLE Environmental Creative Writing Award, and was a Pen Literary Award Finalist, shortlisted for the William Saroyan Prize, and nominated for several other notable prizes.
Eddy L. Harris
Eddy Harris is a St. Louis native who, at the age of 30 in the late 1980s, set out to canoe the full length of the Mississippi River - as a black man paddling solo. He chronicled this trip in his book Mississippi Solo, and then made the trip again 30 years later as a middle-aged man - this time with a camera crew, which produced the 2018 documentary River to the Heart. The epic trek on a route through the heart of America gives Harris’s book a powerful narrative line that lets all the parts of the story flow together - the personal journey, the intense social context, and the sublime natural history. Harris is a gifted storyteller who writes beautifully and perceptively, showing a keen self-awareness and eye for the character of others. Consider this passage, as he discusses his plans with an older mentor over whiskey:
Once I finish Mississippi Solo, I plan to dig into South of Haunted Dreams, Harris’s book about his even more courageous solo motorcycle journey through the Deep South, described as among the best books ever written about race in America.
I hope those of you who enjoy nature writing and the experience of the American outdoors will check these works out, and some of the other exemplars of this rich literature. They help us see how race is always there, even when there are no people around. | https://medium.com/the-reading-lists/christopher-browns-black-nature-writing-2afbc6267cd | ['Brianna Robinson'] | 2020-07-09 13:22:37.687000+00:00 | ['Christopher Brown', 'Environment', 'Nature', 'Nature Writing', 'Black Writers'] |
Build Your Own Fake News Classifier

DATASET
The dataset I used for this python project is news.csv. This dataset contains News, Title, Text, and Label as the attributes. You can download it from here.
READING THE DATASET
# Reading the data
import pandas as pd

df = pd.read_csv('/home/femme_js/Hoaxify/news.csv')
df.head(10)
This is what the dataset looks like.
Before proceeding, check whether your dataset has any null values.
# Checking if any column has NaN values
check_nan_in_df = df.isnull()
print(check_nan_in_df)
This data frame does not contain null values. But if your data frame does, fill them with spaces before combining the columns into a single feature. Here’s how:
df = df.fillna(' ')
Since both the ‘title’ and ‘text’ features are important, we can combine them into a single feature named ‘total’.
df['total'] = df['title'] + ' ' + df['text']
df.head()
The dataset looks like this.
PRE-PROCESSING OF THE DATA
To preprocess your text simply means to bring your text into a form that is predictable and analyzable for your task. We use the nltk library for this.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk import sent_tokenize, word_tokenize
1. Removing Stopwords: Stopwords are the words in any language which do not add much meaning to a sentence. They can safely be ignored without sacrificing the meaning of the sentence. You can read more about it here.

2. Tokenization: Tokenization is the process of splitting a string of text into a list of tokens. You can think of a token as a part of a whole: a word is a token in a sentence, and a sentence is a token in a paragraph. For example:
from nltk.tokenize import word_tokenize

text = "Hello everyone. You are reading NLP article."
word_tokenize(text)
The output looks like this:
['Hello', 'everyone', '.', 'You', 'are', 'reading', 'NLP', 'article', '.']
3. Lemmatization: Lemmatization is the process of grouping together the different inflected forms of a word so they can be analyzed as a single item.
Text preprocessing includes both stemming and lemmatization. People often find these two terms confusing, and some treat them as the same. Lemmatization is preferred over stemming because lemmatization does a morphological analysis of the words.
Examples of lemmatization:
swimming → swim
rocks → rock
better → good
For a deeper dive into stemming vs. lemmatization, check here.
The following code does all the pre-processing.
import re

stop_words = stopwords.words('english')
lemmatizer = WordNetLemmatizer()

for index, row in df.iterrows():
    filter_sentence = ''
    sentence = row['total']
    # Cleaning the sentence with regex
    sentence = re.sub(r'[^\w\s]', '', sentence)
    # Tokenization
    words = nltk.word_tokenize(sentence)
    # Stopwords removal
    words = [w for w in words if w not in stop_words]
    # Lemmatization
    for word in words:
        filter_sentence = filter_sentence + ' ' + str(lemmatizer.lemmatize(word)).lower()
    df.loc[index, 'total'] = filter_sentence

df.head()
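To see the cleaning steps in isolation, here is a dependency-free sketch of the same idea. It strips punctuation, lowercases, splits on whitespace, and drops words from a tiny illustrative stopword set (an assumption here, not NLTK’s real list), and it skips lemmatization entirely; the loop above is the actual pipeline.

```python
import re

SAMPLE_STOPWORDS = {"the", "a", "an", "is", "are", "you"}  # illustrative subset only

def clean_text(sentence: str) -> str:
    sentence = re.sub(r'[^\w\s]', '', sentence)  # remove punctuation
    words = sentence.lower().split()             # naive whitespace tokenization
    words = [w for w in words if w not in SAMPLE_STOPWORDS]
    return ' '.join(words)

print(clean_text("Hello, you are reading the NLP article!"))
# hello reading nlp article
```

Packaging the cleaning as a function like this also makes it easy to apply with `df['total'].apply(clean_text)` instead of an explicit loop.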
CONVERTING LABELS
The labels here are classified as Fake and Real. To train our model, we have to convert them into numerical form.
df.label = df.label.astype(str)
df.label = df.label.str.strip()
label_map = {'REAL': '1', 'FAKE': '0'}
df['label'] = df['label'].map(label_map)
df.head()
The label feature looks like this.
Proceeding further, we separate our dataset into input and output features, ‘x_df’ and ‘y_df’.
x_df = df['total']
y_df = df['label']
VECTORIZATION
Vectorization is a methodology in NLP to map words or phrases from a vocabulary to corresponding vectors of real numbers, which are used to find word predictions and word similarities/semantics.
Out of curiosity, you may want to check out this article on ‘Why data are represented as vectors in Data Science Problems’.
To make document corpora more palatable for computers, they must first be converted into some numerical structure. There are a few techniques used to achieve this, such as Bag of Words.
Here, we are using vectorizer objects provided by Scikit-Learn which are quite reliable right out of the box.
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
count_vectorizer = CountVectorizer()
freq_term_matrix = count_vectorizer.fit_transform(x_df)
tfidf = TfidfTransformer(norm = "l2")
tf_idf_matrix = tfidf.fit_transform(freq_term_matrix)
print(tf_idf_matrix)
Here, with ‘TfidfTransformer’ we compute word counts using ‘CountVectorizer’, then compute the IDF values, and after that the TF-IDF scores. With ‘TfidfVectorizer’ we can do all three steps at once.
The code written above will provide you with a matrix representing your text. It will be a sparse matrix with a large number of elements stored in Compressed Sparse Row format.
The most commonly used vectorizers are:
Count Vectorizer: The most straightforward one, it counts the number of times a token shows up in the document and uses this value as its weight.

Hash Vectorizer: This one is designed to be as memory efficient as possible. Instead of storing the tokens as strings, the vectorizer applies the hashing trick to encode them as numerical indexes. The downside of this method is that once vectorized, the features’ names can no longer be retrieved.

TF-IDF Vectorizer: TF-IDF stands for “term frequency-inverse document frequency”, meaning the weight assigned to each token not only depends on its frequency in a document but also on how recurrent that term is in the entire corpora. More on that here.
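To build intuition for what these vectorizers compute, here is a toy, from-scratch sketch of raw term counts and a simplified TF-IDF weighting. This is an illustration only: scikit-learn’s actual TfidfVectorizer adds smoothing to the IDF term and L2-normalizes each row, so its exact numbers differ.

```python
import math
from collections import Counter

docs = [
    "fake news spreads fast",
    "real news matters",
    "news travels fast",
]

# Count-vectorizer idea: raw term frequency per document
counts = [Counter(doc.split()) for doc in docs]

# Simplified TF-IDF: tf * log(N / df), where df is the number of
# documents containing the word.
N = len(docs)
df = Counter(word for c in counts for word in c)

def tfidf(word: str, doc_counts: Counter) -> float:
    tf = doc_counts[word]
    idf = math.log(N / df[word])
    return tf * idf

# "news" appears in every document, so its weight collapses to zero,
# while "fake" is distinctive to document 0 and keeps a high weight.
print(round(tfidf("news", counts[0]), 3))  # 0.0
print(round(tfidf("fake", counts[0]), 3))  # 1.099
```

This is exactly why TF-IDF tends to outperform raw counts here: words that appear in every article (like “news”) carry no signal for distinguishing fake from real.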
MODELING
After Vectorization, we split the data into test and train data.
# Splitting the data into test data and train data
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(tf_idf_matrix, y_df, random_state=0)
I fit four ML models to the data: Logistic Regression, Naive Bayes, Decision Tree, and Passive-Aggressive Classifier.

After that, I predicted on the test set and calculated the accuracy with accuracy_score() from sklearn.metrics.
Logistic Regression
#LOGISTIC REGRESSION
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(x_train, y_train)
Accuracy = logreg.score(x_test, y_test)
print(Accuracy*100)
Accuracy: 91.73%
2. Naive-Bayes
#NAIVE BAYES
from sklearn.naive_bayes import MultinomialNB
NB = MultinomialNB()
NB.fit(x_train, y_train)
Accuracy = NB.score(x_test, y_test)
print(Accuracy*100)
Accuracy: 82.32 %
3. Decision Tree
# DECISION TREE
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(x_train, y_train)
Accuracy = clf.score(x_test, y_test)
print(Accuracy*100)
Accuracy: 80.49%
4. Passive-Aggressive Classifier
# PASSIVE-AGGRESSIVE CLASSIFIER
from sklearn.metrics import accuracy_score
from sklearn.linear_model import PassiveAggressiveClassifier

pac = PassiveAggressiveClassifier(max_iter=50)
pac.fit(x_train, y_train)

# Predict on the test set and calculate accuracy
y_pred = pac.predict(x_test)
score = accuracy_score(y_test, y_pred)
print(f'Accuracy: {round(score*100,2)}%')
Output:
Accuracy: 93.12%
CONCLUSIONS
The passive-aggressive classifier performed the best here and gave an accuracy of 93.12%.
We can print a confusion matrix to gain insight into the number of false and true negatives and positives.
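As a sketch of what a confusion matrix holds, here it is computed by hand on made-up labels (using the same ‘1’ = REAL, ‘0’ = FAKE encoding as above); in practice you would call sklearn.metrics.confusion_matrix on y_test and y_pred.

```python
from collections import Counter

# Hypothetical true and predicted labels, for illustration only.
y_true = ['1', '0', '1', '1', '0', '0', '1', '0']
y_pred = ['1', '0', '0', '1', '0', '1', '1', '0']

pairs = Counter(zip(y_true, y_pred))
tp = pairs[('1', '1')]  # real predicted real (true positive)
tn = pairs[('0', '0')]  # fake predicted fake (true negative)
fp = pairs[('0', '1')]  # fake predicted real (false positive)
fn = pairs[('1', '0')]  # real predicted fake (false negative)

print(f"TP={tp} TN={tn} FP={fp} FN={fn}")          # TP=3 TN=3 FP=1 FN=1
print(f"Accuracy: {(tp + tn) / len(y_true):.2f}")  # Accuracy: 0.75
```

For a fake-news classifier, false negatives (fake articles passed off as real) are usually the costlier error, which is exactly the kind of insight accuracy alone hides.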
Check out the code here. | https://medium.com/swlh/build-your-own-fake-news-classifier-7918f05c2ec7 | ['Jeevanshi Sharma'] | 2020-06-27 08:14:20.568000+00:00 | ['Machine Learning', 'Data Science', 'NLP', 'Artificial Intelligence', 'Fake News'] |
What every software engineer should know about OAuth 2.0

OAuth 2.0 is the industry standard protocol for authorization.
When I read the above sentence on the OAuth 2.0 homepage, I felt like this is the kind of knowledge that I should have in order to call myself a software engineer, so I started digging. I remember feeling a little scared that I would have to break through a lot of knowledge that is difficult to learn, but it turned out alright, and I’ve decided to share what I’ve learned!
What you’ll get from this article:
How OAuth 2.0 works
How an application gets an access token
How SPA and web server applications deal with OAuth
Types of access token and their validation
How OAuth 2.0 works
First of all, if you are like me, the official specifications are not always the best place to get a bigger picture of how OAuth works, as you may almost immediately be overwhelmed by the amount of information (because it’s not one specification but a lot of them). To understand how OAuth works, let’s ask questions.
What problem does OAuth solve?
Let’s take for example an application where we want to see a user’s daily schedule, assuming that our user has a Google account with some data to get from the calendar — but how can we ask the user for this data whilst considering security? That’s the moment where OAuth comes into play, but what is important is: OAuth doesn’t tell the application who you are, it just gives the application access to the data which it asks for!
How does OAuth solve it?
The simple answer is, by giving an access token which allows your client application to get the data it needs from a resource server. I would like to describe an analogy which helped me a lot to understand this:
Let’s assume you work for a bigger company where you have to use an access card to get in. This card was given to you on your first day of work, with proper permissions, by a person who knows that you are a new employee. After that first day you use this card to get into the company and nobody checks your identity anymore; you just hold your access card to the reader and the door will open. Furthermore, you cannot open all doors, just the ones which allow you to do your everyday job tasks. OAuth is based on this kind of flow.
Conclusions:
You are verified on the first day and receive an access card which you can use every day
Every reader opens the door for you (if access to this area has been granted to you)
The reader doesn’t know anything about your identity; the only thing that matters to the reader is whether you have a valid access card in your hand.
How does an application get an access token?
From an apps perspective, the goal is to get an access token and then get data thanks to this token — so how does an app get an access token?
Ways to obtain a token by OAuth:
Authorization Code Flow (web apps, native apps)
Device Flow (AppleTV)
Password (for first-party apps)
Client Credentials (machine to machine)
Authorization Code Flow is the most common approach, as it is used both on the web and in native applications. Before we can answer the question of how it actually works, one more thing we need to know is what kinds of roles exist in OAuth.
OAuth Roles:
User (Resource owner)*
Device (User agent)
Application (Client)
OAuth server (Authorization server, where access token comes from)
API (Resource server)
*names in parentheses are those used in OAuth spec
A proper diagram can be worth more than a thousand words:
Getting an access token
1. User logs in to the application;
2. Application requests access to some data (Google Calendar) and redirects the user to the OAuth server;
3. User logs in to the OAuth server with credentials;
4. OAuth server returns a temporary code which will then be exchanged for an access token by the app;
5. User sends the temporary code to the app;
6. The application exchanges the code for an access token;
7. The OAuth server returns an access token to the app;
8. The app uses the access token to get some data from the resource server;
9. Resource server returns user data to the application.
Off-topic: The application logo on the above diagram isn’t there by accident, it’s my newly created app to betting with friends. I named it Betbitly, check more if you are curious.
This access token flow can be a divided into two channels:
🍎 Front channel (red lines)
Data is sent via the URL, and because of that the request/response can be tampered with by the user or by malicious software. In this kind of situation, you cannot guarantee that nobody saw the data, such as an access token, along the way. Both sides don’t know whether the data has been tampered with or sent from an unwanted source. Someone can change data along the way.
You can imagine a situation where you send a letter to a friend who doesn’t have a letterbox, and you don’t choose the option of delivery into the owner’s hands, so the postman may leave the letter at the front door. Until your friend returns home, somebody else can get at this letter and change something in it. In this case, neither you nor your friend can be 100% sure that the letter has not been changed by an uninvited person. Why even use a front channel in this case?
The front channel is necessary because it is a natural interface to communicate with users to authorize an application to use data from the resource server; a mobile device is another example.
🍏 Back channel (green lines)
This is the secure part of communication because it is sent from an application to the server over HTTPS (and cannot be tampered with).
In our analogy with the letter, it would be the option to deliver the letter into the owner’s hands.
Conclusions:
OAuth can obtain an access token in four different ways
There are five different types of OAuth roles
The way the application obtains the user’s data can be divided into two channels (front, back)
Go deeper, two types of clients!
In terms of an application’s ability to keep a secret, we can divide clients again into two types:
Confidential Clients (application runs on server) which can keep a secret
and Public Clients (SPA or native) which cannot.
Keeping this division in mind, I will describe what the requests look like for each of these types. There will be four requests: two for authorization and two for getting an access token (the numbers refer to the numbers on the schema).
Authorization (front channel 🍎)
User => OAuth server (3)
OAuth server => User (5)
Getting an Access Token (back channel 🍏)
Application => OAuth server (6)
OAuth server => Application (7)
Web server application (Confidental Clients)
Authorization (front channel 🍎)
It’s worth mentioning Scope here. Scope guarantees an application only has access to the data which was requested during login (of a user to an OAuth server). Users should always be shown which data access has been granted.
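As a sketch, this front-channel step is just a redirect to the OAuth server’s authorization endpoint with a handful of query parameters. The endpoint URL, client ID, and scope below are hypothetical placeholders; a real app uses its provider’s registered values.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and client values, for illustration only.
auth_params = urlencode({
    "response_type": "code",                       # we want an authorization code
    "client_id": "YOUR_CLIENT_ID",
    "redirect_uri": "https://example.com/callback",
    "scope": "calendar.read",                      # what the user is asked to grant
    "state": "RANDOM_ANTI_CSRF_STRING",            # echoed back to detect CSRF
})
authorize_url = "https://oauth.example.com/authorize?" + auth_params
print(authorize_url)
```

The user is sent to this URL, logs in, reviews the requested scope, and the server redirects back to redirect_uri with the temporary code.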
If a user allows an application access to resources, the OAuth server will respond with a code which will then be exchanged by the application for the access token:
Getting access token (back channel 🍏)
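As a sketch, the back-channel step is an HTTPS POST to the OAuth server’s token endpoint. The body might look like the following (all values here are hypothetical placeholders); note the client secret only appears on this channel, because only a server can keep it secret.

```python
from urllib.parse import urlencode

# Hypothetical values for illustration; a real app uses its own
# registered client_id/client_secret and the provider's token endpoint.
token_request_body = urlencode({
    "grant_type": "authorization_code",
    "code": "TEMPORARY_CODE_FROM_FRONT_CHANNEL",
    "redirect_uri": "https://example.com/callback",
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",  # back channel only, never in the browser
})

# This body is POSTed over HTTPS to e.g. https://oauth.example.com/token,
# and the server responds with the access token (plus expiry, refresh
# token, etc.).
print(token_request_body)
```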
Single-Page Application (Public Clients)
This kind of application cannot maintain a client secret because it runs entirely in the browser, so to preserve the authorization code flow introduced above it has to generate a secret on each request. This is called PKCE, which stands for Proof Key for Code Exchange: instead of sending a client secret, you send PKCE values generated every time the access token flow starts. To achieve this we have to change our previous web server authorization request (auth-req.web-server.js) a little by adding two ingredients:
code verifier (random string 43–128 characters long)
code challenge (url-safe base64-encoded SHA256 hash of the code verifier)
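Generating these two values is straightforward with standard library primitives; here is a minimal sketch following that recipe (a random URL-safe verifier, then the base64url-encoded SHA-256 hash of it with the ‘=’ padding stripped).

```python
import base64
import hashlib
import secrets

# Code verifier: a random URL-safe string, 43-128 characters long.
code_verifier = secrets.token_urlsafe(64)  # 64 random bytes -> ~86 chars

# Code challenge: base64url(SHA-256(verifier)) without '=' padding.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The challenge goes in the front-channel authorization request; the
# verifier is sent later on the back channel when exchanging the code,
# so the OAuth server can hash it and compare the two.
print(code_challenge)
```

Because only the hash travels over the tamperable front channel, an attacker who intercepts the authorization code still cannot exchange it without the original verifier.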
Authorization (front channel 🍎)
Getting access token (back channel 🍏)
Mobile apps also cannot keep a secret, so we have to use the PKCE approach there as well; for more information I recommend visiting Aaron Parecki’s site.
Conclusions:
Two types of clients: confidential and public;
The confidential client can keep a secret (web server app);
The public client can’t keep a secret (SPA);
PKCE instead of a client secret in public clients.
Access token
Types of access token
Reference token:
Often stored in a database
Allows every kind of operation you can do with standard database data, for example showing a list of active tokens
Can be easily revoked (just delete it from the database)
Used for smaller applications, because all issued tokens have to be stored.
Self-Encoded Tokens:
Data lives in the token itself (JWT)
Doesn’t have to be stored
Separation of concerns: the API doesn’t have to store access tokens
Validated at the API without a DB/HTTP lookup, unlike a reference token
Cannot get a list of all active tokens
Better for bigger applications
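To see what “data lives in the token itself” means, here is a sketch of reading a JWT’s claims. This only decodes the payload segment; real validation must also verify the signature (the third segment) and claims like exp/iss/aud, typically with a JWT library. The toy token below is constructed here purely for illustration.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (claims) segment of a JWT. Does NOT verify
    the signature - never trust this alone for authorization."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token just to demonstrate the header.payload.signature shape.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(b'{"sub":"42","scope":"calendar.read"}').rstrip(b"=").decode()
toy_token = f"{header}.{claims}."

print(decode_jwt_payload(toy_token))
# {'sub': '42', 'scope': 'calendar.read'}
```

This is why the API can validate a self-encoded token locally: everything it needs (subject, scope, expiry) is inside the token, guarded by the signature.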
Access token validation
An access token has to be validated once it has been granted. It can become invalid for two reasons: expiration or revocation.
Token expiration is related to the expiration time, which is one of the properties returned from the OAuth server, along with the rest of the data, when the application exchanges the code received from the user for an access token.
Imagine a case where the access token expiration is set to 1 hour but our application session is longer than that, for example 4 hours. To deliver the best user experience we don’t want to ask the user to log in 4 times in one session. To solve this issue we receive a refresh token along with the access token from the OAuth server, which allows the application to get a new access token for the user in the background. The key difference between the access token and the refresh token is the lifetime cycle: the access token is short-lived, so storing it doesn’t require as much security as the refresh token, which has a long life cycle; if an attacker were to steal the refresh token, the consequences would be much worse.
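The background-refresh decision can be sketched as a small helper: treat the token as expiring slightly early (a skew buffer, an arbitrary choice here) so the app refreshes before the token actually dies mid-request.

```python
import time

def needs_refresh(obtained_at: float, expires_in: int, skew: int = 60) -> bool:
    """True once we're within `skew` seconds of the token's expiry.

    obtained_at: unix timestamp when the token was received
    expires_in:  lifetime in seconds, as returned by the OAuth server
    """
    return time.time() >= obtained_at + expires_in - skew

# Token issued just now with a 1-hour lifetime: still fresh.
print(needs_refresh(time.time(), 3600))          # False
# Token issued 2 hours ago with a 1-hour lifetime: refresh it.
print(needs_refresh(time.time() - 7200, 3600))   # True
```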
Revocation is quite simple: it takes place when the user tells the OAuth server that this token is no longer valid.
Revoked token issue
Note: Pay attention to the fact that if you only do local validation and your access token has an expiration time of, say, 4 hours, and after 1 hour the user revokes the application’s access, local validation will still pass because it doesn’t know anything about the revocation. If the application asked the OAuth server, it would of course learn that the token is invalid. So in the worst-case scenario you can hold a token for 3 hours that is actually invalid but still validates locally. Because of that, you should run local validation for every protected HTTP request, and then decide in your application which data is sensitive enough to warrant an additional request to the OAuth server to check whether the token has been revoked.
Conclusions:
Two types of tokens: reference and self-encoded
A token can expire or be revoked
Always ask the OAuth server if an access token is still valid before processing a security-sensitive operation
Closing thoughts
These are the basics which (in my opinion) every software engineer should know about OAuth. I hope that it’s just a little bit more accessible than reading the official specifications.
As far as I’m concerned, I’m still impressed with how well OAuth 2.0 works, and in terms of security it also looks very solid. It’s easy to understand the basic concept and flow behind granting an application rights to a user’s data resources. No wonder almost all applications which offer 3rd-party login use OAuth 2.0.
I hope now you are just a bit more familiar with OAuth, thanks for your time!
Stay in touch with me on Twitter or Github.
📑 Resources | https://medium.com/dailyjs/what-every-software-engineer-should-know-about-oauth-2-0-10f0ef4998e5 | ['Kacper Wdowik'] | 2020-04-27 17:10:44.980000+00:00 | ['Software Engineering', 'Programming', 'Oauth', 'Security', 'Token'] |
Do YOU Have Toxic Traits? How to Identify Your Own Toxic Behaviors

We’ve talked a lot about self-awareness in our podcast and explored the concept of how our own behavioral traits can affect our relationships.
No one is perfect, we know that. There are always aspects of yourself that you can improve on, whether it be in your relationship, your friendships, or your work environment. So it’s inevitable that your own behavioral traits may conflict with your surroundings from time to time, especially if they are toxic traits.
Image Credit: Unsplash
However, when it comes to unhealthy traits of your own, not everyone realizes how to recognize them nor how to correct them. This is especially important if they’re affecting your everyday life. Oftentimes, we are quick to blame other people for downfalls in our life, when in reality, it takes an emotionally mature person to look inwards at their own part in the situation.
In this article, we’ll discuss some common toxic traits that people have but may not even realize they possess. We’ll also outline how to recognize if you are the one with unhealthy habits and some tips on how to cultivate self-awareness.
If you’d prefer to listen to this topic, check out our podcast episode titled, do YOU have toxic traits? how to identify your own toxic behaviors.
Ask Yourself: Are You Self-Aware?
Image Credit: Unsplash
Before we get into some toxic behavioral traits, ask yourself, “Am I self-aware?”
Being self-aware means being honest with yourself and being open to improving your behavior habits. By checking in with yourself, you may recognize that the problem is not always the other person. It could be you too.
We like to think of self-awareness as being very you focused. Taking a pause and reflecting on your own behavioral traits and areas of improvement is crucial for mental and emotional development. This is not just advice for certain people, we all need to take this time for ourselves.
Toxic Trait: Negativity
Image Credit: Unsplash
The first common toxic trait that people have and may not realize is actually harmful, is negativity.
Ask yourself, “Do I have a negative outlook on life?” “Am I always approaching situations from a negative perspective?” This is something many of us are guilty of.
Perhaps some of us use negativity as a defense mechanism to protect ourselves from going out on a limb, from chancing something, from taking risks. However, this negative outlook can take over our lives and may even dictate our personality if we don’t catch on and recognize it.
Negativity can also affect our relationships in ways we don’t consider. For example, if you are friends with a super positive person, being negative could be a dealbreaker for them. Your friends may end up feeling disconnected from you because they don’t want your negativity in their life.
We all have bad days, but when your whole mind listens and answers in a negative tone, you can’t begin to appreciate anything. You expect crappy things to happen to you while treating everything as a blame-game rather than accepting something, learning from it, and moving on.
Be cautious of the energy you’re giving off and notice if you are the person that changes the energy in a room. If you are someone who tends to be more negative, practicing gratitude for the little things that are going right for you allows you to notice positivity but also allows you to feel it. The goal is to find things that make you feel good.
We as humans tend to give a lot of our attention to the negative aspects of our life. The things we don’t have, the things we want more of, but often we forget how much we do have, like things we have now that we may have asked for in the past.
Toxic Trait: Being Judgmental
Image Credit: Unsplash
Our next toxic trait that people tend to not realize they have is being judgmental. Are you judging others for how they’re living their own life? Are you judging someone when they confide in you?
In a romantic relationship for example, if your partner comes to you with a concern or a discussion, it’s important to create a safe space for them so that they can voice their opinions. Making the other person feel heard and listened to goes hand in hand with a healthy relationship. This is a good time to recognize if you are being too judgmental or too hard on someone.
Not settling and having standards is one thing, but if you’re judging a friend or partner off minuscule things that don’t define their character, it’s wise to recognize that and actively tell yourself to stop.
We find that a tip for cultivating self-awareness and training yourself to fix this unhealthy trait is to seek to understand where the other person is coming from, instead of jumping to a judgmental conclusion.
It’s interesting to note that if you’re judgmental towards others, it may be coming from being too judgmental towards yourself. Reflect on what your own triggers are for when you’re hard on yourself, and see if those are the same triggers that you have while judging others.
Toxic Trait: Taking No Responsibility
Image Credit: Unsplash
Avoiding responsibility is a toxic trait that is often a sign of immaturity as well. Being emotionally mature and self-aware means that you’re able and willing to admit when you’re wrong, where you could improve on, and even that you may have hurt someone. It’s not always the other person.
This toxic trait goes hand in hand with the unhealthy trait of not apologizing when you are wrong, which we discussed in one of our podcast episodes.
In recent years, we’ve seen a new approach to apologizing. We’re realizing more and more that some things just don’t require an apology even though we’re inclined to give one anyway. However, there is a difference between unnecessary apologies for things like your feelings and necessary apologies for when you hurt someone.
If taking responsibility requires an apology, let’s remember that a good apology is something sincere. There are different ways in which people want to be apologized to as well. Some people want there to be an action behind the apology, some people want to hear the words “I’m sorry”, some people need to see an improvement in future behaviors, and some need all of the above.
So it’s important to take responsibility for your actions in the way that the person receiving the apology needs to experience it. Asking the other person, “What can I do to make this better?” is a good question if you are unsure how to approach the apology or the responsibility.
Toxic Trait: Gaslighting
Image Credit: Getty Images
Another toxic trait that we believe is under-discussed in society is gaslighting. Gaslighting is when you invalidate someone’s emotions and manipulate them into questioning their own sanity.
Examples of gaslighting are phrases like:
“You’re crazy to think that.”
“You can’t take a joke.”
“You’re being too sensitive.”
“That person was lying.”
“You don’t know what you’re talking about.”
Basically, if you’re blaming someone for exaggerating or making a big deal out of something that’s important to them, this is a form of gaslighting.
Are you someone who often puts down someone else’s emotions when they voice concerns? Most people are quick to say no because, in theory, this sounds like an awful trait to have. However, ask yourself: have there been times when you downplayed something someone else said because you didn’t agree with it or couldn’t take responsibility for it?
In our lives, we have seen a lot of “mild” forms of gaslighting where someone may not mean to intentionally manipulate another person to question their insanity, but rather turn something back on the other person in order to take the attention off themselves.
This form of gaslighting is common when people fail to acknowledge, apologize, or deal with their own unhealthy patterns of behavior. An example of this is not taking responsibility for saying something hurtful to someone but instead passing it off as a joke.
However, there can also be instances where someone is absolutely manipulative and knows exactly what their intentions are like telling blatant lies to make a person believe something about them.
A lot of people may not be familiar with the term gaslighting but know its meaning, having either done this themselves or been on the receiving end of it.
It’s never ok to diminish someone’s worries or concerns. If someone comes to you and says, “I’m really worried or concerned about so and so” and you answer with “Don’t worry, it’s not that bad,” this is also gaslighting. Too often, we forget to listen to understand and instead, listen to answer.
If someone has genuine concerns, they’re not going to appreciate you diminishing its importance. Especially if someone approaches you with a subject that might be difficult for them to talk about or something that takes a lot of courage for them personally to address.
Toxic Trait: Manipulation
Image Credit: Unsplash
Being manipulative is a toxic trait that is a bit more uncommon because it’s considered to be one of the most harmful traits that a person can have, and often one that people do not want to improve on.
Being manipulative can present itself in many forms. This can include making something all about yourself, doing anything to get what you want, or even using other people for your own personal gain.
Manipulation can manifest in romantic relationships, friendships, and even amongst family members.
Manipulative people don’t care about the consequences of their actions. Putting people down for personal gain or even twisting stories to benefit yourself is a common characteristic of manipulation.
Some other examples of manipulation include:
Using mental tricks to implement fear
Guilting someone into doing something for you
Exploiting emotions
Pressuring someone to make a decision before they’re ready
Using victimhood as an excuse for something
Pretending to be ignorant
Talking about people in a certain way when they aren’t present to get someone else to believe something about them
This trait can end up becoming very dangerous if not acknowledged. If you believe that you have tendencies to be manipulative, you’re going to have to really work at improving this behavioral trait. First off, you have to recognize if you are manipulative and to what degree.
Unfortunately, people who are manipulative often won’t admit it or refuse to acknowledge that they have this trait in the first place.
In this case, seeking help from a mental health professional can be extremely beneficial for working through what’s provoking this toxic trait.
Toxic Trait: Inconsistency
Image Credit: Unsplash
Another unhealthy behavioral trait that a lot of people possess (and can totally be improved on) is inconsistency — both in behavior and actions.
Inconsistency in behavior and commitments may develop as a result of where exactly you are in your life right now, or if you’ve just experienced something difficult or life-changing, which is understandable.
It is, however, another story to be the type of inconsistent where it’s affecting your progression in life but also when it affects the people around you. If you can’t be relied on and people can’t depend on you, that’s where this trait starts to become toxic.
If you’re inconsistent with hangouts, work commitments, or important responsibilities, this can eventually lead to frail relationships, impaired trust, and overall low expectations of you.
Of course, it’s understandable to make yourself distant in certain situations: avoiding social interaction, taking a breather from responsibilities, or perhaps discovering yourself, educating yourself on different things, and exploring different versions of yourself. That’s completely normal and okay.
However, when you begin to strain your relationships because of consistent inconsistency and become somewhat unreliable, you make that relationship flimsy.
A common example of inconsistency is making plans and not following through, or rescheduling with no intent to follow through. We’ve all done this at some point, but the important question is whether you make a habit out of it.
If you’re someone who is inconsistent, you may be guilty of only being there for someone when it’s convenient for you. Many inconsistent people also struggle with making up their minds with who or what they want in their life. If this is something you can relate to, then this is a chance for you to enhance your self-awareness.
What To Do When You Have Toxic Traits
When changing your behavioral habits, you’re already acknowledging a part of yourself that you may have been avoiding. When becoming self-aware, it’s important to be honest with yourself, even if it means asking yourself direct questions about the past. Acknowledging these aspects of yourself leads to self-development.
So, once you’ve asked yourself the questions and are willing to correct some unhealthy habits that can manifest as toxic traits, then you can begin to cultivate true self-awareness.
Internal Self-Awareness
Internal self-awareness refers to how we fit in with our environment and how we impact others. This type of self-awareness relates to how clearly we see our own values, passions, aspirations, and reactions, including thoughts, feelings, behaviors, strengths, and weaknesses.
External Self-Awareness
External self-awareness means understanding how other people view us, in terms of those same factors listed above. How we see ourselves and how others see us are very different experiences.
Accepting and understanding who you are is crucial when improving toxic traits. This means that you are being honest with yourself, being curious about who you are, and taking the third-person perspective in arguments.
Final Thoughts
Everyone has areas of themselves that they can always improve on. No one is perfect and the first step to bettering yourself is recognizing what your areas of improvement are.
Some people are more susceptible based on their personalities to possess certain negative qualities. It’s all about working on your self-awareness and growing as an individual.
- Negativity
- Being judgmental
- Lacking responsibility
- Gaslighting
- Manipulation
- Inconsistency
For more discussions, check out our podcast, shifting her experience (she.) on Apple, Spotify, or wherever you listen to your podcasts. We release new episodes every Tuesday.
If you want to read more articles like this, check out our blog!
Tiana & Sophie from
shifting her experience. | https://shiftingherexperience.medium.com/do-you-have-toxic-traits-how-to-identify-your-own-toxic-behaviors-30e0012a5036 | ['Shifting Her Experience', 'She.'] | 2020-10-01 21:05:53.509000+00:00 | ['Self-awareness', 'Toxic Relationships', 'Relationships', 'Friendship', 'Self Improvement'] |
100 paths to journalism: how to become a podcast producer | 100 paths to journalism: how to become a podcast producer
Dávid Tvrdoň shares his insights and advice for aspiring journalists
Starting a career in the journalism industry can be a difficult task. How to choose between many different career paths? What are the first steps to make towards your dream job? To help aspiring journalists kick-start their careers, we launched the monthly newsletter “100 Paths to Journalism”, featuring career tips and Q&As with industry professionals.
For our first edition, we talked with Dávid Tvrdoň, podcast producer and product manager for online news at SME.sk, to learn about his career.
Dávid started as a med student, then became a copywriter and social media marketing specialist, and then moved on to be a data journalist and product manager. Now he’s also a podcast producer. This is what he told us about his professional journey and the advice he would give to aspiring journalists out there.
Can you walk us through your professional journey of becoming the journalist you are today?
After secondary school, I went straight to medical school only to find out I’m much more interested in journalism, media and marketing. I ignored my med-exams and rather spent time working on a “cultural blog” project with a friend. So basically the med-school showed me I really should do something else.
The best thing journalism school gave me was contacts, my classmates work all around the journalism landscape. Unfortunately, the journalism school in Bratislava I attended was at the time light years behind what was expected from a journalist. However, before I started studying I landed a job doing social media, community management and copywriting. It was a far better education than the one I got at school. The journalism school was only teaching the content creation part (and not really well), my marketing job taught me how to think in terms of content distribution. That combination made total sense for me.
Dávid at our News Impact Summit in Budapest.
During that time I stumbled upon an online data journalism course, co-created by the European Journalism Centre. After that, I attended a few related workshops and seminars. When I was asked to come to work for my current newsroom, my job was to work on data visualisations and they needed someone to be a bridge between the paper and the development team. So I had two part-time jobs within one.
A year later, I focused only on product management and when we started doing podcasts I split my work time again. I was very lucky to have a chance to try everything, I picked up different skills along the way and became quite flexible, which is always good within this industry.
How did you become a podcast producer?
I had been mentioning podcasts within the newsroom I worked at for some time and once the management decided to pursue this venture I was basically the first pick. However, I had to pick up a lot of things, they don’t teach podcasting in any school. Almost everyone working in podcasting now is self-taught or has a background in radio.
What were the biggest challenges you encountered at the beginning of your career and how did you overcome them?
Definitely finding the one or two things I will be best at. You want to have at least one skill you are really really good at doing. There was a bunch of things I wanted to do within journalism, had to pick a few and just learn as much as I could.
What do you wish you’d done differently in your early career?
This is very specific for me, not a recommendation — I wish I picked up coding or graphics design on top of what I know now. Even within podcasting, there are some technical issues where these skills would help.
Dávid and his colleague Ondrej Podstupka while recording one of their podcasts.
What does your job as a podcast producer involve?
It’s kind of a ‘backstage’ job — deciding which podcasts to greenlight (though it’s not just my decision), which platforms to take seriously, thinking about distribution and promotion, a lot of looking at analytics and data, getting guests and having scripts in order. The official job title involves also audio-editing and post-production. But it really depends on the newsroom, just look at Slate’s job posting.
What are the main skills a podcast producer needs?
It may sound like a cliché, but love for audio is definitely a great start. I had no special training. I just listened to a lot of podcasts and knew how I wanted them to sound. Obviously, audio editing skills are a great start; I love the NPR audio guide if you are looking for one. With narrative podcasts, you want to work hard on your storytelling skills. And basically for any kind of podcast you need to have a pretty specific script.
How does a typical workday look for you?
Regarding the podcast production, the day always starts by listening to the current episode, some of us listen to it two or three times. After that, someone from the team goes to the editorial meeting, where the topics of the day are discussed. We have to choose one which we think is the most important and start working on it. The host prepares for the interview and the reporter basically does her or his job as usual.
The responsibility of the producer is to get additional audio — background voices and setting up the time of the recording so that everyone is available. After recording, the audio is being produced, edited and mixed. Then it is uploaded online and distributed.
If you weren’t a podcast producer, what job would you do?
Can’t really think of a time in the future I would not at least host a podcast. My current job is made up of three positions, besides podcast producer, it’s a product manager for online news and digital project, and I still do some reporting and writing.
Dávid and his team started a podcast revolution in Slovakia.
What do you think is a common misconception that aspiring journalists have about working in the industry?
Reuters Institute for the Study of Journalism recently published a study called Are Journalists Today’s Coal Miners? It basically examined the struggle for talent and diversity in modern newsrooms. What surprised me was a huge demand for print journalism from the students. One look at Mary Meeker’s state-of-the-Internet slide deck, specifically at the slide with media time & ad spending, and you see that the future of journalism is digital and mobile.
What is your main tip for aspiring journalists who want to start their journalism career?
Just go and get any journalism job as soon as possible or at least an internship.
What are the top 3 resources to check out if someone’s interested in a career in the podcast industry?
I have already mentioned the NPR audio guide, I stay up to date mainly by the daily Podnews newsletter and I love the very comprehensive Tools for podcasters website by RadioPublic. | https://medium.com/we-are-the-european-journalism-centre/100-paths-to-journalism-how-to-become-a-podcast-producer-b6b2c0fbce7a | ['Linda Vecvagare'] | 2019-10-16 04:31:01.751000+00:00 | ['Careers', 'Podcast', 'Journalism', 'Media', 'Insights'] |
ConsoleAppFramework v3 — command line tool framework for .NET Core | I’ve renewed ConsoleAppFramework, a framework for easily creating CLI applications and many batches in C#.
The concept of a CLI framework on top of Generic Host, the basic structure, remains unchanged.
The method definitions become command line arguments, and help is generated automatically. Loggers, DI, and the loading and binding of options are handled by the Generic Host (also used in ASP.NET Core, etc.), so you can do detailed configuration with it. Since the foundation is the same, it can be shared with ASP.NET Core and others.
The simplest example would look like this
v3’s new features are:
Strict argument parser
Show default commands (help, version) on the help command
Automatic assignment of class methods as command definitions
Command filter (middleware) extension
dotnet already has the standard System.CommandLine package. However, System.CommandLine is a low-level API: it requires you to create the bindings manually, so you'll need a lot of boilerplate lines. If you compare it to ASP.NET Core, System.CommandLine is the Core only and you need to create your own Middleware. ConsoleAppFramework is ASP.NET Core MVC. | https://neuecc.medium.com/consoleappframework-v3-command-line-tool-framework-for-net-core-ace8becd49dd | ['Yoshifumi Kawai'] | 2020-12-09 09:43:10.204000+00:00 | ['Csharp'] |
Porosity-Permeability Relationships Using Linear Regression in Python | Porosity-Permeability Relationships Using Linear Regression in Python
A short guide on applying a linear regression in Python to semi-log data
Photo by Ekaterina Novitskaya on Unsplash
Core data analysis is a key component in the evaluation of a field or discovery, as it provides direct samples of the geological formations in the subsurface over the interval of interest. It is often considered the ‘ground truth’ by many and is used as a reference for calibrating well log measurements and petrophysical analysis. Core data is expensive to obtain and not acquired on every well at every depth. Instead, it may be acquired at discrete intervals on a small number of wells within a field and then used as a reference for other wells.
Once the core data has been extracted from the well it is taken to a lab to be analysed. Along the length of the retrieved core sample a number of measurements are made. Two of which are porosity and permeability, both key components of a petrophysical analysis.
Porosity is defined as the volume of space between the solid grains relative to the total rock volume. It provides an indication of the potential storage space for hydrocarbons.
is defined as the volume of space between the solid grains relative to the total rock volume. It provides an indication of the potential storage space for hydrocarbons. Permeability provides an indication of how easy fluids can flow through the rock.
Porosity is a key control on permeability, with larger pores resulting in wider pathways for the reservoir fluids to flow through.
Well logging tools do not provide a direct measurement for permeability and therefore it has to be inferred through relationships with core data from the same field or well, or from empirically derived equations.
One common method is to plot porosity (on a linear scale) against permeability (on a logarithmic scale) and observe the trend. From this, a regression can be applied to the porosity permeability (poro-perm) crossplot to derive an equation, which can subsequently be used to predict a continuous permeability from a computed porosity in any well.
Porosity vs Permeability Crossplot with Python Statsmodels prediction (red line).
In this article, I will cover how to carry out a porosity-permeability regression using two methods within Python: numpy’s polyfit and statsmodels Ordinary Least Squares regression.
The notebook for this article can be found on my Python and Petrophysics Github series which can accessed at the link below:
https://github.com/andymcdgeo/Petrophysics-Python-Series
Additionally, a list of previous articles, notebooks and blog posts can be found on my website here:
http://andymcdonald.scot/python-and-petrophysics
Importing Libraries and Core Data
To begin, we will import a number of common libraries before we start working with the actual data. For this article we will be using pandas, matplotlib and numpy. These three libraries allow us to load, work with and visualise our data.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
The dataset we are using comes from the publicly available Equinor Volve Field dataset released in 2018. The files used in this tutorial are from 15/9- 19A which contain full regular core analysis data and well log data. To load this data in we can use pd.read_csv and pass in the file name.
This dataset has already been depth aligned to well log data, so no adjustments to the sample depth are required.
When core slabs are analysed, a limited number of measurements are made at irregular intervals. In some cases, measurements may not be possible, for example in really tight (low permeability) sections. As a result, we can tell pandas to load any missing values / blank cells as Not a Number (NaN) by adding the argument na_values=' ' .
core_data = pd.read_csv("Data/15_9-19A-CORE.csv", na_values=' ')
Once we have the data loaded, we can view the details of what is in it by calling upon the .head() and .describe() methods.
The .head() method returns the first five rows of the dataframe and the header row.
core_data.head()
Results from the head function of our pandas dataframe. This shows us the first 5 rows of data and column headers.
The .describe() method returns useful statistics about the numeric data contained within the dataframe such as the mean, standard deviation, maximum and minimum values.
core_data.describe()
Results from the describe function of pandas on our core dataframe.
Plotting Porosity vs Permeability
Using our core_data dataframe we can simply and quickly plot our data by adding .plot to the end of our dataframe and supplying some arguments. In this case we want a scatter plot (also known in petrophysics as a crossplot), with the x-axis as CPOR — Core Porosity and the y-axis as CKH — Core Permeability.
core_data.plot(kind="scatter", x="CPOR", y="CKH")
Simple linear-linear scatterplot of porosity vs permeability using matplotlib and pandas.
From this scatter plot we notice that there is a large concentration of points at low permeabilities with a few points at the higher end. We can tidy up our plot by converting the y axis to a logarithmic scale and adding a grid. This generates the poro-perm crossplot that we are familiar with in petrophysics.
core_data.plot(kind="scatter", x="CPOR", y="CKH")
plt.yscale('log')
plt.grid(True)
Log-Linear scatterplot of porosity vs permeability using matplotlib and pandas.
We can agree that this looks much better now. We can further tidy up the plot by:
Switching to matplotlib for making our plot
Adding labels by using ax.set_ylabel() and ax.set_xlabel()
Setting ranges for the axes using ax.axis([0, 40, 0.01, 100000])
Making the y-axis values easier to read by converting the exponential notation to full numbers. This is done using FuncFormatter from matplotlib and setting up a simple for loop
from matplotlib.ticker import FuncFormatter

fig, ax = plt.subplots()

ax.axis([0, 40, 0.01, 100000])
ax.plot(core_data['CPOR'], core_data['CKHG'], 'bo')
ax.set_yscale('log')
ax.grid(True)
ax.set_ylabel('Core Perm (mD)')
ax.set_xlabel('Core Porosity (%)')

# Format the axes so that they show whole numbers
for axis in [ax.yaxis, ax.xaxis]:
    formatter = FuncFormatter(lambda y, _: '{:.16g}'.format(y))
    axis.set_major_formatter(formatter)
Formatted log-linear scatter plot of porosity vs permeability using matplotlib.
Isn’t that much better than the previous plot? We can now use this nicer looking plot within a petrophysical report or passing to other subsurface people within the team.
Deriving Relationship Between Porosity and Permeability
There are two ways that we can carry out a poro-perm regression on our data:
Using numpy’s polyfit function
Applying a regression using the statsmodels library
Before we explore each option, we first have to create a copy of our dataframe and remove the null rows. Carrying out the regression with NaN values can result in errors.
poro_perm = core_data[['CPOR', 'CKHG']].copy()
Once it has been copied we can then drop the NaNs using dropna() . Including the argument inplace=True tells the method to replace the values in place rather than returning a copy of the dataframe.
poro_perm.dropna(inplace=True)
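To make the effect of this clean-up concrete, here is a self-contained sketch on a tiny made-up table (the column names mirror this dataset, but the values and the NaN positions are invented):

```python
import numpy as np
import pandas as pd

# Made-up core table with gaps where measurements were not possible
demo_core = pd.DataFrame({
    'CPOR': [12.0, 18.5, np.nan, 24.0],   # porosity (%)
    'CKHG': [0.5, 110.0, 2.0, np.nan],    # permeability (mD)
})

# Copy only the two columns we need, then drop incomplete rows
demo_poro_perm = demo_core[['CPOR', 'CKHG']].copy()
demo_poro_perm.dropna(inplace=True)

print(len(demo_core), len(demo_poro_perm))   # 4 2
print(demo_poro_perm.isna().sum().sum())     # 0
```

Working on a copy keeps the original dataframe intact, so the dropped rows are still available for plotting later.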
Numpy polyfit()
The simplest option for applying a linear regression through the data is to use the polynomial fit function from numpy. This returns an array of coefficients. As we want a linear fit, we can specify a value of 1 at the end of the function. This tells the function we want a first-degree polynomial.
Also, as we are dealing with permeability data in the logarithmic scale, we need to take the logarithm of the values using np.log10.
poro_perm_polyfit = np.polyfit(poro_perm['CPOR'], np.log10(poro_perm['CKHG']), 1)
When we check the value of poro_perm_polyfit, we return: array([0.17428705, -1.55607816])
The first value is our slope and the second is our y-intercept.
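To show how that coefficient array turns back into a permeability prediction, here is a self-contained sketch on synthetic data that sits exactly on a known semi-log trend (the numbers are invented, not the Volve measurements). np.poly1d evaluates the fitted line in log10 space, and raising 10 to the result undoes the logarithm:

```python
import numpy as np

# Synthetic data on the trend log10(k) = 0.17 * phi - 1.56
phi = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # porosity (%)
perm = 10 ** (0.17 * phi - 1.56)                      # permeability (mD)

# First-degree fit of porosity vs log10(permeability)
coeffs = np.polyfit(phi, np.log10(perm), 1)

# Evaluate the line, then undo the log to get permeability back
predict_log_perm = np.poly1d(coeffs)
perm_predicted = 10 ** predict_log_perm(phi)

print(coeffs)  # ~[0.17, -1.56]
```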
Polyfit doesn’t give us much more information about the regression, such as the coefficient of determination (R-squared). For this we need to look at another model.
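That said, if the R-squared is all we are missing, it can be computed by hand from the residuals of the fit. A self-contained sketch on made-up data with a little scatter added:

```python
import numpy as np

# Made-up porosity values and log-permeability with some scatter
phi = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
log_perm = 0.17 * phi - 1.56 + np.array([0.05, -0.03, 0.02, -0.04, 0.01, -0.01])

coeffs = np.polyfit(phi, log_perm, 1)
predicted = np.polyval(coeffs, phi)

# R-squared = 1 - (residual sum of squares / total sum of squares)
ss_res = np.sum((log_perm - predicted) ** 2)
ss_tot = np.sum((log_perm - log_perm.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(r_squared)
```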
Statsmodels Linear Regression
The second option for generating a poro-perm linear regression is to use the Ordinary Least Squares (OLS) method from the statsmodels library.
First we need to import the library and create our data. We will assign our x value as Core Porosity (CPOR) and our y value as the log10 of Core Permeability (CKH). The y value will be the one we are aiming to build our prediction model from.
With the statsmodels OLS we need to add a constant column to our data, as an intercept is not included by default unless we are using formulas. See here for the documentation.
import statsmodels.api as sm

x = core_data['CPOR']
x = sm.add_constant(x)
y = np.log10(core_data['CKHG'])
We can confirm the values of x by calling upon it and in Jupyter it will return a dataframe as seen here:
Our x variables: core porosity (CPOR) and a constant column of 1s.
The next step is to build and fit our model. With the OLS method, we can supply an argument for missing values. In this example I have set it to drop. This will remove or drop the missing values from the data.
model = sm.OLS(y, x, missing='drop')
results = model.fit()
Once we have fitted the model, we can view a full summary of the regression by calling upon .summary()
results.summary()
Which returns a nicely formatted table like the one below and includes key statistics as the R-squared and standard error.
OLS summary from statsmodels for a linear regression.
We can also obtain the key parameters: slope and intercept, by calling upon results.params .
Slope and intercept values from our linear regression.
If we want to access one of the parameters, for example the slope or constant for the CPOR value, we can access it like a list:
results.params[1]
Which returns a value of: 0.174287
We can then piece together the equation we will use to predict our permeability:
Permeability (mD) = 10^(0.174287 * CPOR - 1.556078)

Relationship for permeability and porosity from Ordinary Least Squares Regression.
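For reuse outside the notebook, the relationship can be wrapped in a small helper function. This is a hypothetical convenience wrapper using the rounded coefficients quoted earlier; in practice you would pass in the values from results.params:

```python
def perm_from_porosity(phi_percent, slope=0.174287, intercept=-1.556078):
    """Predict permeability (mD) from porosity (%) via the semi-log fit."""
    return 10 ** (slope * phi_percent + intercept)

print(perm_from_porosity(20.0))  # ~85 mD for 20% porosity
```

Because the arithmetic broadcasts, the same function also accepts a whole porosity array or dataframe column.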
Finally, we can take our equation and apply it to our scatter plot using the line:
ax.semilogy(core_data['CPOR'], 10**(results.params[1] * core_data['CPOR'] + results.params[0]), 'r-')
The whole code for the scatter plot:
from matplotlib.ticker import FuncFormatter

fig, ax = plt.subplots()

ax.axis([0, 30, 0.01, 100000])
ax.semilogy(core_data['CPOR'], core_data['CKHG'], 'bo')
ax.grid(True)
ax.set_ylabel('Core Perm (mD)')
ax.set_xlabel('Core Porosity (%)')

ax.semilogy(core_data['CPOR'], 10**(results.params[1] * core_data['CPOR'] + results.params[0]), 'r-')

# Format the axes so that they show whole numbers
for axis in [ax.yaxis, ax.xaxis]:
    formatter = FuncFormatter(lambda y, _: '{:.16g}'.format(y))
    axis.set_major_formatter(formatter)
Porosity vs Permeability Crossplot with Python Statsmodels prediction (red line).
Predicting a Continuous Permeability from Log Porosity
Now that we have our equation and we are happy with the results, we can apply this to our log porosity to generate a continuous permeability curve.
First, we need to load in the well log data for this well:
well = pd.read_csv('Data/15_9-19.csv', skiprows=[1])
And then apply our derived formula to the PHIT curve. As PHIT is stored as a fraction while the core porosity used in the regression is in percent, we multiply it by 100 to match:

well['PERM'] = 10**(results.params[1] * (well['PHIT']*100) + results.params[0])
When we check the well header using well.head() we can see our newly created curve at the end of the dataframe.
Visualising the Final Predicted Curve
The final step in our workflow is to plot the PHIT curve and the predicted permeability curve on a log plot alongside the core measurements:
fig, ax = plt.subplots(figsize=(5,10))

ax1 = plt.subplot2grid((1,2), (0,0), rowspan=1, colspan = 1)
ax2 = plt.subplot2grid((1,2), (0,1), rowspan=1, colspan = 1, sharey = ax1)

# Porosity track
ax1.plot(core_data["CPOR"]/100, core_data['DEPTH'], color = "black", marker='.', linewidth=0)
ax1.plot(well['PHIT'], well['DEPTH'], color ='blue', linewidth=0.5)
ax1.set_xlabel("Porosity")
ax1.set_xlim(0.5, 0)
ax1.xaxis.label.set_color("black")
ax1.tick_params(axis='x', colors="black")
ax1.spines["top"].set_edgecolor("black")
ax1.set_xticks([0.5, 0.25, 0])

# Permeability track
ax2.plot(core_data["CKHG"], core_data['DEPTH'], color = "black", marker='.', linewidth=0)
ax2.plot(well['PERM'], well['DEPTH'], color ='blue', linewidth=0.5)
ax2.set_xlabel("Permeability")
ax2.set_xlim(0.1, 100000)
ax2.xaxis.label.set_color("black")
ax2.tick_params(axis='x', colors="black")
ax2.spines["top"].set_edgecolor("black")
ax2.set_xticks([0.01, 1, 10, 100, 10000])
ax2.semilogx()

# Common functions for setting up the plot can be extracted into
# a for loop. This saves repeating code.
for ax in [ax1, ax2]:
    ax.set_ylim(4025, 3825)
    ax.grid(which='major', color='lightgrey', linestyle='-')
    ax.xaxis.set_ticks_position("top")
    ax.xaxis.set_label_position("top")

# Removes the y axis labels on the second track
for ax in [ax2]:
    plt.setp(ax.get_yticklabels(), visible = False)

plt.tight_layout()
fig.subplots_adjust(wspace = 0.3)
This generates a simple two track log plot with our core measurements represented by black dots and our continuous curves by blue lines.
Final permeability prediction from core measurements using Python statsmodels library and ordinary least squares regression.
As seen in track 2, our predicted permeability from a simple linear regression tracks the core permeability reasonably well. However, between about 3860 and 3875, our prediction reads lower than the actual core measurements. Also, it becomes harder to visualise the correlation at the lower interval due to the more thinly bedded nature of the geology.
Conclusion
In this walkthrough, we have covered what core porosity and permeability are and how we can predict the latter from the former to generate an equation that can be used to predict a continuous curve. This can subsequently be used in geological models or reservoir simulations.
As noted at the end, there are a few small mismatches. These would benefit from further investigation and potentially further modelling either by refining the regression or by applying another machine learning model. | https://towardsdatascience.com/porosity-permeability-relationships-using-linear-regression-in-python-eef406dc6997 | ['Andy Mcdonald'] | 2020-12-28 14:17:39.713000+00:00 | ['Python', 'Petrophysics', 'Scatter Plots', 'Linear Regression', 'Pandas'] |
Is Screen Time Bad for Your Child? | It’s no secret that there is a lot competing for your child’s attention these days. It used to be all we had to do was turn off the television and video games to get some reading time in, but now there is a computer attached to our hands and within easy reach and influence of your child, too.
I’ve never been one to be super strict about turning off all technology, but I will say that with all the distractions competing for your child’s time and mind, you will have to diligent and purposeful about making time for reading.
That being said, let’s talk about digital devices for reading or reading practice, the pros and cons and when other resources besides the print book can be useful for reading instruction or practice.
A print book is always going to be the best choice for young children
There is an interesting study that talks about how different forms of reading and media impact a child’s brain. Here is the link; I encourage you to read it because it’s important. What’s Going on in Your Child’s Brain When You Read Them a Story?
I’m going to summarize and emphasize a few things for you here.
It’s sort of like the Goldilocks story.
Audible only: If a child is only hearing a story, such as via a CD or audiobook, it’s too cold. In other words, some parts of the brain are activated, but without pictures, the child is straining harder to understand the story.
Video and Animation: This was too hot. In other words, too much visual and auditory stimulation does not allow children time to reflect on what they are hearing, resulting in less comprehension as well.
Books: What was the just-right combination? You guessed it! Listening to a storybook while seeing the pictures. You reading with and to your child.
When we read to our children, they are doing more work than meets the eye. “It’s that muscle they’re developing bringing the images to life in their minds.” Hutton’s concern is that in the longer term, “kids who are exposed to too much animation are going to be at risk for developing not enough integration.” Overwhelmed by the demands of processing language, without enough practice, they may also be less skilled at forming mental pictures based on what they read, much less reflecting on the content of a story. This is the stereotype of a “reluctant reader” whose brain is not well-versed in getting the most out of a book. — from What’s Going on in Your Child’s Brain When You Read Them a Story?
This study was done with MRI imaging of the brain while children (ages 3–5) were exposed to different stimuli.
What this tells us is that reading books to your child is still the best thing you can do to support brain development that will give them the skills they need to be strong readers.
This doesn’t mean that animated video and listening to stories are bad, just limit these for young children.
Audio Books
These are still a great option for when sitting with a book on your lap is not possible. Think of all the many places and opportunities you have to use audiobooks.
driving in the car — both everyday errands or long road trips. (of course, keep a bag of books in the car too)
while taking a bath or doing chores
as a follow up to a book you’ve read together — especially if children loved the book, they’ll want to be reunited with their favorite characters.
Other ways to use audibooks
Audiobooks can support reluctant or struggling readers or those with a reading disability. Sometimes hearing the story first and then reading the book is helpful. It builds background knowledge and familiarizes them with new vocabulary words or difficult names etc. A dyslexic child might find it helpful to listen to the audiobook while following along with the words on the page.
Movies from books. By all means, check these out. I like to read the book first, then watch the movie only because 9/10 times the book is better, but there are some wonderful movies that have been made from children’s books and are certainly worthy of seeing.
Some of my favorites
Because of Winn-Dixie
Stuart Little
The Lion, The Witch, and The Wardrobe
Some movies or television shows your child may have seen already, have books that are less known because the movie or television show is a classic. These books are worth reading!
Chitty Chitty Bang Bang
The Wizard of Oz
Mary Poppins
Arthur
Little Bear
After you read the book and watch the movie, you can complete a Venn diagram and compare and contrast the movie and the book. Comparing and contrasting is considered the best comprehension strategy for children to learn according to Marzano’s high-yield instructional strategies. In other words,
children who can find similarities and differences own a valuable skill that will translate to other learning in other areas.
More on how technology impacts a child’s brain vs reading
Technology conditions the brain to pay attention to information very differently than reading. The metaphor that Nicholas Carr uses is the difference between scuba diving and jet skiing. Book reading is like scuba diving in which the diver is submerged in a quiet, visually restricted, slow-paced setting with few distractions and, as a result, is required to focus narrowly and think deeply on the limited information that is available to them. In contrast, using the Internet is like jet skiing, in which the jet skier is skimming along the surface of the water at high speed, exposed to a broad vista, surrounded by many distractions, and only able to focus fleetingly on any one thing. — from How Technology is Changing the Way Children Think and Focus
Using apps and online reading programs
It’s important to use discretion when using online reading programs or apps for reading/phonics practice. There are some good ones, some okay ones, and some awful ones.
The main thing to remember is that no reading program or animated phonics practice can replace what you do with your child using real children’s books and targeted practice aimed at their needs and strengths.
Use these programs as you would use salt in your meals, for some added flavor but don’t count on them for much nutrition. And most importantly, don’t spend a lot of money on them!
Here are some that I would recommend (this list is not all-inclusive but reflects my personal experience and knowledge of each tool)
Starfall
Headsprout
Kizphonics
Reading Eggs
A final note on technology in all forms
Nothing is all bad or all good. Use a lot of caution with children under the age of six, whose brains are still acquiring the ability to pay attention, sustain attention, and persevere at a task. Reading stamina is needed in order to become a strong reader. A child’s brain that has been programmed with fast action and snippets may not be able to sustain attention long enough to read with accuracy and comprehension.
Use all technology sparingly, be a good role model with technology, and err on the side of caution.
New research is coming in every day on the effect of digital devices on kids, so keep reading about the updates and make wise choices as you invest in your child as a lifelong reader.
Additional reading:
Will Technology Ruin Your Children’s Development?
5 Negative Impacts of Technology on Children
Screen Time Can be Dangerous
Take some time to look through the articles and research here and think about how much you want your child to be exposed to technology, what you can do to reduce that exposure if it’s too much, and what safeguards you can put in place to monitor over-exposure.
Watch my interview with Madlin (join our Facebook group to see it) about screen time vs. books and find more resources from Unplugged Family.
Until later — read, read more, read more often! — Mary | https://medium.com/raise-a-lifelong-reader/is-screen-time-bad-for-your-child-81bec3517f14 | ['Mary Gallagher'] | 2019-11-22 16:12:24.620000+00:00 | ['Books', 'Reading', 'Literacy', 'Parenting', 'Digital Health'] |
Horror Vacui, the fear of white space (UX Alphabet Series) | “There is just too much white space.”
When I started out as a designer, as soon as I would hear this phrase, all time would come to a screeching halt. I would roll my eyes in disbelief. Yes, there would be some internal cursing at what I had just heard. I would then begrudgingly appease the stakeholder by filling all the white space with unnecessary content and end up hating the design and my life. I would think to myself, there goes another design I can’t be proud of. Does this sound familiar?
You might have seen The Oatmeal comic, “How A Web Design Goes Straight To Hell”. It’s the one where a bright-eyed designer is excited about redesigning a website. Slowly but surely the client keeps asking for minor changes that take the redesign straight to hell. “There is too much white space” is just an indirect request that adds to the demise of the designer. What makes this a sad yet hilarious comic is that the statement about having too much white space is usually the precursor to a website going straight to hell. Other statements such as “Can you make it pop?”, “Make the logo bigger”, and “I don’t like blue” all speed up the process of a design going to hell.
Fast forward to now, and I approach the statement about having too much white space as an opportunity to educate stakeholders. As designers, it is our job to help others realize why white space is an important part of design and an important part of a great user experience. | https://uxdesign.cc/horror-vacui-the-fear-of-white-space-a-ux-alphabet-series-6ad5337a6adc | ['Rizwan Javaid'] | 2017-06-20 13:27:54.583000+00:00 | ['Visual Design', 'Design', 'UX', 'Design Thinking', 'Web Design'] |
Cheers to NaNoWriMo 2020 | Cheers to NaNoWriMo 2020
Now more than ever, our stories need to be told.
If anyone asks me my dream, I say it is to write the greatest novel the world has ever known. If anyone asks me how it’s going, I gaze sheepishly at the empty wine bottles in the recycling bin and the plane tickets littering my desk and say things like, “Oh I’m still gathering material.” But if I’m being honest, I have oodles of material. I could write thirty-seven young adult series! Serieses? I should check that.
I’m in the wine industry (hence the empty bottles — work related FYI), and I travel a lot. Those twenty-year Burgundian retrospectives won’t taste themselves! Yet even trapped in a flying tin can for twelve hours, I don’t work on my brilliant world-changing novel. Is there no end to my procrastination I ask as I smile at the unattached man in 6E? Then my iCal taxis onto the runway of November. There is a brief pause in the action before the champagne mayhem of the holidays.
Is there no end to my procrastination I ask as I smile at the unattached man in 6E?
So, like the Port vintage declaration of 2011, I first joined NaNoWriMo that year with great intention and fanfare. Logging onto the website was a step towards my dream, and most every year since I still believe that this time I’m going to finish. Like sipping a sultry sherry, writing is something I can’t do alone. I need the community discipline of fellow dreamers. Happily, NaNo has so many resources. I join forums and attend virtual and in-person write-ins. Sometimes, I catch a tailwind burst of word count, and other times I hit the turbulence of blank page struggle. I watch the altimeter rise shyly on my NaNo dashboard — 200 words yesterday, another 400 today. I cruise well below the 50,000-word trajectory. Ugh.
I need the community discipline of fellow dreamers.
But that’s okay. We’re all bound for fruition. Me. You. The forum woman commenting from a fairytale “Upon” something UK town. The famous writer giving the Pep Talk. We NaNo together because writing can be a relentlessly solitary pursuit, and some of us (thumbs pointing at self) flail without accountability. Even that scoundrel word count graph is a cruel and effective writing coach. It feels nice to furnish the abyss with these things.
I am writer. Hear me or.
Especially this November of pandemic, panicky half full flights and that meteor they’re saying is on the way — this has got to be my year of finishment, of finally and mercilessly fermenting all the raw heartbreak and vivid inspiration, of barefoot stomping plot and character until they bleed and blending them with setting and syntax. If you’ve ever walked through a barrel room where alchemical juices percolate away, you know how this works. The winemaker somehow sees into the concoction’s future, how sweet or dry it will be, its hue and power. Will our stories be palatable? Intoxicating, even? Will they linger upon the senses once they’re put down?
NaNoWriMo 2020 has to be a banner year. The team of all humanity needs a win. Now more than ever, our stories need to be told. Tell them as a gentle cove for the overwhelmed. Tell them as a fury against loss and injustice. Tell them messy and clean ’em up later. Just tell them. Uncork everything that’s been bubbling up all year and pour it out like a gorgeous deluge of dizzying dazzling disturbing truth.
Tell your stories messy and clean ’em up later. Just tell them.
Come on this joyous, life-changing, maddening journey with us this month. Join NaNoWriMo.org today! Also donate if you can, and if you like what you see here, add a response to let us know if you’d like to contribute to this publication.
Cheers!
Stephanie
❤ NaNoWriMo Board Member since 2019, participant since 2011 ❤ | https://medium.com/nanowrimo/cheers-to-nanowrimo-2020-4bcf36444a1b | ['Stephanie Block'] | 2020-11-02 00:58:38.339000+00:00 | ['NaNoWriMo', 'Nonprofit', 'Fiction', 'Writing'] |
Everyone’s Calendar is One Meeting Too Full | Everyone’s Calendar is One Meeting Too Full
Here are four things you can do about yours
Photo by Curtis MacNewton on Unsplash
If your friend, co-worker, or boss asked you today to add another meeting to your calendar, what is the first thing you’re going to feel?
For most of us: anxiety.
Or maybe this is a better question: How do you feel when you’re sitting in your third hour-long meeting of your Thursday, listening to the department manager list off the potential complications with your upcoming project launch?
For most of us: bored.
Why is that? Why does everyone feel like they don’t have space for one extra meeting on their calendar? Why is it so easy for meetings to fail to capture our attention?
Can you remember the last time you were at work and didn’t have a single meeting scheduled for the whole day?
We’ve got something backward about the time we invest on a daily and weekly basis in meetings. For the sake of our sanity, as future and current leaders, that needs to change. | https://medium.com/better-marketing/everyones-calendar-is-one-meeting-too-full-70eabfd1092a | ['Jake Daghe'] | 2019-08-08 01:15:55.904000+00:00 | ['Meetings', 'Leadership', 'Productivity', 'Time Management', 'Self Improvement'] |
All You Need to Know About the Secure Wallet’s Security | So you know the difference between hot and cold storage methods. And you know that to protect your investments you need to protect your private keys. Now it’s time to understand exactly how the Secure Wallet’s security features keep your assets safe, and exactly what sets it apart from other cold storage devices on the market.
Security Features at a Glance
If you haven’t glimpsed the Secure Wallet yet, there are a few key design features that improve its overall security (we’ll go into more detail below).
The Secure Wallet is the same size and thickness as a credit card, making it ultra-portable, and giving you the option to keep it on you or to store it in a secure location.
Completely wireless interaction, preventing cyber-attacks and malware from compromised devices.
CC EAL5+ certified, the highest security standard for government-level deployments.
Encrypted Bluetooth connectivity.
The only cold wallet available to use a one-time-password generation feature.
Users must physically press the built-in confirmation button to confirm any transaction that sends cryptocurrency off the Secure Wallet.
Seed generation and recovery phrase (to recover your assets if you lose your wallet, your phone, or both).
So how do the components of the Secure Wallet’s security interact to bring you the safest cold storage wallet available?
The Secure Element
The main function of the Secure wallet’s security measures is to keep your private keys safe. In fact, by using the companion app and physically approving all transactions leaving the wallet, the private keys are never exposed, nor do they ever leave the secure element (microchip).
This chip, the Smart MXTM secure element (SE), is a state-of-the-art security crypto-controller. It is designed specifically for high-performance security chip card management and applications. This allows for contactless interactions and multi-factor authentication requirements. Without going too deep, some of the SE’s genetics include:
SmartMX™ high-security micro-controller IC secure element
Security certified according to CC EAL5+
Data retention time: 25 years
Endurance: 500 000 cycles minimum
Interfaces:
-Contact interface according to ISO/IEC 7816
-Contactless interface according to ISO/IEC 14443 A
-Contact interface according to ISO/IEC 7816 -Contactless interface according to ISO/IEC 14443 A Voltage class: C, B, and A (1.62 to 5.5 V)
Memory Management Unit (MMU)
High-speed 3-DES coprocessor (64-bit parallel)
High-speed AES coprocessor (128-bit parallel)
PKI (RSA, ECC) coprocessor FameXE (32-bit parallel)
Don’t worry too much if you don’t understand these functions. The main takeaway from the secure element is to understand that it is impenetrable. By implementing Bitcoin’s ECDSA algorithm with the secp256k1 parameters, the element can generate and digitally sign transactions without the private keys ever leaving the chip.
Connectivity and Hosting
One unique feature of the Secure Wallet’s security is the ability to interact with it wirelessly. This makes the wallet exceptional in that it is the only true cold storage device available, removing any potential threats from wiring it directly to a laptop or phone. It does this via an encrypted Bluetooth connector and Bluetooth low energy interface.
Whilst the connection is secure, we have also taken into account the need to securely register and pair a device. For this example, we’ll use your Apple or Android smartphone. We call this becoming a ‘host’ of the Secure Wallet. For this to occur:
You install the ECOMI App.
Select the Secure wallet as the device to connect. In doing so you provide your devices UUID (universally unique identifier) to the Secure Wallet (this is how Bluetooth connections work).
The Secure Wallet will generate a 6 digit one-time-password and display it on the e-paper display screen on the wallet.
This is then entered into your smartphone, which generates the device key and pairs the devices.
On the surface, connecting and instantaneously applying the Secure Wallet’s security is as simple as pairing a device. However, behind the scenes, a challenge-response mechanism is also at play to confirm that both the UUID and the one-time password match. If there are any discrepancies, the devices will not pair.
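To make the challenge-response idea concrete, here is a minimal Python sketch. To be clear, this is not ECOMI’s actual protocol; the key derivation, the HMAC construction, and the message sizes below are all illustrative assumptions. What it demonstrates is the property described above: both sides derive a key from the phone’s UUID and the one-time password shown on the wallet’s screen, and pairing succeeds only if both keys match, without the key itself ever being transmitted.

```python
import hashlib
import hmac
import secrets

def derive_device_key(uuid: str, otp: str) -> bytes:
    """Derive a shared pairing key from the phone's UUID and the
    6-digit one-time password shown on the wallet's e-paper screen."""
    return hashlib.sha256((uuid + ":" + otp).encode()).digest()

def respond(device_key: bytes, challenge: bytes) -> bytes:
    """Answer a random challenge by MACing it with the shared key."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def pair(wallet_key: bytes, phone_key: bytes) -> bool:
    """The wallet sends a random challenge; pairing succeeds only if
    the phone's response matches what the wallet computes itself."""
    challenge = secrets.token_bytes(16)
    expected = respond(wallet_key, challenge)
    answer = respond(phone_key, challenge)
    return hmac.compare_digest(expected, answer)

# Same UUID and OTP on both sides: the derived keys match, so pairing succeeds.
wallet_side = derive_device_key("a1b2c3d4-example-uuid", "493817")
phone_side = derive_device_key("a1b2c3d4-example-uuid", "493817")
print(pair(wallet_side, phone_side))  # True

# A single wrong OTP digit produces a different key, so pairing is refused.
wrong = derive_device_key("a1b2c3d4-example-uuid", "493818")
print(pair(wallet_side, wrong))  # False
```

Note the use of hmac.compare_digest for the final check: a constant-time comparison avoids leaking information through timing differences.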
Can I Connect More Than One Device?
The Secure Wallet’s security is equipped to host three devices simultaneously. When you connect to the Secure Wallet for the first time, there are no risks to security, as there are no private keys stored on the wallet yet. In order to connect additional devices, however, they must be approved by an already activated/approved host.
Hierarchical Deterministic Keys
One function of the Secure Wallet’s security is the ability to generate hierarchical deterministic (HD) keys. Before we dive too deep, however, we need to understand a couple of things:
HD keys are a type of deterministic wallet, which uses a ‘seed’ to allow for the generation of child keys from a parent key. This can be a string of words or numbers.
The relationship (chain code) between the parent and child keys is invisible to anyone/anything without the original seed. HD keys are primarily used to simplify wallet backups, as you only ever need the original seed to generate an infinite number of child keys. In this way, if you lose your Secure Wallet, you can regenerate the child keys using your parent key, and regain access to your individual wallets and assets.
The mnemonic phrase (seed). In cryptography, this is typically a 12–24-word phrase of random words strung together. This phrase is then used to generate the seed, and as a back up to recover your private key.
The master node, where your private keys are stored.
An example ‘seed phrase’ with accompanying QR code
The issue with using a hot wallet is that your mnemonic phrase- the seed that gives access to your private keys- can be compromised simply by using a screen capture virus. In contrast, the Secure Wallet’s security measures circumvent this issue by allowing you to generate a number-style mnemonic phrase on the ECOMI app.
This string of numbers is converted into a string of words displayed on the e-paper screen for you to record, and to enter into the ECOMI app to confirm you have it recorded correctly. Subsequently, the Secure Wallet will generate the master node entirely within the devices secure element, ensuring that your private keys can never be compromised. In order to recover your device, the process is reversed. Your mnemonic (word) phrase is converted into a string of numbers, which are confirmed by the Secure Wallet’s security functions, and allows you to regain access to your parent and child keys.
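The parent-to-child derivation described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not real BIP32: a production wallet uses elliptic-curve math, chain codes, and hardened derivation inside the secure element, and the seed phrase below is just an example. What the sketch shows is the key property: the same seed always regenerates the same tree of child keys, which is why the recovery phrase alone is enough to rebuild a lost wallet.

```python
import hashlib
import hmac

def master_key(seed_phrase: str) -> bytes:
    """Derive a parent (master) key from a mnemonic seed phrase."""
    return hmac.new(b"seed", seed_phrase.encode(), hashlib.sha512).digest()

def child_key(parent: bytes, index: int) -> bytes:
    """Derive the index-th child key from a parent key. Without the
    parent, sibling keys look completely unrelated to each other."""
    return hmac.new(parent, index.to_bytes(4, "big"), hashlib.sha512).digest()

phrase = "witch collapse practice feed shame open despair creek road again"
parent = master_key(phrase)
wallet_a = child_key(parent, 0)  # e.g. a key for one currency wallet
wallet_b = child_key(parent, 1)  # e.g. a key for another

# Recovery: re-entering the same phrase regenerates the exact same keys.
assert child_key(master_key(phrase), 0) == wallet_a

# A different phrase produces an entirely different, unrelated tree.
assert child_key(master_key("some other phrase"), 0) != wallet_a
```

Because derivation is deterministic, a backup consists of nothing but the phrase; no individual child keys ever need to be stored.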
Confirming Transactions and Updates
The key to the Secure Wallet’s security lies in the need for user interaction to confirm transactions. Whereas viruses or malware may attack hot wallets or wired cold storage devices, the Secure Wallet circumvents this with the following security policies:
1. It displays the receiver’s address on the e-paper screen. This prevents any tampering with the destination and allows the user to visually confirm it.
2. The Secure Wallet requires you to physically interact with it in order to sign and approve any transaction. That is to say, for any transaction to leave the device you have to confirm it by pressing the red button. Without this direct interaction, no assets can be released from the Secure Wallet.
The Secure Wallet’s security and firmware can also be updated over-the-air. This means that as new currencies, forks, and security updates become available, the device can be instantly updated through the ECOMI app. This is done through an encrypted loader key, which, once validated and decrypted, will update your cold storage device.
By maintaining truly wireless connectivity, as well as the ability to generate master nodes within the Secure Wallet, it is easy to see why it is the most secure cold storage device available. The Secure Wallet’s security features have been designed and developed with you, the user, in mind. By allowing you to seamlessly create and encrypt private keys, as well as securely storing them, the team at ECOMI has created a unique, user-friendly and ultra-secure cold storage device.
For more information check out our online store or join us on Telegram! | https://medium.com/ecomi/all-you-need-to-know-about-the-orbis-secure-wallets-security-20a48ad8e297 | [] | 2019-05-28 17:56:09.504000+00:00 | ['Startup', 'Cryptocurrency', 'Crypto', 'Bitcoin Wallet', 'Bitcoin'] |
How You Can Make Love Last | For most people, we don’t give up on love. Humans are tribal. We have a deep sense of wanting to belong. As Belgian psychotherapist Esther Perel says, “we are hardwired for connection.”
We don’t give up on love, even when it does us wrong.
The need for connection appears to be innate — but the ability to form healthy, loving relationships develops over time.
Some of us are naturally better at relating than others, while many of us keep trying to get it right because we are susceptible to the human condition — we crave connection.
One way to find out who you are and what your needs are is to be in an intimate relationship. When you meet the right person — someone you like spending time with — it can be one of the most fulfilling aspects of life, giving you a sense of deep satisfaction and companionship.
In John Gottman’s book, The Seven Principles for Making Marriage Work, he writes about what it takes to thrive in a long-term relationship. | https://medium.com/the-happy-spot/how-you-can-make-love-last-e7c289c41024 | ['Jessica Lynn'] | 2020-10-01 11:37:59.812000+00:00 | ['Relationships', 'Relationships Love Dating', 'Love', 'Self-awareness', 'Pyschology'] |
Angular 8 Differential loading — behind the scene | If we look at the war of JavaScript frameworks and libraries the list is huge but the major 3 competitors are Angular, React and Vue.
Although Angular is a complete framework with its own packages for network requests, form validation, and much more, one downside of Angular apps is the app/bundle size.
In Angular 8, the team decided to find new ways to fix this and reduce the bundle size, and that is where the concept of differential loading comes into play.
Here is what happened up to Angular 7: when we built our apps with the “ng build” command, all the TypeScript files were converted to JS, and then a single final application bundle was created for all browsers, modern and older alike, so that the application would run properly even on older browsers. This results in a larger bundle size because all the modern JS/TS code needs to be converted to JS that older browsers can easily support.
Now let’s see what happens in Angular 8 when we build an app with the “ng build” command.
output of ng build command
When we build the project, two different bundles are created: one for older browsers (marked in red) and another for browsers that support modern JavaScript (marked in blue). Have a look at the file size of both bundles.
During deployment, both bundles are deployed, and when a client opens your web app, the appropriate JS bundle is loaded based on browser compatibility.
But the question is: how does the application know whether the browser is older or modern?
Have a look at index.html, where the scripts are imported.
From the above image you can easily understand how this works: older browsers load the script marked with nomodule and ignore the module script, while modern browsers skip the nomodule script and load the JS built for modern browsers. If you are still confused, have a look at this nice video tutorial on the Demos with Angular channel on YouTube.
Keep Exploring! | https://medium.com/codingurukul/angular-8-differential-loading-behind-the-scene-d7299e64a57c | ['Suraj Kumar'] | 2019-07-10 17:26:39.900000+00:00 | ['JavaScript', 'Angular', 'Google', 'Software Development'] |
The Missing List of JupyterLab Keyboard Shortcuts | With keyboard shortcuts, you can whiz around Jupyter notebooks in JupyterLab. You can save time, reduce wrist fatigue from using your mouse, and impress your friends. 🙂
Below is the missing list of common JupyterLab keyboard shortcuts from a GitHub Gist I made. Enjoy! 🎉
If you want to make your own JupyterLab shortcuts, I wrote a guide to doing that here. 🚀
I write about Python, SQL, Docker, and other tech topics. If any of that’s of interest to you, sign up for my mailing list of awesome data science resources and read more to help you grow your skills here. 👍
Happy JupyterLab-ing! 🎉 | https://towardsdatascience.com/the-missing-list-of-jupyterlab-keyboard-shortcuts-c613ff711a20 | ['Jeff Hale'] | 2020-10-16 21:45:19.823000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Jupyter'] |
Trying Out Apple’s Stock Apps
When it comes to my Calendar I will be completely honest and say that if Fantastical went away tomorrow, I probably wouldn’t be that upset. Not that Fantastical isn’t great and I do prefer it over the Calendar app overall.
When I got my iPhone 11 Pro last year, I decided to set it up as a new phone rather than restoring from a backup. I then took my time reinstalling apps on my phone, and one of the apps I didn’t reinstall right away was Fantastical. I used the Calendar app for a while, not really caring that it didn’t have some of the features Fantastical provides.
But then it was something I heard on either Connected or Mac Power Users that made me realize how much I missed the natural language typing of an event, allowing you to have every detail of the event set up without the need of flipping any toggles or using scroll wheels. In one sentence I could add an event on a day, at a specific time frame, and on the exact calendar I wanted.
Besides the natural language component the other thing I really love about Fantastical compared to the Calendar app is the views that it provides. Like the screenshot I am showing above, I love the ability to see an entire month then scroll through my event below telling me any events in that month or any upcoming months, allowing the month to change as I scroll to the next months events.
The design is also a little more polished. I know I sound like a broken record here, but I do love and appreciate Apple’s minimalist design aesthetics. I think something these third-party apps are showing is that you can have a minimalist design that is still functional; I feel Apple’s apps sometimes fall more on the side of design than function, and I think that could be the reason I don’t find myself using them.
Outlook vs Mail
I, like many others, have tried a variety of different mail apps on my iPhone, iPad, and even Mac. But every time I do, I end up back in the stock Mail app. The main reason is losing emails. Some of these apps provide additional filtering which throws some of my mail into Trash or Junk without me realizing it.
I currently have an iCloud, Gmail, and Techuisite email address, each used for different reasons. iCloud is used for more personal things like financial or family matters, my Techuisite account is for more “business” related needs for my writing, and everything else goes to Gmail — and I mean everything. Every newsletter, every random website that requires an account I will never use again, or an address to just give out to random people who need to contact me but are not that close.
The reason I treat Gmail this way is because I use a service called Sanebox to help me filter and remove any emails I no longer want to receive or block. It provides an additional filter than just the Spam one that Google provides and puts them in a SaneLater folder for me to go through and either allow to come to my Inbox or move it to the SaneBlackHole where I will never see it again.
All that being said, I already have a system that lets me filter my emails and see the things I need to see flow through each email address. So for an email client, I just need a place for all those emails to come through. For the most part, Mail has always provided that to me without fail.
Other email clients like Outlook or Airmail try to provide a different experience with email that I just don’t find useful. For instance, Outlook has a Focused and Other toggle in the app. This is supposed to provide you a way to see the emails you must see instead of seeing all of them at once.
On paper this seems like a good idea. A lot of times emails that you actually need to get to quickly get lost in the long list of emails and you might miss it. Outlook is trying to remove that issue and give you all the emails you need to look at right now and then provides a toggle for you to see the rest whenever you need.
My problem with this is I don’t know which emails Outlook thinks need to be in Focused or Other, and a lot of times I ended up forgetting to switch over to Other, thinking that an email just hadn’t come yet. Again, this is another filter or barrier that an email client thinks is useful but for me just gets in the way. Mail just gives me my emails and I like it that way. | https://medium.com/techuisite/trying-out-apples-stock-apps-b8e6a7611049 | ['Paul Alvarez'] | 2020-06-13 15:19:09.243000+00:00 | ['Gadgets', 'Apps', 'Review', 'Technology', 'Apple'] |
Cash Mode Now Live in TriviaSpar! | In addition to tournaments and quickplay matches that reward players with KNW Tokens, TriviaSpar players will now be able to partake in quickplay and tournaments that reward $USD.
Play cash mode trivia by pressing the GO PRO button at the top of the home screen menu in-app. Deposit $USD to participate in cash reward tournaments and quick matches. Users will be required to activate their location settings to play cash mode, geographical restrictions will apply.
TriviaSpar QuickLinks
Download TriviaSpar: https://triviaspar.com/download/
Create a Knowledge account: https://account.knowledge.io/sign-up/get-started | https://medium.com/knowledgeio/cash-mode-now-live-in-triviaspar-51e241acaadb | [] | 2018-11-30 20:20:11.459000+00:00 | ['Mobile App Development', 'Blockchain', 'Mobile Games', 'Ethereum Blockchain', 'Ethereum'] |
Deep Thinking: 10 Books That Will Help You Understand The World Differently | Deep Thinking: 10 Books That Will Help You Understand The World Differently
Paradigm shifting books for every curious mind
To live is to learn, and to read is to learn from the experience of others. Reading is my favourite way to learn new worldviews, principles, ideas, mindsets, perceptions and mental models.
A good book may have the power to change the way we see the world, but a great book goes beyond the change of perception; it can easily become part of our daily consciousness, guiding our every choice and judgement.
“Books are an extraordinary device, transitioning through time and space, moving from person to person and leaving behind insight and connection,” says Seth Godin. Thought-provoking books are the treasured wealth of the world.
I don’t aim to finish every book I start. I tend to read a lot of books at the same time but only finish a few of them — the very good ones. And I re-read my favourite books.
Francis Bacon once said, “Some books should be tasted, some devoured, but only a few should be chewed and digested thoroughly.”
When you expose yourself to a broad range of ideas, you heighten the chances of understanding the world and improving how you perceive it.
The more you read the more curious you become. Seeking knowledge and understanding things you never understood is deeply satisfying. If you can’t find time to read, the best way is to listen to audiobooks.
These books by deep thinkers might change how you think about the world, work, other people, and yourself. They are perfect for anyone with a curious mind and a passion for learning. I’ve added one or two of my favourite quotes from each book.
The definitive guide on how to prepare for any crisis — from global financial collapse to a pandemic
1. How to Survive the End of the World as We Know It: Tactics, Techniques, and Technologies for Uncertain Times by James Wesley Rawles
“The modern world is full of pundits, poseurs, and mall ninjas. Preparedness is not just about accumulating a pile of stuff. You need practical skills, and those come only with study, training, and practice. Any armchair survivalist with a credit card can buy a set of stylish camouflage fatigues and an “M4gery” carbine encrusted with umpteen accessories. Style points should not be mistaken for genuine skills and practicality.”
This book uncovers the hidden consequences of free-market capitalism
2. 23 Things They Don’t Tell You About Capitalism by Ha-Joon Chang
“Equality of opportunity is not enough. Unless we create an environment where everyone is guaranteed some minimum capabilities through some guarantee of minimum income, education, and healthcare, we cannot say that we have fair competition. When some people have to run a 100 metre race with sandbags on their legs, the fact that no one is allowed to have a head start does not make the race fair. Equality of opportunity is absolutely necessary but not sufficient in building a genuinely fair and efficient society.”
“The free market doesn’t exist. Every market has some rules and boundaries that restrict the freedom of choice. A market looks free only because we so unconditionally accept its underlying restrictions that we fail to see them. How ‘free’ a market is cannot be objectively defined. It is a political definition.”
Rovelli invites us to imagine a world where time is in us and we are not in time
3. The Order of Time by Carlo Rovelli
“Before Newton, time for humanity was the way of counting how things changed. Before him, no one had thought it possible that a time independent of things could exist. Don’t take your intuitions and ideas to be ‘natural’: they are often the products of the ideas of audacious thinkers who came before us.”
“Because everything that begins must end. What causes us to suffer is not in the past or the future: it is here, now, in our memory, in our expectations. We long for timelessness, we endure the passing of time: we suffer time. Time is suffering.”
How often have you asked yourself: What is the meaning of life? Sasha finds it everywhere
4. For Small Creatures Such As We: Rituals and reflections for finding wonder by Sasha Sagan
“No matter what the universe has in store, it cannot take away from the fact that you were born. You’ll have some joy and some pain, and all the other experiences that make up what it’s like to be a tiny part of a grand cosmos. No matter what happens next, you were here. And even when any record of our individual lives is lost to the ages, that won’t detract from the fact that we were. We lived. We were part of the enormity. All the great and terrible parts of being alive, the shocking sublime beauty and heartbreak, the monotony, the interior thoughts, the shared pain and pleasure. It really happened. All of it. On this little world that orbits a yellow star out in the great vastness. And that alone is cause for celebration.”
This great end-of-the-world novel captures the generalised panic of 2020
5. Leave the World Behind by Rumaan Alam
“Theirs was a failure of imagination, though, two overlapping but private delusions. G. H. would have pointed out that the information had always been there waiting for them, in the gradual death of Lebanon’s cedars, in the disappearance of the river dolphin, in the renaissance of cold-war hatred, in the discovery of fission, in the capsizing vessels crowded with Africans. No one could plead ignorance that was not willful. You didn’t have to scrutinize the curve to know; you didn’t even have to read the papers, because our phones reminded us many times daily precisely how bad things had got. How easy to pretend otherwise.”
A beautifully written book that explains difficult and complex topics around race
6. So You Want to Talk About Race by Ijeoma Oluo
“When we identify where our privilege intersects with somebody else’s oppression, we’ll find our opportunities to make real change.”
“When somebody asks you to “check your privilege” they are asking you to pause and consider how the advantages you’ve had in life are contributing to your opinions and actions, and how the lack of disadvantages in certain areas is keeping you from fully understanding the struggles others are facing and may in fact be contributing to those struggles.”
An indispensable guide to thinking clearly about the world
7. Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think by Anna Rosling Rönnlund, Hans Rosling, and Ola Rosling
“human beings have a strong dramatic instinct toward binary thinking, a basic urge to divide things into two distinct groups, with nothing but an empty gap in between. We love to dichotomize. Good versus bad. Heroes versus villains. My country versus the rest. Dividing the world into two distinct sides is simple and intuitive, and also dramatic because it implies conflict, and we do it without thinking, all the time.”
“Factfulness is … recognizing that a single perspective can limit your imagination, and remembering that it is better to look at problems from many angles to get a more accurate understanding and find practical solutions. To control the single perspective instinct, get a toolbox, not a hammer. • Test your ideas.”
A guide to the fallacy of the obvious in everyday life
8. How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life by Thomas Gilovich
“A person’s conclusions can only be as solid as the information on which they are based. Thus, a person who is exposed to almost nothing but inaccurate information on a given subject almost inevitably develops an erroneous belief, a belief that can seem to be “an irresistible product” of the individual’s (secondhand) experience.”
“People will always prefer black-and-white over shades of grey, and so there will always be the temptation to hold overly-simplified beliefs and to hold them with excessive confidence”
Tools from 60 great thinkers to improve your life
9. Great Thinkers by The School of Life Press
“…simplicity is really an achievement — it follows from hard-won clarity about what matters.”
“Aristotle also observed that every virtue seems to be bang in the middle of two vices. It occupies what he termed ‘the golden mean’ between two extremes of character.”
“The primary thing we need to learn is not just maths or spelling, but how to be good: we need to learn about courage, self-control, reasonableness, independence and calm.”
Why has human history unfolded so differently across the globe? Jared provides expert insight into our modern world
10. Guns, Germs and Steel: A short history of everybody for the last 13,000 years by Jared Diamond
“History followed different courses for different peoples because of differences among peoples’ environments, not because of biological differences among peoples themselves”
“All human societies contain inventive people. It’s just that some environments provide more starting materials, and more favorable conditions for utilizing inventions, than do other environments.”
“My two main conclusions are that technology develops cumulatively, rather than in isolated heroic acts, and that it finds most of its uses after it has been invented, rather than being invented to meet a foreseen need.”
Cold Darkness

Haiku is a form of poetry usually inspired by nature, which embraces simplicity. We invite all poetry lovers to have a go at composing Haiku. Be warned. You could become addicted.
How Partnerships Build Inclusive Ideas and Innovation at Microsoft’s Annual Hackathon

By Jessica Tran and John Porter
One benefit of joining the UX research community at Microsoft is the chance to participate in the company’s Hackathon, a yearly event that brings employees out of their daily roles to collaborate on new ideas. Our project showed us why partnering closely with people who may use our designs is always best practice.
“Nothing about us, without us” has long been a saying within the disability rights and accessibility communities. For researchers and designers, the motto reminds us to work in community throughout the design process. This notion underpins Microsoft’s accessibility initiatives, including development of the Inclusive Tech Lab. It also drove our team’s process during two years of innovation at the Hackathon.
Our project focused on closing the technology gap that limits options for gamers who are blind or with low vision, who often rely on auditory cues to play. A problem arises for these gamers because the vital social component — laughing, trash talk, yelling “watch out!” to a remote teammate who’s about to get ambushed — usually involves voice chat or the system reading text chat aloud.
If gamers who rely on audio devote attention to the social experience, they’re more likely to miss essential game cues like the furtive footsteps of a sneaky opponent.
To explore this problem space and design a solution, we included blind gamers and designers on our Hackathon team each year, as well as strong UX research representation. Testing our designs and experiencing accessibility barriers alongside our teammates who are blind or have low vision ended up shaping the project in unexpected ways, spurring it to unanticipated heights.
Teamwork at the Hackathon
Our project meant something different to each of us. For Jessica, a UX researcher and engineer, it was a unique opportunity to make something new with accessibility at the core of the design. For John, an intern, it was a chance to leverage primary research he’d done at the University of Washington to identify the unmet need we were exploring. For all of us, the project provided an exciting occasion to work with others who care about accessibility and take a leap into the unknown.
The air buzzed with the sound of more than 18,000 people kick-starting their projects as we brainstormed around our table in the Hackathon tent. Things went smoothly at first. The group hypothesized that a controller with Braille capabilities would allow gamers who rely on audio to communicate during gameplay.
But as soon as we started ideating on designs, we hit a barrier to accessibility for our teammates who are blind: whiteboarding.
In order to be truly inclusive and integrate all stakeholders, we had to embrace a MacGyver spirit in our ideation process. Some of us verbally described sketches being drawn in real time and typed descriptions in OneNote so team members who are blind could use screen readers to hear them. We made it work, but the difficulties of ideating under these constraints pushed us to move toward physical prototypes more quickly, so we could test our designs by putting them in people’s hands.
The team used a combination of high-fi and low-fi prototypes to test different aspects of the design from year to year. The input prototype (left) enabled participants to type in Braille.
Building prototypes to test designs
The first question we had was whether a module at the back of an Xbox One controller could allow gamers to comfortably chord Braille cells, essentially typing in Braille. We used a 3D printer to create a prototype with six paddles, corresponding with the six dots in a Braille cell. The gamers on our team who are blind helped us iterate until the position of the paddles allowed them to chord input comfortably.
For reading incoming Braille messages, we envisioned a refreshable Braille display on the back of the controller. The tech gap was too great to build this out at the Hackathon, so we kept this aspect of the prototype low-fi, punching out Braille letters on index cards. We recruited Braille-literate participants from different corners of the tent to test our hypotheses, for example that the Braille cells on the readable display should flow from left to right. When a participant pointed out that this orientation was awkward, we checked our biases and pivoted to a vertical orientation — just as legible but more natural for someone with a controller in hand.
Drawing on our hybrid skill sets, we iterated until we had a prototype that could send messages to a console. When members of Microsoft’s senior leadership team came by, having heard about our project, we were able to hand them a controller that had a story behind it. We told them about the adjustments we’d made so far and why, then offered them a chance to try the input prototype.
Microsoft CEO Satya Nadella (center) holds the input prototype, interacting with Jessica Tran (right). John Porter (second from left) and Chief Accessibility Officer Jenny Lay-Flurrie (center) look on.
Company leadership was clearly excited about our hack, including Chief Accessibility Officer Jenny Lay-Flurrie. When she engaged the paddles, then watched the phrase she chorded appear onscreen, her interested expression transformed into a beaming smile. She told us we were on to something and encouraged us to keep going.
Riding the wave of enthusiasm
The support of Microsoft leadership inspired us to keep working on the project after the first year’s Hackathon. We shared out our findings, talking with experts in different divisions and members of the accessibility community. The process showed us that although we were bringing a new and exciting concept to the field, there was room for further innovation, particularly with our output model.
A refreshable Braille display was still significantly out of reach of current technology. So, how else could someone receive Braille messages? For year two of our Hackathon project, we brainstormed ideas and eventually landed on the concept of haptic feedback.
Our teammates who are blind supported the idea that the same six paddles used to chord Braille could theoretically become a legible mode of output if each paddle vibrated independently.
Of course, we had to test this theory with another prototype, this time with individually vibrating actuators. Some adjustments were needed for different hand sizes, but our blind teammates and other Braille-literate participants confirmed that they could read using the paddles already at their fingertips.
From idea to invention: Closing the loop with patent applications
Drawing on continued support at Microsoft as well as Jessica’s experience as a patent holder, we were able to file patents on our designs that included the three major components we’d tested: a refreshable Braille display, haptic-feedback output, and Braille typing.
The team’s patent applications outline designs for Braille input and output.
The eventual granting of these patents was a satisfying validation that we’d brought a new idea to an unmet need in the gaming space. Although there are no plans to develop the designs into product, the experience of working from foundational research through to invention encouraged us to keep innovating around accessibility, continuing our work to close the tech gap and connect more people through games.
We’d never have reached this point if not for close partnership with people who are blind or have low vision. It’s no accident that the most accessible aspect of our testing — a prototype providing a sophisticated tactile experience — became the vehicle for socialization that most excited everyone. We see this theme play out again and again in the world of accessible design. The act of inclusion opens new doors for all.
100 Proven SEO Techniques 2017

Nowadays, SEO is blossoming like a magical world, surrounded by thousands of SEO witches with tons of SEO tools and strategies as wands. No one can say whether the magic works or not, but every SEO witch works hard to win the heart of the search engines, the miracle makers. Let’s go for a quick flashback.
How did a traditional SEO work?
A traditional SEO was a perfectionist who followed any SEO tutorial step by step and executed it as written, without question. Closing his eyes and spinning his hands all over the keyboard, he buried himself in a huge number of directory submissions, article spinning, blog commenting and so on.
So, what’s the need of the hour? How should a modern SEO work?
A modern SEO should be a strategist who builds his own playbook and learns through experiments. He won’t rely blindly on books or tutorials; instead, he compiles his own book out of his learnings. He builds his content primarily for users, with search engine tags included. He focuses mainly on the quality of the content produced and builds backlinks naturally. He looks into semantic strategies and does specifically focused, meaningful promotions.

Now let’s talk about the top 100 proven SEO tips. This is a compilation of real-time, experimented tips that produced desirable outcomes. To facilitate a successful transition from the traditional SEO model to the modern one, I have shared some proven SEO tips from my career which are rarely found across the Internet. Let’s get into the tips straightaway. Key takeaways:
Learn hidden real-time experimented and proven SEO tips
Know the avenues to be focused for better visibility
Every tip comes with real-time explanations and steps to implement
Understand the path to build your own marketing strategy
Explained in simple language that anyone can understand.
Table of contents
It’s not KEYWORD; it’s just a keyword!
Go for long tail, even it’s searched lesser
RIP keyword density metric
Can you predict your website’s future?
Spying isn’t illegal in SEO; Spy to the core
Be social with social media
Tag yourself and get your face detected
Videos not only serve users but also bots
Over-optimized page tastes over-cooked
Your website is one; with/without www
Think twice, before placing any link
Construct your links guessing user’s mind
It’s time for some SEO don’ts
Say goodbye to subdomains
Design agency? List of design directories
No spammier directory submissions
More votes; more credibility!
It’s not KEYWORD; it’s just a keyword!
I often see people struggling with keywords. No, a keyword is not a difficult thing to find. It doesn’t need an Excel sheet or an intelligent compilation. It’s all about understanding your users’ minds: a keyword is simply how a third person describes your website. Before going to any so-called keyword tool, you need to do two steps.
Talk to your target audience
Analyse the terms they use while describing your product/service
Write down all those words and then go for Google keyword tools to get the number of searches
Tools to be used : Website search button analytics, Google keyword planner
SEO experts normally prefer highly searched keywords with low or medium competition, since they don’t want their website pushed down by heavy competition. In my opinion, it is worth targeting even less-searched long tail keywords to capture their complete share. Gradually, this brings your website to the top for related generic keyword searches as well.
Identify all long tail keywords
Analyze your competitors to understand their keyword strategy
Experiment and then implement
Search for questions related to the topic and use them as long tail keywords
Tools to be used : Ubersuggest, Spyfu, Answer the Public
There are tons of searches asking what percentage of keyword density to use in SEO copywriting. I can say: don’t think about keyword density anymore, since you are not going to repeat the same keyword again and again, but rather use its synonyms and related entities. Search engines are intelligent enough to map synonyms and entities to keywords.
List down your keywords, its synonyms & related words
Spread each of them in your content naturally
Optimize them in your images and videos too
Tools to be used : LSI Graph, Meta glossary
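As a rough sanity check on this, you can count how often a keyword and each of its synonyms appear in a draft, to confirm the terms are spread rather than one term being repeated. A minimal sketch using only Python’s standard library (the page text and term list below are made up for illustration):

```python
import re
from collections import Counter

def term_coverage(text, terms):
    """Count case-insensitive whole-word occurrences of each term in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {term: counts[term.lower()] for term in terms}

page = "Cheap flights to Rome. Compare airfares and find low-cost tickets."
print(term_coverage(page, ["flights", "airfares", "tickets"]))
```

If one term dominates while its synonyms never appear, the copy probably reads as stuffed rather than natural.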
Yes, you can. Wonder how? As you type a query, search engines suggest the most searched terms related to your keyword, including new ones you aren’t even aware of. Apart from that, you also have Google Trends to get a complete picture of how your market is going to look in the near future.
Know all the search engine operators to find all possible predictions and do check out “searches related to” section at the bottom of your search results.
Subscribe to Google trends
Read your industry related news periodically
Tools to be used : Google Trends, Feedly
In SEO, spying is a key to success. You can do SEO by yourself, but understanding your competitors’ keyword strategy and building your digital marketing tactics on top of it takes you to another level. Know your competitors, follow them wherever they go, analyse them, and then build a strategy that surprises them.
List out the keywords that your competitors are using
Identify their market and target audience
Build your keyword strategy accordingly
Tools to be used : Spyfu, SEMRush
No content reach can be achieved without the contribution of social media. Wherever your leads or prospects go, the one object that goes along with them is their mobile, with all the social media apps installed. It’s a must for every SEO to keep track of users’ interests by monitoring what’s currently trending on social media.
Check out the tags (#) in social media using tools like Topsy or Social Mention. The same tags can act as an effective keyword
Follow your competitor’s page and social media feed periodically.
Also, you can get tons of keyword ideas from sites like Wikipedia, reddit or delicious
Look at the trending tags in twitter or Facebook. You can create an individual promo page with those trending keywords to capture real-time visitors
Tools to be used : Social Mention, Reddit, Delicious, Facebook, Twitter
On-page SEO is all about tagging: title tag, header tags, meta keyword tag, meta description tag, and the list goes on. Though some say meta keywords no longer matter, I suggest you go with all the tags as much as possible, provided they are not stuffed or spammy. Put your main keyword first in these tags to make your website more visible for it.
Choose 2 main keywords and place them in your SEO tags like URL, H1 or H2, meta keyword and description
Have a call for action in your Meta description to improve click through rate. E.g.: Try for free, Download now
Place tags related to the respective page, don’t stuff all keywords in index page. Let them be distributed across your pages
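The 60- and 160-character limits used below are commonly cited display guidelines, not official search engine rules; a quick pre-publish length check might look like this:

```python
def check_snippet(title, description, title_max=60, desc_max=160):
    """Flag title/description tags longer than commonly cited display
    limits (the default limits here are assumed guideline values)."""
    issues = []
    if len(title) > title_max:
        issues.append(f"title is {len(title)} chars (limit {title_max})")
    if len(description) > desc_max:
        issues.append(f"description is {len(description)} chars (limit {desc_max})")
    return issues

print(check_snippet("Best SEO Tips 2017", "Learn 100 proven SEO techniques. Try for free!"))
```

An empty result means both tags fit within the assumed display limits; anything longer risks being truncated in search results.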
All marketers place videos and images on their websites to help users understand their product or service better. Apart from that, media also serves SEO: with the keyword in the alt tag and in the image or video filename, search engines understand what the media is about and index it in search results for that keyword.
Use title tag in videos/images along with alt tags, to be displayed to the user on mouse over the image
Keep the alt text of your images short and simple, since it looks cluttered to the users on image loading time
Don’t dump the same keyword in all images/videos, it should be more specific
Add transcription for every video to gain more SEO visibility
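One way to catch images that lack alt text is to scan your HTML before publishing. A minimal sketch using Python’s built-in parser (the sample markup is made up for illustration):

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collect the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "?"))

def images_missing_alt(html):
    audit = AltAudit()
    audit.feed(html)
    return audit.missing

print(images_missing_alt('<img src="a.png"><img src="b.png" alt="logo">'))
```

Running this over each template catches missing alt attributes; checking that the alt text is short and specific still needs a human eye.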
Search engine optimization is like food: it won’t taste good if it’s over-cooked or under-cooked, and it’s up to the SEO expert to act as the chef. Many SEOs think that optimizing a website to the extreme gets it to the top of the SERPs, but these days this simply doesn’t work, as search engines are becoming more and more intuitive.
Make use of synonyms or LSI keywords to avoid keyword stuffing
Don’t write content, thinking about the keyword. Optimize the content for SEO after drafting your content completely
Distribute the main keyword few times and if you optimize further, optimize with your secondary keyword as well
Your website may have different versions, with www or without it. If both versions resolve, search engines see your website as two separate sites, and you won’t get the indexing benefits you expect. The same applies to the http and https versions.
Redirect your non-www version to www
Move to https as quickly as possible, since https websites are given ranking preference by search engine algorithms
Prefer 301 redirection. Don’t remove any indexed page as much as possible whereas you can 301 redirect to related page
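The actual 301 rules live in your server configuration, but the mapping they should implement can be sketched as a URL normalizer. Whether you canonicalize to the www or non-www host is your choice; www is assumed here:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url, prefer_www=True):
    """Normalize the scheme to https and the host to its www form.
    A sketch of the mapping your server-side 301 rules should implement."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if prefer_www and not host.startswith("www."):
        host = "www." + host
    return urlunsplit(("https", host, parts.path, parts.query, parts.fragment))

print(canonical("http://example.com/page"))
```

Every variant of a URL should map to exactly one canonical form, so search engines credit a single page instead of splitting signals across duplicates.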
Linking between pages matters as much as the pages themselves. You can build your website, but without proper linking your users won’t discover your product or service as expected. At the same time, prioritizing the value of your links is equally important: balancing links according to the value they provide brings benefits, and in turn, sales.
Restrict the number of links on every page; keep it as minimal as possible and don’t let it exceed 100 links
Add nofollow to low-priority links such as the privacy policy, disclaimer, contact page and similar pages
Make your external links open in a new tab (target=blank) to reduce exit rate
Constructing links in your website pages requires you to do detailed analysis on your user’s behavior. Before you add links, understand the source/region from where majority of your users are coming from, whether through search engines or referral sites. And then add links accordingly guessing what your users may want to navigate in the next step.
Link every page to at least one other page. No page should be left behind, since orphan pages miss out on SEO benefits
Structure your links as per the page navigation and goal funnel in Google analytics to achieve maximum goal conversion rates
Track link clicks across all pages by appending a tracking query parameter (for example, ?src=footer) to the URL, which can then be detected in Google Analytics
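A small helper for appending such a tracking parameter (the ref parameter name is just an assumption; use whatever your analytics setup expects):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def tag_link(url, source):
    """Append a tracking parameter (a hypothetical `ref`) so that clicks
    on the same target from different placements can be told apart."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query) + [("ref", source)]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), parts.fragment))

print(tag_link("https://example.com/pricing", "footer"))
```

Generating footer links with tag_link(url, "footer") and header links with tag_link(url, "header") lets analytics show which placement actually drives clicks.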
Search engines insist every website to follow certain guidelines to get more benefits out of organic search. However, people tend to override such guidelines through black-hat and grey-hat techniques. I can strongly say that black-hat and grey-hat no longer exist, but only pure white-hat exists in this world of dancing algorithm changes.
Reduce the usage of flash in your website, still it’s difficult for the search engines to crawl them
Avoid hidden text; don’t hide keywords in white text on a white background. Whatever is visible to bots has to be visible to users, without manipulation
Reduce the usage of iframes too
Page URL structure needs some homework and has to be decided before your pages go live. People used to have abc.com, one.abc.com, two.abc.com, abc.com/one, abc.com/one/two and so on: too many folders, subdomains and lengthy filenames. But what works best for your users as well as search engines?
Make sure to use abc.com/blog instead of blog.abc.com, since it is considered as a two different website
Add keywords in your subfolder name also. Say if the page is about directory list, you can have the url as abc.com/directory/high-pr.html
Write shorter and meaningful filenames in a way which your users can remember and relate to it
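A simple slug generator along these lines (the five-word cap is an arbitrary choice, and a real version would likely also drop stop words):

```python
import re

def slugify(title, max_words=5):
    """Turn a page title into a short, keyword-bearing filename slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words[:max_words])

print(slugify("High PR Directory List for 2017!"))
```

The result is a lowercase, hyphen-separated filename that keeps the leading keywords and stays short enough to remember.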
When in Rome, be a Roman. Promote your website with your target audience in focus. It makes no sense to promote your SEO agency in a design directory. Relevant promotion with relevant anchor text on relevant domains brings back SEO benefits, which reflect in top search engine ranking positions for that anchor-texted keyword.
Say if your website provides both SEO and design service, you need to promote it to SEO directories with SEO based anchor text and the same needs to be design based in case of design directories.
The same website shouldn’t be promoted to unrelated directories just to increase the number of backlinks, which is obviously useless.
Choose the perfect category into which your website fits in, while promoting. If it’s listed specifically under it’s category, the backlink is taken as a quality backlink
The first off-page SEO step that any SEO expert learns is directory submission. In earlier days, SEOs used to keep a huge list of directories sorted by PageRank and submitted their website to all of them, copying and pasting the same title, description and set of keywords. Nowadays, directories have lost their trust among search engines, since most of them are spammy.
Don’t submit to all the directories (ignore the ones which share the same template, have illegal links and less page rank)
Don’t prefer reciprocal links or paid links from low quality directories. This may reduce your website’s quality
Submit to directories like DMOZ that has greater credibility and good page rank
It’s not only in elections that a leader is selected based on the number of votes; in search engine algorithms, too, your website gains credibility based on votes and bookmarks. When more users bookmark your website in their browser or on a bookmarking site and visit it repeatedly, search engines value those returning visits.
Add share buttons and bookmark buttons in your website, motivating your visitors to share, if they like
Create your company profile in popular social bookmarking websites like diigo & stumbleupon, follow similar interest profiles and stay proactive
Promote your product /service by sending private messages to few valid profiles in social bookmarking websites
As I mentioned earlier, any SEO activity we do should look natural, not like intentional promotion. This is a common problem with many SEOs: look at almost any forum discussion and you can spot at least one reply from some "XYZ SEO" written in promotional words with a keyword as anchor text. And if you copy that reply and search it in Google, you will find too many results with the same content.
Use different anchor texts while promoting. Based on the nature of the website you promote, you can have related keywords as anchor text
If you use same anchor text for all the submissions and promotions, search engines identify you as over-promoted, in turn affects your SERP
Don’t build your anchor-texted backlinks on blog comments. Instead, try using read more or view now texts to look neutral
Guest posting was a trend before, but it has lost credibility because of spammers all around the web. They write posts purely to get backlinks, spamming the host website with interlinks and author bio links. Search engines identified this trend and penalized such websites through algorithm updates, as they always do.
Don’t allow less quality guest posts into your website. Get a complete moderation in place to prevent your website getting penalized due to less quality guest post links
The same applies to your guest posts as well. Adding your link in a less quality blog doesn’t bring any benefit to the backlink
Other than guest posts, try writing blogs on Web 2.0 websites like HubPages and Medium
Your prospect can enter through any of your website pages, not necessarily it should be the index page. Every page needs to have all the necessary trusted elements and call for actions to achieve greater goal conversion rates. There are awesome landing page samples available all through the web to help you in deciding which works best or worst.
The first scroll of your landing page must contain a heading, a quick description, video/images, a lead form, necessary call for action and few trusted elements like awards and recognitions
In the second scroll, you need to explain about product/service, its features, pricing and few immediate links
And there should be social media icons included too. If you have a newsletter, you can even place your subscription form in your landing page
When you are surfing the Internet to find something quickly and a website takes too long to load, what do you do? You close it, adding a count to its exit rate, and go to a similar vendor. The same happens with your website’s visitors if your loading time is high. This is one of the most important factors to address before starting your off-page SEO.
First look out for images and videos. All you have to do is to compress and optimize the file size of your images and sprite them to make all the images load at once, instead of allowing every image to load individually
Validate your markup, style sheets and scripts through the W3C validator to reduce loading problems on your website
Evaluate your site’s performance periodically using tools like Google page speed tool, YSlow and Pingdom full page test
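Most of these tools also report whether your text assets are served compressed. You can estimate the potential saving yourself with a quick gzip check (the sample payload is made up; run it over your real HTML, CSS and JS):

```python
import gzip

def gzip_ratio(payload: bytes) -> float:
    """Rough estimate of how much a text asset shrinks under gzip:
    compressed size divided by original size (lower is better)."""
    return len(gzip.compress(payload)) / len(payload)

sample = b"<div class='card'>hello</div>" * 500  # repetitive markup compresses well
print(f"{gzip_ratio(sample):.3f}")
```

A very low ratio on your pages means enabling gzip (or brotli) on the server is likely to cut transfer time substantially.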
A very important SEO activity is distributing press releases to build your brand. Don’t treat it as just another promotion, like submitting to multiple news sites. If you outreach your press releases in a well-planned, well-optimized way, they create brand recognition among search engines, which in turn has a huge impact on your search engine ranking positions.
Write your press releases not in promotional language but in a meaningful way that describes the benefits of your release. This helps enhance the quality of your brand
Reach out to publication channels and spread your press release with a link back to your website
Run an aggressive promotion campaign about your release to gain the attention of bloggers, social media influencers and industry experts, so they write about you naturally
Assume you have promoted your link aggressively to a huge number of directories, bookmarks and everywhere else possible, and a user gets a “Not Found” page on clicking it; that’s a complete mess. It’s essential to check your website for broken links so navigation stays intact, and to promote your website only after all the links are fixed and live.
Make use of broken link checker tools like the Screaming Frog SEO Spider and Xenu’s Link Sleuth. These tools crawl your website and fetch all broken links
Check for broken links in Google Webmaster Tools too
Make sure to disable cookies, JavaScript and CSS while checking for broken links with these tools
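To see what a broken link checker does under the hood, the first step is simply extracting every link from a page’s HTML. Here is a minimal Python sketch using only the standard library; the function and sample names are illustrative, not taken from any of the tools above, and the follow-up step (requesting each extracted URL and flagging 404 responses) is left out.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all anchor hrefs found in an HTML string, in order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

page = '<p><a href="/about.html">About</a> <a href="/old-page.html">Old</a></p>'
print(extract_links(page))  # ['/about.html', '/old-page.html']
```

Feeding each collected URL to an HTTP client and recording the status codes would turn this into a basic crawler of the kind the dedicated tools implement at scale.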
After you’re done with your on-page SEO optimization and link building, what’s next? Monitor your website’s visibility in search engines periodically. Check whether all of your web pages are indexed, which pages show up first for a search on your brand name, and whether your website is getting indexed for all of your main keywords.
Find the number of pages indexed by searching site:yourdomainname.com in Google. If the count is lower than your total number of pages, identify the factor behind the non-indexing
If the index count is larger than the actual number of pages on your website, use the Copyscape tool to identify duplicate content
Search for your brand name in Google. If your page is listed first with your other high-profile pages below it, you’re safe. If not, check your Webmaster Tools for possible errors and add sitelinks
It’s up to the webmaster to decide which pages are shown to search engines and which are hidden. Accordingly, there are options like the sitemap, the noindex meta tag and robots.txt that give you these privileges. But these tags and files have to be created and updated with extreme care to prevent backfires.
Restrict search engine bots from crawling certain pages of your website by listing them in the robots.txt file or adding a noindex meta tag. Make sure the main pages of your website are not restricted by any chance.
Inform search engines about the important pages of your website in sitemap.xml and sitemap.html (the web version), and prioritize them as well
Create an image sitemap, a video sitemap and sitemaps for pages in other languages, and split your main sitemap in two for better clarity and visibility
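As an illustration of how these pieces fit together, here is a hypothetical robots.txt; the paths and domain are placeholders, not recommendations for any specific site:

```txt
# Allow all crawlers, but keep private sections out of their reach
User-agent: *
Disallow: /admin/
Disallow: /checkout/

# Tell crawlers where the sitemap lives
Sitemap: https://www.example.com/sitemap.xml
```

For a page that must stay reachable by users but unindexed, a noindex meta tag (`<meta name="robots" content="noindex">`) in the page’s head is the tool, since robots.txt controls crawling rather than indexing.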
This is an era of questions and answers. Search engines prioritize question-and-answer websites, forums and discussions first, followed by vendor pages. You can also pick up valid prospects and leads by participating in such discussions and answering their queries. It also allows your users to compare your service with your competitors in a single thread.
Subscribe to Google Alerts on a specific keyword, and whenever someone asks for a product recommendation, be the first to reply and capture their mind
Follow people and send them private messages telling them about your product/service
Subscribe to responses and reply to comments as soon as you receive the notification email
Many webmasters don’t even know whether their website has been penalized, and they search through forums and discussions asking for expert opinions. There are only two metrics that clearly tell you: traffic and index status. By periodically analyzing these two metrics, you can keep yourself prepared and protect yourself from losses.
Check your Google Webmaster Tools for a penalty message, if any, and check your index status too. If your website is de-indexed, it’s penalized for sure
Monitor your website traffic. If you see a deep drop, identify the pages where you see the change. Then look at the cached version of your website. If the links and design are the same as the live site, there won’t be any problem
Disavow backlinks if the penalty is because of a manual action, i.e. low-quality backlinks to any of your web pages. Once you fix the reason for the penalty, you can send a reconsideration request to Google to clear things up.
People normally say content is king. But I would say content is like a queen. Behind any king’s bravery and success, there lies an intelligent and elegant queen. Behind any website’s visibility and recognition, there lies high-quality, meaningful content. Content itself is sweet, and on-page SEO and link building just add more taste to it.
Make sure your content is free of spelling and grammatical errors and easily readable, using proper fonts
Don’t hide most of your content inside Flash, images, videos or JavaScript. Keep it accessible to both search engine bots and users
Write your content neutrally for multiple audiences, whether a technical person, a layman or an expert. You can include some keywords, but don’t stuff them aggressively
That’s true: search engines don’t encourage duplicate content within your website. The same happens when you copy whole content, or even a piece of it, from another website. The website that posted the content first is treated as the original, whereas the latter is treated as a duplicate and gets penalized for it.
Add a rel=canonical tag to the secondary page if you have created two pages with the same content. That way, the secondary page won’t get indexed by Google, which gives preference to the primary page.
Check for duplicate content on your secured (HTTPS) version and on sub-domain pages as well. To identify which website has copied your content, use the Copyscape tool and inform the webmaster concerned
Understand that there won’t be indexation problems for pages like abc.com/abc?page1 and abc.com/abc?page2, but you can use previous/next links to help search engines understand and follow your website’s navigation
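The two tags mentioned above look roughly like this in a page’s head; the example.com URLs are placeholders:

```html
<!-- On the secondary (duplicate) page: point search engines at the primary version -->
<link rel="canonical" href="https://www.example.com/primary-page.html">

<!-- On a paginated page such as abc.com/abc?page2: describe the series -->
<link rel="prev" href="https://www.example.com/abc?page1">
<link rel="next" href="https://www.example.com/abc?page3">
```

The canonical tag consolidates ranking signals onto one URL, while the prev/next pair simply tells crawlers how the paginated pages relate to each other.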
The URL, the path to your website, has to be easy to remember and streamlined accordingly. This also helps you extend your branding and increase direct traffic, where people just type the URL and visit your website. At the same time, it should be structured based on the nature of each individual page.
Use static URLs without excessive parameters or session IDs. Prefer short descriptive URLs.
Go with vertically ordered links for category and sub-category pages, and horizontally ordered links for related categories and related product pages.
Use footer links to achieve proper page navigation
For category pages, use a URL like abc.com/category/page.html. For a subcategory page, it can be nested in a further sub-folder
SEO is all about experiments. You can’t say for certain which strategy works for you unless it’s tested. Right from the design, the call-to-action buttons, the content and the placement down to the last full stop, each and every section of your website has to be tested, and the page updated based on the results of those experiments.
Always create different versions of any landing page or ad. A/B test those versions using a proper A/B testing tool and go with the one declared the winner
Choose your desired metric for the experiment. It can be a goal conversion such as a download, a sale or a lead form submission, or even bounce rate or average time on site
Test the effectiveness of your lead nurturing mailers by splitting your leads and sending two different email versions to see which mailer works best for you.
As I pointed out before, the era of traditional SEO is coming to an end. Search is evolving through the semantic web of schemas. Previously, search engines looked at websites as keywords and indexed them as such. But now there is a new metric: the entity. Entities give search engines an enhanced view of your website, helping them return more relevant results.
Mark up all the details of your website, such as contact number, address, logo, customer reviews, images, videos and so on, to benefit from the knowledge graph
Go for JSON-LD markup (approved by Google) instead of microdata; it lets you mark up any detail with a script, without affecting the design and usability of your website
Add the markup code to your website, add a noindex tag and test it with the structured data testing tool. Once the code is approved without any errors, remove the noindex and make the page live for users and bots
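For example, a minimal JSON-LD block for an organization might look like the sketch below. All the business details are placeholders, and the property names follow the schema.org Organization type:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "telephone": "+1-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Example City"
  }
}
</script>
```

Because the data sits in its own script tag, the visible page is untouched, which is exactly why JSON-LD is easier to maintain than microdata woven through the HTML.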
People are moving from desktops and laptops to mobiles and iPads. These days, no one even hesitates to make secure online transactions through mobile. Things are getting better day by day through improved apps, and people are becoming experts at getting the best out of those apps as well. So it’s a must for any product/service to develop mobile applications for its users. | https://medium.com/seo-tips-tricks/100-proven-seo-techniques-2017-d6d21002c6f7 | ['Bharathi Priya'] | 2017-10-04 09:54:10.087000+00:00 | ['SEO', 'Online Marketing', 'Marketing', 'Seo Techniques', 'Digital Marketing'] |
Creating a Playground for the Curious
Happy birthday to our design research initiative, Octoscope!
When I first told our team about an opportunity to build a community around research, more than 15 people with various backgrounds volunteered. And now, as the design research community fellows of IBM iX Istanbul, we are excited to announce the very first birthday of our initiative, Octoscope!
Everything we do thrives from curiosity and self-driven motivation to explore!
launch of our logo, designed by Zeynep
IBM iX İstanbul Studio
Our story as IBM iX Istanbul started in 2017 as a very small team. In just two years we grew into a big family, working on a wide range of projects with our clients from various industries. We figured the timing was right to take action, and we brought our research capabilities to a next level to embrace a more structured approach.
What is Octoscope?
Octoscope is a voluntary, multi-disciplinary research initiative. IBM has a well-known history of putting human-centered design at the heart of its processes. Design research, as a huge part of that legacy, is a practice of utmost importance throughout all IBM organizations. We also want to reflect these values in our work in the best way possible, which is why we created this local research initiative.
We decided to focus our efforts mainly on four areas.
1. Research as a Team
Our goal has always been to improve the quality of our work progressively and maintain a standardized level of excellence in every output. One of the proven methods we embraced was to apply the research roles in IBM Design.
IBM Design defines two roles in research activities: explorers and guides. While explorers have little experience in applied research, they participate with their domain knowledge. Guides are the experienced researchers who drive the activities and take ownership of a shared understanding of user needs in the team. By assigning the explorer and guide roles, we managed to achieve high quality in our research process while improving the skills of our team.
2. Global Connection
The design research community of IBM works all over the world in almost every industry, which gives us an amazing network of professionals who are extremely open to sharing with and learning from each other. Feyza, Asli, and I organize a series of research guild events to cherish this network by boosting knowledge exchange and strengthening connections. In the guilds, we host a guest from this community each month to talk about their experiences and learnings from a project or a research activity.
3. Continuous Learning
This might be the core motivation that brought us together: the never-ending curiosity and love of learning. We use our buzzing Slack channel for sharing inspiring content instantly. Our weekly meetings transformed into casual seminars, where we discussed a pre-selected research article thoroughly and lost track of time.
Now, learning has turned into a game. In our regular online meetups we enjoy “Octoquest,” a trivia game created in Mural by Murat. When it is your turn, preparing the questions is also a challenge! It is a special get-together time we all look forward to. | https://medium.com/design-ibm/creating-a-playground-for-the-curious-eae905260678 | ['Bilgenur Öztürk'] | 2020-10-27 16:05:50.848000+00:00 | ['Research', 'Design Research', 'Design', 'UX Design', 'Community'] |
Quadrant on technology triple play at Singapore Digital (SG:D) Industry Day
CEO Mike Davie outlined the potential of combining AI, blockchain and data in a presentation on the day of the Mainnet launch
At Singapore Digital (SG:D) Industry Day, hosted by the Info-communications Media Development Authority (IMDA), our CEO Mike Davie shared his insights on how the combination of artificial intelligence, blockchain and big data will change decision-making processes for good, in both the public and private spheres.
It was a special event for us because that very same day, November 22, Quadrant launched its Mainnet, which will host a new data ecosystem where innovators, companies and policy-makers can map, verify and distribute high-quality data products.
Quadrant is already supporting IMDA’s nationwide initiative to strengthen Singapore’s digitalization efforts amid the Digital Economy Framework for Action. In May, we signed a two-year partnership with the IMDA to implement a commercial AI and microservice layer on Quadrant.io and powered by Quadrant Protocol. In front of a full auditorium, Mike explained the potential and capabilities of Quadrant’s improved technology and how our blockchain-powered protocol can play a crucial role addressing the data problems of business, government and organizations, in line with SG:D strategy goals.
The triple somersault revolution
We believe the next big revolution in the innovation economy won’t be a new technology, but the intersection of three existing ones — the combination of AI, data and blockchain. As our world relies more than ever on data, these sometimes over-used buzzwords are crystallizing into multibillion dollar markets.
And this is just the beginning. The incessantly growing flow of information from our connected devices, from our smarter cities and from our increasingly sophisticated business models is feeding a new generation of algorithms that are changing the way we make decisions.
But to make the right choices, this new data-powered decision-making needs new tools to manage, filter and process all that information. And that is where AI and blockchain can take big data to the next level.
New defining technologies
Three concurrent phenomena have shaped the tech landscape during the last decade: social, mobile and cloud. We are now starting to witness a shift into the new defining technologies of our time: AI, blockchain and the Internet of Things.
Real-time big data is the key value proposition for all use cases and AI is the tool to provide actionable and efficient data outputs. But for the real change to occur, we need to build first a new data ecosystem. One that is not dominated by a handful of big players or limited by AI developments. And that is where blockchain technology can help fill the gap.
At Quadrant we are convinced of this. That’s why we have created two powerful tools to help the innovators in the datasphere thrive. Quadrant.io is a platform where data users can access a new data ecosystem and Quadrant Protocol is our blockchain tech designed to authenticate and stamp data.
Watch the video recap of the entire event here: | https://medium.com/quadrantprotocol/quadrant-on-technology-triple-play-at-singapore-digital-sg-d-industry-day-8e701e2c051b | ['Nikos', 'Quadrant Protocol'] | 2018-12-16 23:31:00.631000+00:00 | ['Presentations', 'Events', 'Blockchain', 'Big Data', 'Mainnet'] |
The New Covid-19 Strain in Britain, Explained
A mutated strain of the Sars-CoV-2 virus in the South-East of the UK is worrying scientists and politicians alike. Here’s why.
Image by PIRO4D from Pixabay
December 14th, and Health Secretary Matt Hancock was on his feet in the House of Commons, delivering some sobering news about the ongoing pandemic: the virus had mutated, and the mutated strain was thought to be spreading faster, driving the rise in cases in London and the South-East. London and many parts of the South-East were to be upgraded to Tier 3 restrictions, with all restaurants, pubs, and bars all closed.
He explained that he was first briefed on the new strain on Friday, December 11th, and was given more information over the weekend between then and his announcement in the House of Commons.
But then, late on the following Friday, December 18th, NERVTAG, the expert committee responsible for identifying and analysing new and emerging respiratory virus threats, delivered their findings to the government, and the report was bleak: the virus was between 65% and 75% more transmissible than the other Sars-CoV-2 variants, and had become the dominant strain in the capital by the end of the November month-long lockdown.
The government acted swiftly, for once at least, agreeing on Saturday morning and announcing in the evening that London and all parts of the South East previously upgraded to Tier 3 would now be upgraded to a new Tier 4 of restrictions, which is very similar to the November lockdown restrictions. They also announced that in these areas, the “Christmas bubbles” plan would be abandoned entirely, with no households allowed to mix indoors at any time, and that the bubbles in the rest of the country would be limited to 3 households for just the 25th itself, not for the 5-day period initially planned and announced on December 2nd, when the lockdown ended.
Much about this mutated strain, which has been memorably named VUI-202012/01, is not yet known. There is no known effect on the efficacy of the various vaccines, nor on the severity of the illnesses which those infected suffer on average, though it is thought that the variant causes an increased viral load, thereby making the infected more infectious. The settled view of epidemiologists and virologists is that higher viral loads worsen the illness, but that has to be confirmed with this new variant before we can be sure of that.
NERVTAG will reconvene again, following further analysis of the possible complications regarding severity, vaccine efficacy, transmissibility, and testing efficacy (both lateral flow and PCR), to advise the government in more detail about the situation and the possible responses to it.
The response will almost certainly involve drastic measures not seen since the severe and lengthy first lockdown from March through to June, such as partially shutting schools, potentially stopping students returning to universities physically, and a full national lockdown until late February or March, to give time for the vaccines to be distributed to the priority list drawn up in late November.
But there are much wider political questions that have to be answered here, both domestic and international.
Firstly, there has been concern and consternation about the UK government’s handling of this, though much of that has been misplaced. It has emerged that scientists knew of this mutation in September, and so many people have asked why we couldn’t have acted much sooner. But this is a misunderstanding of the situation — not helped by the media’s epidemiological illiteracy — as viruses mutate all the time, but it often takes a long time for any significant effects to be known. Epidemiologically or medically significant variations are rare. And, as I explained, the Health Secretary only became aware of this 2 days before introducing new restrictions, and 8 days before the further restrictions.
However, that is not to totally exonerate the government. Some have correctly pointed out that virus mutations don’t occur if there is no virus, and so many proponents of the suppression strategy, used by New Zealand to great effect both epidemiologically and economically, have used this to further their conviction, which is increasingly being vindicated, that the first lockdown was eased too soon, and that all countries, including the UK, should’ve attempted to eliminate the virus and then prevent re-entry — and, as an island, the UK would’ve had a particular advantage in this regard.
There is also an interesting point about the public health response to make here: the November lockdown appears to have driven forward the dominance of the new strain, as it brought down the R rate of the original variant to below 1, meaning its levels declined in the population, while the new variant’s R rate stayed above 1, leading to a curious pattern, unexplained at the time, of increased transmission in some affected cities such as Milton Keynes.
That is not to say that lockdowns are bad, but more that this particular lockdown suffered from bad luck. The government will now hope that the effective lockdown imposed on the hubs for the new variant will be enough to get the R down, though that seems unlikely.
And if there are still questions about why, despite the obvious continued upward trend in some areas during the lockdown, it took scientists until December 11th to brief the Health Secretary, that delay can be explained by the combination of the virus's natural incubation period, the short delay in information caused by the need for testing, and the additional delay caused by the need to sequence the genetic material of the virus samples behind each positive PCR test, to reveal which variant each was. As soon as scientists had the worrying data at their disposal, the Health Secretary was immediately briefed.
And what does all this mean for the international community and ties with the UK?
In short, chaos. Most European nations have now temporarily shut borders with Britain, including France, which has imposed a 48-hour ban on travel from Britain, except for some limited cargo arrivals. The UK government today held a COBRA meeting, a forum for emergencies, on the possible effects of the new variant and travel bans on food supply.
It is probable, though, that the new variant is already in many other countries, given the timescales we are dealing with: the variant became prevalent enough to make international spread probable in October, and skyrocketed in November and December, with the period between the 2nd and 14th of December likely seeing significant outward international travel from the UK, and particularly from the capital, which is the main hub for the new variant.
France will probably acquiesce upon realising this, but the world is naturally worrying about the spread of this new strain. The sensible move for other countries to make is not to try to prevent the variant entering, because it already has done, but to suppress the new variant with test, trace, and isolate systems, aggressively using backwards tracing methods to find the first few infections before tracing forward. The lesson from the dichotomous levels of success between East Asia and the West is that early testing and tracing of a newly-entered virus is the only effective way of preventing the need for hefty restrictions. That remains the case with this mutated strain.
As for the public health message, it remains the same, but with greater importance and renewed purpose attached:
Wear a mask. Stay at home. Wash your hands. Practise physical distancing. And, if you have symptoms, or you are contacted by contact tracers, get a test and self-isolate yourself. | https://medium.com/discourse/the-new-covid-19-strain-in-britain-explained-f449d532e85f | ['Dave Olsen'] | 2020-12-22 03:49:55.162000+00:00 | ['Pandemic', 'UK', 'Coronavirus', 'Politics', 'Covid 19'] |
Prototyping Design Ideas with Chatbots | At Under Amour®, we’re committed to our mission: Under Armour makes you better — not just through our gear but also through our fitness apps. As software designers, it’s our job to craft digital solutions that give our users what they need to be better.
How do we know what people need? Asking users directly doesn’t always get accurate answers. Human beings can often be poor judges of their own behavior. As anthropologist Margaret Mead put it: “What people say, what people do, and what people say they do are entirely different things.”
A more fruitful tactic for learning about customer needs is to make ideas tangible. When we build working prototypes, we can let our users interact with them. Rather than asking users what they would do, we can observe what they actually do. However, building high-fidelity prototypes can take a lot of time. As part of a recent research study, we came up with an idea to speed up the prototyping timeline: we used a chatbot to prototype our ideas. What a visual prototype might uncover after months of development, a chatbot can discover in weeks or even days. | https://medium.com/ua-makers/prototyping-design-ideas-with-chatbots-24856337760 | ['Under Armour Makers'] | 2018-06-26 22:03:33.616000+00:00 | ['Automation', 'Customer Research', 'Chatbots', 'Design Research', 'Design'] |
Local Magazine Editor Just Extremely, Devotedly, Obsessively Involved
New York mag editor Adam Moss finally goes on the record: he is not, contrary to the last ten years of reports, a micromanager. “I totally dispute it.” God bless! | https://medium.com/the-awl/local-magazine-editor-just-extremely-devotedly-obsessively-involved-c1c4a0658cad | ['Choire Sicha'] | 2016-05-13 07:42:53.012000+00:00 | ['Self-awareness', 'Adam Moss', 'Media'] |
From Hobby to Job– How to Keep Writing Fun | I began writing for fun. I didn’t really show it off to anyone because it was just for me to read. Sometimes I’d tell my stories to a relative, because Caribbean families, much like many other families around the world, rely on storytelling for fun. We’d exchange ideas, see whose story was funnier, weirder or scarier and then move on to the next one.
When I started college, I learned about a website called Bookrix. It was an amazing community back when the site was interactive and regularly updated. Users on the site were able to make “books” with their poetry or stories and could even comment on each other’s stories.
It was an amazing experience and I was able to connect with all kinds of writers who just wanted to have fun. I was determined to use that to eventually make writing some sort of career. That led to paid writing and reporting internships, which led me to learning more about freelance writing and figuring out how to pitch editors. I learned how to grow a tougher skin.
But during that time, writing went from something I did in notebooks for myself, or to entertain family members to something that I relied on for money. Something that I had to constantly study to improve on and figure out. It went from something I had A LOT OF FUN WITH to something that my livelihood really depended on– which can honestly suck some of the enjoyment out of writing.
It’s taken a few years, but I’ve finally figured out a few ways to keep writing fun. It’s necessary to find that sweet spot sometimes, even if the job aspect takes over a lot of your life.
Attend events with other people in the industry. There are all kinds of events that highlight different aspects of writing (and any other industry). Oftentimes there are giveaways, food, discussion hours, presentations and more. I’ve made a point of attending several a year and it’s changed how I see media and writing. I’ve gotten to meet a lot of wonderful people, especially at themed events, and I’ve learned a lot from them. That, and nothing makes me happier than free muffins.
Keep a train of thought/daily journal. For writing to stay fun, try having at least one aspect of it that’s not going to be edited or seen by many other people at all. Keeping a journal is a quick way to do that. Your thoughts will be just for you and there won’t be any need to deal with scrutiny. There won’t be any censorship or agenda. It’s just about you and what you want to say.
Hold a writing circle with friends or colleagues. Say you do very serious investigative work, or technical writing… consider having a creative writing circle. If everyone works on something fun, then it’s a lot easier to get into the flow of writing something new. And since it’s a friend circle and not an assignment for work, you’ll have the flexibility to write what you’d like and have fun with the poem, short story or flash fiction that you’re writing.
Watch a movie you like, go to a museum that interests you and find a way to write about it and reflect on it. Do you think the main character should have dated the other love interest? Write an alternative end to the story for yourself! Contemplate why there’s a 1,000 year old statue in the museum, who made it, why and how they’d feel if they knew it was on display for thousands of people to see. Make up a whole alternative history for the museum object. It doesn’t even have to make a lot of sense, feel free to go out on a limb here since it’s for fun. So get creative! | https://medium.com/publishous/from-hobby-to-job-how-to-keep-writing-fun-19f8b9c68c61 | ['Angely Mercado'] | 2019-01-13 13:31:00.708000+00:00 | ['Careers', 'Advice', 'Freelance Writing', 'Career Development', 'Writing'] |
What do vegetarians eat? Visit vegan food festivals to find out
When I first became an on/off vegetarian back in 2001, I had no idea what to eat. I’d already dropped from a size 14 to a size 6/8 simply from taking a weight training class and walking up and down Jefferson City, Missouri hills. I had no interest in being a vegetarian, but I stopped buying meat once I started buying my own groceries. I was an accidental vegetarian. That is, until I came home for Thanksgiving and Christmas breaks from college, and remembered what my mother’s food tasted like. When I returned to college and my own off-campus apartment, meals became meatless all over again. By the time I graduated from college, I’d decided to make vegetarianism permanent.
Shortly after I returned home, I remember my grandfather telling me, “I’ll drink soy milk when I run across a soy cow.”
“I’ll drink soy milk when I run across a soy cow.”
He still went with me to at least three vegetarian restaurants, primarily to give me a hard time but wouldn’t admit he was curious, too. I wasn’t home from school more than a year before I saw soy milk in his fridge. I raised an eyebrow, looked at him and asked, “So what was it like when you met the soy cow?” He changed the subject.
The hard truth about transitioning to vegetarianism: The weight gain
Lining up at one of the booths of Taste of Vegan 2017 (Photo credit: Shamontiel L. Vaughn)
What people don’t tell you about going vegetarian (or vegan) is you can gain a lot of weight if you’re not careful. You don’t hear about the Vitamin D loss from not eating dairy or where to get B12. I had a fainting spell coming home from my first editing job and thought, “Forget it. I’m eating meat again. I can barely walk a few blocks.”
I shot back up to a size 12/14 because I was loading up on potatoes, rice and bread to make up for all the meat I wasn’t eating. And it wasn’t like there were a whole lot of African-American vegetarians around to tell me how to make vegan soul food. So I didn’t know how to eat a balanced meal, and doctors are not trained to give you nutrition advice. I was advised to take B12 vitamins and fish oil pills, and that was the extent of it. At that time, Beyonce wasn’t showing us how to make vegan meals, and I was just guessing my way through it all.
Inviting your vegan and vegetarian friends out to eat
Years later, I found out about Soul Vegan food at Whole Foods and ate at Soul Vegetarian East in Chicago’s Chatham neighborhood. I dined at Quentin Love’s (now closed) Quench and Vegetarian Life restaurants. When I wanted Thai food, I had the time of my life at Alice & Friends (now Alice & Friends Vegan Kitchen). And I started going to a boatload of food festivals, such as Veggie Fest, Chicago State University’s Taste of Vegan, and Chicago VeganMania.
Veggie Fest 2016 (Photo credit: Shamontiel L. Vaughn)
Although some do, I wasn’t the kind of vegetarian (and vegan for one year) who would lecture you about animal rights and slaughterhouses. Honestly, I would never have become a vegetarian if someone lectured me about it all the time. But I definitely asked my loved ones and friends to come along with me so we could test the food out.
Why you should go to vegan and vegetarian annual festivals
What festivals like Veggie Fest, Taste of Vegan and Chicago VeganMania do — and apparently “soy cows” in the form of granddaughters — is educate you more on what you can eat, why you should eat healthier, how to not ruin your health trying to go meatless, and help you learn more about mental and physical health in the process.
While there’s nothing wrong with grabbing a White Castle’s Impossible slider or the usual black bean burger at your favorite restaurant, there is a laundry list of other vegan and vegetarian food brands I wish I knew 15 years ago: Quorn, Morningstar Farms, Gardein, Amy’s Kitchen, LightLife, Tofurky, Daiya and more. I found out about most of them from grocery shop testing and going to food fests. What vegetarian festivals and vegan festivals also do is introduce you to lesser-known brands that cannot easily be found in grocery stores like Whole Foods Market and Target. If you’re into it, maybe visit the monthly vegan/vegetarian cooking classes at the Science of Spirituality, too.
Trying to Trust God When Trust is Difficult
Find comfort in overwhelming times.
Photo by Seven Shooter on Unsplash
This past Sunday, I turned on the live stream of the church I used to attend somewhere in one of the fly-over states. The pastor had started to preach, which meant I’d missed the singing. Fine by me.
God and I aren’t doing so well these days.
Well, let me rephrase that, since I’m sure God is just fine. It’s me who’s having the problem. A spiritual crisis, of sorts. A dark night of the soul.
Sunday’s sermon was supposed to instill hope. The pastor shared that heaven will exceed our wildest dreams and imaginings. All that’s nice, but frankly, I don’t care. I mean, of course it matters, but going to a fantastic afterlife is not one of my end goals. Getting to heaven doesn’t light my passion or fuel my faith. It’s just one of the nicer perks.
Does God Care?
What I want to know is, does God care? Not in the global sense of the word. Yes, I know he’s provided for my salvation through Christ’s death.
But does he care about me today? A middle-aged, divorced, and widowed woman who lives alone and has three grown children, one of them in remission from a deadly cancer.
Does he know I can go days without seeing another living soul outside of my working video calls? How small my world’s become? That I’m lonely, and that my future scares me?
After surviving childhood abuse and a traumatic marriage, it’s no surprise I have relationship issues. Big ones. Trusting others tops that list, and it includes anyone and everyone. Especially God.
“Find comfort in God,” I hear. Then, that well-meaning individual references a well-worn Bible verse, “Cast all your care upon Him, because He cares for you” (1 Peter 5:7, Modern English Version).
It’s an incredible promise, no doubt about it. But there’s an inherent problem. It requires that I have enough ability to trust in an authority figure’s ability to show up and come through for me.
Struggling to Relate to God as My Father
Jesus often used the world his listeners knew to give a taste of what’s possible with God. He shared that “The kingdom of heaven is like treasure hidden in a field. When a man found it, he hid it again, and then in his joy went and sold all he had and bought that field” (Matt 13:44, NIV).
Ok, I can understand that. If I were to discover an ancient treasure chest filled with gold doubloons hidden in the middle of some field, I, too, would do whatever it took to buy that piece of land. This story is relatable. I’m motivated by money and hooked by its premise. If Jesus says the kingdom of heaven is even better than this fortuitous discovery, then Wow! I’m sold!
But when Jesus uses the idea of parenting to teach us about God, I’m in trouble. Take, for example, this verse in Matthew, “Therefore I tell you, do not worry about your life, what you will eat or drink; or about your body, what you will wear….Look at the birds of the air; they do not sow or reap or store away in barns, and yet your heavenly Father feeds them. Are you not much more valuable than they?” (6:25, 26, NIV).
I struggle with that passage. The flowers and birds part makes sense. But the idea of God as my heavenly father? That’s the tricky part.
Our Attachment Styles Affect Our Relationship with God
Here’s the rub: each of us relates to others through one of four identified attachment styles: the Secure, the Anxious, the Avoidant, and the Disorganized (also called the Fearful-Avoidant). Based on our childhood experiences, we’ve learned one of these as a primary way of connecting with others. These styles of relating affect all our relationships, including our connection with God.
Those of us who’ve experienced “good enough” parenting, or what’s called a Secure attachment, understand what Jesus is describing in these verses. We’ve learned that others are “available, responsive, and helpful.” Trusting in God and believing that he cares isn’t much of a leap.
Those with an Anxious attachment style are also probably close to God since we use relationship proximity to manage our fears and stress. I suspect we would clutch at these verses, re-reading them over and over again for reassurance. For a few moments, we’d find God very comforting until we are overwhelmed one more time. Then we look to him again to reaffirm that he’s near, good-hearted, and invested.
The Avoidant among us are good with God being “over there.” Fiercely independent, we tend to go it alone. God is a great concept, but we don’t count on him to improve our day-to-day circumstances. Those of us with this attachment style wouldn’t fuss much over God’s faithfulness. We’d think to ourselves, “God is great as a concept, but we’ve got this.”
Disorganized Attachment Style’s Difficulty With Trust
Then, there are those of us with the Disorganized style of relating. I fit into this group. We are in trouble when it comes to trusting God.
Our early experiences were a mixed bag. Sometimes our loved ones showed up, and sometimes they didn’t. The ones we needed and turned to most were often the same people who hurt us. When things went wrong, they went very badly wrong. We ended up traumatized and fearful. We long for connection but distrust anyone’s ability to come through for us.
In her article, “The Forgotten Attachment Style: Disorganized Attachment,” Mariana Bockarova, Psy.D explains that, for the Disorganized, “forming intimate attachments to others can seem like an insurmountable task because any new intimate relationship formed takes a tremendous and continuous act of trust put forth onto his or her potential partner, from which consistency and reassurance are needed near-constantly.”
Those of us with a disorganized style of attachment need God. Desperately so. Lacking persuasive earthly examples, however, we fear him. If flesh-and-blood people have failed us, then how can a spiritual entity do better? With our broken childhood and failed adult relationships, we can’t grasp what it means to have a loving heavenly Father.
In Desperate Need of God’s Help
So all of this puts me in a spiritual crisis of sorts. And it’s this: I’m in desperate need of God’s active intervention. A few more verses slapped on top aren’t going to fix this, which I’m sure stresses some of my Christian friends.
Is God up to the challenge? I’m sure he is. After all, he’s God — the Omnipotent, Omniscient, Omnipresent One.
Jesus, in his Sermon on the Mount, encourages me too. He said, “Blessed are the poor in spirit, for theirs is the kingdom of heaven” (Matt 5:3, NIV). He isn’t referring to being humble or “low in spirit.” No, he’s describing an all-out-crisis. Such dire straits that we find ourselves spiritually bankrupt. In the article, Who are the Poor in Spirit?, Jim Miller defines spiritual poverty as, “spiritually emptied of self-confidence, self-importance and self-righteousness.”
That’s me — at the end of myself. On my own, I cannot fix my attachment style. I need a new relationship experience with someone reliable, patient, and understanding. In short, I need God.
So I’m waiting with great fear and trepidation for a more significant experience of God. I believe God knows and understands me, my attachment style, and my desires, and that gives me hope.
Sending Transactional Emails With Sendinblue in Kotlin

Sending a Transactional Email
Time to see how we can send a transactional email in Kotlin. This can be easily achieved with the following lines of code:
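A minimal, dependency-free Kotlin sketch of the data involved follows. The `Recipient` data class stands in here for the SDK’s `SendSmtpEmailTo` object so the snippet runs on its own, and the parameter names `firstName` and `orderId` are illustrative assumptions; they must match the placeholders your template actually defines.

```kotlin
// Stand-in for the SDK's SendSmtpEmailTo: each recipient needs an
// email, while the display name is optional.
data class Recipient(val email: String, val name: String? = null)

// Template parameters: key = parameter name as defined in the
// template, value = the text substituted for that placeholder.
fun buildParams(firstName: String, orderId: String): Map<String, String> =
    mapOf(
        "firstName" to firstName,
        "orderId" to orderId,
    )

// The API client expects a list of recipients, one entry per person.
fun buildRecipients(): List<Recipient> =
    listOf(
        Recipient(email = "reader@example.com", name = "Jane Reader"),
        Recipient(email = "noname@example.com"), // name omitted: allowed
    )
```

With the real SDK you would copy these values into the email’s recipient list and `params` before sending, as the explanation below details.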
Transactional parameters must be defined in a Map<String,String> object. A Map is a collection that holds pairs of <key,value> objects. In each pair, the key is the name of a parameter as defined in the template, and the value is the corresponding value we want to give to the selected parameter.
The Sendinblue API client requires a list of SendSmtpEmailTo objects, each of which represents one recipient.
Each recipient must have an email, while a name is optional. The former should be a contact registered in Sendinblue and assigned to a contact list, and the latter is the name that will be attached to the email recipient, which will appear in the email headers, but not in the email body.
If you created a template as described in the previous step, this is the result you should get:
Extra
Optimising your Mobile App — 4 techniques you should be using

Mobile app optimisation is the process of using controlled experimentation to improve an app’s ability to drive business goals. Here I will outline various techniques I’ve used to build successful apps.
These techniques allow developers to define, measure and test features in an iterative and low-cost way. They utilise a data-driven approach rather than opinions and facilitate a validated learning process.
The end goal results in a better performing app, whether that is measured by an increased conversion rate, increased sales or another business goal.
In-App Analytics
Believe it or not, your users are probably not using your app exactly how you think they are. Therefore, it is important to include analytics within your app.
App analytics are typically used to track page views and app events (such as button clicks). These provide insights into which app features are most popular and whether users are completing particular goals, such as completing a signup form.
Most importantly, they can highlight issues such as poorly performing screens or app dead spots. A key example would be an issue with a user onboarding screen that was preventing or discouraging user sign-ups. Or it could highlight an issue with the app’s navigation that was keeping a given screen or feature hidden from the user.
If a significant number of users are failing to find a given screen, or complete a sign-up process then obviously this is a pain-point that needs addressing. Conversely, if a non-core feature turns out to be getting a lot more user attention than expected, then it could be worth investigating why.
A/B Split Testing
Once an area of the app has been chosen for improvement, we can use A/B testing to verify the effectiveness of that improvement.
A/B testing is a method for comparing two or more versions of a screen against each other to discover which is the most successful. Usually changing a single item at a time such as an image, a button, or a headline.
For example, on a user onboarding screen we may wish to experiment with a Call To Action (CTA) button. We create different versions of the screen that each have a different CTA, but the rest of the page content will remain exactly the same. Then we randomly split user traffic among the different versions of the screen and record the percentage of users that click on the CTA.
The experiment should run over a couple of days in order that enough data is collected to make a statistically significant decision over which variant is better. Keeping the number of tracked variables constrained ensures valid results.
With A/B testing, each test generates new data about whether a given change has been more effective or not. If it has, then it can be included in the app and consequently forms part of an improved design.
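One implementation detail worth noting: each user should be assigned to a variant deterministically, so they always see the same version of the screen across sessions. A minimal Kotlin sketch of this idea (the bucketing-by-hash approach and function names are illustrative, not tied to any particular analytics SDK):

```kotlin
// Deterministically bucket a user into one of `variants` buckets.
// The same userId always lands in the same bucket, so a user never
// flips between versions of the screen mid-experiment.
fun variantFor(userId: String, variants: Int): Int =
    Math.floorMod(userId.hashCode(), variants)

// Conversion rate for one variant: users who completed the goal
// (e.g. tapped the CTA) divided by users who saw that variant.
fun conversionRate(exposed: Int, converted: Int): Double =
    if (exposed == 0) 0.0 else converted.toDouble() / exposed
```

Once the experiment has gathered enough data to be statistically significant, comparing `conversionRate` per variant tells you which CTA won.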
Multivariate Testing (MVT)
This is the more complex brother of A/B testing. MVT uses the same core mechanism as A/B testing, but compares a higher number of screen items, and therefore reveals more information about how the screen items interact with one another.
This allows us to measure the effectiveness that each combination of the design has on the given goal. After the test has been run, the variables on each screen variation are compared to each other, and to their performance in the context of other versions of the test/screen.
What emerges is a clear picture of which screen version is best performing, and which screen items are most responsible for this performance. For example, varying a screen footer may be shown to have very little effect on the performance of the screen. However varying the length of the sign-up form could have a huge impact.
The big drawback of MVT is that it requires much more traffic to reach statistical significance than A/B testing.
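The traffic requirement grows quickly because every combination of items becomes its own screen version: three headlines, two CTAs and two form lengths already give 3 × 2 × 2 = 12 versions to split traffic across. A small Kotlin sketch that enumerates those combinations (the item names are illustrative):

```kotlin
// Cartesian product of each screen item's variations: one resulting
// list per full screen version under test.
fun combinations(items: List<List<String>>): List<List<String>> =
    items.fold(listOf(emptyList<String>())) { acc, variations ->
        acc.flatMap { combo -> variations.map { combo + it } }
    }
```

The size of `combinations(listOf(headlines, ctas, formLengths))` is the number of versions you must gather statistically significant traffic for, which is why MVT needs so much more traffic than a two-variant A/B test.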
Usability Testing
Usability Testing is the process of watching users, use your app. Users will be asked to complete a given task whilst being observed. Typical tasks could be trying out the app’s key feature or completing a user sign-up screen.
During the process, it’s crucial that the observer does not prompt the user and that the user is encouraged to speak their mind. This will allow the observer to see if the user encounters any problems or experiences any confusion along the way. If multiple users encounter similar problems, then a usability issue has been found that needs to be fixed.
Usability Testing is sometimes considered an expensive process. However, as Jakob Nielsen has explained, testing the app on a small number of users (say three to five) can often be enough to identify any issues.
Conclusion
These are powerful techniques that when used the correct way, can enable your team to deliver incremental improvements and increase your app’s success.
The Return On Investment for app optimisation can be massive. Even small changes to say a landing page or sign-up page can result in significant increases in conversion rate, sales or another business goal.
If you liked this and want more:
Heart it, comment on it, and/or follow me. You can find out more about me on my website mikesmales.com
Why We Should All Get Selfie-Sticks
Or at least reconsider asking someone to take our photo
After standing in line for 30 minutes at what felt like a ride to Disney World, the doors finally opened and my group was ushered into a cavernous room. Painted directly on the wall in front of me was the Da Vinci masterpiece, The Last Supper.
It’s a surreal experience facing a piece of artwork you’ve only seen in books and movies. I felt a bit bashful, almost like I was meeting a celebrity. The fact that I had to reserve my ticket weeks in advance, and that each visit was strictly limited to 15 minutes only enhanced my “I-can’t-believe-I’m-here” state.
I approached the artwork and stared.
In a rare moment of suspended cynicism, a swell of spirituality washed over me. I felt certain that if someone born 500 years ago could create artwork such as this, surely the world’s 7.5 billion people could eventually figure out how to end world hunger and stop all the ice caps from melting.
Just as my emotions were cresting into some sort of existential epiphany that would unlock the secrets of the universe to me, I felt three taps on my shoulder. When I turned around, a middle-aged woman with short strawberry-blonde hair and a giant DSLR around her neck stood in front of me.
“Can you take a photo of me?”
She framed her request as a question, but by the way she was taking the DSLR off her neck and handing it in my direction, it was clear that her request was actually an order. I accidentally blinked and found myself with a camera in my hands and my “model” showing me the camera’s zoom features.
For a fleeting second, I considered giving the camera back and walking away.
But I didn’t.
Maybe because I have the utmost respect for social decorum; most likely because I hate confrontation.
I took two pictures of the woman posing with the Renaissance masterpiece before wordlessly handing the camera back. She took a quick look at the photos, seemed satisfied, and then wandered off to a different corner of the room.
I tried reclaiming my state-of-mind from a moment ago, but it was lost. Within a few minutes, my group’s time was up. The church’s previously docile docents turned into resolute bouncers that politely, but firmly escorted us out.
On my walk towards dinner, I was fuming. Not having a witty retort at the moment of the incident meant I now had the pleasure of coming up with all the clever things I could have said. The more I obsessed, the more upset I got with myself for letting such a minor disturbance sour my entire experience.
But why was I so bothered? I’ve taken dozens of photos for random strangers before without having given it a second thought. I’ve even occasionally asked other people to take photos of me. Why did this interaction feel so much more intrusive? And have I unknowingly ruined other people’s metaphysical moments?
These were the questions I pondered during my remaining stay in Milan — triggered by the mobs of tourists taking selfies with the Duomo, Sforza Castle, and every plate of pasta they could get their hands on.
What I realized is that when we ask someone to take our photo, we implicitly make an assumption. We assume that the value of the photo to us outweighs the inconvenience to the person taking the photo.
We assume that the value of the photo to us outweighs the inconvenience to the person taking the photo.
And most of the time, we’re right. People usually do have 30 seconds to help a love-struck couple commemorate their trip to Paris, celebrate a family’s first vacation to New York City, or capture a backpacker’s solo trip to Argentina. Taking someone’s picture is typically a minor inconvenience compared to the joy it brings to the person asking for the photo.
But this assumption is not always true. Sometimes asking for a second of someone else’s time has unforeseen costs. The stranger we just recruited as our temporary photographer could have been in the midst of a deeply personal conversation with a friend, undergoing a spiritual revelation, or even just enjoying a moment of self-reflection. Although we can avoid certain people based on outward appearance — the single mom trying to wrangle three kids, the lovey-dovey couple with their arms around each other — and target our requests to people looking bored or on their phones, there’s no definitive way of knowing other people’s internal state-of-mind. Someone’s entire life could have revolved around reaching this moment in time and we just disrupted their experience for the sake of our Insta Story.
Obviously, I’m dramatizing. But my experience at The Last Supper made me reflect on the favors we ask of other people. The person we ask to take our picture is compelled to say “yes” because that’s what society expects. As a result, it’s easy to take the favors we ask for granted because we tell ourselves the other person agreed to it — even if they didn’t feel like they had a choice. We tend to forget that every favor, no matter how small, comes at a cost to someone. And while that cost is usually outweighed by the benefit the favor brings, there are still instances when it’s not the case.
So this raises an interesting question: if in the vast majority of cases, asking someone for a picture is a negligible disturbance, does this justify the rare instances where our request is a major disruption? At what point do we recognize that the cost of capturing our perfect moment isn’t worth the risk of ruining someone else’s?
I don’t know the right answer. And realistically, I don’t expect people to stop asking for pictures anytime soon. Even I’m not above wanting a new shot for the Gram from time to time. But my experience being on the wrong side of a favor made me appreciate that asking someone for a photo comes at a real cost. Another human being is giving up time from their life to make my moment more special.
So before I ask someone to take my next photo, I suppose all I can do is spend an extra second asking myself whether the photo is really worth it. Will the photo truly capture a special moment I cherish, or does the photo just check a box in my mind? Maybe sometimes I’ll think the photo is meaningful enough to tap the stranger on the shoulder. But maybe occasionally, I’ll decide I don’t need the photo after all.
Or maybe I just need to invest in a selfie-stick.
Other Articles You May Enjoy
If you liked this article, visit LateNightFroyo.com to read about topics that spark conversations around love, life, and more.
When is the Right Time to Show up to a Party?
How Do You Get Out of Going Out?
How Young is Too Young to Date?
31 Important Things to Do to Make the Best Use of Your Time During a Pandemic

The list below is ordered by the “don’ts” first, followed by the remaining categories in ascending order.
Don’t
1. Don’t panic
Some people simply have a lower brain that is “stronger” than their higher brain, resulting in difficulty controlling their emotions when in fear. If you know someone who’s panicking, don’t judge them, help them.
The measures taken currently (lockdowns and shutdowns) are the right measures to prevent the exponential spread of the virus. This is temporary. Everything will get back to (a new and probably improved) normal.
How to not panic:
Use deep breathing
Talk to someone calm but aware of the situation
Use muscle relaxation techniques
Picture your happy place and see yourself there a few months from now
Additional resources:
2. Don’t hoard everything you can get your hands on
People who hoard things they don’t need are robbing it from people who need it. I came back early from vacation to an apartment with no food or supplies and with no easy way to get anything.
The less resourceful people may starve and lack supplies for proper hygiene. Be kind. You wouldn’t want to be in such a situation. Share the goods. We’re all in this together.
How to not hoard:
Ask yourself: Do I have enough for three weeks? If so, the rest is extra.
Shop at Amazon, where they restricted to a maximum of 2 of each item per customer
Adopt the above mentality if going to a grocery store (that still has stock)
Don’t think about profiting from selling rare items. Kindness feels MUCH better than greed.
Additional resources:
3. Don’t be glued to the news
Do you really need to know the number of cases for all countries and the names of people affected? Be aware of the situation and its latest developments, but realize that news is on repeat. The more you hear the same thing, the more it sinks into your brain. That’s how you start to panic.
How to not be glued to the news:
Turn the TV on to watch the news only for 30 minutes per day
Or don’t watch the TV and read the news online instead. But again, limit yourself to 30 minutes
Ask someone to give you daily updates
Only follow things by the CDC, John Hopkins, WHO, or other reliable sources
4. Don’t fuel your bad habits
If you’re quarantined at home and not working, view this as an opportunity to catch up on things you’ve always wanted to do but didn’t take the time (more on that throughout this article).
Don’t let yourself get back to your bad habits. You’ll realize that if your time isn’t occupied with a primary activity, it’s easy to fall off your good habits and resort back to your bad ones.
How to not fuel your bad habits:
Track your habits
Find activities to fill your time (see Activities section next)
Keep your regular morning and evening routine
Get yourself a Pavlok
Additional resources:
5. Don’t sit around and mope
There’s really only one important thing to do during a pandemic: manage your time productively. With the word “productive”, I don’t necessarily mean work, I mean activities that do yourself and your loved ones some good. For example, entertainment is a productive use of your time. All of the above are not.
How to not sit around and mope:
How to Write a TCP Port Scanner in Go

Say goodbye to misconfigurations on your server by writing your own TCP scanner in Go.
Photo by Thomas Jensen on Unsplash
In the toolbox of any pen tester, there is an app that allows them to detect open ports on a given server. Thanks to such an app, they can list all network entry points available on the system. These entry points can be open doors for attackers and this is why they need to identify them early in the process.
The most famous TCP port scanner is a tool called nmap. You might have already used it. This is a complex piece of software. In this article, we will build a simpler version in Go.
The TCP protocol in theory
First, let’s review the basics of the TCP protocol. If you feel comfortable with it, feel free to skip this section and go directly to the hands-on part 😸
First, TCP stands for Transmission Control Protocol. Sometimes, we refer to it as the TCP/IP protocol because it is a complement to the Internet Protocol (IP). The combination of the two provides a way for two applications to communicate together. These applications may run on two different hosts but they must be on the same IP network.
Nowadays, the TCP protocol is everywhere as the Web is built on top of this transport layer.
The TCP protocol in practice
In order to write our port scanner, we need a little bit more knowledge about how this protocol is implemented. For our use case, we are only interested in the connection process, what we call the handshake process. Thanks to this process we will determine if the port is open, closed, or filtered.
First, let’s see what happens if the port is open.
TCP handshake process when the port is open. Credit: Icons by freepik and Linector
This is a three-way handshake.
1. The client sends a SYN (synchronize) packet which contains a sequence number A. This sequence number is randomly generated and its purpose is to identify each byte of data. Thus, we can keep track of the order in which the packets are exchanged and re-order them in case of failures.
2. The server sends a SYN-ACK packet. ACK stands for acknowledgment. Along with this packet, it sends back the sequence number incremented by one, so A + 1, and a new random number B.
3. The client finishes with an ACK, an acknowledgment of the server’s response, that contains the A + 1 and B + 1 numbers.
Full-duplex communication is set up.
What if the port is not open? Then, there are two scenarios. Either the port is closed or there is a firewall rule that prevents the client from reaching its target. The port is filtered.
TCP handshake process when the port is closed or filtered. Credit: Icons by freepik and Linector
So, when the port is closed, the process normally starts but the server replies with an RST (reset) packet, which means that the port is closed.
But, if you do not receive a response from the server and you observe a timeout, this means that access to this port is blocked because of firewall rules. Firewall rules are a way to select who can access which resource. These rules affect inbound traffic as well as outbound traffic.
Let’s see these concepts in action. We’ll use a tool called tcpdump. This is a simple CLI app that allows you to monitor your network traffic. You may also consider using Wireshark and that’s fine, it also does the job well.
Network traffic monitoring with tcpdump
Note: The following examples have been tested on macOS and might not work on other OS. However, it is essentially the same for other operating systems.
First, you need to know your network interfaces. On macOS, open a terminal session and type:
networksetup -listallhardwareports
Select the interface that is between your computer and the Internet. In my case, it is en0. Feel free to replace it in the following examples.
Then, for a quick test, launch the following command:
sudo tcpdump -i en0 tcp
If you chose the correct interface, you should be able to see all packets that go through this interface. Fascinating, isn’t it?
You can have fun getting through all these packets and explore some of the options available. You should see keywords like ack, seq, and so on. This is a good exercise to get familiar with the concepts we introduced above.
Time to Go!
Now we are familiar with the TCP protocol, it is time to start coding! First, let’s take a step back and think about how we can programmatically determine if a port is open.
Based on your knowledge of the TCP protocol, you may think of sending a request and waiting for the server’s response. It is indeed the way to go! If there is no response, then it is because of firewall rules. Otherwise, we will have to examine the response and identify the nature of the segment (syn-ack or rst?).
UML Activity diagram TCP port scanner.
In Go, there is a package in the standard library that already does this job for us. It is the net package.
For the sake of this tutorial, we will use a playground provided by the Nmap Security Scanner Project and Insecure.org. Here is the first version of our program.
Run it! You should see two open ports. Have you noticed how long it took to scan every port? 😲 This is due to the fact that our program scans one port after the other. We can do better by executing these tasks concurrently. Notice that I used the word “concurrent” instead of “parallel” as they have different meanings, especially in Go. See this talk by Rob Pike, one of the creators of Go, for more details.
Fortunately, concurrency is one of the strengths of the Go programming language. Indeed, Go was designed with concurrency in mind, and its creators made it easy to implement.
In particular, we can leverage goroutines and channels.
Goroutines are often described as lightweight threads. They allow you to execute functions concurrently. The good news is that you have already used them without knowing it! Indeed, every Go program has at least one goroutine: the main goroutine. So, when we ran the first version of our TCP scanner, one goroutine was automatically created when the process began.
We could encapsulate all the code in the for loop in a goroutine. But we might encounter inconsistencies in the results, because once the for loop is over and all the goroutines are initialized, the program will exit while packets are still on their way.
This is where channels can be helpful. Channels are used to facilitate communication between goroutines. They can be seen as streams of information. Indeed, values may be passed along the channel or read from it.
As I was looking for documentation, I found the solution from the book Black Hat Go to be quite elegant.
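The Black Hat Go solution itself isn't reproduced here; below is my own sketch of the same worker-pool idea. A fixed number of goroutines consume port numbers from a jobs channel and push results into a results channel, and the main goroutine drains exactly one result per port, so it cannot exit while probes are still in flight. The 100-worker count and all function names are assumptions on my part.

```go
package main

import (
	"fmt"
	"net"
	"sort"
	"time"
)

// scanPorts checks every port (all assumed >= 1) using a fixed pool
// of worker goroutines and returns the ports for which check
// returned true. The check function is a parameter so the
// concurrency pattern can be exercised without touching the network.
func scanPorts(ports []int, workers int, check func(int) bool) []int {
	jobs := make(chan int, workers)
	results := make(chan int)

	// Start the worker pool: each worker reads ports from jobs and
	// reports exactly one result per port (0 acts as a "closed" sentinel).
	for w := 0; w < workers; w++ {
		go func() {
			for p := range jobs {
				if check(p) {
					results <- p
				} else {
					results <- 0
				}
			}
		}()
	}

	// Feed the jobs channel from a separate goroutine so the main
	// goroutine can start draining results immediately.
	go func() {
		for _, p := range ports {
			jobs <- p
		}
		close(jobs)
	}()

	// Drain exactly one result per port: the program cannot exit
	// while probes are still in flight.
	var open []int
	for range ports {
		if p := <-results; p != 0 {
			open = append(open, p)
		}
	}
	sort.Ints(open)
	return open
}

func main() {
	tcpCheck := func(port int) bool {
		conn, err := net.DialTimeout("tcp", fmt.Sprintf("127.0.0.1:%d", port), 500*time.Millisecond)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}
	ports := make([]int, 1024)
	for i := range ports {
		ports[i] = i + 1
	}
	fmt.Println("open ports:", scanPorts(ports, 100, tcpCheck))
}
```

Separating the check function from the pool also makes the concurrency logic easy to test in isolation, which is one reason this shape of solution reads as elegant.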
That’s it! 🎉 We now have a highly efficient port scanner!
This solution may be a little difficult to understand if you are not familiar with concurrency principles. But it’s ok! Concurrency is hard for many reasons. As humans, we tend to think sequentially and this is why most of the time developers struggle with concurrency. At the bottom of this article, you will find some references that will help you better understand this new way of thinking.
Conclusion
To summarize, we first discovered the TCP protocol internals. We used this knowledge to build a naive TCP port scanner and then iterated on this first version leveraging Go concurrency features. Finally, we got an efficient tool but there still is room for improvement. For example, we could add a CLI layer to interact with our program directly from our terminal. It would also be interesting to profile our program to detect performance bottlenecks (is 100 workers the best choice?). I leave that for another blog post 😊
I hope you enjoyed reading this tutorial as much as I enjoyed writing it.
Resources

Source: https://medium.com/devops-dudes/how-to-write-a-tcp-port-scanner-in-go-d436e48fde87 (Benoît Goujon, 2020-06-20)
Multicollinearity — How does it create a problem?

During regression analysis, we check many things before actually performing the regression. We check whether the independent variables are correlated, we check whether the features we are selecting are significant, and we check whether there are any missing values and, if so, how to handle them.
First, let’s understand what Dependent and Independent Variables are —
Dependent variable: the value that has to be predicted during regression, also known as the target value. Independent variables: the values we use to predict the target value (the dependent variable), also known as predictors.
If we have an equation like this
y = w*x
Here, y is the dependent variable, x is the independent variable, and w is the coefficient (weight).
We’ll see later how it is detected but first, let’s see what problem will be there if variables are correlated.
Understanding Conceptually —
Imagine you went to watch a rock band's concert. There are 2 singers, a drummer, a keyboard player, and 2 guitarists. You can easily differentiate between the voices of the singers, as one is male and the other is female, but you seem to have trouble telling who is playing guitar better.
Both guitarists are playing the same tone, at the same pitch, and at the same speed. If you could remove one of them, it wouldn't be a problem, since both are almost the same.
The benefit of removing one guitarist is cost-cutting and fewer members in the team. In machine learning, it is fewer features for training which leads to a less complex model.
Here both guitarists are collinear. If one plays the guitar slowly then another guitarist also plays the guitar slowly. If one plays faster then other also plays faster.
If two variables are collinear that means if one variable increases then other also increase and vice-versa.
Understanding Mathematically —
Let’s consider the equation
Consider A and B are highly correlated.
y = w1*A + w2*B
The coefficient w1 is the increase in y for every unit increase in A while holding B constant. But in practice this is not possible, since A and B are correlated: if A increases by a unit, then B also increases by some amount. Hence, we cannot measure the individual contribution of either A or B. The solution is to remove one of them.
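A small numpy illustration of this (entirely synthetic data of my own construction): with two nearly identical predictors, ordinary least squares can still recover their combined effect, but the individual coefficients become meaningless and unstable under tiny changes to the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=n)
B = A + rng.normal(scale=1e-6, size=n)  # B is almost exactly A
X = np.column_stack([A, B])

def fit(y):
    # Ordinary least squares on [A, B]; returns the two coefficients.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

y = 3 * A + rng.normal(scale=0.1, size=n)
w1, w2 = fit(y)
w1b, w2b = fit(y + rng.normal(scale=0.1, size=n))  # tiny extra noise

# The individual coefficients swing wildly between the two fits,
# while their sum stays close to the true combined effect of 3.
print(w1, w2, w1 + w2)
print(w1b, w2b, w1b + w2b)
```

Only the sum w1 + w2 is estimated reliably, which is exactly why we cannot interpret either coefficient on its own when A and B are collinear.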
Checking for Multicollinearity —
There are 2 ways multicollinearity is usually checked:
1. Correlation matrix
2. Variance inflation factor (VIF)
Correlation Matrix — A correlation matrix is a table showing correlation coefficients between variables.
We are not going to cover how the correlation matrix is calculated.
create_correlation_matrix
I consider values above 0.75 as highly correlated.
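The create_correlation_matrix snippet above refers to a gist that isn't reproduced here; a minimal pandas sketch of the same idea might look like the following. The helper name and the made-up data are my assumptions; only the 0.75 cutoff comes from the text.

```python
import numpy as np
import pandas as pd

def correlated_pairs(df: pd.DataFrame, threshold: float = 0.75):
    """Return (col_i, col_j, corr) for every pair of columns whose
    absolute Pearson correlation exceeds the threshold."""
    corr = df.corr().abs()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                pairs.append((cols[i], cols[j], corr.iloc[i, j]))
    return pairs

rng = np.random.default_rng(42)
a = rng.normal(size=100)
df = pd.DataFrame({
    "a": a,
    "b": a * 2 + rng.normal(scale=0.1, size=100),  # nearly a copy of a
    "c": rng.normal(size=100),                     # independent
})
print(correlated_pairs(df))  # only the ('a', 'b', ...) pair should appear
```

For each flagged pair, you would then drop one of the two columns before fitting the regression.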
Variance Inflation Factor — Variance inflation factor (VIF) is the quotient of the variance in a model with multiple terms by the variance of a model with one term alone. It quantifies the severity of multicollinearity in an ordinary least squares regression analysis. VIF value can be interpreted as
1 (non-collinear)
1–5 (moderately collinear)
>5 (highly collinear)
Features with a VIF value above 5 are removed.
VIF Python
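The VIF Python snippet above is also a gist that isn't reproduced here. Below is a hedged, numpy-only sketch that computes VIF straight from its definition, 1 / (1 - R²), by regressing each column (plus an intercept) on all the others; the function name and the synthetic data are mine.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of X, computed as
    1 / (1 - R^2) when regressing that column on all the others."""
    n, k = X.shape
    out = np.empty(k)
    for i in range(k):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        out[i] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + rng.normal(scale=0.2, size=500)  # collinear with a
c = rng.normal(size=500)                 # independent
X = np.column_stack([a, b, c])
print(vif(X).round(2))  # a and b should be well above 5, c close to 1
```

In practice you would compute this iteratively: drop the feature with the highest VIF, recompute, and repeat until every remaining VIF is below your cutoff.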
Conclusion —
Multicollinearity can significantly reduce the model’s performance and we may not know it. It is a very important step during the feature selection process. Removing multicollinearity can also reduce features which will eventually result in a less complex model and also the overhead to store these features will be less.
Make sure to run the multicollinearity test before performing any regression analysis.

Source: https://towardsdatascience.com/https-towardsdatascience-com-multicollinearity-how-does-it-create-a-problem-72956a49058 (Gagandeep Singh, 2019-08-09)
10 Human Skulls That Taught Us Something About Humanity
The Science of Interpreting Human Bodies From Centuries Past
Skulls…when we’re kids, they terrify us and fill us with horror; when we’re adults, they are the remnants of a person who once lived, a person like us, with a story to tell. Their historical use in art and other expressive mediums span the length of human history, but in the past few centuries, we’ve begun to turn to skulls to teach us about the world around us and the people who lived before us. Besides being merely a grim find, skull discoveries have often significantly advanced our understanding of biology and the history of the human race.
The Chalcolithic Skulls
The Chalcolithic skulls are a group of skulls found in separate graves in Russia, totaling 35 people across 20 different gravesites and dating back to the Chalcolithic period of human history. The Chalcolithic, a period not often discussed, is when metallurgy and pottery were primarily done with copper; it is sometimes referred to as the Copper Age and lasted from around 5,000 to 3,000 B.C.E. While finding bodies in graves from this period isn't unusual in itself, what makes these skulls unique is what was observed on them: several of the skulls had holes which had been drilled into them, likely while the people were still alive. These particular skulls were found in a grave containing three men and two women, as well as two children, one a teenage girl and the other an infant between one and two years old.
The skulls of two men and two women showed a single bore through them which indicated that they’d been trepanned, while the third man had an indention which had been carved into his skull, but it did not penetrate. This prompted an investigation that led to the discovery of many more such skulls in Russia. Further investigation revealed that not only had many of these skulls healed very well, with others healing for a couple of weeks before the person died, but it also appears that the bulk of these trepanations were performed on perfectly healthy individuals. This means not only is trepanation a lot more widespread than we’d previously believed, having never have discovered trepanation anywhere near the site in Russia but that it was a lot older than formerly thought. It’s still uncertain why these prehistoric people chose to trepan perfectly healthy humans, however. Unlike the ancients who performed trepanation as a form of surgery, these people seemed to have been doing it for some other reason, or perhaps to treat a mild illness like headaches. Nonetheless, our prehistoric ancestors were a lot more inventive than we’d previously given them credit for.
Australopithecus: Ethiopia
The next skull on the list was found in Ethiopia, in 2016, and was in remarkably good condition considering its age and freakishly unusual shape. Just how old was the skull? 3.8 million years old, and it belonged to our distant relatives, a human species similar to ours called Australopithecus. Australopithecus anamensis lived between 3.9 and 4.2 million years ago, and the site was only 34 miles from where the remains of the famous Lucy, another early human, were found. The skull was in such good shape that researchers were able to reconstruct the face of the person to whom the skull belonged, giving us an astonishing glance into our distant past. The skull not only shaped our perception of our early ancestors but also our conception of evolution. It was previously believed that Australopithecus anamensis died off and gave rise to a new species of humans, Australopithecus afarensis, but the dating of the skull compared with bone fragments found at nearby sites has shown that several species of humans lived concurrently. This challenges the long-held linear view of human evolution, that one species gave rise to the next, showing that multiple species lived side by side, in competition with one another for space and resources. The line of human evolution is starting to look a lot more non-linear as we discover more and more remains.
Homo Sapiens: Greece
Two skulls were found entombed in stone in Apidima, Greece, all the way back in 1978. They were encased together in a tiny stone box and it seemed that the skulls had been buried together and were of the same species of humans, Neanderthal. One of the faces was heavily degraded due to its age, being about 200,000 years old, and the other one had no face at all. But recent research would come to find out that one of the skulls was 170,000 years old while the other had been buried there separately, being 210,000 years old, and even more strikingly, only one of them was Neanderthal and the other was a 210,000-year-old Homo sapiens skull. This is incredible for three reasons, the first of which is that this is the oldest Homo sapiens fossil found in Europe by a long shot, with the next oldest dating back to 40,000 years ago.
It also demonstrates that, seeing as the Neanderthal skull was 170,000 years old and the Homo sapiens skull was 210,000 years old, Homo sapiens and Neanderthals lived side by side at some point in human history. Lastly, this greatly pushes back the date at which Homo sapiens are believed to have first left Africa. Homo sapiens not only spread out earlier than we'd previously thought, but managed to make it a lot farther, into Neanderthal territory.
Tenochtitlan Skulls: Mexico
In 2015, archeologists discovered something astonishing in Mexico City, Mexico: a mass grave of human skulls. The skulls are some of the remains of Aztec culture who inhabited the area long before Mexico was even a country, and it’s almost certain that these skulls were used in human sacrifices, a practice which was prominent in Aztec culture. What’s most shocking about the find of 650 human skulls is the fact that they were coated with lime and placed together in an extremely organized fashion. As it turns out, this was once a massive tower of skulls fastened together, and it once stood near an Aztec church dedicated to the god Huitzilopochtli, the god of the sun, war, and human sacrifice. This behemoth of a tower was 6 meters wide and didn’t contain the skulls of just young men who were once warriors, but women and children also. This site confirmed what had been documented in the literature of the Conquistadors for centuries, the existence of massive towers and other objects made from human skulls, reports that were often doubted.
Homo Naledi: South Africa
Homo naledi is an extinct species of humans who had an interesting blend of features belonging to both anatomically modern humans and their ancestral primates. Scattered remains of these humans have been found in South Africa, primarily in the Rising Star cave complex. The reason for this? Homo naledi would carry their dead and bury them deep into the caves of the earth, probably as a ritual. They had smaller but rounder heads and a prominent brow ridge, but mainly, the remains found spanned a range of ages, from very young to very old, meaning that at this point in human history, Homo naledi was already beginning to care for the elderly, supporting families through multiple generations.
Yes, compassion, care, devotion, and solidly-bonded tribes go back a very, very long time. Anyone who suggests that humans are fundamentally selfish and warlike only isn’t telling you the whole story, it seems.
Archanthropus of Petralona: Greece
In 1960, researchers discovered an ancient skull inside a cave in Greece, a skull that would, as time went on, greatly challenge our running theory of evolution. At first, it was assumed to be just a Neanderthal skull, the Neanderthals having been a people that lived approximately 120,000 years ago and, as we've discovered, alongside early anatomically modern humans. But further dating would reveal the skull to be much older, between 200,000 and 350,000 years old. Some estimates have even suggested that the skull might be 700,000 years old. Subsequently, researchers found additional remains from various species and human teeth dating within these older time ranges, topping out at 800,000 years old. All of this severely damages the theory that all humans came from Africa, especially flying in the face of the theory that humans left Africa beginning around 120,000 years ago, with actual mass migration coming much later. Could it be possible that we're missing links in the chain of evolution? Or is it possible that several, slightly different species of humans evolved concurrently on different continents? Time will tell.
Naia: Mexico
Somewhere between 12,000 and 13,000 years ago, a young, teenage girl was wandering in a dark cave in modern-day Mexico, likely searching for water. At some point, she must have made a misstep in what we can only believe was total darkness and plunged to her death. In 2007, after the cave structure had significantly changed, divers in the waters which had come to fill the cave found an amazing discovery: the girl’s perfectly intact skull sitting on the bedrock of the cave. A diving team painstakingly removed the skull with tremendous care to preserve it as much as possible.
The girl would come to be named Naia, and she was among the earliest humans to arrive in the Americas. A question that has baffled researchers for some time is why the earliest Americans, who crossed into the Americas from Russia over the Bering Strait, look so different from modern Native Americans. Some have sought to explain this by suggesting that there were many migrations rather than one. But with Naia's skull so well preserved, researchers were able to perform the oldest facial reconstruction in the Americas, as well as a comprehensive DNA analysis. The results were groundbreaking in understanding early Americans, showing that Naia shared the facial features and bone structure of the earliest Americans, which differ from those of modern Native Americans, but shared the same DNA as modern Native Americans, showing that there was only one migration across the Bering Strait and into the Americas.
Denisovan: Siberia, Russia
Back in 2010, researchers stumbled upon a tiny fragment of a single bone inside of a cave in Siberia and a subsequent DNA analysis turned up something startling: a proposed new race of humans called Denisovans, after the Denisovan cave in Siberia in which the sliver of bone was found. Finding more remains to confirm this theory, however, proved to be quite difficult. Then, in 2017, two skulls were discovered in a cave in Eastern China, and though researchers couldn’t bring themselves to utter the word, the 105,000–125,000-year-old skulls are thought to belong to Denisovans. These skulls looked neither like Neanderthal nor modern humans, sharing some traits with each, and are thought to be distant cousins of Neanderthals. Ultimately, these skulls allow us to look into the past and give us an idea of what some humans looked like at a pivotal time on Earth, the Ice Age, when they lived. They had features like Neanderthals, like a strong eyebrow ridge, but they also have very big skulls suggesting a large brain capacity, in fact, the brain capacity of the skull even tops most modern humans. Could our brains have possibly gotten smaller? Time will tell what stories Denisovans have to tell and what more they will teach us about ourselves.
Pakal Na Trophy Skull: Belize
Ancient Mesoamerican civilizations, like the aforementioned Aztec and the Mayans, had a habit of skull collection, but not just any old boring skulls would do, it was a sign of prestige to have decorated skulls, or “trophy skulls” to proclaim your social status with. These skulls were often decorated with elaborate dressing and even painted or carved for effect. Turn the skulls of your enemies into decorated symbols of power and you’ve got yourself one hell of a method for scaring off any militant rivals who may want to challenge that power.
In 2019, fragments were found of such skulls at a site in Belize called Pakal Na, including fragments of jaws and cranium, each carved with holes punched out that were likely used to place feathers into for decoration. These skulls, in combination with others, are helping to guide us to what caused the curious collapse of the Mayan civilization. While it’s certain that environmental factors played a role, these skulls had an important story to tell: the imagery was that of the Mayan culture that resided much farther north, in the Yucatan region, suggesting that the Mayan empire went through a sort of civil war. Why else would a northern warrior be found so far south as Belize? It is quite possible that warfare from within the civilization was what caused the collapse of the Mayan empire.
Misliya Cave: Israel
More evidence that changed our view of when humans left Africa came out in 2018, when a jawbone and several teeth were found in the Misliya Cave, in Israel, suggesting that humans had left Africa much earlier than previously thought. These fragments are estimated to date from between 177,000 and 194,000 years ago, which tells us that humans likely left Africa 50,000 years earlier than previously expected, which is quite the chunk of time. Other remains of modern humans have been found and dated from about 90,000 to 120,000 years ago, as we've discussed, and this confirms those finds. The discovery also included stone tools that help tell us how advanced these people were.

Source: https://medium.com/unusual-universe/10-human-skulls-that-taught-us-something-about-humanity-c74b878d237b (Joe Duncan, 2019-11-13)
How to Never Finish Anything You Start

I'm the queen of abandoned projects. Learn from my mistakes.
Image by TeroVesalainen on Pixabay
I love a good project. I love taking on something new, getting all set up, diving in.
But eventually the newness wears off. The project isn’t fun anymore. And it’s ultimately abandoned.
I’m the queen of abandoned projects.
I still have picture frames waiting to be hung in my apartment — even though I moved in six months ago. I have a paint-by-numbers picture still waiting for the last bubbles to be filled in. I have several unfinished novels sitting on my hard drive.
I’m not proud of it, but I rarely finish anything I start. But why?
Is it in my DNA? Am I just lazy? And more importantly, how do I change? How do I become the person who commits instead of quits? What magical switch do I need to flip to finally make a change?
The internet is full of so-called experts on productivity, but I won’t pretend to be one. Because the truth is I haven’t the slightest clue how to make myself actually finish the projects I take on. I don’t know how to get more done in less time and I certainly don’t know the secret to success.
But maybe my lack of follow-through can teach you a thing or two. Maybe if you try and avoid everything I do, you’ll find out exactly what you need to do. So here it is, my three-pronged approach to never getting anything done.
Dive in hard
Oh that new project feeling. The excitement. The anticipation. The buildup. This is the honeymoon phase. I get a new idea, and all I see is the good. How fun it will be. How it will enrich my life.
So I jump right on in. I don’t think. I just jump. And forget about pacing myself. I become borderline obsessed, dedicating so much of my time and energy to the project.
But in the honeymoon phase, I have rose-colored glasses on. I don’t see the roadblocks and challenges ahead. I only have eyes for the moment in front of me. So it’s full-speed ahead — until I burn out.
Then it’s the inevitable slowdown, the end of the honeymoon. And finally, another project catches my eye, and this one is abandoned.
Wing it
When I’m jumping in on a new project, I don’t have time for lists or plans. I’m going with the flow, letting the mood strike me. I’m not setting goals or making timelines. I can’t be tied down.
But when that slowdown ultimately comes, the mood is no longer striking me. I have nothing to rely on but my willpower, and let’s face it. I don’t have any willpower to speak of.
So when I come across that first challenge, I don’t have any way to deal with it. This wasn’t supposed to happen. This was supposed to be all fun and games. So that challenge, that roadblock, it leaves me stumped. I don’t have the will to conquer it, so I abandon it and once again, move on to the next project.
Focus on perfection
When I start on a project, I have these grand ideas about it. How wonderful my novel will be. How beautiful my paint-by-numbers painting will turn out. But when I’m actually working on the project, it’s never as good as what I imagined in my head.
I don’t like to be mediocre. I’m always striving to be better. To be the best. But even when I think I’m giving my best, it’s not enough. So I think well if my project isn’t even good, why finish it?
Is an unfinished masterpiece better than finished mediocre work? Probably not, but for some reason, this is the way my mind works.
I’m trying to be better. To be more mindful about the projects I take on. To be realistic and not get ahead of myself. To keep going when the going gets tough. Maybe someday I’ll relinquish my crown and don a new one. Queen of completed projects sounds like a much better title to me. | https://mariaelharaoui.medium.com/how-to-never-finish-anything-you-start-2d9bd73d0924 | ['Maria Elharaoui'] | 2019-02-27 16:53:46.268000+00:00 | ['Life', 'Self', 'Self Improvement', 'Productivity', 'Life Lessons'] |
To the Owner of the Manuscript I Just Massacred.

Hey Milton,
Listen, before you freak out, just listen. This is for the best. What happened to your manuscript over the past few weeks, it had to be this way. You get that, right?
I’m only putting this out there because first-time authors freak out when they open up the document I sent back to them, the one with all of the tracked changes. It is usually a mass of pulp, dripping with red ink like freshly smashed roadkill. Of course, it’s not ink. Everything is digital — less mess.
But it’s still a mess. All those red lines of corrections and suggested edits and notes in the margin? Those are how we clean up the mess.
Yes, we. You and I together. That's why I'm here: to be your wingman, your teammate.
As the writer, you are fully invested in protecting the integrity of your story. As you should be. Hell, no one else is going to do it. As your editor, I’m here to guard the sanity of your readers. They are your audience, after all, the people you want reading your story.
I mean, you do want people to read this, right? Why else are we bothering? Those people are expecting the most out of your prolific, brilliant, storytelling-ass. They want top-notch delivery. Everything has to be in its place because they want to fall in love with your work and, in turn, with you.
Maybe this wasn’t what you were expecting. I get that. I have dozens of writers a year approach me with their freshly-printed manuscripts looking for an edit. I know they each expect a grammar and spelling check. No typos! That would be embarrassing.
I don’t even get to the grammar. Not when there are plot holes to fix. Not when characters have different hair colors from one chapter to the next (and not on purpose!). Not when every sentence and paragraph is structured the same. Forget grammar; we need to get the story right. We need to get your ideas in order.
If we can get the story right, then I will do a spell-check before this goes to press — pinkie promise.
Trust me; this is for the best. You’ve been married to this manuscript for months, even years, and you’re just too close to it to make the best decisions. You no longer see the flaws and the problems that have crept in. Sort of like how your spouse starts sneaking out little farts around the house, and eventually, you have no issue with full-blown, post-tikka masala flatulence in the bed you share as you both scroll through your phones before bed.
All I’ve done is show your readers the sexy, charming version of your spouse you fell in love with before the out-of-control gas and clipping-toenails-with-their-teeth monster showed up. You don’t see this stuff, but I do, which means the reader will.
Trust me on this.
I’ve spent the last 30 years reading — it is the backbone of my career. Every single year I put away dozens of books, hundreds of articles, and millions of text messages and headlines. There is so much crap out there on the internet that is half-baked and ill-executed it makes me want to scream, and more of it shows up every day! I’ve seen so much my grey matter is infinitely changed. I’m tuned into the bullshit. See enough of what the professionals put out, and you can smell when amateurs are trying to sneak through.
You’re not an amateur. I won’t let it happen on my watch.
I get it; you hate me. I won’t take it personally. History is riddled with writers who have grown to hate their editors. If you want a friend who will tell you what you want to hear, you might want to consult your agent.
For now, I’ll let you go and lick your wounds. There is a lot in the margins; take your time with it. There is no rush. Your readers will wait for something good — that’s how we keep them reading.
Within a week, we’ll either be laughing about all of this, or one of us will be dead — and I haven’t died yet.
With love and ink,
Your Editor

Source: https://medium.com/swlh/to-the-owner-of-the-manuscript-i-just-massacred-a50c9d18352e (David Pennington, 2020-09-02)
IValue: efficient representation of dynamic types in C++

Introduction
In traditional SQL systems, a column's type is determined when the table is created, and never changes while executing a query. If you create a table with an integer-valued column, the values in that column will always be integers (or possibly NULL ).
Rockset, however, is dynamically typed, which means that we often don't know the type of a value until we actually execute the query. This is similar to other dynamically typed programming languages, where the same variable may contain values of different types at different points in time:
$ python3
>>> a = 3
>>> type(a)
<class 'int'>
>>> a = 'foo'
>>> type(a)
<class 'str'>
Rockset's type system was initially based on JSON, and has since been extended to support other types as well:
bytes: taking a cue from Python, we distinguish between sequences of valid Unicode characters (string, which is internally represented as UTF-8) and sequences of arbitrary bytes (bytes)
date- and time-specific types (date, time, datetime, timestamp, microsecond_interval, month_interval)
There are other types that we use internally (and are never exposed to our users); also, the type system is extensible, with planned support for decimal (base-10 floating-point), geometry / geography types, and others.
In the following example, collection ivtest has documents containing one field, a, which takes a variety of types:
$ rock create collection ivtest
Collection "ivtest" was created successfully in workspace "commons".

$ cat /tmp/a.docs
{"a": 2}
{"a": "hello"}
{"a": null}
{"a": {"b": 10}}
{"a": [2, "foo"]}

$ rock upload ivtest /tmp/a.docs
{
"file_name":"a.docs",
"file_upload_id":"c5ccc261-0096-4a73-8dfe-d6db8b8d130e",
"uploaded_at":"2019-06-05T18:12:46Z"
}

$ rock sql
> select typeof(a), a from ivtest order by a;
+-----------+------------+
| ?typeof | a |
|-----------+------------|
| null_type | <null> |
| int | 2 |
| string | hello |
| array | [2, 'foo'] |
| object | {'b': 10} |
+-----------+------------+
Time: 0.014s
This post shows one of many challenges that we encountered while building a fully dynamically typed SQL database: how we manipulate values of unknown types in our query execution backend (written in C++), while approaching the performance of using native types directly.
At first, we used protocol buffers similar to the definition below (simplified to only show integers, floats, strings, arrays, and objects; the actual oneof that we use has a few extra fields):
message Value {
  oneof value_union {
    int64 int_value = 1;
    double float_value = 2;
    string string_value = 3;
    ArrayValue array_value = 4;
    ObjectValue object_value = 5;
  }
}

message ArrayValue {
  repeated Value values = 1;
}

message ObjectValue {
  repeated KeyValue kvs = 1;
}

message KeyValue {
  string key = 1;
  Value value = 2;
}
But we quickly realized that this is inefficient, both in terms of speed and in terms of memory usage. First, protobuf requires a heap memory allocation for every object; creating a Value that contains an array of 10 integers would perform:
a memory allocation for the top-level Value
an allocation for the array_value member
an allocation for the list of values (ArrayValue.values, which is a RepeatedPtrField)
an allocation for each of the 10 values in the array
for a total of 13 memory allocations.
Also, the 10 values in the array are not allocated contiguously in memory, which causes a further decrease in performance due to cache locality.
It was quickly clear that we needed something better, which we called IValue . Compared to the protobuf version, IValue is:
More memory efficient: while not as efficient as using native types directly, IValue must be small, and must avoid heap allocations wherever possible. IValue is always 16 bytes, and does not allocate heap memory for integers, booleans, floating-point numbers, and short strings.
Faster: arrays of scalar IValues are allocated contiguously in memory, leading to better cache locality. This is not as efficient as using native types directly, but it is a significant improvement over protobuf.
Most of Rockset's query execution engine operates on IValue s (there are some parts that have specialized implementation for specific types, and this is an area of active improvement).
We'd like to share an overview of the IValue design. Note that IValue is optimized for Rockset's needs and is not meant to be portable - we use Linux and x86_64-specific tricks, and assume a little-endian memory layout.
The idea is in itself not novel; the techniques that we use date back to at least 1993, as surveyed in " Representing Type Information in Dynamically Typed Languages". We decided to make IValue 128 bits instead of 64, as it allows us to avoid heap allocations in more cases (including all 64-bit integers); using the taxonomy defined in the paper, IValue is a double-wrapper scheme with qualifiers.
Internally, IValue is represented as a 128-bit (16-byte) value, consisting of:
a 64-bit field (called data)
a 48-bit field (called pointer, as it often, but not always, stores a pointer)
two 8-bit discriminator fields (called tag0 and tag1)
tag1 indicates the type of the value. tag0 is usually a subtype, and the meaning of the other two fields changes depending on type. The pointer field is often a pointer to some other data structure, allocated on the heap, for the cases where heap allocations can't be avoided; as pointers are only 48 bits on x86_64, we are able to fit a pointer and the two discriminator fields in the same uint64_t .
We recognize two types of IValues:
1. immediate values: those that can fit within the 16 bytes of the IValue itself (while still leaving room for the 16-bit tag):
all scalars that fit within 128–16 = 112 bits: NULL, integers, floating-point values, booleans, date, time, etc.
strings shorter than 16 bytes
2. non-immediate values, which require heap allocation.
tag1 has bit 7 clear (tag1 < 0x80) for all immediate values, and set (tag1 >= 0x80) for all non-immediate values. This allows us to distinguish between immediate and non-immediate values very quickly, using one simple bit operation. We can then copy, hash, and compare immediate values for equality by treating them as a pair of uint64_t integers.
Scalar Types
The representation for most scalar types is straightforward: tag0 is usually zero, tag1 identifies the type, pointer is usually zero, and data contains the value.
SQL NULL is all zeros, which is convenient (memset()ing a chunk of memory to zero makes it NULL when interpreted as IValue).
Booleans have data = 0 for false and data = 1 for true, tag1 = 0x01.
Integers have the value stored in data (as int64_t) and tag1 = 0x02.
And so on. The layouts for other scalar types (floating point, date / time, etc) are similar.
Strings
We handle character strings and byte strings similarly; the value of tag1 is the only difference. For the rest of the section, we'll only focus on character strings.
IValue strings are immutable, maintain the string's length explicitly, and are not null-terminated. In line with our goal to minimize heap allocations, IValue doesn't use any external memory for short strings (less than 16 bytes).
Instead, we implement the small string optimization: we store the string contents (padded with nulls) in the data , pointer , and tag0 fields; we store the string length in the tag1 field: tag1 is 0x1n , where n is the string's length.
An empty string has tag1 = 0x10 and all other bytes zero.
And, for example, the 11-byte string "Hello world" has tag1 = 0x1b (note the little-endian representation; the byte 'H' is first).
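The small-string layout is easy to verify by hand. The Python sketch below packs a string the way the post describes — 15 content bytes covering data, pointer, and tag0, null-padded, with tag1 = 0x10 | length as the final byte. The exact field order is our assumption, inferred from the little-endian note above; this is illustrative, not Rockset's code:

```python
def pack_small_string(s: bytes) -> bytes:
    """Pack a short string into the assumed 16-byte immediate layout."""
    assert len(s) < 16, "strings of 16+ bytes go out-of-line"
    # 15 content bytes (data + pointer + tag0), null-padded,
    # followed by tag1 = 0x10 | length.
    return s.ljust(15, b"\x00") + bytes([0x10 | len(s)])

hello = pack_small_string(b"Hello world")
assert hello[0] == ord("H")   # little-endian: 'H' is the first byte
assert hello[15] == 0x1B      # tag1 = 0x10 | 11
assert hello[15] < 0x80       # bit 7 clear => immediate value

empty = pack_small_string(b"")
assert empty == b"\x00" * 15 + b"\x10"  # all zero except tag1 = 0x10
```

Note how the immediate/non-immediate check from the previous section falls out for free: any small string's tag1 is in the 0x10–0x1f range, comfortably below 0x80.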
Strings longer than 15 bytes are stored out-of-line: tag1 is 0x80 , pointer points to the beginning of the string (allocated on the heap using malloc() ), and data contains the string length. (There is also the possibility of referencing a “foreign” string, where IValue doesn't own the memory but points inside a preallocated buffer, but that is beyond the scope of this post.)
For example, the 19-byte string “Rockset is awesome!” is stored in this out-of-line form.
Vectors
Vectors (which we call “arrays”, adopting JSON’s terminology) are similarly allocated on the heap: they are similar to vectors in most programming languages (including C++’s std::vector ). tag1 is 0x82 , pointer points to the beginning of the vector (allocated on the heap using malloc() ), and data contains the vector's size and capacity (32 bits each). The vector itself is a contiguously allocated block of capacity() IValue s ( capacity() * 16 bytes); when reallocation is needed, the vector grows exponentially (with a factor that is less than 2, for the reasons described in Facebook's fbvector implementation.)
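Packing size and capacity into the single 64-bit data field is ordinary bit twiddling. A small sketch — which half holds which value is our assumption, since the post only says 32 bits each:

```python
MASK32 = 0xFFFFFFFF

def pack_size_capacity(size: int, capacity: int) -> int:
    # Assumed layout: size in the low 32 bits, capacity in the high 32 bits.
    return ((capacity & MASK32) << 32) | (size & MASK32)

def unpack_size_capacity(data: int):
    return data & MASK32, (data >> 32) & MASK32

data = pack_size_capacity(3, 8)       # a vector holding 3 of its 8 slots
assert unpack_size_capacity(data) == (3, 8)
assert data.bit_length() <= 64        # still fits the 64-bit data field
```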
Hash Maps
Maps (which we call “objects”, adopting JSON’s terminology) are also allocated on the heap. We represent objects as open-addressing hash tables with quadratic probing; the size of the table is always a power of two, which simplifies probing. We probe with triangular numbers, just like Google’s sparsehash, which, as Knuth tells us in The Art of Computer Programming (volume 3, chapter 6.4, exercise 20), automatically covers all slots.
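A quick way to convince yourself of that exercise's claim: the sketch below (illustrative Python, not Rockset's C++) walks a power-of-two table with triangular-number steps and checks that every slot is visited exactly once.

```python
def triangular_probe_order(start: int, size: int) -> list:
    """Probe positions start + k*(k+1)/2 (mod size), for k = 0..size-1."""
    order, slot, step = [], start % size, 0
    for _ in range(size):
        order.append(slot)
        step += 1              # step sizes 1, 2, 3, ... give triangular offsets
        slot = (slot + step) % size
    return order

# For any power-of-two table size, every slot is covered exactly once.
for size in (1, 2, 4, 8, 16, 32):
    for start in range(size):
        assert sorted(triangular_probe_order(start, size)) == list(range(size))
```

The power-of-two restriction matters: with, say, a table of 6 slots the same walk revisits some slots and never reaches others, which is exactly why the table size is kept at a power of two.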
Each hash table slot is 32 bytes — two IValue s, one for the key, one for the value. As is usually the case with open-addressing hash tables, we need two special keys — one to represent empty slots, and one to represent deleted elements (tombstones). We reserve two values of tag1 for that purpose ( 0x06 and 0x05 , respectively).
The pointer field points to the beginning of the hash table (a contiguous array of slots, allocated on the heap using malloc() .) We store the current size of the hash table in the least-significant 32 bits of the data field. The tag0 field contains the number of allocated slots (as it's always a power of two, we store log2(number of slots) + 1 , or zero if the table is empty).
The capacity field (most significant 32 bits of data ) deserves further interest: it is the number of slots available for storing user data. Initially, it is the same as the total number of slots, but, as in all open-addressing hash tables, erasing an element from the table marks the slot as “deleted” and renders it unusable until the next rehash. So erasing an element actually decreases the table's capacity.
Performance
IValue gives a substantial performance improvement over the old protobuf-based implementation:
creating arrays of strings is between 2x and 7x faster (depending on the string size; because of the small-string optimization, IValue is significantly faster for small strings)
creating arrays of integers is also 7x faster (because we no longer allocate memory for every individual array element)
iterating over large arrays of integers is 3x faster (because the values in the array are now allocated contiguously)
Future Work
Even though Rockset documents are allowed to contain data of multiple types in the same field, the situation shown in the introduction is relatively rare. In practice, most of the data is of the same type (or NULL ), and, to recognize this, we are extending IValue to support homogeneous arrays.
All elements in a homogeneous array are of the same type (or NULL ). The structure is similar to the regular (heterogeneous) arrays (described above), but the pointer field points directly to an array of the native type ( int64_t for an array of integers, double for an array of floating-point values, etc). Similar to systems like Apache Arrow, we also maintain an optional bitmap that indicates whether a specific value is NULL or not.
The query execution code recognizes the common case where it produces a column of values of the same type, in which case it will generate a homogeneous array. We have efficient, vectorized implementations of common database operations on homogeneous arrays, allowing us significant performance improvements in the common case.
This is still an area of active work, and benchmark results are forthcoming.
Conclusion
We hope that you enjoyed a brief look under the hood of Rockset’s engine. In the future, we’ll share more details about our approaches to building a fully dynamically typed SQL database; if you’d like to give us a try, sign up for an account; if you’d like to help build this, we’re hiring! | https://medium.com/rocksetcloud/ivalue-efficient-representation-of-dynamic-types-in-c-ef92a30cc1e8 | ['Tudor Bosman'] | 2019-09-13 19:09:12.879000+00:00 | ['Database', 'Protobuf', 'Sql', 'Software Engineering', 'Programming'] |
Coronavirus: Why You Must Act Now? | Coronavirus - When Should You Close Your Office?
How To Use Coronavirus Work From Home Model If you want to use this model, make a copy. I can't allow editing, as there… | https://medium.com/tomas-pueyo/%D9%81%D9%8A%D8%B1%D9%88%D8%B3-%D9%83%D9%88%D8%B1%D9%88%D9%86%D8%A7-%D8%AA%D8%AD%D9%84%D9%8A%D9%84%D8%A7%D8%AA-%D8%A7%D9%84%D8%B1%D8%B3%D9%88%D9%85-%D8%A7%D9%84%D8%A8%D9%8A%D8%A7%D9%86%D9%8A%D8%A9-%D9%88%D8%A7%D9%84%D8%AA%D9%83%D9%86%D9%88%D9%84%D9%88%D8%AC%D9%8A%D8%A7-%D8%AA%D8%AE%D8%A8%D8%B1%D9%83-%D9%85%D8%A7-%D9%8A%D8%AC%D8%A8-%D8%A3%D9%86-%D8%AA%D9%81%D8%B9%D9%84%D9%87-%D9%88%D9%85%D8%AA%D9%89-c23c88d91c0b | ['راقية بن ساسي'] | 2020-04-09 16:39:57.029000+00:00 | ['الصحة العامة', '1st', 'فيروس كورونا', 'كوفيد 19', 'Coronavirus'] |
Netflix’s Not-So-Secret Weapon in the Streaming Wars | It seems like every other day someone is announcing a new streaming service. Soon-to-be launched HBO Max, Disney+, Apple+, and short-form contender Quibi are all due to join Netflix, Hulu, and Amazon. Making for one crowded space.
What’s most interesting is how with each new announcement there is a chorus of voices screaming about how this could be Netflix’s demise. And to be clear, added competition may eat away at the first-mover advantage that they’ve enjoyed thus far.
But what’s really interesting is that to-an-article, no one is talking about company culture.
The Netflix High-Performance Culture
When I started at Netflix in 2001, the culture wasn’t set. We were just another startup doing everything we could to grow without a real mind for what the company culture would be.
But that didn’t last long because just over a year later, CEO Reed Hastings and CPO Patty McCord introduced the “high-performance culture.” Here’s a link to the original deck. It was to be a deliberate attempt to, as Patty would say, create the company that would be great to have worked for. The idea is to hire the best, pay them top-of-market, and get out of their way.
It wasn’t born fully formed and definitely evolved over time but in essence, it was the “you’re an adult, act like one” approach to management. Things like the “No Vacation Policy” approach were extensions of that.
Side Note: It was not an “unlimited vacation” add-on to the culture but a natural result of it. If you wanted to take x amount of time off, and you were able to manage it, then do it. You’re an adult, act like it.
I won’t go into too much detail here; you can read “Powerful” by Patty McCord for that (and no, that is not an affiliate link). But the culture sought to eliminate bottlenecks and create as much efficiency as possible. And it worked.
We were able to not only produce a tremendous volume of high-quality work, but we were also able to turn on a dime (you can read an example about that in this article).
It allowed us to punch well above our weight for years. In fact, when I left Netflix the marketing department had about 40 people. That’s covering all aspects of marketing (customer acquisition/conversion via direct response marketing, brand marketing, pr/comms, etc.) for all of the United States, Canada, Latin America, and the UK and Ireland.
And it continues…
The Netflix high-performance culture has extended into how they approach content creators. Instead of asking for a pilot, running it, seeing how it does, and having executives give “notes,” Netflix simply says something like “We’d like two seasons.” And, since they pay “top of market,” they don’t nickel and dime creatives on how much they’ll earn.
Everything Netflix does when approaching talent is 180 degrees opposite of how traditional Hollywood players have done and continue to do it.
It’s why Netflix has been able to lure mega showrunners Shonda Rhimes and Ryan Murphy among others.
So…
To be sure, matching content-to-consumer will be a big key to the success of any these new streaming services.
But I think it’s worth asking, what are the cultures at the varying competitors like? Are they set up to be able to quickly take advantage of new opportunities? Do they actually spur innovation and creativity? Are they able to compete in the content cost arms race created by Netflix?
Only time will tell but for now, I’m of the opinion that the Netflix “High-Performance Culture” is a secret weapon that should not be ignored. | https://medium.com/swlh/netflixs-not-so-secret-weapon-in-the-streaming-wars-dd3fd9d58199 | ['Barry W. Enderwick'] | 2019-07-16 13:01:51.765000+00:00 | ['Netflix', 'Streaming', 'Disney', 'Apple', 'HBO'] |
Reshape R dataframes wide to long | How do you reshape a dataframe from wide to long form in R? How does the melt() function reshape dataframes in R? This tutorial will walk you through reshaping dataframes using the melt function in R.
If you’re reshaping dataframes or arrays in Python, check out my tutorials below.
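As a quick taste of the Python side, here is a minimal pandas melt sketch (toy data, purely illustrative — the discussion below focuses on R's data.table):

```python
import pandas as pd

# A toy wide dataframe: one row per person, one column per measurement.
wide = pd.DataFrame({
    "name": ["ana", "bo"],
    "height": [170, 180],
    "weight": [60, 80],
})

# id_vars stay as identifier columns; every other column is unpivoted
# into (variable, value) pairs — the same melt idea as in data.table.
long = wide.melt(id_vars="name", var_name="measure", value_name="value")
# long has 4 rows: two "height" rows followed by two "weight" rows.
```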
Why melt but not other functions?
Common terms for this wide-to-long transformation are melt, pivot-long, unpivot, gather, stack, and reshape. Many functions have been written to convert data from wide to long form; melt() from the data.table library is the best. See melt() documentation here. Why?
Python’s pandas library also has the equivalent melt function/method that works the same way (see my pandas melt tutorial here)
melt alone is often enough for all your wide-to-long transformations; you won’t have to learn pivot_longer or gather
Other functions like gather and pivot_longer are often just wrapper functions for melt() or reshape() — these other functions simplify melt and often can’t deal with more complex transformations.
melt is more powerful but isn’t any more complicated than the other functions.
data.table package’s implementation of melt is extremely powerful — much more efficient than the reshape library’s melt function. From the documentation:
The melt and dcast functions for data.tables are for reshaping wide-to-long and long-to-wide, respectively; the implementations are specifically designed with large in-memory data (e.g. 10Gb) in mind. | https://towardsdatascience.com/reshape-r-dataframes-wide-to-long-with-melt-tutorial-and-visualization-ddf130cd9299 | ['Hause Lin'] | 2020-08-20 13:45:48.526000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Software Engineering', 'Programming'] |
How to Create 2D and 3D Interactive Weather Maps in Python and R | How to Create 2D and 3D Interactive Weather Maps in Python and R plotly · Mar 5, 2018 · 7 min read
Robert FitzRoy. Source: nzhistory.govt.nz.
It was the year 1860. Robert FitzRoy, of England and New Zealand, was using the new telegraph system to gather daily weather observations and produce the first synoptic weather map. He coined the term “weather forecast” and his were the first ever to be published daily.
Fitzroy probably would have never imagined in his wildest dreams what the weather map scene would be like 160 years into the future…
⏩ ⏩ ⏩
In 2018, weather maps are commonly produced in the Grid Analysis and Display System (GrADS), R, and Python.
Kazakhstan, parts of Russia and China, and Japan had a colder than normal start to winter in 2017–18.
Weather and climate maps in Plotly add a new layer to the interrogation of our atmosphere. When it comes to these maps, Plotly’s point of difference comes in its ability to hover-over and interact with specific data points on the fly, instead of looking at a legend and ‘ballparking’ a value.
Below, you’ll find several new examples of weather and climate maps created from freely available data on the web. If you’d like to create similar maps, you’ll have to get familiar with the NCEP Reanalysis plotter, an amazingly useful tool that is used to investigative the climate system and day-to-day weather.
Here’s a short tutorial:
Go ahead, immerse yourself in this new and amazing interactive weather world that we are so fortunate to have in 2018.
Temperature Anomaly
Do you live in eastern North America? If so, you were subject to some of the coldest temperatures (with respect to normal) on the globe during December and January 2018. Siberia, a place that is already brutally cold during winter, was also colder than average.
On the flip side, the Arctic (winter), Europe (winter), eastern Australia (summer), and New Zealand (summer) have had above average temperatures during December-January. In fact, January was New Zealand’s warmest month of any month on record (nationwide records began in 1909).
See the blue patch to the west of South America? That’s associated with a climate signature known as La Niña, defined by cooler that normal sea temperatures in that part of the world that has wide-reaching impacts on a global scale.
Temperature Difference from Normal 🌡️
Data source: NCEP Reanalysis Plotter.
Data used to create this plot: GitHub.
Python code: Jupyter notebook.
R code: Make this in R.
Rainfall Anomaly 🌧️
We can apply the same techniques that we used to create the temperature anomaly map above to precipitation (rain and snow).
The map below looks at precipitation as a difference from normal for the period December to January 2018.
The Western Pacific had wetter than normal conditions chiefly because of La Niña. La Niña is associated with warmer than average sea surface temperatures near the Philippines, Papua New Guinea, and across Southeast Asia. These warm seas result in more rising motion than average in the atmosphere, which culminates in rainy conditions.
In direct contrast, there is quite a bit of brown shading about southern Africa as Cape Town approaches Day Zero. In addition, California sticks out along the West Coast of North America for being a much drier than normal location.
Precipitation Difference from Normal
Data source: NCEP Reanalysis Plotter.
Data used to create this plot: GitHub.
Python code: Jupyter notebook.
R code: Make this in R.
Zoom
While you can always zoom in on a global projection, maybe you’re looking for something more localized. In this case, you can set your latitude and longitude as such:
layout = Layout(
    title=title,
    showlegend=False,
    hovermode="closest",  # highlight closest point on hover
    xaxis=XAxis(
        axis_style,
        range=[lon[-185], lon[-105]],  # set your custom longitude
        autorange=False
    ),
    yaxis=YAxis(
        axis_style,
        range=[lat[50], lat[85]],  # set your custom latitude
        autorange=False
    )
)
The resulting graph will be a zoom to North America. For further detail, you can add state borders like this:
# Function generating state lon/lat traces
def get_state_traces():
    poly_paths = m.drawstates().get_paths()  # state polygon paths
    N_poly = len(poly_paths)  # use all states
    return polygons_to_traces(poly_paths, N_poly)

# Get list of coastline, country, and state lon/lat traces
traces_cc = get_coastline_traces() + get_country_traces() + get_state_traces()
Python code: Jupyter notebook.
Here’s a closer look at southern Africa and Cape Town, where the water taps are expected to run dry on June 4th. After three years of low rainfall and drought or near-drought conditions, locals and visitors are limited to 50 liters (about 13 gallons) of water per day. For some perspective, the average American uses about 380 liters per day, or almost 8 times the Cape Town restricted amount.
Python code: Jupyter notebook.
Mercator projections, like the graphs above, are popular because they preserve direction and are ideal for navigation. They also have further educational value — such as being a good way of learning the shapes of nations.
However, Mercator maps misrepresent the size of countries and are poorly defined at the poles.
For a different perspective, Plotly user empet mapped the flat Earth onto a sphere for a 3-D experience.
Precipitation Difference from Normal in 3-D
Learn to map mercator projections onto a sphere.
Sunshine Anomaly 🌞
Outgoing longwave radiation (OLR) is a measure of the amount of energy emitted to space by earth’s surface, oceans and atmosphere. OLR is a good proxy for clear skies — the larger (on the positive side) the OLR anomaly, the more unusually clear it might have been at a given location. Conversely, a large negative anomaly might imply more cloud cover than normal and frequent thunderstorms or rain.
Below, we investigate OLR as a difference from normal for the period December 2017-January 2018.
Above normal sunshine: New Zealand, Queensland, Zimbabwe, Zambia, Portugal, western USA, Alaska.
Below normal sunshine: Central America, northern Africa, eastern Canada, western India, Philippines and South Asia, western Australia.
OLR Difference from Normal
Data source: NCEP Reanalysis Plotter.
Data used to create this plot: GitHub.
Python code: Jupyter notebook.
R code: Make this in R.
OLR Difference from Normal in 3-D
Python code: Learn how to make this spherical plot.
R code: Make this in R.
Sea Surface Temperature Anomaly
Sea surface temperatures are the main driver of climate variability in the global tropics (10ºN-10ºS latitude) and play a major role in the global sub-tropics and midlatitudes as well.
Here, we can identify that a La Niña is present by the ‘cool pool’ of water found along the coast of western South America and into the eastern and central equatorial Pacific. La Niña, which is part of the natural El Niño-Southern Oscillation (ENSO) cycle, is considered to be a climate driver.
What is not totally natural is the ‘red blob’ between Australia and New Zealand — known as the Tasman Sea marine heatwave. Although this climate feature’s development was fueled by the ENSO cycle, the extent of the anomaly was the largest ever observed, almost certainly influenced upward by the tailwind that is anthropogenic global warming.
Data source: NCEP Reanalysis Plotter.
Data used to create this plot: GitHub.
Python code: Jupyter notebook.
R code: Make this in R.
Forecast Wind Speed and Direction
The weather goes as the wind blows. Using weather data in Plotly, not only can you diagnose cyclones, but zoom to low levels to see how much of a breeze is forecast in your town.
The plot below was originally created by Plotly user ToniBois and the 3-D version below was created by empet. If you look carefully, you’ll notice Hurricane Hermine (September 2016) swirling perilously off of the East Coast of the United States.
The data used to create this plot comes from the NCEP Products Inventory.
Colorscales
Looking for the right colorscale to complement your weather map? Look no further: https://react-colorscales.getforge.io/.
The “CMOCEAN” and “DIVERGENT” colorscales are recommended for weather and climate charts. For the temperature chart, we used “balance”
For the precipitation chart, we used “BrBG”
For the outgoing longwave radiation chart, we used “RdYlBu”
For the sea surface temperature chart, we used “curl”
Weather and climate maps in Plotly add a new layer to the interrogation of our atmosphere. When it comes to weather maps, Plotly’s point of difference comes in its ability to hover-over and interact with specific data points on the fly, instead of looking at a legend and ‘ballparking’ a value. | https://medium.com/plotly/how-to-create-2d-and-3d-interactive-weather-maps-in-python-and-r-77ddd53cca8 | [] | 2018-03-05 17:17:19.683000+00:00 | ['Climate Change', 'Data Science', 'Weather', 'Data Visualization'] |
Why I (Still) Clap | I have never clapped to bring attention to great writing.
Before the change to read time rewards, I saw several writers write that they clapped to bring attention to great writing. This sounds like a worthy cause. Some say they will continue to do this. I applaud them (pun not intended.)
However, I am not convinced clapping brings attention to writers or writing. For some, it might. But I have never read a story because it got thousands of claps. Most of the writers (with a few exceptions) whose stories get huge numbers of claps write stories I am not interested in. The number of claps a story receives has never influenced what I read. | https://medium.com/mark-starlin-writes/why-i-still-clap-a6a10c92a604 | ['Mark Starlin'] | 2019-10-30 22:59:44.496000+00:00 | ['Clapping', 'Medium', 'Money', 'Writing', 'Partner Program'] |
The New Data Engineering Stack | DATA ENGINEERING
The New Data Engineering Stack
Technologies for the Complete Data Engineer
Remember the time when the software development industry realized that a single person can take on multiple technologies glued tightly with each other and came up with the notion of a Full Stack Developer — someone who does data modelling, writes backend code and also does front end work. Something similar has happened to the data industry with the birth of a Data Engineer almost half a decade ago.
For many, the Full Stack Developer remains a mythical creature because of the never-ending list of technologies that cover frontend, backend and data. A complete Data Engineer, on the other hand, doesn’t sound as far-fetched or mythical. One of the reasons for that could be the fact that visualisation (business intelligence) has become a massive field in its own right.
A Data Engineer is supposed to build systems to make data available, make it useable, move it from one place to another and so on. Although many companies want their data engineers to do visualisations, it is not a common practice. Still, the BI skillset is definitely a good-to-have for a Data Engineer.
Here, I am going to talk about the technologies which are too important to ignore. You don’t have to master all of them. That’s not possible, anyway. It is important to be aware and somewhat skilled at most of these technologies to do good things in the data engineering space. Don’t forget that newer technologies will keep coming and older technologies, at least some of them, will keep moving out.
The philosophy of listing these technologies comes from a simple idea borrowed from the investing world — where the world is going.
Databases
Even with the bursting on the scene of many unconventional databases, the first thing that comes to mind when we talk about databases is still relational databases and SQL.
Relational (OLTP)
All relational databases, more or less, work in the same way. Internal implementation differs, obviously. It’s more than enough to be skilled in one or two of the four major relational databases — Oracle, MySQL, Microsoft SQL Server and PostgreSQL. I haven’t heard of a company that works without a relational database, no matter how advanced and complex their systems are.
The Big Four — Oracle, MySQL, MS SQL Server, PostgreSQL.
This website maintains the database engine ratings for all kinds of databases. Head over just to see what kind of databases companies are using these days.
Warehouses (OLAP)
OLTP based relational databases are, by definition, meant for transactional loads. For analytical loads, data lakes, data warehouses, data marts, there’s another list of databases. In theory, you can create data warehouses using OLTP databases but at scale it never ends well. Been there, done that.
Data warehouses have a different set of database management systems, the most popular out of which are Google BigQuery, Amazon Redshift, Snowflake, Azure Data Warehouse and so on. The choice of a data warehouse usually defaults to the cloud service provider a company is using. For instance, if a company’s infrastructure is on AWS, they’d surely want to use Amazon Redshift as their data warehouse for reducing friction.
The Big Four — BigQuery, Redshift, Snowflake, Azure DW.
Having said that, there are good chances that the future of the cloud will not be a single cloud; it is probably going to be multi-cloud, which means companies would be able to choose their data warehouses almost irrespective of where their existing infrastructure is, without worrying too much about inter-cloud friction.
Others
Different use cases require different solutions. Geospatial data requires geospatial databases like PostGIS, time-series data sometimes requires specialised time-series databases like InfluxDB or TimescaleDB. Document-oriented databases, key-value stores have made their place in the database ecosystem by offering something that relational database had struggled to offer for the longest period of time, i.e., the ability to efficiently store, retrieve and analyse semi-structured and unstructured data.
The Big Eight — MongoDB, InfluxDB, neo4j, Redis, Elasticsearch, CosmosDB, DynamoDB, Cloud Datastore.
Then there are graph databases, in-memory data stores and full-text search engines — which are solutions for very specific problems. It’s difficult to choose from hundreds of databases but these are the major ones. The ones I have left out are probably close seconds of these eight.
Cloud
With the mainstreaming of cloud computing with cloud service providers like AWS, Azure and Google Cloud, infrastructure has been democratised to a great degree. Smaller companies don’t have to worry about CapEx that incurred from infrastructure anymore.
It couldn’t have been more of a blessing for data engineering that a host of amazing services from all the major providers is available, charging on a pay-for-what-you-use basis. Companies have moved to the serverless computing model, where the infrastructure is up only for the time when the compute & memory are needed. Persistent storage is a separate service.
The Big Three — Google Cloud, Azure, AWS.
For a data engineer, it’s important to know all the major data-related cloud services provided by at least one of the three cloud providers. We’ll take an example of AWS. If you’re a Data Engineer who’s supposed to be working on AWS, you should know about S3 & EBS (for storage), EC2 & EMR (for compute & memory), Glue & Step Functions & Lambda (for orchestration) and more. Same goes for other cloud providers.
Orchestration
For more engineering-centric teams, Airflow has been the obvious choice for an orchestrator in the last two to three years. Cloud platforms have their own orchestrators. For instance, with AWS, you can use a mix of Glue, Step Functions and Lambda. Google Cloud offers a fully-managed cloud version of Airflow called Cloud Composer. Azure also offers similar services.
The Big One — Airflow.
Some of the old school orchestration, workflow engines and ETL tools have adapted well and are still very much relevant. For instance, Talend is still used widely as an orchestrator. This brings us to the much dreaded ETL.
ETL
All things considered, SQL has been the best option for doing ETL to date. Recently, many other technologies like Spark have come into the space, where more compute & memory give you quicker results by exploiting the principles of MPP computing.
Traditionally, ETL has been done mostly using proprietary software but those days are long gone now. More open-source toolkits are available in the market to be used by the community. There’s also a host of fully-managed ETL solutions provided by companies that are dedicated to data integration and ETL. Some of them are Fivetran, Panoply and Stitch. Most of these tools are purely scheduled or triggered SQL statements getting data from one database and inserting into another. This is easily achievable by using Airflow (or something similar).
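That scheduled-SQL pattern is simple enough to sketch end to end. Below is a toy illustration using Python's built-in sqlite3 — the table names and schema are made up for the example, and a real pipeline would run against a warehouse under an orchestrator's schedule:

```python
import sqlite3

# "Extract": a raw table, as it might land from an ingestion job.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_orders (customer TEXT, amount REAL);
    INSERT INTO raw_orders VALUES ('ana', 10.0), ('ana', 5.0), ('bo', 7.5);
    CREATE TABLE mart_customer_totals (customer TEXT, total REAL);
""")

# "Transform" + "Load": just a SQL statement that an orchestrator
# would run on a schedule or trigger.
con.execute("""
    INSERT INTO mart_customer_totals
    SELECT customer, SUM(amount) FROM raw_orders GROUP BY customer
""")

totals = dict(con.execute(
    "SELECT customer, total FROM mart_customer_totals ORDER BY customer"))
print(totals)  # {'ana': 15.0, 'bo': 7.5}
```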
The Big Two — SQL, Spark.
Fishtown Analytics’s dbt is one of the only tools that concentrates on solving the Transformation layer problems in the ETL. The fact that dbt is completely SQL-based makes it so attractive to use. I’m looking forward to having cloud dbt services by the major cloud providers. Something might already be in the works.
Infrastructure
The DevOps space has split into three in the past couple of years — core DevOps, DataOps and DevSecOps. Data Engineers are expected to know their infrastructure now. This means that whatever stack they are using, they should be able to resolve operational issues concerning the infrastructure — databases, data pipelines, data warehouses, orchestrators, storage and so on.
For provisioning and maintaining infrastructure, several cloud-platform-independent tools like Pulumi and Terraform are available in the market. Platform-specific tools like CloudFormation (for AWS) have also seen wide acceptance.
The Big Two — Terraform, Pulumi.
If you have drunk the kool-aid of a multi-cloud future, it’s better to know at least one of the two aforementioned Infrastructure-as-Code tools. IaC comes with its own benefits like the ease of implementing immutable infrastructure, increased speed of deployment and so on.
CI/CD
Whether it is deploying infrastructure or SQL scripts or Spark code, a continuous integration and continuous deployment pipeline is the standard way to do it. Gone are the days (or gone should be the days) when engineers used to have access to the machines and they’d log in to a database and execute the DDL for a stored procedure on the database server.
The Big Four — Jenkins, AWS CodePipeline, Google Cloud Build, Azure DevOps.
Many have realized the risk of doing that, unfortunately after many years of having suffered from unintended human errors.
Testing
The whole point of the data engineering exercise is to make the data available to the data scientists, data analysts, and business people. Without proper testing, any project is at risk of catastrophic failure. Manual testing of data is highly inefficient and, honestly, it isn’t doable at scale.
The Big Two — Pytest, JUnit.
So the best way out is to automate the tests. Any of the automation test frameworks available for testing backend code also work for testing Data Engineering components. Tools like dbt can also be used for automated testing. Otherwise, widely used BDD tools like Cucumber and Gherkin are available. Pytest, JUnit and others can also be used.
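The data checks themselves can be as simple as plain assertion functions that Pytest will collect and run; the rows and expectations in this sketch are invented:

```python
# Minimal data-quality checks of the kind a pipeline test suite automates.
# The rows and the expectations are invented for illustration.
rows = [
    {"user_id": 1, "email": "a@example.com", "age": 34},
    {"user_id": 2, "email": "b@example.com", "age": 28},
    {"user_id": 3, "email": "c@example.com", "age": 41},
]

def test_primary_key_is_unique():
    ids = [r["user_id"] for r in rows]
    assert len(ids) == len(set(ids))

def test_no_null_emails():
    assert all(r["email"] for r in rows)

def test_age_in_plausible_range():
    assert all(0 < r["age"] < 120 for r in rows)

if __name__ == "__main__":  # also runnable without pytest
    test_primary_key_is_unique()
    test_no_null_emails()
    test_age_in_plausible_range()
    print("all checks passed")
```

In practice the `rows` would come from a warehouse query, and a CI job would run the suite after every pipeline deployment.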
Source Control
I have already written about source control for SQL. I don’t want to repeat all the information I shared in the other piece, so I am just sharing the link here.
Source control everything. The pipelines, the database DDLs, the orchestrator code, test cases. Everything.
Languages
Although Python should be the obvious answer to the question of which language data engineers use, there is a host of technologies built on Java & Scala. The whole Hadoop ecosystem is based on Java. Talend, the orchestrator + ETL tool, is also written in Java.
Not everyone is required to know both languages, though. Most widely used technologies have a wrapper for the other language to make the product more accessible. The most common example of this is PySpark, which allows Data Engineers to use Python to interact with Spark.
The Big Three — SQL, Python, Java.
The same can be said for SQL. If there is one language that data engineers should understand, it is SQL. After all, it is the language data speaks.
Conclusion
A Data Engineer is not just an ETL person now. They’re not just a database person either. A Data Engineer is an amalgamation of all the things we have talked about in this piece, and maybe some more. Again, remember that mastery of all these technologies is not possible, but one can certainly be aware of them and be skilled in some. That’s the need of the hour, and it will probably remain the case for the next couple of years.
P.S. — Some of the readers have shared some ideas in the comments, some technologies that I might have missed and some general disagreements about where the world is going in data engineering. I’ll update the article based on the comments wherever relevant as soon as possible. Thanks everyone for the suggestions! | https://towardsdatascience.com/the-new-data-engineering-stack-78939850bb30 | ['Kovid Rathee'] | 2020-12-09 12:52:45.337000+00:00 | ['Towards Data Science', 'Programming', 'Software Development', 'Data Engineering', 'Data']
5 Must-Read Data Science Papers (and How to Use Them) | 5 Must-Read Data Science Papers (and How to Use Them)
Foundational ideas to keep you on top of the machine learning game.
Photo by Rabie Madaci on Unsplash
Data science might be a young field, but that doesn’t mean you won’t face expectations about your awareness of certain topics. This article covers several of the most important recent developments and influential thought pieces.
Topics covered in these papers range from the orchestration of the DS workflow to breakthroughs in faster neural networks to a rethinking of our fundamental approach to problem solving with statistics. For each paper, I offer suggestions for how you can apply these ideas to your own work.
We’ll wrap things up with a survey so that you can see what the community thinks is the most important topic out of this group of papers. | https://towardsdatascience.com/must-read-data-science-papers-487cce9a2020 | ['Nicole Janeway Bills'] | 2020-11-01 17:49:21.605000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Programming', 'Data Analytics'] |
Joining the global protest fray: Islamists march on the Pakistani capital | By James M. Dorsey
A podcast version of this story is available on Soundcloud, Itunes, Spotify, Stitcher, TuneIn, Spreaker, Pocket Casts, Tumblr, Podbean, Audecibel and Castbox.
Pakistan, long viewed as an incubator of religious militancy, is gearing up for a battle over the future of the country’s notorious madrassas, religious seminaries accused of breeding radicalism.
Islamist-led protests also threaten to be a fight for the future of the government of prime minister Imran Khan.
The stakes for both the government and multiple Islamist and opposition parties and groups are high.
Pakistan earlier this month evaded blacklisting by the Financial Action Task Force (FATF), an international anti-money laundering and terrorism finance watchdog, but only by the skin of its teeth.
Maintaining Pakistan on its grey list since June of last year, FATF warned the South Asian nation that it would be blacklisted if it failed to fully implement an agreed plan to halt the flow of funds to militant groups by February of next year, when the watchdog holds its next meeting.
The warning was reinforced by a statement by FATF’s Chinese president, Xiangmin Liu. China has long shielded Pakistan from blacklisting.
“Pakistan needs to do more and faster. Pakistan’s failure to fulfil FATF global standards is an issue that we take very seriously. If by February 2020, Pakistan doesn’t make significant progress, it will be put on the blacklist,” Mr. Xiangmin said.
Pakistani officials acknowledged that Mr. Xiangmin’s comment underlined the seriousness of their country’s predicament but said it would serve as an incentive to push forward.
That is likely to energize Islamist opposition to Pakistani efforts to comply with FATF demands that would impose strict oversight on their funding and financing of social and cultural activities, including the operation of tens of thousands of religious seminaries.
A five-party Islamist coalition that demands “true Islamization” and the establishment of shariah law, led by Maulana Fazlur Rehman, the 66-year-old head of Jamiat Ulema-e-Islam and a former member of parliament, organized a countrywide march scheduled to converge on the capital Islamabad on October 31.
Mr. Rehman said the march of up to one million people was a declaration of “war” against Mr. Khan’s government. He demanded the government’s resignation. His protest is likely to secure a degree of support from other major opposition parties like the Pakistan People’s Party (PPP) and the Pakistan Muslim League-Nawaz (PML-N).
With government efforts to engage the opposition in talks to fend off the march on Islamabad going nowhere, both Pakistani security forces and stick-wielding Islamist volunteers clad in yellow uniform-like garb have been preparing for the march. Security forces have virtually sealed off Islamabad’s government district.
The government is also considering closing roads leading to the capital and banning media coverage.
Pakistani media reported that authorities were also contemplating digging ditches along footpaths leading to Islamabad to prevent protesters from circumventing roadblocks by foot.
The Islamists were further energized by a controversial meeting last month on the sidelines of the United Nations General Assembly between Mr. Khan and George Soros, the billionaire philanthropist behind the Open Society Foundation. The foundation was banned from Pakistan in late 2017 as part of a crackdown on non-governmental organizations.
Mr. Soros, a Hungarian-born Jew who survived the Holocaust, and the foundation are globally in the bull’s eye of populist, ultra-nationalist and militant religious opposition to what they term ‘globalists’ and ‘cosmopolitans.’
The attacks, like in the case of the Islamist coalition in Pakistan as well as Hungarian prime minister Victor Orban and other nationalist and far-right forces, often take on anti-Semitic connotations.
Mr. Orban, who studied on a scholarship provided by Mr. Soros’ philanthropy, has charged the billionaire with secretly plotting to flood Hungary with migrants and destroy it as a nation.
Mr. Rehman, accusing Mr. Khan of being a “Jewish agent,” was particularly irked by the fact that the prime minister was believed to have asked Mr. Soros to assist in reforming Pakistani madrassas in a bid to counter radicalization and ensure that the seminaries adopt curricula approved by the ministry of education.
Greater government control of the seminaries would substantially weaken the significant street power of Islamist parties that often fare poorly in elections.
The emerging power struggle between Mr. Khan and the Islamists is in many ways an effort by the Islamists to force the military that long supported them to choose between them and the prime minister.
Mr. Khan is believed to have had military support in the electoral campaign that brought the former cricket player to office on a promise to end corruption and improve living standards.
Instead, a persistent economic crisis forced Mr. Khan to agree to a US$6 billion bailout by the International Monetary Fund (IMF) that involves stark austerity measures.
The Islamists ability to march on Islamabad has some analysts suggesting that they would not be able to do so without at least a military nod.
Whatever the case, the march could not come at a more awkward moment for Mr. Khan.
Mr. Rehman hopes to capitalize on popular discontent as Pakistan struggles to overcome the economic crisis and seems unable to garner substantial international and Muslim support in condemning India’s withdrawal of the disputed area of Kashmir’s autonomy.
Earlier this week, police in Islamabad employed water cannons to disperse teachers protesting the fact that they had not been paid for months.
Complicating affairs is the fact that solving the economic crisis, confronting India in the dispute about Kashmir and meeting FATF’s demands are all intertwined.
Militants and others have degrees of financial manoeuvrability because much of the Pakistani economy remains unrecorded. In addition, despite crackdowns, various militant groups like Jamaat-ud-Dawa and Jaish-e-Mohammed remain useful proxies in battles over Kashmir. All of which militates against full compliance with FATF’s demands.
That is the murky playground in which Mr. Rehman and his Islamist alliance are seeking to stir the pot.
Dr. James M. Dorsey is a senior fellow at Nanyang Technological University’s S. Rajaratnam School of International Studies, an adjunct senior research fellow at the National University of Singapore’s Middle East Institute and co-director of the University of Wuerzburg’s Institute of Fan Culture | https://medium.com/the-turbulent-world-of-middle-east-soccer/joining-the-global-protest-fray-islamists-march-on-the-pakistani-capital-1084e2d5678d | ['James M. Dorsey'] | 2019-10-26 04:07:00.823000+00:00 | ['Pakistan', 'Terrorism', 'Religion', 'Islam', 'China'] |
Swirling Thoughts | Swirling Thoughts
Free Verse poem.
Image by Hamed Mehrnik from Pixabay
Swirling thoughts
go round and round
I have no idea when
they will finally
stop torturing me.
Swirling thoughts
give me things
to constantly think about
to concern myself with
I must bear the burden.
Swirling thoughts
have been weighing
heavily on me
for quite some time
I shall meet my demise soon.
Swirling thoughts
I don’t know how
to make them stop
one thing I do know
is that I’m at the end of the road. | https://medium.com/the-partnered-pen/swirling-thoughts-881df583687d | ['Brian Kurian'] | 2020-09-10 15:30:46.321000+00:00 | ['Mental Health', 'Life Lessons', 'Poetry', 'Poem', 'Life'] |
Blockchain Startups Wanted | A couple weeks ago we announced dLab/emurgo, a new blockchain and distributed ledger technology (DLT) accelerator program operated by SOSV (“the Accelerator VC”) and our partners at EMURGO. The new program is designed to bring the resources of both organizations together under one roof with a mission to invest in, support, and scale early-stage startups who are tackling truly important problems in this space; startups who have ambitions to create society-scale outcomes. We’re looking to fund five of these for our first cohort in February.
What follows is a list of some of the topics we’re interested in exploring with dLab. We expect this to evolve over time, and it’s in no way meant to be comprehensive. In fact, it’s usually the case that the best applicants to SOSV’s programs are working on ideas that we haven’t even yet considered.
However, solutions that address these particular topics are sorely needed, and our hope is to find some of you out there who are already tackling them. If you are, give us a shout. We’d love to talk to you (and if you’ve already applied, thank you — we’ll be in touch soon!)
Blockchain for the Masses
DLT faces many challenges. Lots has been written about Ethereum’s scaling challenges, for example. But the largest obstacle to widespread adoption isn’t transactional throughput but usability: we need more easy-to-use solutions that make tokens and associated services accessible to the next ten million users. This could take many forms, but integrated mobile and web multi-protocol wallets, user-friendly portfolio management, token custody services, and practical payments and point of sale solutions are some obvious areas where attention to user friendliness could be the defining characteristic. And it’s worth considering that the biggest opportunity, of course, isn’t in the first world, but in the developing world.
Developer Tools + Data Oracles
The other side to making decentralization technology more usable is making it more accessible to new developer talent. We need more high-level frameworks, libraries, tools, services, and educational resources in order to 100x the number of competent developers in the space. How do we incentivize entrepreneurs and developers to focus their efforts on ecosystem development? More creative alternatives to bounty systems are one avenue. And for dApps to bridge the divide and become more useful we’ll need to work with more external data sources, via new types of oracles where incentives are thoughtfully designed to guarantee accuracy and ongoing verification.
Governance + Self-Sovereign Identity
Many have written about the promise of blockchain for achieving a more open and transparent democratic process, for both organization management and government-level uses… as well as meta-level on-chain governance. Do we believe in a world where voting for change is transparent, immutable and audit-able? Of course. But there’s a lot of work to do before we get there. Rather than representative token ownership (where quantity of tokens is considered), many such applications require the notion of a singular verifiable digital identity, which means bridging online and offline identities in a reliable way. How is such individuality proven? Verified? What does it mean to stake your own reputation on verifying a peer? For those of us already in the first world, self-sovereign identities hold the additional promise of giving us more control over our data (medical history, social media, etc), removing power from companies who store and monetize it in proprietary databases. And for the billions of people who are unbanked in the developing world, a digital identity is the first step to gain access to financial services and to participate in the global economy.
Logistics + Supply Chain
There’s a lot going on here already. But there’s a lot to be done. Solutions that can prove origin, custody, and integrity of food and medicine are a good start. There is a startling lack of transparency as a product — whether it’s food, medicine or anything else — changes hands several times from its origin to its ultimate destination. Problems that occur, which may result in theft or spoilage or a reduction of effectiveness, are not obvious, or products arriving at the destination may be counterfeit. Paperwork, cost, and complex compliance are issues that can be addressed by using shared digital ledgers. Given SOSV’s involvement in disruptive food (FOOD-X; also based in New York) this is a particularly interesting cross-disciplinary area for us.
Decentralized Finance
It’s no surprise that the first major use case for immutable digital ledgers has been financial transactions. But ten years later, we’re still at the earliest stages of this transformation. What happens if anyone can build new types of financial products on top of open protocols like Ethereum (and whatever follows it)? And what if anyone can access those products, regardless of their jurisdiction or economic standing? The Decentralized Finance (DeFi) movement promises a remarkably open, transparent, and borderless financial infrastructure via stablecoins, crypto-collateralized lending, derivatives, issuance platforms (security tokens), insurance products, and other new instruments: all driven by smart contracts and tokenized assets. The market for these instruments, compared to the legacy financial system, is admittedly tiny, but it’s growing at Internet speed. To innovate in this realm used to be the exclusive domain of banks and other financial institutions; but now any inspired developer can test new ideas out on a live network. Proof of Stake blockchains also present new opportunities for validators, staking-as-a-service providers, and others who are exploring the evolving nature of custody.
Open Marketplaces + Data Driven Optimization
Any digital marketplace that exists today is built on top of a database owned by a corporation. In many cases, that makes perfect sense. In other cases, those systems could be far more fair, open, and efficient if they were decentralized and effectively ownerless. Decentralized energy markets are one particularly good example of this, especially when net metering is considered (and particularly so as battery storage costs drop). Does ride sharing need to be decentralized? Maybe. AirBnb? (c’mon you can be more creative than that) If all our sharing economy data ends up in blockchain-like ledgers, there’ll be plenty of opportunity to deploy machine learning techniques to analyze and optimize future exchanges. What can we learn from shared energy utilization data, for example? How could we use that information to optimize the ways that buildings collect and store (and share) energy? What about optimizing the way that taxis move about the city? Can a private data silo be better at that than independently incentivized developers operating off of an open data set?
Hardware, Intelligent Agents, and the Internet of Things
What does the wallet of the future look like? It’s probably not much like the hard wallets we’re seeing today, though they’re certainly a stepping stone. Do we expect a secure subsystem to exist in every phone in the future? If not, what else? What types of protocols are needed for efficient machine-to-machine communication and message relaying? What types of hardware will natively interact with these protocols? We’d love to see some applicants with hardware concepts as well, given our relationship with our hardware-focused sister program, HAX, in Shenzhen.
What are you working on?
Give us a shout. We’re currently accepting applications for the first dLab/emurgo program, to start in February in New York City. Applications close November 30th. | https://medium.com/dlabvc/blockchain-startups-wanted-8eec89ad32d1 | ['Nick Plante'] | 2019-07-31 22:05:46.512000+00:00 | ['Investing', 'Venture Capital', 'Innovation', 'Startup', 'Blockchain'] |
Better user personas? Maybe less slides, more role-play? | Better user personas? Maybe less slides, more role-play?
As UX Designers, we would like to create ‘working’ user personas so we can use them as a proxy between the users we researched and our team. To achieve this, we have to make those personas acceptable. What are the factors that can turn a flat persona into an interesting one — one that can hook your team in?
Photo by Alexandru Zdrobău on Unsplash
Yeah okay, more life. That’s a mouthful. Aren’t personas just a tool?
Well, let’s see.
Disclaimer: I’m new at UX, but maybe that’s a good thing. Just maybe this makes me able to notice things. Or maybe I’m terribly mistaken.
Being a user researcher requires great people skills. This goes without saying. Good places to get more of those skills are not in UX itself. UX is an extremely interdisciplinary activity, so looking outside is actually the norm here. In my opinion, UX researchers may want to borrow knowledge from the following fields to expand their toolbox:
movies
theater
computer games
roleplaying games
creative writing
psychology
sociology
everything that makes you at ease with the idea that you will bring characters to life
Being at ease with this idea requires also that a permissive culture exists at your company. Let’s use some pieces of knowledge to experiment with enhancing the personas we create.
Photo by Ayo Ogunseinde on Unsplash
Here are the personas, now make a product for those people…
‘Yeah. Okay, thanks. Sounds good. Will do…’
Problem: You don’t feel engaged by some of the personas you’ve seen so far on the internet. Should you?
Reading reports from projects usually helps a lot with this. You like reading those stories. Stories — exactly. They come with plots, timelines, struggles and solutions. Stories about people whose life got improved. You don’t get that by just looking at a persona template, though.
You’re not completely sure how that works, but if you think that the UX researcher creates a bunch of personas, hands them to the team and just says: ‘here you go — build a product for them so they can achieve their goals listed here’ — it seems to you like something that may work and probably usually works just fine. Like, meh. But can you still make it a bit better?
Transferring the effort of identifying with the personas to the team seems like a sure-fire way of failing. I think — correct me if I’m wrong — that the burden of making sure the personas will be accepted belongs in large part to the UX researcher who created them, not to the team. The team doesn’t owe us any acceptance, it’s something that we will have to earn, not force.
You may also think that maybe the moment of presenting the personas during a meeting is an important point in the project timeline. I would say yes, but it should not be the last time you will be creating them. I’m not talking about iteration and validation of research here — that’s a different thing. I’m talking about roleplaying, actively creating a persona from the moment of presenting it to the team, until the project ends.
You already know that a good persona should have the following characteristics:
be based on actual research
be actionable and goal-oriented
be role-based, not demographic data based
be believable
Please take a moment to think if it’s also possible to:
hook the team in
assure better chance of buy-in from leadership
Photo by Aatik Tasneem on Unsplash
Meeting time: presenting your personas
You may think this is a crucial moment in the process — the moment you stand in front of your teammates and set yourself the task of convincing them of your vision. You want the managers to ‘buy’ your personas. Lack of buy-in from the leadership at this point may delay or completely derail the project and make your research efforts invalid.
As a user researcher, you want to make sure your work is acceptable to the team so that you can move the process forward without unnecessary friction. Take a moment to think about it, please. You are already creating fictional characters, why not go a step further with it? This will require some storytelling skills from you — but that’s something that can be learned.
If you want to make an impression that your personas are a highly valuable deliverable that will successfully inform the next stage of your team’s workflow, you will have to defend them and use them throughout the entire project. For this, your personas probably should be ‘kept alive’ by the entire team until the project ends. You may want to think about it as an active, long-term process.
Photo by Joshua Oluwagbemiga on Unsplash
What makes a persona actionable AND interesting?
The same thing that makes the characters in the book you read recently interesting. The same thing that makes the characters in your favourite Netflix series so interesting that you want to see what happens next.
You may want to find out how that happens.
What makes it so interesting to spend hours of our time watching or reading about people who actually don’t exist?
Please let me share my personal research project I’m currently working on. My goal is to practice and learn storytelling that would be usable in creating better user personas.
My project: I created a website which generates stories about fictional UX designers. It’s very basic right now and the stories aren’t that good. But they are improving as I’m studying this subject and changing things there. If you are interested, please give it a try here. And if you have ideas that could work here, I’m looking for collaborators!
The first version of my Persona Generator
My goal here is to build a framework for creating better, more lifelike and actionable personas of any kind. Currently the stories are just randomly generated pieces of text and are not based on any input data — but that’s certainly a very important next step and something I’m planning to add next. I’d like to find out if there is a way to feed data to the system and get something convincing in the output.
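Mechanically, a first pass at such a generator can be little more than random choices from hand-written fragments. The trait lists below are invented; a research-driven version would derive them from interview data:

```python
import random

# Hand-written fragments; in a research-driven version these lists would
# be distilled from interview data rather than invented.
names = ["Maya", "Jonas", "Priya", "Tomek"]
roles = ["junior UX designer", "design lead", "freelance researcher"]
frustrations = [
    "presenting personas nobody reads",
    "stakeholders who skip research",
    "burnout from back-to-back workshops",
]
goals = ["earn buy-in from leadership", "run better interviews"]

def generate_persona(rng=random):
    """Assemble one persona sentence from randomly chosen fragments."""
    return (f"{rng.choice(names)} is a {rng.choice(roles)} who struggles with "
            f"{rng.choice(frustrations)} and wants to {rng.choice(goals)}.")

rng = random.Random(7)  # fixed seed so the story is reproducible
story = generate_persona(rng)
print(story)
```

Swapping the invented lists for coded research findings is what would turn this toy into the data-driven version described above.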
One thing I know is that humans don’t bond with data. We may be interested in the curious results of research, though. We do bond with other people and (sort of) with our pets.
We are certainly social creatures. We use our empathy to create bonds and get to know people around us. So, in order to make sure our personas ‘work’, I think they should become a bit more human. What does that mean?
Photo by Henk Mohabier on Unsplash
Yeah, it’s a flatline… meh.
Sometimes our reaction to something is just that: a meh, a shrug, a whatever. This may apply to personas who seem a bit flat. If we are creating them anyway, might as well make full use of them. Otherwise — why the hassle?
We might as well write a list of goals or Agile user stories and be done with it.
Hint 1: Characters, not personas?
Human bonding is a mutual, interactive process. There are two sides. Sure, a piece of paper with printed persona on it can’t interact. Same goes for a slide. What can we as UX researchers do to bring the character to life?
I think one of the solutions could be the same thing people have been doing elsewhere for a long time already: role-playing, as in: RPG games.
Aren’t personas very similar in function to RPG character sheets?
Kinda geeky, kinda cool, isn’t it? I think that having an inclination to playing make-believe can be a great asset in any user researcher’s toolbox.
RPG games are special because they enable us to take on a completely different role and express new things. This can touch very deep human need and open us up to completely new opportunities.
Through playing a character we can communicate emotions: affection, trust, but also negative emotions: distrust, aversion. Anything that will bring the otherwise flat construct to life, make it more interactive and — interesting.
We can also do something else here, too: embrace human weirdness. I think we are too afraid of including such aspects in our personas because of political correctness — this may actually lead to missed opportunities.
Example: if you’ve noticed that a large group of people you researched are experiencing a burnout — make a burned out persona. Flesh this out in details. And stick with it. Make it important. Because this actually matters.
Photo by Priscilla Du Preez on Unsplash
Social connections
What would make this persona become accepted by your teammates if they met them in real life? You already know your team, what makes them interested in another person?
In other words, how to make the persona become a temporary team member?
Don’t create isolated personas. It should be pretty easy to bring them into your team. You can even practice this when you go with your work friends to a dinner or a pub after work.
When a group forms, it’s usually because some or all of these criteria are met among the members:
shared activities
solidarity
common commitments
equality or difference of status
norms and values
You can play around with those, and also turn them upside down if you need — to show why this persona would not be able to become a member of the group if that’s the case. | https://medium.com/procedural-stories/better-user-personas-maybe-less-slides-more-life-2f9219368cba | ['Paul Pela'] | 2019-03-15 10:43:17.159000+00:00 | ['Storytelling', 'Procedural Generation', 'User Research', 'UX', 'UX Research'] |
Top Classification Algorithms In Machine Learning | Classification in machine learning and statistics is a supervised learning approach in which the computer program learns from the data given to it and make new observations or classifications. In this article, we will learn about classification in machine learning in detail. The following topics are covered in this blog:
1. What is Classification in Machine Learning?
2. Classification Terminologies In Machine Learning
3. Classification Algorithms
Logistic Regression
Naive Bayes
Stochastic Gradient Descent
K-Nearest Neighbors
Decision Tree
Random Forest
Artificial Neural Network
Support Vector Machine
4. Classifier Evaluation
5. Algorithm Selection
6. Use Case - MNIST Digit Classification
What is Classification In Machine Learning
Classification is a process of categorizing a given set of data into classes. It can be performed on both structured and unstructured data. The process starts with predicting the class of given data points. The classes are often referred to as targets, labels, or categories.
The classification predictive modeling is the task of approximating the mapping function from input variables to discrete output variables. The main goal is to identify which class/category the new data will fall into.
Let us try to understand this with a simple example.
Heart disease detection can be identified as a classification problem; this is a binary classification, since there can be only two classes: has heart disease or does not have heart disease. The classifier, in this case, needs training data to understand how the given input variables are related to the class. Once the classifier is trained accurately, it can be used to determine whether a particular patient has heart disease.
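To make that concrete, here is a deliberately tiny sketch: a classifier that learns a single threshold on one feature from made-up (not clinical) numbers, with label 1 standing for "has heart disease":

```python
# Toy training data: (cholesterol_level, label); the values are invented.
train = [(180, 0), (190, 0), (205, 0), (240, 1), (260, 1), (275, 1)]

def fit_threshold(data):
    """Pick the midpoint threshold that best separates the two classes."""
    best_t, best_correct = None, -1
    xs = sorted(x for x, _ in data)
    for a, b in zip(xs, xs[1:]):
        t = (a + b) / 2
        correct = sum((x > t) == bool(y) for x, y in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = fit_threshold(train)

def predict(x):
    return 1 if x > threshold else 0  # 1 = "has heart disease"

print(threshold, predict(185), predict(270))  # 222.5 0 1
```

Real classifiers learn far richer decision rules over many features, but the shape of the task — fit on labeled examples, then predict labels for new inputs — is the same.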
Since classification is a type of supervised learning, even the targets are also provided with the input data. Let us get familiar with the classification in machine learning terminologies.
Classification Terminologies In Machine Learning
Classifier — An algorithm that is used to map the input data to a specific category.
Classification Model — The model predicts or draws a conclusion from the input data given for training; it will predict the class or category for the data.
Feature — An individual measurable property of the phenomenon being observed.
Binary Classification — A type of classification with two outcomes, e.g. either true or false.
Multi-Class Classification — Classification with more than two classes; in multi-class classification each sample is assigned to one and only one label or target.
Multi-label Classification — A type of classification where each sample is assigned to a set of labels or targets.
Initialize — Assign the classifier to be used.
Train the Classifier — Each classifier in scikit-learn uses the fit(X, y) method to fit the model on the training data X and training labels y.
Predict the Target — For an unlabeled observation X, the predict(X) method returns the predicted label y.
Evaluate — The evaluation of the model, i.e. classification report, accuracy score, etc.
Types Of Learners In Classification
Lazy Learners — Lazy learners simply store the training data and wait until testing data appears. The classification is done using the most related data in the stored training data. They have more prediction time compared to eager learners. E.g. k-nearest neighbor, case-based reasoning.
Eager Learners — Eager learners construct a classification model based on the given training data before receiving data for predictions. They must be able to commit to a single hypothesis that will work for the entire space. Due to this, they take a lot of time in training and less time for a prediction. E.g. decision tree, Naive Bayes, artificial neural networks.
Classification Algorithms
In machine learning, classification is a supervised learning concept which basically categorizes a set of data into classes. The most common classification problems are — speech recognition, face detection, handwriting recognition, document classification, etc. It can be either a binary classification problem or a multi-class problem too. There are a bunch of machine learning algorithms for classification in machine learning. Let us take a look at those classification algorithms in machine learning.
Logistic Regression
It is a classification algorithm in machine learning that uses one or more independent variables to determine an outcome. The outcome is measured with a dichotomous variable meaning it will have only two possible outcomes.
The goal of logistic regression is to find a best-fitting relationship between the dependent variable and a set of independent variables. It is better than other binary classification algorithms like nearest neighbor since it quantitatively explains the factors leading to classification.
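To make this concrete, here is a minimal from-scratch sketch of logistic regression fitted with gradient descent. The toy data, learning rate, and epoch count are illustrative choices, not from the article (in practice you would use scikit-learn's LogisticRegression):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    # Fit a weight and bias so that sigmoid(w*x + b) approximates P(y=1 | x)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss for one sample
            b -= lr * (p - y)
    return w, b

xs = [1, 2, 3, 6, 7, 8]
ys = [0, 0, 0, 1, 1, 1]           # dichotomous outcome: only two values
w, b = train_logistic(xs, ys)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
print([predict(x) for x in xs])   # → [0, 0, 0, 1, 1, 1]
```

The learned decision boundary sits roughly midway between the two groups, and the sign and size of the fitted weight show how the independent variable drives the outcome.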
Advantages and Disadvantages
Logistic regression is specifically meant for classification, it is useful in understanding how a set of independent variables affect the outcome of the dependent variable.
The main disadvantage of the logistic regression algorithm is that it only works when the predicted variable is binary, it assumes that the data is free of missing values and assumes that the predictors are independent of each other.
Use Cases
Identifying risk factors for diseases
Word classification
Weather Prediction
Voting Applications
Naive Bayes Classifier
It is a classification algorithm based on Bayes’s theorem which gives an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Even if the features depend on each other, all of these properties contribute to the probability independently. A Naive Bayes model is easy to build and is particularly useful for comparatively large data sets. Even with a simplistic approach, Naive Bayes is known to outperform most classification methods in machine learning. It is built on Bayes' theorem: P(class | features) = P(features | class) × P(class) / P(features).
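Applied to classification, Bayes' theorem works like this in practice. The probabilities below are made-up illustrative numbers for a single "word appears in a spam email" feature, not values from the article:

```python
# Hypothetical prior and likelihoods (illustrative numbers)
p_spam = 0.4                 # P(spam)
p_word_given_spam = 0.7      # P(word | spam)
p_word_given_ham = 0.1       # P(word | not spam)

# Law of total probability: overall chance of seeing the word
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # → 0.824
```

A Naive Bayes classifier repeats this calculation for every feature, multiplying the per-feature likelihoods together as if they were independent.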
Advantages and Disadvantages
The Naive Bayes classifier requires a small amount of training data to estimate the necessary parameters to get the results. They are extremely fast in nature compared to other classifiers.
The only disadvantage is that they are known to be a bad estimator.
Use Cases
Disease Predictions
Document Classification
Spam Filters
Sentiment Analysis
Stochastic Gradient Descent
It is a very effective and simple approach to fit linear models. Stochastic Gradient Descent is particularly useful when the number of samples is very large. It supports different loss functions and penalties for classification.
Stochastic gradient descent refers to calculating the derivative from each training data instance and calculating the update immediately.
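The "calculate the derivative from one instance and update immediately" idea can be sketched in a few lines. Here a single weight is fitted to the toy relationship y = 2x; the data, learning rate, and epoch count are illustrative choices:

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x for x in xs]             # true relationship: y = 2x

w = 0.0
lr = 0.05
for _ in range(200):                 # epochs over the training data
    for x, y in zip(xs, ys):
        grad = 2 * (w * x - y) * x   # derivative of squared error for ONE sample
        w -= lr * grad               # update immediately, per instance
print(round(w, 2))  # → 2.0
```

Contrast this with batch gradient descent, which would sum the gradients over all five samples before making a single update.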
Advantages and Disadvantages
The advantages are ease of implementation and efficiency, whereas a major setback with stochastic gradient descent is that it requires a number of hyper-parameters and is sensitive to feature scaling.
Use Cases
Internet Of Things
Updating the parameters such as weights in neural networks or coefficients in linear regression
K-Nearest Neighbor
It is a lazy learning algorithm that stores all instances corresponding to training data in n-dimensional space: it does not focus on constructing a general internal model, but instead works by storing instances of the training data.
Classification is computed from a simple majority vote of the k nearest neighbors of each point. It is supervised and takes a bunch of labeled points and uses them to label other points. To label a new point, it looks at the labeled points closest to that new point also known as its nearest neighbors. It has those neighbors vote, so whichever label the most of the neighbors have is the label for the new point. The “k” is the number of neighbors it checks.
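The vote itself is easy to sketch with plain Python. The 1-D points and labels below are illustrative, and real implementations use Euclidean distance over many features:

```python
from collections import Counter

def knn_predict(points, labels, query, k=3):
    # Lazy learning: nothing was precomputed, we just scan the stored data
    dists = sorted((abs(p - query), lbl) for p, lbl in zip(points, labels))
    # Majority vote among the k nearest neighbours
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, labels, 2.2))  # → a
print(knn_predict(points, labels, 7.6))  # → b
```

Changing k changes how many neighbours get a vote, which is why choosing k well matters so much for this algorithm.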
Advantages And Disadvantages
This algorithm is quite simple in its implementation and is robust to noisy training data. Even if the training data is large, it is quite effective. The disadvantages of the KNN algorithm are that the value of K needs to be determined, and the computation cost is pretty high compared to other algorithms.
Use Cases
Industrial applications to look for similar tasks in comparison to others
Handwriting detection applications
Image recognition
Video recognition
Stock analysis
Decision Tree
The decision tree algorithm builds the classification model in the form of a tree structure. It utilizes the if-then rules which are equally exhaustive and mutually exclusive in classification. The process goes on with breaking down the data into smaller structures and eventually associating it with an incremental decision tree. The final structure looks like a tree with nodes and leaves. The rules are learned sequentially using the training data one at a time. Each time a rule is learned, the tuples covering the rules are removed. The process continues on the training set until the termination point is met.
The tree is constructed in a top-down recursive divide and conquer approach. A decision node will have two or more branches and a leaf represents a classification or decision. The topmost node in the decision tree that corresponds to the best predictor is called the root node, and the best thing about a decision tree is that it can handle both categorical and numerical data.
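A toy version of one greedy, top-down step, choosing the best threshold by Gini impurity, might look like this. The data is illustrative, and a real decision tree would recurse on each side of the chosen split:

```python
def gini(labels):
    # Gini impurity of a set of binary (0/1) labels
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(xs, ys):
    """One divide-and-conquer step: pick the threshold with lowest weighted Gini."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # → (3, 0.0)
```

A weighted impurity of 0.0 means the rule "x <= 3" separates the two classes perfectly, so both child nodes become leaves.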
Advantages and Disadvantages
A decision tree gives the advantage of being simple to understand and visualize, and it requires very little data preparation as well. The disadvantage that follows with the decision tree is that it can create complex trees that may not categorize efficiently. They can be quite unstable, because even a small change in the data can alter the whole structure of the decision tree.
Use Cases
Data exploration
Pattern Recognition
Option pricing in finances
Identifying disease and risk threats
Random Forest
Random decision trees, or random forests, are an ensemble learning method for classification, regression, etc. It operates by constructing a multitude of decision trees at training time and outputs the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees.
A random forest is a meta-estimator that fits a number of trees on various subsamples of data sets and then uses an average to improve the accuracy in the model’s predictive nature. The sub-sample size is always the same as that of the original input size but the samples are often drawn with replacements.
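The bootstrap sub-sampling the paragraph describes, samples of the same size as the original, drawn with replacement, can be sketched as follows (the data and seed are illustrative):

```python
import random

random.seed(42)
data = list(range(10))          # pretend these are 10 training rows

def bootstrap_sample(rows):
    # Same size as the original, drawn WITH replacement
    return [random.choice(rows) for _ in rows]

# Each tree in the forest would be trained on its own bootstrap sample
samples = [bootstrap_sample(data) for _ in range(3)]
for s in samples:
    print(len(s), sorted(set(s)))
```

Because rows are drawn with replacement, some rows appear more than once and others are left out, which is what gives each tree in the forest a slightly different view of the data.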
Advantages and Disadvantages
The advantage of the random forest is that it is more accurate than the decision trees due to the reduction in the over-fitting. The only disadvantage with the random forest classifiers is that it is quite complex in implementation and gets pretty slow in real-time prediction.
Use Cases
Industrial applications such as finding if a loan applicant is high-risk or low-risk
For Predicting the failure of mechanical parts in automobile engines
Predicting social media share scores
Performance scores
Artificial Neural Networks
A neural network consists of neurons that are arranged in layers, they take some input vector and convert it into an output. The process involves each neuron taking input and applying a function which is often a non-linear function to it and then passes the output to the next layer.
In general, the network is supposed to be feed-forward meaning that the unit or neuron feeds the output to the next layer but there is no involvement of any feedback to the previous layer.
Weights are applied to the signals passing from one layer to the other, and these are the weights that are tuned in the training phase to adapt a neural network to any problem statement.
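A minimal feed-forward pass, with weights applied between layers and a sigmoid as the non-linear function, might look like this. The network shape and weight values are arbitrary illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(layer_weights, inputs):
    """Feed-forward pass: each layer's outputs feed the next, with no feedback."""
    activations = inputs
    for weights in layer_weights:            # one weight matrix per layer
        activations = [
            sigmoid(sum(w * a for w, a in zip(neuron, activations)))
            for neuron in weights
        ]
    return activations

# 2 inputs -> 2 hidden neurons -> 1 output neuron (arbitrary weights)
net = [
    [[0.5, -0.4], [0.3, 0.8]],   # hidden layer: 2 neurons, 2 weights each
    [[1.0, -1.0]],               # output layer: 1 neuron, 2 weights
]
out = forward(net, [1.0, 0.0])
print(round(out[0], 3))  # → 0.512
```

Training would adjust those weight values by back-propagating the prediction error; here we only show the forward direction the paragraph describes.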
Advantages and Disadvantages
It has a high tolerance for noisy data and is able to classify untrained patterns; it performs better with continuous-valued inputs and outputs. The disadvantage of artificial neural networks is that they have poor interpretability compared to other models.
Use Cases
Handwriting analysis
Colorization of black and white images
Computer vision processes
Captioning photos based on facial features
Support Vector Machine
The support vector machine is a classifier that represents the training data as points in space separated into categories by a gap as wide as possible. New points are then added to space by predicting which category they fall into and which space they will belong to.
Advantages and Disadvantages
It uses a subset of training points in the decision function which makes it memory efficient and is highly effective in high dimensional spaces. The only disadvantage with the support vector machine is that the algorithm does not directly provide probability estimates.
Use cases
Business applications for comparing the performance of a stock over a period of time
Investment suggestions
Classification of applications requiring accuracy and efficiency
Classifier Evaluation
The most important part after the completion of any classifier is the evaluation to check its accuracy and efficiency. There are a lot of ways in which we can evaluate a classifier. Let us take a look at these methods listed below.
Holdout Method
This is the most common method to evaluate a classifier. In this method, the given data set is divided into two parts, a test set and a train set, of 20% and 80% respectively.
The train set is used to train the model and the unseen test set is used to test its predictive power.
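A sketch of the 80/20 holdout split (the data and random seed are illustrative):

```python
import random

random.seed(1)
data = list(range(100))          # pretend these are 100 labeled samples
random.shuffle(data)             # shuffle before splitting

split = int(len(data) * 0.8)     # 80% train / 20% test
train, test = data[:split], data[split:]
print(len(train), len(test))     # → 80 20
```

The key property is that the test rows never influence training, so the score on them estimates performance on genuinely unseen data.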
Cross-Validation
Over-fitting is the most common problem prevalent in most of the machine learning models. K-fold cross-validation can be conducted to verify if the model is over-fitted at all.
In this method, the data set is randomly partitioned into k mutually exclusive subsets, each of which is of the same size. Out of these, one is kept for testing and others are used to train the model. The same process takes place for all k folds.
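A simplified sketch of the k-fold partition; for clarity this uses a deterministic stride, whereas the method as described partitions the data randomly:

```python
def kfold_indices(n, k):
    # Partition n sample indices into k disjoint, roughly equal folds
    return [list(range(i, n, k)) for i in range(k)]

folds = kfold_indices(10, 5)
for i, test_fold in enumerate(folds):
    # Each fold takes one turn as the test set; the rest train the model
    train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
    print(f"fold {i}: test={test_fold}, train size={len(train_idx)}")
```

Averaging the k test scores gives a more stable estimate than a single holdout split, which is why it is the standard check for over-fitting.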
Classification Report
A classification report will give the following results, it is a sample classification report of an SVM classifier using a cancer_data dataset.
1. Accuracy
Accuracy is the ratio of correctly predicted observations to the total observations.
True Positive: The number of correct predictions that the occurrence is positive.
True Negative: Number of correct predictions that the occurrence is negative.
2. F1-Score
It is the harmonic mean of precision and recall.
3. Precision And Recall
Precision is the fraction of relevant instances among the retrieved instances, while recall is the fraction of relevant instances that have been retrieved over the total number of instances. They are basically used as the measure of relevance.
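Given hypothetical confusion-matrix counts (the numbers below are illustrative, not from the cancer_data example), all of these metrics can be computed directly:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp, fn, tn = 40, 10, 5, 45

accuracy = (tp + tn) / (tp + tn + fp + fn)   # correct / total
precision = tp / (tp + fp)                   # relevant among retrieved
recall = tp / (tp + fn)                      # retrieved among relevant
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 2), round(precision, 2),
      round(recall, 3), round(f1, 3))        # → 0.85 0.8 0.889 0.842
```

Note how precision and recall tell different stories from the same counts: here the classifier retrieves mostly relevant items (precision 0.8) and misses few relevant ones (recall 0.889).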
ROC Curve
Receiver operating characteristics or ROC curve is used for visual comparison of classification models, which shows the relationship between the true positive rate and the false positive rate. The area under the ROC curve is the measure of the accuracy of the model.
Algorithm Selection
Apart from the above approach, we can follow the steps below to choose the best algorithm for the model:
Read the data
Create dependent and independent data sets based on our dependent and independent features
Split the data into training and testing sets
Train the model using different algorithms such as KNN, Decision tree, SVM, etc
Evaluate the classifier
Choose the classifier with the most accuracy.
Although it may take more time than needed to choose the best algorithm suited for your model, accuracy is the best way to go forward to make your model efficient.
Let us take a look at the MNIST data set, and we will use two different algorithms to check which one will suit the model best.
Use Case
What is MNIST?
It is a set of 70,000 small handwritten images labeled with the respective digit that they represent. Each image has 784 features; a feature simply represents a pixel's density, and each image is 28×28 pixels.
We will make a digit predictor using the MNIST dataset with the help of different classifiers.
Loading the MNIST dataset
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784')
print(mnist)
Output:
Exploring The Dataset
import matplotlib
import matplotlib.pyplot as plt
X, y = mnist['data'], mnist['target']
random_digit = X[4800]
random_digit_image = random_digit.reshape(28,28)
plt.imshow(random_digit_image, cmap=matplotlib.cm.binary, interpolation="nearest")
Output:
Splitting the data
We are using the first 6,000 entries as the training data; the dataset is as large as 70,000 entries. You can check this using the shape of X and y. So, to make our model memory efficient, we have only taken 6,000 entries as the training set and 1,000 entries as the test set.
x_train, x_test = X[:6000], X[6000:7000]
y_train, y_test = y[:6000], y[6000:7000]
Shuffling The Data
To avoid unwanted errors, we have shuffled the data using a numpy random permutation. It basically improves the efficiency of the model.
import numpy as np
shuffle_index = np.random.permutation(6000)
x_train, y_train = x_train[shuffle_index], y_train[shuffle_index]
Creating A Digit Predictor Using Logistic Regression
y_train = y_train.astype(np.int8)
y_test = y_test.astype(np.int8)
y_train_2 = (y_train==2)
y_test_2 = (y_test==2)
print(y_test_2)
Output:
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(tol=0.1)
clf.fit(x_train,y_train_2)
clf.predict([random_digit])
Output:
Cross-Validation
from sklearn.model_selection import cross_val_score

a = cross_val_score(clf, x_train, y_train_2, cv=3, scoring="accuracy")
a.mean()
Output:
Creating A Predictor Using Support Vector Machine
from sklearn import svm

cls = svm.SVC()
cls.fit(x_train, y_train_2)
cls.predict([random_digit])
Output:
Cross-Validation
a = cross_val_score(cls, x_train, y_train_2, cv=3, scoring="accuracy")
a.mean()
In the above example, we were able to make a digit predictor. Since we were predicting whether the chosen digit was a 2, we got False from both classifiers, but cross-validation shows much better accuracy with the logistic regression classifier than with the support vector machine classifier.
This brings us to the end of this article where we have learned Classification in Machine Learning. I hope you are clear with all that has been shared with you in this tutorial.
If you wish to check out more articles on the market’s most trending technologies like Python, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Data Science. | https://medium.com/edureka/classification-in-machine-learning-2402675b7817 | ['Sahiti Kappagantula'] | 2020-09-28 07:25:20.622000+00:00 | ['Machine Learning', 'Algorithms', 'Classification', 'AI', 'Data Science'] |
Grow Your Mental Resilience to Stress Less About People’s Opinions | Next door to us lives a guy who is the embodiment of happy confidence. Whenever he walks past our windows, he’s singing out loud to his headphones. His voice is mostly out of tune, but that doesn’t matter; it’s his energy that catches my eye.
Unbeatable optimism is his signature vibe. He could be out there battling a hurricane and horizontal rain and still be singing. Day by day, lockdown or not, he refuses to give in and turn miserable.
Well, I love him. I always take a moment to listen when I hear his voice coming. He makes me smile. As long as the singing guy is still singing, things cannot be that bad.
And the best thing is, he doesn’t care what other people think. He doesn’t even care when they laugh. How does he manage to put his own contentment above what others think so effortlessly?
Playing the safe game
Most of us wouldn’t do it. You and I would most likely cave in to our natural fear of being judged, even if we just had the best news of our life and are dying for a victory dance in the grocery queue.
Most of us can’t always just be ourselves because we are conscious of how others see us. And sometimes, this fear of judgment can stop us from making the progress we are seeking. If you have a beautiful singing voice but are scared to send out a sample in case others don’t like it, you miss an opportunity to be discovered. If you have a brilliant work idea and are afraid to present it because it sounds too crazy, you might be missing out on career growth.
Many people miss opportunities through similar safety behaviors. These safety behaviors allow them to avoid the discomfort zone of having their talents and ideas exposed (and inevitably, judged) by other people.
Self-doubt can affect everyone
If you, too, often get anxious about what others think, you are not alone. Self-doubt and fear of judgment affect even high achievers, celebrities, and people who otherwise come across as epitomes of confidence.
According to Forbes, the majority of smart and successful people tend to doubt their talents often. Mike Myers, who is not only an actor but a screenwriter, a comedian, and a producer, therefore clearly a multi-talented person, once said:
At any time I still expect that the no-talent police will come and arrest me.
Even the biggest names in the game can doubt the quality of their work or even their self-worth. To everyone’s surprise, Will Smith admitted in more than one interview:
I still doubt myself every single day. What people believe is my self-confidence is actually my reaction to fear.
In the Style Like U What’s Underneath interview series, hundreds of people from all backgrounds, including models and actors, agreed to open up about their vulnerabilities, body insecurities, and fears. Through their sharing of raw feelings and deeply personal stories, the project went viral overnight because millions of people could relate or felt the same.
Build resilience to care less
Caring less is not about being careless. It’s about stressing less. When you build your mental resilience, you care less about external validation because you strengthen your own sense of self in the process. You feel less exposed in situations requiring involvement. Confidence is buildable, just like a muscle.
The other day I noticed a simple but powerful equation when walking through the Leake Street tunnels. It’s an equation about Abstract Art:
Abstract Art = Erm, anyone could have done that + Yeah, but you didn’t.
It reminded me that the people who judge the most are often the ones who do the least. Perceptions are subjective. How others see the outcome is secondary to the evolution of your journey through the process. So the most important thing is to show up. For yourself.
Choose what’s more important than fear
It can be tricky, but it is possible to work with your fear. In fact, the ability to use fear and transform it into growth is what Will Smith sees as the most crucial step of his career:
Courage is not the absence of fear, but the recognition that something else is more important.
When I worked in corporate publishing, I used to get mad anxiety when giving presentations in meetings. The bigger the conference room and the more people squeezed in it, the worse I crumbled. As soon as all the piercing eyes fixated on me, my mind would blackout, I’d get hot flushes, and lose my voice. Yeah, it was bad.
Then one day, I was asked to present a new idea to the CEO and his team.
I had to choose. And I knew that this time, I couldn’t afford to choose my fear.
So I sat down and analyzed everything I felt, using the ramble technique. Then, I thought of solutions to make me feel safer. I knew I’d feel less exposed if I sat down facing the audience rather than standing up. I found a presentation buddy to back me up in case I stumbled. I made my slides super-visual to draw attention away from my face.
This new set-up was my personalized safety net. I didn’t rock the presentation, but it was ok. And for me, ok was indeed a huge victory.
When facing fear, rambling about it on paper is a great start. It works by giving you clarity. Don’t hold anything back. Once your doubts are black on white, they’ll look different. You’ll probably find it easier to detach at least a little and analyze your notes more rationally. Next, find your motivation. Ask yourself what’s more important, your fear, or a record deal? Then, see if you can create any actionable bullet points. Always start with what seems the easiest.
If you prefer diving into positive waters rather than your fears, you can start by answering this list of 20 Questions to Help Increase Your Confidence.
Mind your (inner) tone
The inner voice in our head can be harsh and unforgiving at times. But you can break through its mean noise by asking a simple question:
Would you say this to your best friend or someone you love?
Very often, we find it hard to give ourselves the same love and respect we give others. But the bottom line is, you deserve to treat yourself like you treat your best friend. And your inner voice needs to reflect that.
Instead of focusing on your imperfections, take some time to think about your perfections. And make sure to make a list too. Showing yourself the same love and kindness you have for others when they struggle is an antidote to all the external influences that seed doubts and insecurities within us daily.
“We feel that the process of self-acceptance, while it might sound flowery, is actually not; it’s a heavy-duty, important shift that has to happen.” — Elisa Goodkind, Style Like U
We all crave external approval or social validation sometimes. But these two things don’t mean much without your own inner endorsement. Acceptance, validation, and appreciation are the most profound when they first come from you.
Look for realness in others
Real life does not airbrush. Perfection is subjective. Comparing ourselves to others can cause insecurity and anxiety. Better create the version of yourself that you like the most.
As the body positivity movement proves every day, sharing our scars makes us feel connected. Imperfections tell stories. And stories empower people.
More celebrities now use their platforms to encourage sharing realness. Even royalty no longer hesitates.
I loved what Princess Eugenie did on her wedding day. She chose to wear a wedding dress with a deep open back that exposed her scar from a scoliosis spinal surgery. Without saying a word, she made a profound statement about self-love and acceptance of the marks each personal journey can leave behind.
When you start seeing beauty in the realness of others, it becomes much easier to accept and love your own. Ignore the trolls and focus on people who see you for you. They see your uniqueness and they see your beauty, inside and out, when you let them.
Whenever you feel doubtful about your talent, skills, or the way others see you, remember what Brene Brown says about vulnerability:
Finding balance in your vulnerability expands your world. Staying vulnerable and authentic is a risk we have to take to experience real connections.
And it gets better.
As you build your resilience of mind, and with practice, the things making you feel vulnerable now eventually get easier. And in doing so, they become the stepping stones to building greater confidence. | https://medium.com/publishous/grow-your-mental-resilience-to-stress-less-about-peoples-opinions-5e5c86457887 | ['Martina Doleckova'] | 2020-12-17 17:51:32.916000+00:00 | ['Self', 'Mental Health', 'Life Lessons', 'Life', 'Self Improvement'] |
KAT VON B | What we know, and totally dig about legendary digital nomad Kat Von B.
She’s a coffee sipping, wine slurping, international travel blogging hotel snob that means what she says and says what she means. She’s the only one that tells her what to do… GOALS! She’s created a mean team of creatives that support her in her quest in changing social media norms and exploring what is possible in destination marketing and beyond. Kat Von B’s motto? “Over deliver!”
Since late February Kat Von B has also been focusing on expanding the public’s awareness of SingularityNET. She’ll be working hard to expand their reach and influence.
SingularityNET has a dedicated following of well over 100,000 people. With Kat’s close to a decade of experience working with major online communities, and the huge respect SingularityNET has for their fans and token-holders across Reddit, Twitter, and Telegram — this is a match made in heaven!
SingularityNET is becoming the go-to provider of AI resources for the blockchain industry. They are receiving a large amount of demand from decentralized networks, with numerous projects requesting that they integrate their network’s to provide first-mover advantages in AI.
These guys have a super confident team that know the impact SingularityNET can have, and they want to make sure that their vision is reaching as many people as possible, especially those who may not be an expert in AI technologies. In cahoots with Kat and her rock solid ability to mobilize millions of people across global communities, even your toddler is gonna be all over this!
And whilst I’m on the girl power trip, I just wanted to remind you that SingularityNET was born from a collective will to democratize the power of AI. Sophia, the world’s most expressive robot, is one of the first use cases. For more on this, click here.
SOURCES: SINGULARITY NET BLOG | https://medium.com/visionaire/kat-von-b-3dbc6cca6903 | [] | 2018-06-03 04:16:53.394000+00:00 | ['Technology', 'Digital Nomad', 'Artificial Intelligence', 'Blockchain', 'Travel'] |
Serialization challenges with Spark and Scala — Part 2— Now for something really challenging… | Following on from the introductory post on serialization with spark, this post gets right into the thick of it with a tricky example of serialization with Spark. I highly recommend attempting to get this working yourself first, you’ll learn a lot!
Each example steps through some ways you may try to debug the problems, eventually resulting in a working solution. Once again the code samples can be found on ONZO’s Github, and the numbering on this article should match up with the code there :)
9 — base example
**FAILS**
Now for some practice! This example is relatively complex and needs a few changes to work successfully. Can you figure out what they are? Kudos if so! The next few examples walk through a solution step by step, and some things you may try.
10 — make classes serializable
**FAILS**
One approach to serialization issues can be to make everything Serializable. However in this case you will find it doesn’t solve the issue. You’ll find it easier (but not that easy..!) to spot why if you look at the complete examples. It’s because when trying to serialize the classes it will find references to testRdd and also the shouldBe method. This will trigger serialization of the test class (you can see the full code in Github) that contains these, and the test class is not serializable.
11a — use anon function
**FAILS**
In order to debug this you might try simplifying things by replacing the WithFunction class with a simple anonymous function. However in this case we still have a failure, can you spot the issue now?
11b — use anon function, with enclosing
**PASSES**
Did you spot it? By enclosing the reduceInts method the map function can now access everything it needs in that one closure, no need to serialize the other classes!
12a — use function with def
**FAILS**
Taking small steps, we now replace the anonymous function with a function declared with a def. Again you will find this fails, but seeing why isn't easy. It is because of the intricacies of how def works. Essentially a method defined with def contains an implicit reference to this, which in this case is an object
which can't be serialized. You can find out more about the differences between def and val here.
12b — use function with val
**PASSES**
Declaring the method with val works. A val method equates to a Function1 object, which is serializable, and doesn’t contain an implicit reference to this , stopping the attempted serialization of the Example object.
12c — use function with val explained part 1
**FAILS**
This example serves to illustrate the point more clearly. Here the addOne function references the one value. As we saw earlier this will cause the whole Example object to be serialized, which will fail.
**BONUS POINTS**
One helpful experiment to try here is to resolve this by making the Example object serializable.
You will note that you still get a serialization error. Can you see why? There are actually 2 reasons:
1) testRdd is referenced inside the WithSparkMap class, leading to the whole Spec trying to be serialized (please see Github link for full code which will explain this more!)
2) The shouldBe method is also referenced, again leading to the whole Spec trying to be serialized
12d — use function with val explained part 2
**PASSES**
As above, the best way to fix the issue is to reference values only in the more immediate scope. Here we have added oneEnc, which prevents the serialization of the whole Example object.
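A sketch of the oneEnc fix (a reconstruction; the key point is that the copy must be a local value, so the closure captures only an Int):

```scala
import org.apache.spark.rdd.RDD

// Hypothetical reconstruction of example 12d (PASSES) — not the original gist.
object Example {
  val one = 1

  def myFunc(rdd: RDD[Int]): Int = {
    val oneEnc = one                        // local copy: the "enclosed" value
    rdd.map(n => n + oneEnc).collect().toList.sum // closure captures an Int,
  }                                         // not the Example object
}
```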
13 — back to the problem, no class params
**PASSES**
Coming back to the problem we originally had, now that we understand a little more, let's reintroduce our WithFunction class. To simplify things we've taken out the constructor parameter here. We're also using a val for the function rather than a def. No serialization issues now!
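Sketched as a reconstruction (plusOne and WithFunction come from the article; the caller is assumed):

```scala
import org.apache.spark.rdd.RDD

// Hypothetical reconstruction of example 13 (PASSES) — not the original gist.
class WithFunction {                        // no constructor parameters
  val plusOne: Int => Int = n => n + 1      // captures nothing from the class
}

object Example {
  def myFunc(rdd: RDD[Int]): Int =
    rdd.map(new WithFunction().plusOne).collect().toList.sum
}
```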
14 — back to the problem, with class params
**FAILS**
We’ve now added the class params back in. Can you spot why this fails? The plusOne function references num, which sits outside the immediate scope, again causing more objects to be serialized, which is what fails.
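The failing variant, sketched under the same assumptions:

```scala
import org.apache.spark.rdd.RDD

// Hypothetical reconstruction of example 14 (FAILS) — not the original gist.
class WithFunction(num: Int) {
  // `num` is reached through `this`, so serializing plusOne means serializing
  // the whole WithFunction instance (and anything it references).
  val plusOne: Int => Int = n => n + num
}

object Example {
  def myFunc(rdd: RDD[Int]): Int =
    rdd.map(new WithFunction(1).plusOne).collect().toList.sum
}
```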
15a — back to the problem, with class params, and enclosing
**PASSES**
This is now a simple fix, and we can enclose the num value with encNum which resolves the last of our serialization issues. Finally, this is a complete working example that is equivalent to our first implementation that failed!
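The encNum fix, again as an assumed reconstruction rather than the author’s exact gist:

```scala
// Hypothetical reconstruction of example 15a (PASSES) — not the original gist.
class WithFunction(num: Int) {
  val plusOne: Int => Int = {
    val encNum = num   // copied eagerly, once, while still on the driver
    n => n + encNum    // the shipped closure captures only this local Int
  }
}
```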
15b — adding some complexity (as in 7b) — testing understanding
**FAILS**
One more failing example! Can you see why the above fails?
The issue is that encNum won’t be evaluated until plusOne is actually called, effectively inside the map function. At that point the num value needs to be accessed, causing additional serialization of the containing object, and hence the failure here.
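One way the failing variant might look (a reconstruction): moving the copy inside the function body defeats the whole trick, because it is no longer evaluated on the driver.

```scala
// Hypothetical reconstruction of example 15b (FAILS) — not the original gist.
class WithFunction(num: Int) {
  val plusOne: Int => Int = { n =>
    val encNum = num   // only evaluated when plusOne runs, inside map,
    n + encNum         // so `num` (and therefore `this`) must be shipped
  }
}
```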
Conclusion
Hopefully these examples have made it a little clearer how serialization of functions works with Spark and Scala. Good luck with your Spark serialization challenges!

By Tim Gent, 2018-09-19. Tags: Apache Spark, Spark, Big Data, Scala.
Source: https://medium.com/onzo-tech/serialization-challenges-with-spark-and-scala-part-2-now-for-something-really-challenging-bd0f391bd142
The Three Pillars of Happiness

Teaching cheerfully until his painful death in 270 BC, Epicurus contributed enormously to the school of philosophy during his lifetime, his legacy surviving for thousands of years.
Epicurus lived in Athens with his closest friends and spent his days trying to solve the perennial puzzle that troubles us all: happiness. While most philosophers contemplated at length what it means to be good, Epicurus instead aimed to uncover the key principles of contentment.
Naturally, his early works attracted severe criticism from other scholars. Surrendering more intellectual pursuits in favour of searching for happiness, peers ridiculed Epicurus in the beginning, labelling him as a pleasure-hungry, pseudo-philosophical hedonist.
Rumours even circulated claiming that Epicurus would engorge himself with lavish ten-course feasts every evening, others insisting that he frequently partook in orgies with several women at a time.
Meanwhile, poor Epicurus lived modestly out in the countryside. His diet consisted of little more than bread, olives and an occasional slice of cheese as a treat whilst he studied happiness from his humble home and garden in Athens.
Teaching passionately until the very end of his life, Epicurus spent his days hashing out a wealth of thought-provoking material which would be quoted for many years after his death.
One of his most famous sayings summarises the core principle behind his teachings:
“Do not spoil what you have by desiring what you have not; remember that what you now have was once among the things you only hoped for.”
This is but one of Epicurus’s maxims for joy. He proposed that we all make three mistakes when searching for happiness, and it is the corresponding solutions to these mistakes that I’ll be discussing in this article.
1. Cultivate True Friendships
In contrast to the false stories attached to his name, Epicurus wasn’t interested in sex or romance. He argued that our obsession with romantic relationships is, in fact, the cause for a lot of misery.
Instead, Epicurus suggested that true, meaningful friendships are fundamental to our happiness.
Friendships are not marred by the same bitterness, jealousy and resentment that romantic relationships often are. Therefore, instead of searching tirelessly for lovers or sex, we should spend our time cultivating positive relationships with our friends.
The only issue with friendships, he stated, is that we just don’t see our friends enough. Life gets in the way, and often we neglect those dearest to us in favour of other pursuits.
In addition to this, many of us are reluctant to open ourselves fully to our friends because we fear that they can’t be trusted or that we’ll be met with rejection.
We may have many friends and acquaintances, but not all are held in equal regard where trust and openness are concerned.
While Epicurus didn’t explicitly discuss which characteristics denote true friendship, Stoic philosopher Seneca later revisited Epicurean philosophy and set out some guidelines detailing the criteria of positive and meaningful relationships.
Seneca held that true friends should inspire us to improve and become happier. They should have our best interests at heart.
Our best friends shouldn’t just reflect our interests but also our values. Cheaters, liars and fakes are all being driven by vices which, Seneca suggests, will only impact us negatively should we admit such people to our friendship. It is better to commit to friendships that uplift and enlighten us.
Seneca also advises that, when we ultimately decide that a person should be accepted into our lives, we should welcome them wholeheartedly and trust them fully.
As he writes in letters to his friend Lucilius,
‘Ponder for a long time whether you shall admit a given person to your friendship; but when you have decided to admit him, welcome him with all your heart and soul. Speak as boldly with him as with yourself… Regard him as loyal and you will make him loyal.’
Such relationships will enable us to lead better, more peaceful lives. That’s the first pillar of happiness.
2. Produce Meaningful Work
The next thing that many of us feel we need in order to be happy is wealth.
Though Epicurus’s ideas were formulated almost two-thousand years ago, today we are more motivated by money than ever before.
So much so, in fact, that the majority of us spend our entire lives working hard in the hopes that someday we will have enough money to buy an expensive house and retire early.
Our obsession with earning money wills us to work tirelessly, driving ourselves to exhaustion and causing us tremendous amounts of stress and unhappiness.
Epicurus argues that the key to satisfaction in our working lives isn’t earning a lot of money, but the knowledge that we’re producing meaningful work.
We all long to feel that we’re making a difference. Deep down, we don’t care about large sums or job titles, but the feeling that we’re playing our part in making the world a better place for other people.
In order to live happily, it’s critical that we love our work. After all, it’s this work that comprises a large chunk of our time. It only makes sense that we do the things we enjoy, and few things deliver as much joy as the knowledge that we’re helping others.
Instead of slaving from nine until five every day in a job that you hate, seek to discover how you can provide meaning and support others.
In the words of Charles Dickens,
“No one is useless in this world who lightens the burdens of another.”
3. Learn to Live Happily With Less
Lastly, Epicurus considered our fixation with desire.
We seek to fill gaps in our lives by chasing after our wills and drives, such as striving to earn more money, get in shape or find a romantic partner. We endlessly follow our desires in the hope that, by achieving them, we will finally feel happy.
But there’s a catch: pursuing these desires only delays the arrival of peace.
By chasing mindless pleasure, we’re missing the mark completely. Epicurus argues that our longing for luxury, status and pleasure conceal a deeper hunger for satisfaction.
In one of the earliest wellbeing experiments to date, Epicurus relinquished his pursuit of desire and instead made three fundamental changes in his life. He sought to measure how these changes affected both his and his followers’ levels of happiness.
1. Surrounding himself with good people: We don’t need sex or wealth to feel happy — just our friends. It’s no good seeing them every now and then. Regularity of contact is crucial. Epicurus was so convinced that friendship was the key to happiness, in fact, that he purchased a large house in Athens and moved in with many of his dearest companions.
2. Pursuing the work he loved: Epicurus and his friends took big pay cuts in exchange for free time to produce their own work. Living together, he and his friends wrote, practiced pottery and cooked. They lived happily together, prioritising meaning over wealth.
3. Finding peace of mind: In their shared household, Epicurus and his friends spent their spare days seeking calm. They meditated, spent time alone reflecting and wrote in journals. These practices were hugely successful in helping them find peace of mind.
A Happiness Revolution
After following all three of these principles for some time, the Epicurean household became so happy with their lives that word spread like wildfire.
Neighbouring communities couldn’t believe the success of Epicurus’s commune. Epicurean schools began to open all over the Mediterranean, as people were inspired to learn how to find contentment using the once-ridiculed philosopher’s practices.
The reach of Epicurean philosophy began to spread far beyond his commune. His influence was enormous, making vast contributions not only to philosophy, but to religion and politics, too.
Centuries later, Karl Marx would produce a Ph.D thesis about Epicurean philosophy. Communism is, after all, merely a misguided and failed version of Epicureanism.
Epicurus’s works are still quoted today, shaping the landscape of the modern world of self-improvement. His ideas hold more value now than ever.
At its core, Epicurean philosophy is founded upon one key principle. Happiness cannot be found in the material, but only by living modestly and meaningfully.
This practice can benefit every single one of our lives — even if only a little.
The Takeaway
Epicurus sought to teach others how to live happily for one reason: nobody seemed to know how. Many of us are still puzzled by the problem of happiness.
We think we know that sex, money and luxury are the solutions to our misery — but they are not. These things only provide fleeting pleasure that fails to produce any long-lasting changes to our wellbeing.
Instead, Epicurus advises that we reflect on the moments that bring us true happiness. We should pay close attention to small, wonderful things that populate our daily lives and cultivate gratitude for all that we have, surrendering our desire for more.
We should make time to spend with friends and ensure that, as Seneca advocates, these friends help us to improve and grow. It is these friends that provide our lives with joy and meaning.
We should forego our strenuous jobs and long hours in favour of meaningful and inspiring work that serves to make the world a better place.
And lastly, we should seek peace of mind in our spare time. Through meditation and learning to live happily in the present moment, we will no longer crave luxury or wealth, content with what we have already.
Through exercising these practices, we may all learn to live happily — even if it means spending our days living out in the country, eating cheese and reading philosophy books with a couple of good friends down the hallway.
Before You Leave
I run a daily newsletter, The Daily Grind, where I send out tips to creatives and entrepreneurs about success, wellness and honing their craft.
If that sounds like you, follow this link to sign up.

By Adrian Drew, 2019-06-12. Tags: Philosophy, Happiness, Life, Mental Health, Self Improvement.
Source: https://medium.com/mind-cafe/the-three-pillars-of-happiness-81cc21c0333d
My Top 30 People to Know in Startups Around Phoenix

How more people don’t already know Adam is beyond me. As a serial entrepreneur behind a number of cool initiatives, even having sold one of his startups to Entrepreneur Media itself, I’ve learned a lot from him. He and his brother, Matthew, are always up to something excellent together, usually juggling multiple startups at once. But more importantly, Adam helps a ton of local founders however he best can, supporting them as an incredible advisor and ultimate connector.
Both an entrepreneur and entrepreneur servant, Brandon has been loyally working to help build the ecosystem through a variety of efforts, including his pivotal role in the formation of the StartupAZ Foundation. He has tirelessly championed the Phoenix community in a way that has stayed true to the spirit of generosity that we have claimed. Most importantly, he’s someone that you cannot help but want to be around, as authentic and humble as they come.
These two are not a couple or anything, but they sure do make a killer team. Serving the #yesphx community by managing its social media presence, they have upped our game and made our community more engaged and connected as a result. They are passionate about Phoenix, having helped PHX Startup Week, Social Media Club Phoenix, House of Genius, and Social Media Day in multiple capacities. And that’s just the tip of the iceberg, as they are always volunteering to serve their city in some shape or form.
If you haven’t yet heard of Shelvspace, you will. It’s on a solid trajectory to be a big winner in Phoenix. Though Dave is quietly and consistently growing its mark in the industry, he has been serving startups and founders for well over a decade in Phoenix. What he is building is significant, but more importantly, I want him to be known because he is a significant individual with one of the best heads on his shoulders of any local leader.
If you don’t know Ed yet, I’m sure you’ve seen him around, whether that’s chowing down at Li’l Miss BBQ, or enjoying good company one of our many local startup events. He’s the one who, when he smiles, the room lights up — and he’s always smiling! He has a vast depth of experience and he’s always willing to show up and help out, whatever the need.
Greg has been writing about and promoting Phoenix from a variety of roles ever since he served at CEI Gateway. Now at TheraSpecs, he has only picked up the pace, continuing to serve by helping grow his skills and help many other inbound marketers across the valley. He may also have some mad lip-sync skills, but you didn’t hear that from me.
Jerrod has been helping startups on their sales and technology game for a while now, having served in a key role at Tallwave. But now he’s helped co-found his own startup, ClickIPO, which is off to the races. Whether he’s helping startups hone and hack their way to success, or do it on his own, he’s one to know.
Whether he’s helping make things happen for PHX Startup Week, hosting his own events, or helping others as they work on their startups and themselves, JP is always there to lend a hand and make things happen. Follow the emoji apps and apparel and you’ll find him somewhere in the community.
Now, granted, Kate is more famous than anyone else on this list, since she was the name behind our community’s relatively recent New York Times profile. But she should be, both her experience and commitment warrant the attention of all. More importantly, it is her spirit that people will find the most refreshing thing about her, having donated of herself to serve the community in a variety of ways. As a transplant from Silicon Valley, the Valley of the Sun is very glad to have her.
Kim is one of those people that, as soon as you sit down with her, you’re at ease. She’s served the city through a variety of means, including at ASU E+I and as a volunteer at House of Genius. Now she’s serving the larger entrepreneur community in a role that directly helps our small business owners and economy at the BBB. Get to know her and watch her go!
Kristin’s kindness proceeds herself. As a local non-profit, entrepreneur, and refugee advocate, she has fought tirelessly to help propel others forward, whether that was helping grow initiatives to serve female entrepreneurs, Code Day Phoenix, or one of her many other projects. There are few people who have been as consistent in their service of others as Kristin and we’re all better for having her in our midst.
There are few people working harder than Kristin to support entrepreneurs. She’s been doing that tremendously well by helping female-led businesses in particular, both through Empowered PhXX and launching Month Month AZ, always fighting for 50/50 equality in business ownership. She deserves our applause and support.
I’ve quite enjoyed watching Kunal as he has gone from Silicon Valley transplant to ASU student to active Phoenix advocate and supporter. Whether he’s helping students plug in to the community or supporting new community members at Galvanize, he is always cheerfully and faithfully helping the community press forward.
I first met Kyle when he was stationed at DeskHub. He helped me move items for PHX Startup Week’s first year, and we’ve been friends ever since. He is always up to something, making things happen for the community. He is a true community servant and we are richer with him in it. Plus, what would we do without that contagious, boisterous laugh of his?
Liz is crushing it. As the CEO of High Rock Accounting, not only is she steadily growing an incredible, technology-enabled startup right here in Phoenix, she’s also helping other startup founders with their own books. Moreover, she’s always seeking to help others and look for ways to help others press forward. While her firm is watching startups’ bottom-lines, the bottom-line for me is this: she rocks.
The amount of work Mallory does behind the scenes, whether it was previously serving as Daniel Valenzuela’s chief of staff, or now in her role for the Startup AZ Foundation, she is instrumental in making things happen for our city. I am so grateful for her contributions, I find it difficult to express, but those who know her already know — she would do anything for Phoenix. We are lucky to have her.
I love Mat deeply. As much as he may have gained a reputation for being loud about Phoenix getting its act together, I know firsthand that he does this from a heart of deep care for the city and its entrepreneurs. He’s all about startups. All. A. Bout. Them. More importantly, if you need help, he’ll be there to give it, whatever the personal cost to him. We are all richer with him in our city, and I look forward to watching as he grows PubLoft.
Matt is my partner in crime, the better half of my community efforts. Without him, there would be no PHX Startup Week and there would be a far less polished and organized effort around #yesphx. Matt is the #MVPofGSD for all things community-related. More importantly, he’s a true friend and someone that looks to selflessly serve his city in all ways he can.
While Coplex is a name many already know, Michael is one of the guys behind the brand that is making things happen not only for the startup studio, but for the startups they serve. Without Michael, there would not be a One Million Cups currently operating in Phoenix, nor would there be such a beautiful #yesphx website, which he personally helps project manage along with Matthew “G” Giacomazzo and others. The community is thankful.
While not a startup founder, per se, Noah is most definitely an entrepreneur, having helped grow Keyser’s brand within the entrepreneur community and served many of our most notable startups, always with a smile. He’s also one of the most enjoyable personalities to hang out with in town (he’s an ENFP, folks). He lives out the principles of generosity in how he serves everyone. Noah is the real deal.
Paige makes things happen. One of the key ingredients to PHX Startup Week’s first and second year success, she has gone on to travel the world, eventually landing herself back in Phoenix. I am also grateful to have had the opportunity to work alongside her as a co-founder in my last startup. If there’s an event that’s happening in Phoenix for the startup community, she’s probably had a hand in helping it happen one way or another. It wouldn’t be the same without her.
Running his own visual media company, Beeing, Patrik has committed himself to scaling that company right here in Phoenix. But he’s gone far beyond that, as well, having donated a great amount of his time to multiple efforts in capturing that very startup community and its inhabitants, including his personal work on the latest #yesphx video. If you need a creative storyteller, he’s your man.
Paul is one of the most intentional people I have ever met. He also happens to be one of the most intelligent. When I met him, he was determined to make the jump from the world of finance and hedge funds into helping lead product-oriented teams — and he did exactly that. Also, he has helped spearhead the effort to boost our local skills by launching the Product Tank meetup, where Phoenix product managers are given the opportunity to exchange ideas and experiences about design, development and management, business modeling, metrics, UX, and more. In short, Paul gets it.
There are no words grand enough to adequately describe my friend, Raoul. He is a bright shining light of Phoenix, radiant from the inside out. The way he extends his help to everyone within the community reaches well beyond the realm of reason, always seeking to support, coach, and encourage others. He has also had a hand in helping run massive pitch competitions like the first ever #StreetPitch held at PHX Startup Week. Do yourself a favor and meet him asap, as he has some exciting things about to launch.
You may have recently heard of the Arizona Founders Fund. But what you probably haven’t heard is the story behind that seed fund. Romi has been steadily and strategically working toward launching the investment vehicle over years, having committed himself to making that happen for our market at great personal cost to him. And finally, with the support of others within the community he did it. Our ecosystem is already better for it.
I haven’t known Stacie as long as some of the others on this list, but I’m so grateful to have met her. From her time serving as the CEO of Videoloco to her recent role as the Chief People Officer for super startup, CampusLogic, she has dove right into the community to serve wholeheartedly. She is all about creating the type of culture we need in Phoenix and putting her oomph behind every effort to do so, whether helping other leaders form those cultures, or forming them herself.
I would be remiss not to mention
While it may be “cheating”, there are a few last groups of people who deserve some honorable mention, including…
People are regularly thanking and congratulating me for #yesphx. I have so very little to do with the effort any longer other than serving as one of its myriad champions that such pats on the back feel altogether silly. Don’t believe me? Check out the answer to this frequently asked question. While I have already named some of these same individuals above, there are still others who have generously given of their time to make things happen behind the scenes. They each deserve our thanks for their contributions to helping build the Phoenix scene.
ASU has made a very concerted effort over the last couple years to better engage with the community and build bridges between the university and larger startup community. That is especially thanks to the leadership of Ji Mi Choi within E+I, but it is no less thanks to the support of Tracy Lea, Brent Sebold, and others both past and presently serving its students and faculty.
Meetups are a critical part of any community, and that is no less true in Phoenix. I wish to recognize the effort it demands and the growing number of meetups around town that have served as a foot-in-the-door for so many of our community’s now active members. Keep on keeping on!
All those Phoenix lawyers and firms
While startup folk don’t first think of or thank their lawyers, what would we do without them? Our community has been tirelessly supported by the event sponsorship and critical counsel of partners at firms like Osborn Maledon, Hool Coury Law, Weiss Brown, Polsinelli, Fennemore Craig, DLA Piper, and others.

By Jonathan Cottrell, 2017-08-29. Tags: Entrepreneur, Community, Lists, Phoenix, Startup.
Source: https://medium.com/yesphx-stories/my-top-30-people-to-know-in-startups-around-phoenix-e347e09c6e78
Queer Lives in Fiction

The past couple of weeks have been super tough for LGBTQ people around the world. While we’ve made incredible progress towards full equality and acceptance in many regions of the globe, troubling signs are emerging.
The United States is awash in waves of fascism. Violent anti-LGBTQ attacks are on the rise, and Trump supporters everywhere are protesting our equality. The FIFA World Cup is being held right now in Russia, where violent anti-LGBTQ pogroms are taking place in some regions and where LGBTQ people everywhere are severely persecuted.
The world shrugs in the face of the ongoing brutality. Almost nobody is protesting. The games go on as if gay men were not being hunted down, interned in concentration camps, tortured, and killed.
At Th-Ink Queerly, we’re writing a lot about current events and activism, but we also want to pour even more oil on our fiction campaign. We want to set a bonfire ablaze with our stories that examine and celebrate the lives of queer people.
We want to encourage one another and to share our lives with those who may lack understanding. This past week showcases some of our goals very well.
Let me show you.
First off, celebrated Canadian cartoonist Sean Stephane Martin graced our pages with a two-part short story — illustrated with his own art, of course. Sean’s Doc and Raider, by the way, is the longest-running LGBTQ comic strip in history.
In this narrative adaptation of his work, Sean takes us back to the early days, when his “boys” lived in a tiny studio in Montreal’s Plateau district. When they move in, it doesn’t take them (or their perceptive pussy cat) very long to figure out that they’re sharing their living space with somebody who might just belong to the Twilight Zone.
Sean pens a ghost story that turns surprisingly poignant. I found myself wiping away a tear toward the end. He illustrates not only the ordinary lives and loves of two gay men, but the tragedy of life that often surrounds us without our knowledge.
He’d already started to fade in the night air when suddenly he turned back and looked at me, his face part happiness and yet part fear. “You will… remember me, oui? That would be… nice.” I turned and hugged my man as tight as possible. “C’mon, let’s go inside.”
Alex David Bevan shifted gears this past week when he posted Chapter Two of his Colossus series. While this chapter is part of a longer work, a sort of gay Romeo and Juliet novella, it stands perfectly well on its own as a short story.
Do you think it’s easy in today’s world for young LGBTQ people to overturn societal expectations and come out to their family and friends? Yes, sometimes it is relatively painless — for some lucky people. For many, though, the experience of growing up queer is traumatizing and extraordinarily difficult.
Alex invites you to join a young, closeted gay man at a family dinner in the American heartland. Come and have a look at how hard things can really be, and at how brave queer young people have to be.
In that moment of terror, Caleb first considered suicide as an option. It was not something that had ever crossed his mind, but now it was front and center. In that moment he did not plan on living beyond the annual Johnson Labor Day party. He decided he was strong enough to never look his younger brother in the eyes again. It would hurt less. Caleb smiled. “Who wants coffee?” he asked, wondering how best to choke down the cherry pie that the Burbidges had baked for dessert.
Speaking of bravery, what would you do if your whole life was a lie — not just to the world around you, but to yourself? What would you do if the world were just about to discover your secret? What would you do if your life were about to shatter into a million splintery fragments?
What would do if you were the pastor of a conservative Christian megachurch, and a newspaper reporter had just discovered you having chemsex with a young street hustler in a seedy motel?
Brian Pelletier invites you to sit down and examine the life of a man in exactly that position. He’s penning a tale not of heroes and villains, but of humans in all their truth, frailty, and strength. He invites you to examine Monsters in the Closet.
Gene glances around the room and sees no trace of Jackson. He must have left while Gene was unconscious. He feels trapped and out of control without him. No one is on his side now. He doesn’t want to meet with her and face the truth, but he can’t think of any other plan. And Hank! What will Hank do?
In many places, being queer IS ordinary, or at least maybe it feels that way to straight and cis people who take our acceptance for granted. But what happens when allies presume too much? What happens when they see us as symbols of their own progressive values? What happens when they see our labels stand out in more vivid relief than our humanity and individuality?
Gloria Bates and BFoundAPen are collaborating on a story set in a college campus so progressive that the PR staff feels free to highlight queer students on its web site — without even asking first. Queer Bash, an LGBTQ mixer, is a popular beginning-of-term party with students all over campus.
So, why does at least one young LGBTQ student feel uneasy with all the attention?
We’re going to find out in With Our Bare Hands. Come and take a peek inside a contemporary American campus!
“Right, sure, whatever. Well, I’m sure you know the school already has your face plastered all over their website. That’s how everybody knows your face.” “Hmmph,” I scowled. “Can they do that?” “It makes them seem diverse. Good for admissions, plus they don’t actually have to do any work to be more inclusive. It’s best not to think about it too much.” He furrowed his brow, thinking for a moment. “Anyway, hey, I’m Amari! I trust I’ll see you at our GSA meetings, right?”
My own series, David and the Lion’s Den (James Finn) is beginning to wind up to a conclusion. Join me and my cast of only-in-New-York characters as we travel back in time to the Greenwich Village of 1989, when HIV was a death sentence, and hope had not yet managed to raise its head.
Watch life and love march inexorably forward even in the face of tragedy. I have some Moonlit Kisses in store for you, as our mysterious Colombian boy grows into a young man and falls for a young woman. His life may finally be getting back on track.
In Chapter 20, David apologizes to Richard, who finally shares his story with us. How did a retired Madison Avenue ad man become Manhattan’s oldest working dominatrix transvestite sex worker? And why? Richard counsels David to accept his fear and to embrace a sense of the ridiculous. Living life, Richard claims, is more important than understanding it.
He got his kiss that night. She tasted of the bubblegum ice cream he treated her to. Her breasts were warm, soft, and plump in his hands. Her hair smelled of flowers as her heartbeat merged with his.
“Nothing. I felt absolutely nothing. I just watched from outside. I watched my friends fall in love, fall out of love, cheat on their wives, trade them in for younger models, then get tired of them too.” “And you didn’t ever cheat?” “Of course I did! My whole life was one big paint-by-the-numbers cheat. I cheated every day of my life just by pretending to be me. Sometimes, rarely, I’d take an extra long lunch and cheat for real. Cruise some rough trade in Central Park. Ten minutes of being me.”
That wraps it up for this week, folks. I hope some of you choose to use part of your weekend to enjoy some story-telling, to think about real queer lives, and about what it really means to be human.
And remember,
When you think fiction, Th-Ink Queerly! | https://medium.com/th-ink/queer-lives-in-fiction-d84eaa9a448 | ['James Finn'] | 2018-07-11 23:52:13.201000+00:00 | ['LGBTQ', 'Love', 'Life', 'Storytelling'] |
The Dark Side of Perfectionism | The Dark Side of Perfectionism
That high achiever you wish you could emulate? Don’t.
Image by Lukas Bieri on Pixabay
I’m a perfectionist. And an over-achiever. And a person on the outside who looks at times like “Wonder Woman.”
But the reality is much darker.
I get up each morning at three a.m. to write because it’s the only time I have to do so as a mom of two. I stress about each carbohydrate I put in my mouth, obsess over each time I am weak and choose to put sugar in my creamer instead of drinking it black, and feel guilt over each lesson plan that doesn’t have the “pop” and “pow” that will keep my students happy and engaged.
And these worries have brought me a fair amount of things of which to be proud.
For example, I am seeing growth in my goal of becoming a better writer. I remain at the same weight I was in high school, which is something that few of my middle-aged friends can say, and I am a well-loved and respected teacher.
Sounds like a great life, right?
Well, at least the part about being so successful.
But what most people around me who say “I wish I had that discipline” or “How do you manage to do it all?” don’t know is that, each morning when I set my Google Maps for success, I also take medications. For anxiety. For depression. For Obsessive-Compulsive Disorder.
In short, I’m sick. And sometimes I feel this drive for perfection is “killing me softly.”
And I frequently wonder which came first — the mental illness or the perfectionism?
I’m not sure. But I do know one thing: these two “diseases” feed off of each other. They are, in fact, the best of friends.
When Perfectionism gets tired, Anxiety helps him back to his feet. When Anxiety is feeling down, Perfectionism comforts him and urges him to get back up.
And their “ganging up” on me is draining me of my happiness. My sense of peace. My physical well-being.
And I know without a doubt, I’m not alone.
So here it is, the real truth about what it means to live with these two “bullies.”
Image by Elias Sch. on Pixabay
There is no such thing as feeling satisfied.
A Psychology Today article entitled “Pitfalls of Perfectionism” cites psychology instructor and author Miriam Adderholdt, who explains the difference between a healthy drive for excellence and the dangerous compulsion that is characteristic of perfectionists. She states that “excellence involves enjoying what you’re doing, feeling good about what you’ve learned, and developing confidence. Perfection involves…finding mistakes no matter how well you’re doing.”
Case in point from my own life.
I recently took classes to attain my English as a Second Language certification. The final step was to take the Praxis exam and pass it. The highest score you could achieve was a two-hundred. I made a 199. And as much joy as I felt in the high score, I could never quite let go of the missing one point.
Crazy, isn’t it? But that’s what it means to be a perfectionist.
For example, have you ever heard of the 80/20 principle in dieting? It involves eating healthily 80 percent of the time and 20 percent of the time allowing yourself “indulgences” in foods that may not be beneficial to weight loss.
Perfectionism involves holding oneself to the “100% principle,” and any weakness that deviates from this goal is reason for feelings of misery and guilt.
And even though we perfectionists know that this goal is simply unattainable, we still hold ourselves to this impossible standard. And when the failures occur, as they inevitably will, we mope for a bit, and then move to the only solution we see as viable: more work.
Image by Free-Photos on Pixabay
There is no rest.
American cartoonist Jack Kirby says it best when he says that “perfectionists are their own devils.”
And if this is true, the “devil” of perfectionism allows “no rest for the wicked” who fall under its exhausting work demands.
This “lust” for perfection has a punishment akin to Dante’s poem The Divine Comedy, where the lustful are subjected to violent winds which incessantly push them back and propel them forward in a cycle of torment from which they can never achieve rest or peace.
In the article “8 Signs You’re a Perfectionist (and Why It’s Toxic to Your Mental Health),” esteemed psychologists and researchers Paul Hewitt and Gordon Flett state the truth that every sufferer of perfectionism knows: that perfectionists “tend to feel that the better they do, the better they are expected to do.”
And this translates into a never-ending cycle of labor, because once a perfectionist reaches success at one level, his or her immediate response is how to achieve success at the next level.
Take well-known perfectionist and musical artist Kanye West.
A Forbes article on the singer recounts his almost fatal perfectionism in his attempts to make sure his debut album College Dropout was finished in a timely manner. During the making of this album the singer was involved in a serious car accident. West fell asleep at the wheel, which was most likely due to “spending nearly every waking hour in the studio.” As a result of the accident, West’s jaw was wired shut, and yet he still rapped out (literally and metaphorically) his hit “Through the Wire,” which was inspired by the event itself.
West’s actions are typical of a perfectionist, where each minute wasted leads to a heightened anxiety that one may be stepping closer into the quicksands of failure.
And so we perfectionists push on. And on. Past exhaustion. Past pain. Past every “rest stop” on our quest for the illusory destination of perfection.
And the quest to move forward never ends, thus causing a mountain of physical and emotional problems. In a Grazia article entitled “Is Your Perfectionism Affecting Your Mental Health,” clinical psychologist Dr. Nihara Krause states the connection between perfectionism and mental health issues, explaining that “perfectionism and eating disorders go hand in hand, as well as conditions like obsessive-compulsive disorder, depression and anxiety.”
Even more disturbing is a finding reported by Medical News Today, which cites a study that found “over half of people who died by suicide were described by their loved ones as perfectionists.”
Photo by Steve Johnson on Unsplash
The bottom line:
Even though perfectionism can lead to high achievement and esteem in our society, the psychological suffering and mental health issues that result are rarely worth the corresponding accomplishments. This growing phenomenon is leading to a plethora of unhappy humans who can never enjoy the simple pleasures in life and truly be “in the moment” to enjoy the blessings that surround them: the love and happiness brought on by doing “unproductive” yet fun-filled things with family and friends, the emotional release that accompanies pursuing pleasures for pleasure’s sake alone, and deeper intimacy with loved ones and friends that comes from wonderful bouts of laughter, leisure and even a bit of laziness.
So sufferers like me would do well to seek help or at least ask themselves if this lifestyle is truly one in which they are willing to risk all to “make a name for themselves.” If unchecked, this “disease” may lead to more than a prestigious title written beside their name on the office door or a fleeting mention of their name along with their success on the front page of a newspaper. They may find, in fact, that the name they make is one written on their own tombstone.
If you enjoyed this article, please read more from me by signing up here. | https://medium.com/the-ascent/the-dark-side-of-perfectionism-edfe80e7c9b9 | ['Dawn Bevier'] | 2020-01-21 13:21:01.313000+00:00 | ['Mental Health', 'Mental Illness', 'Perfectionism', 'Self', 'Self Improvement'] |
We Have to Define What Edge Computing Is | We Have to Define What Edge Computing Is
Edge Computing promises millions of dollars of revenue. However, we do not yet have a definition of what it is.
Photo by Edurne Chopeitia on Unsplash
The first time I heard about edge computing was back in 2015. Since then, I have been working for startups to enable distributed data-driven solutions for the edge. It looks like everybody (or almost everybody) is aware of what edge computing is. However, all this time I have been working in a technological paradigm without a clear definition statement.
At the moment of writing this post, there is no clear definition of what the edge is. These are some definitions you can find online. Some of them are the ones used in the Wikipedia entry for edge computing.
all computing outside the cloud happening at the edge of the network, and more specifically in applications where real-time processing of data is required
Karim Arabi
Your mobile phone and all your wearables are the edge according to this definition.
anything that’s not a traditional data center could be the ‘edge’ to somebody
ETSI
This may include elements such as server proxies.
the edge node is mostly one or two hops away from the mobile client to meet the response time constraints for real-time games
Gamelets — Multiplayer mobile games with distributed micro-clouds
In this case, the edge is not the final user device. It is something between the cloud and the user.
computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work
Paul Miller
This definition points out the idea of data proximity.
Well, I have to say that all these definitions are correct. Why? Because the edge is so abstract that it admits almost any definition. The edge is so vaguely defined that it becomes something blurry and difficult to demarcate in any architectural design. Also, we have the cloud and the fog. What does the edge have to do with the cloud? And the fog?
Why all Writers Can’t Be Content Creators | Why all Writers Can’t Be Content Creators
Do you have what it takes to create effective online content? Not every writer does
Image/Monoar/Pixabay
Most writers, be they aspiring or long-established, have a love for the written word, and many develop very distinctive styles suited to their chosen genre. Your ability to write, however, does not guarantee your success as a content creator.
It is possible to dabble in both forms and some writers can make the transition between the two disciplines appear effortless. It isn’t.
I am going to examine the very specific skills that are required of a content creator and also look at motivating factors for pursuing this field.
The Content Creator’s Toolbox
A brief overview of the things I consider to be essential to the success of effective content creation.
Journalistic discipline with regards to content
Flexibility in terms of adopting an amorphous style
Rigorous research
Sound knowledge of the mechanisms of social media
An understanding of search engine optimization (SEO) and a firm grasp of online marketing strategies and self-promotion
An ability to remain completely objective
Not all content creators are born equal and whilst it is true that anyone can generate online content, few do it well, and even fewer enjoy the benefits of real financial reward. Let’s examine the points raised above. | https://medium.com/thecre8tive/why-all-writers-cant-be-content-creators-948c90e7b206 | ['Robert Turner'] | 2020-02-27 00:57:36.586000+00:00 | ['Content', 'Content Writing', 'Content Creation', 'Content Marketing', 'Writing'] |
The Unexpected Arrival of the 2020 Summer Movie Season | The 2020 Summer Movie Season, or Lack Thereof
There has never been a summer movie season like this one. And hopefully there never will be again.
With the deadly arrival of COVID-19 in the United States earlier this year, the country was placed under lockdown. This involved closing virtually all places of social gathering, including movie theaters. The last “normal” weekend at the box office was March 13–15, when the latest Pixar release Onward spent its second week at #1. However, it had declined a steep 73% from the prior weekend, reflecting the imminent closure of cinemas.
Promotional Image for “The Empire Strikes Back” (Copyright: Disney/LucasFilm)
The #1 movie at the box office this past weekend was 1980’s Star Wars, Episode V: The Empire Strikes Back. The 40-year-old film was released at a select few open indoor theaters and a number of drive-ins. It is estimated to have grossed between $400,000 and $500,000 this weekend. This is in stark contrast to the same weekend last year, when Spider-Man: Far From Home grossed $45 million in its second weekend.
The impact of COVID-19 on the film and television industry has been astronomical and, at present, completely incalculable. Not only have theaters closed, but virtually all production has stopped, leaving projects in limbo, release schedules in chaos, and countless workers unemployed.
The plan was to reopen theaters this month, just in time for tentpoles like Tenet, Christopher Nolan’s mysterious spy film, and Mulan, Disney’s epic live action remake of the animated film. But just like that COVID-19 came back with a mighty vengeance and reopening plans were scrapped.
Without a clear end to the pandemic in sight, Hollywood has had to completely upend their strategy. Hot new releases like Onward, The Invisible Man, Sonic the Hedgehog, Call of the Wild, and Emma. made the transition from the big screen to streaming with an unprecedented turnaround time. Planned theatrical releases for dozens of films were scrapped in favor of sales to streaming services. And even the Oscars made major changes. For the first time in their history they are deeming films that never played in theaters to be eligible and they have pushed back the award season two full months to attempt to deal with the massive disruption to the release schedule. (Although it is increasingly looking like two months will not be sufficient, given the United States’s horrific inability to manage the pandemic effectively.)
The move to releasing films that were intended for theatrical release to streaming services has not been without controversy. Although the Academy changed their rules with seemingly little resistance, a prominent movie theater chain threatened to sue a major studio and numerous auteurs had snarky sound bites about the destruction of the big screen experience.
For the most part, however, the angry theater chains and whiny auteurs have had little to worry about. The quantity and quality of high profile releases making their way to streaming services during the pandemic has been less than stellar. I can only recall four truly high profile releases: Trolls World Tour (which I took a pass on, but was probably a welcome relief for many parents trying to juggle their jobs with school closures); Extraction (a mind-numbingly dull Chris Hemsworth-led action film that would have come and gone on Netflix even without COVID); Da 5 Bloods (the latest Spike Lee film, which remains my favorite film of the year); and Hamilton (the filmed version of the blockbuster Broadway show premiered on Disney Plus to huge buzz and acclaim last week).
That’s it. Four big releases over four months. I, for one, was unimpressed and took a head-first dive into television. I spent most of my entertainment time in the past four months oscillating between catching up on prestige television that I was woefully behind on and rewatching my favorite series for some sense of normalcy. That is, until this weekend…
The Unexpected Arrival of the Summer Movie Season
This past weekend, things felt different. Three very distinct high-profile films arrived on different streaming platforms with solid reviews and sizable buzz. Netflix gave us a fresh superhero film starring Charlize Theron. Hulu gave us a mind-bending romantic comedy starring Andy Samberg. And Apple Plus gave us a WWII film starring none other than Tom Hanks.
For the first time since March, it felt like there was an exciting array of fresh film options to choose from. I watched all three in 24 hours and am very pleased to say that they were all worth my time. Check out my reviews of all three below:
Promotional Image for “The Old Guard” (Copyright: Sydance/Netflix)
The Old Guard (Netflix)
Based on the comic book by Greg Rucka (who also wrote the film’s screenplay) and brought to life by director Gina Prince-Bythewood (whose previous credits include Love and Basketball and The Secret Life of Bees), this film tells the story of a band of immortal mercenaries attempting to fight for justice. They are led by Andromache of Scythia (or Andy for short), who is played with the same level of technical skill and emotional nuance that Oscar-winning actress Charlize Theron brings to everything she does. The group is being hunted down by a group that is eager to figure out the secret of their immortality (with the stated goal of ending disease). While evading capture they bring on a fresh member, a US Marine named Nile Freeman (KiKi Layne) who has recently demonstrated the same immortal properties of the rest of the group.
The premise is more inspired than your average Marvel Cinematic Universe film and it is certainly more interesting than the most recent big budget Netflix action film Extraction. The film benefits from strong acting by its impressive ensemble, which also includes Oscar nominee Chiwetel Ejiofor, Matthias Schoenaerts, Marwan Kenzari, Luca Marinelli, and Harry Melling (although seeing Melling as a grown-up villain after a decade of watching him as Dudley Dursley in the Harry Potter films is quite jarring). Unlike the vast majority of action films, it contains bold pro-feminist and pro-LGBTQ messages that resonate. It also features terrific action sequences, strong production values, and a killer soundtrack. But it’s not without its faults. The screenplay is a bit clunky, lacking dramatic urgency at key moments and featuring some painfully trite dialogue. It is also a bit long and seems to be begging to kick off a franchise in a way that makes its ending feel manipulative and unsatisfying. Nevertheless, it is more than worth a two hour time investment for fans of the genre and Theron.
The Old Guard Rating: 3.5/5 stars
Promotional Image for “Palm Springs” (Copyright: Neon/Hulu)
Palm Springs (Hulu)
One of the freshest and most unexpected films I have seen in a long time, Palm Springs plays like the inspired union of Groundhog Day, The Good Place, and Eternal Sunshine of the Spotless Mind. The film, which is directed by Max Barbakow and written by Andy Siara, deservedly made a big splash at Sundance earlier this year. It follows a thirtysomething bro (Saturday Night Live alum and Brooklyn Nine-Nine star Andy Samberg) who attends a wedding in Palm Springs with his impossibly cloying girlfriend and gets stuck in an infinite time loop. He is joined in the time loop by the bride’s emotionally tortured sister (Cristin Milioti, best known for How I Met Your Mother and the Broadway musical Once) and reckless, terrifying wedding guest Roy (JK Simmons, the Oscar-winning star of Whiplash).
The premise is absurd and admittedly confusing at first, but the film turns out to have endless joys to offer. It has moments of absolutely uproarious and unexpected hilarity, mind-bending twists that require careful attention, and moments of profundity and poignancy that really resonate. It balances romantic comedy, surrealism, and elements of science fiction and fantasy with aplomb. The acting is superb, with Samberg and Miloti doing truly special work. The supporting cast, including great character actors like Peter Gallagher, Dale Dickey, and June Squibb, is also terrific. The film benefits immensely from a brief 90 minute running time that means it never outstays its welcome. Its brief running time also makes it all the easier to rewatch. And Palm Springs is a film that demands to be rewatched, not only due to its quality but also its complexity.
Palm Springs Rating: 4.5/5 stars
Promotional Image for “Greyhound” (Copyright: Sony/AppleTV+)
Greyhound (AppleTV Plus)
Greyhound is essentially a pulse-pounding and technically marvelous 75-minute action sequence that is book-ended with some standard issue World War II movie sequences featuring the “good woman waiting back home” and the “beaten down American hero” dedicated to God and America. Presumably, that description made some sector of the readers swoon and some roll their eyes. Like The Old Guard and Palm Springs, whether viewers go for Greyhound is likely to be in large part due to how they feel about the genre and the star. And even though WWII films and Tom Hanks don’t normally make my pulse quicken, I found a lot to admire here.
The screenplay is written by Tom Hanks himself and is an adaptation of C.S. Forester’s 1955 novel The Good Shepherd. Like the book, the movie focuses on the Battle of the Atlantic. Specifically, it follows Naval Commander Ernest Krause as he tries to lead a convoy of supply ships heading from America to Britain through the Mid-Atlantic Gap. The Gap, also known as the Black Pit, is a place where ships have to fend for themselves without air cover and are particularly vulnerable to German U-Boats.
The film is a striking production, with Aaron Schneider’s direction, Shelly Johnson’s cinematography, Blake Neely’s score, and Mark Czyzewski and Sidney Wolinsky’s editing all being top-notch. It also benefits immensely from its extremely brief (at least by war movie standards) run time, which prevents it from flagging dramatically and spares us unnecessary subplots. Ultimately, though, it feels slight. It is not as epic in scope as Dunkirk, nor is it as visceral and emotionally powerful as 1917, and it fails to resonate long after the credits roll. But nevertheless, it works overall and will certainly satisfy fans of the genre and its star.
Greyhound Rating: 3.5/5 stars | https://medium.com/rants-and-raves/the-unexpected-arrival-of-the-2020-summer-movie-season-f4eb206e9440 | ['Richard Lebeau'] | 2020-07-13 19:33:14.092000+00:00 | ['Film', 'Movies', 'Streaming', 'Culture', 'Coronavirus'] |
From Spark to Draft: How to Take an Idea and Develop It Into a Story | Photo by Ethan Hoover on Unsplash
The road from story spark to finished manuscript is long and rocky. Often we stumble at the first pothole. The first gate. Or we get so far and the landslide of life kills the story idea before we manage to finish a first draft. Even a short story or a picture book needs a mountain of perseverance to come to completion.
How many times have you had a spark of an idea for a story and then not known what to do with it? Do you keep an ideas journal? Or like me, do you clip little bits from newspaper stories or save images? It’s rare that we come up with a whole story in one attempt, from spark to story outline to draft. Usually what we have is an idea that interests or excites us, but we’re not sure where to take it next. Is it really that original? Is it even a story yet?
Mostly the answer to both of those questions is NO. The most exciting and original stories come from … waiting. Waiting and thinking and asking what if, and thinking some more. Part of that waiting and thinking process is to add more sparks. If you go about developing your idea and adding more things to it, you may well end up with something far, far better. One idea is OK, another idea that crashes into it and creates sparks is what you’re after.
Often writers get so excited about their initial single idea that they run with that and can’t understand why the final story is flat or uninspiring. Our brains tend to offer us the most familiar options first. A brain is like a massive filing cabinet, and the drawers at the back are dusty and rusty. The drawers at the front that we use all the time run smoothly. But they do so because you’re constantly calling on what you already know. Even if you think it’s new, it’s very likely not to be.
If you have ever done any free writing, particularly the methods espoused by Natalie Goldberg in Wild Mind, you’ll know that feeling that hits around the ten-minute mark (she recommends writing for a minimum of 20 minutes). It’s as if your brain has poured out all the usual stuff, and then it starts drawing on a whole other world of material you didn’t even remember or know was in there. That’s what these steps below are working from.
Like any kind of writing, your idea is only the first step. It’s what you do with it after that that counts. You may find all of these helpful, and use them at different times — or use all of them in a progression. It’s good to start with a spark or idea you’ve had that you like — but haven’t done anything with yet.
1. Write down your idea — everything you think you know about it so far. (Apart from anything else, that way you know you won’t lose your original impulse and thoughts.)
2. Brainstorm all around your idea — use diagrams, word lists, word maps, pictures. Fill at least one whole page with whatever comes to mind. Push your idea as much as possible, and don’t censor yourself — no matter how weird or unconnected, get it down on the big page. I use A3 sketch pads for this. Don’t rush it. Come back to the page several times over a day or more. Add more pages if you want to. Add ideas about your characters and your settings.
3. Use a highlighter marker and mark anything that connects to your initial idea in an exciting or different way. Keep a look out for anything that creates a little buzz in your brain.
4. Now go back to your first idea and be open-minded about what you can add from your brainstorming. You should look first at the things that created the buzz. How do they connect to your idea? How will they add to your idea, make it more interesting, more original? Most importantly, how will they add depth to both your characters and plot?
5. Think about structure. Yes, right at this point! You need a story that has a central “problem” or conflict, and this may well have been part of your initial spark. You need the conflict to increase, and you need the tension to increase throughout the story. If you’re not sure, look at your brainstorming. Are there ideas in there that can be used to increase conflict?
6. Who is your main character? What is different about them? How do their character traits help to both increase the problem and create the solution? These might sound like formulaic questions, but they are the basic structural elements that so many people ignore at their peril. The story you build on top will be yours alone, and original as you want it to be, but without the structure to hold it up, the story will falter and maybe fail.
7. Where is the highest point of action in your story? Is this the climax? (It should be.) Is it going to come about ¾ of the way through the story, or near the end? If it’s in the middle, it needs to move.
8. How does the story end? Does it bring the reader back to the beginning in some way (circular) or does it take the reader to a new place?
9. What do you think the theme is — what the story is really about? Is it layered underneath? If you don’t think you have a theme, can you see where you might add a little more to suggest it?
10. How will your story begin? Can you start with a great sentence or two that sets the scene, starts the action, will lead to the problem (or introduce it straight away, perhaps)? For some writers, once they have these opening sentences, the rest of the story will flow. For others, they will have to start with a “holding place” sentence or two and come back later to rework it.
11. You will start to feel as if you want to write the story — now! Hold off until you have such a strong sense of the ‘whole’ of the story that you know you can write a first draft. This doesn’t mean you should outline it, but that the story feels fully realised enough to write it all now without getting stuck.
This won’t work for everyone. True pantsers who like to just write and write until they have something to work with probably won’t like this method at all. But if you’re tired of half-finished stories or stories that dribble away from your ‘spark’ and never come to fruition, give it a try. | https://sherrylclark.medium.com/from-spark-to-draft-how-to-take-an-idea-and-develop-it-into-a-story-8cab7ad156cd | ['Sherryl Clark'] | 2019-04-18 00:07:15.095000+00:00 | ['Writing Tips', 'Brainstorming', 'Fiction Writing', 'Stories', 'Writing'] |
The Facebook Neural Network that Mastered One of the Toughest AI Benchmarks | The Facebook Neural Network that Mastered One of the Toughest AI Benchmarks
The Hanabi Challenge has been considered by many to be one of the next frontiers in AI.
I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Earlier this year, researchers from DeepMind and Google published a paper proposing the game of Hanabi as the new frontier for artificial intelligence (AI) agents. The reason for the designation is that Hanabi combines many of the most difficult challenges for AI models in a single game. The card game operates in an environment of imperfect information and requires collaboration from different players in order to master a task. A few days ago, Facebook AI Research (FAIR) published a paper proposing an AI agent that can achieve state-of-the-art performance in Hanabi while also unveiling new ideas for operating in cooperative environments with imperfect information.
Reading this, you might be wondering if we need AI to master yet another game. After all, in recent years we have witnessed AI agents achieve superhuman performance in all sorts of games that involve complex strategic analysis like Go, incomplete information like Poker, or multi-agent competition like StarCraft. However, in most of those games agents were competing against other agents or humans, or simply collaborating with a known team of agents. The challenge of Hanabi is that it requires agents to coordinate with others in a partially observable environment with limited communication. As humans, we are constantly confronted with those types of situations, and we typically address them by formulating a mental model of how other agents will behave in different situations. This is typically known as theory of mind. To solve Hanabi, AI agents need to develop communication mechanisms that allow them to cooperate in order to achieve a shared goal. But let’s start by understanding the dynamics of the Hanabi game.
Hanabi
Hanabi is a card game created by French game designer Antoine Bauza. The game can be seen as a form of “cooperation solitaire” and typically consists of two to five players. Each player holds a hand of four cards (or five, when playing with two or three players). Each card depicts a rank (1 to 5) and a color (red, green, blue, yellow, and white); the deck (set of all cards) is composed of a total of 50 cards, 10 of each color: three 1s, two 2s, 3s, and 4s, and finally a single 5. The goal of the game is to play cards so as to form five consecutively ordered stacks, one for each color, beginning with a card of rank 1 and ending with a card of rank 5. What makes Hanabi special is that, unlike most card games, players can only see their partners’ hands, and not their own.
At any given turn, a Hanabi player can perform one of three actions: giving a hint, playing a card from their hand, or discarding a card.
I. Hints: The active player can give a hint to any other player. A hint consists of choosing a rank or color, and indicating to another player all of their cards which match the given rank or color.
II. Discard: Whenever fewer than eight information tokens remain, the active player can discard a card from their hand. The discarded card is placed face up (along with any unsuccessfully played cards), visible to all players.
III. Play: Finally, the active player may pick a card (known or unknown) from their hand and attempt to play it. Playing a card is successful if the card is the next in the sequence of its color to be played.
Players lose immediately if all fuse tokens are gone, and win immediately if all 5’s have been played successfully. Otherwise play continues until the deck becomes empty, and for one full round after that. At the end of the game, the values of the highest cards in each suit are summed, resulting in a total score out of a possible 25 points.
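The end-of-game scoring rule above is simple enough to sketch directly. This is a minimal illustration; the stack representation is an assumption for the example, not code from the paper:

```python
# Hanabi scoring: sum the highest successfully played rank in each color
# stack. `stacks` maps color -> highest played rank (0 if none played).
COLORS = ("red", "green", "blue", "yellow", "white")

def hanabi_score(stacks):
    """Total score out of a possible 5 * 5 = 25 points."""
    return sum(stacks.get(color, 0) for color in COLORS)

# A perfect game: every stack completed through rank 5.
perfect = {c: 5 for c in COLORS}
print(hanabi_score(perfect))  # 25

# A partial game.
partial = {"red": 3, "green": 5, "blue": 0, "yellow": 2, "white": 4}
print(hanabi_score(partial))  # 14
```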
SPARTA: Solving Hanabi Using Search Strategies
Hanabi combines cooperative gameplay and imperfect information in a multiplayer setting. Effective Hanabi players, whether human or machine, must try to understand the beliefs and intentions of other players, because they can’t see the same cards their teammates see and can only share very limited hints with each other. This is an ideal setting for search algorithms.
Based on that assumption, the FAIR team proposed a strategy known as Search for Partially Observing Teams of Agents (SPARTA). The SPARTA model follows the same principles used during the construction of Pluribus, the famous agent that mastered six-player, no-limit Texas Hold’em. SPARTA also uses a precomputed full-game strategy but only as a “blueprint” to roughly approximate what will happen later in the game after various actions are taken. It then uses this information to compute an improved strategy in real time for the specific situation it is in.
Conceptually, SPARTA consists of two fundamental methods: single-agent search and multi-agent search. In the single-agent method, one agent searches while its teammates follow a fixed blueprint policy; in the multi-agent method, multiple agents can perform search simultaneously but must simulate the search procedure of other agents in order to understand why they took the actions they did.
Single Agent Search
In the single search model, a single agent performs search assuming all other agents play according to the blueprint policy. This allows the search agent to treat the known policy of other agents as part of the environment and maintain beliefs about the hidden information based on others’ actions.
Since the searcher is the only agent determining its strategy in real time while all other agents play a fixed common-knowledge strategy, this is effectively a single-agent setting for the searcher (also known as a partially observable Markov decision process). The searcher maintains a probability distribution over hands it may be holding. Whenever another agent acts, the searcher loops through each hand it could be holding and updates its belief about whether it is in fact holding that hand, based on whether the other agent would have taken the observed action according to the blueprint strategy if the searcher were holding that hand. Each time the searcher must act, it estimates via Monte Carlo rollouts the expected value of each action given the probability distribution over hands. In doing this, the searcher assumes all agents (including the searcher) play according to the blueprint strategy for the remainder of the game.
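The belief update described in this paragraph can be sketched as a toy filter. This is illustrative pseudo-SPARTA, not code from the paper: the hand labels and the blueprint function are invented for the example, and the real system tracks far larger belief spaces and adds Monte Carlo rollouts on top.

```python
# Toy version of the searcher's belief update: hands under which the
# blueprint would NOT have produced the observed teammate action are
# ruled out, and the remaining probabilities are renormalized.

def update_beliefs(beliefs, blueprint, observed_action):
    """beliefs: {hand: prob}; blueprint: hand -> action the teammate takes
    if the searcher holds that hand; returns the posterior over hands."""
    posterior = {
        hand: p for hand, p in beliefs.items()
        if blueprint(hand) == observed_action
    }
    total = sum(posterior.values())
    if total == 0:
        return beliefs  # observation inconsistent with the model; keep prior
    return {hand: p / total for hand, p in posterior.items()}

# Three equally likely hands; the (made-up) blueprint hints "rank 1"
# only when the searcher holds a playable 1.
beliefs = {"red-1": 1/3, "blue-3": 1/3, "white-5": 1/3}
blueprint = lambda hand: "hint-rank-1" if hand.endswith("-1") else "discard"
print(update_beliefs(beliefs, blueprint, "hint-rank-1"))  # {'red-1': 1.0}
```

On top of such a posterior, SPARTA then estimates each candidate action's expected value with rollouts in which everyone plays the blueprint.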
Single-agent search improves overall performance because it allows the search-enhanced player to make better decisions. However, this method has the limitation that the searcher’s teammates are still using just the blueprint strategy, and are therefore sometimes still acting sub-optimally.
Multi Agent Search
The single-agent search model assumes that all agents agree beforehand on a blueprint policy, and then also agree that only one agent will ever conduct search and deviate from the blueprint. This model has some marked limitations. For instance, if Bob conducts search on the second turn after Alice conducted search on the first, then Bob’s belief about his probability distribution over hands is incorrect. This is because it assumes Alice played according to the blueprint strategy, while Alice actually played the modified strategy determined via search.
Multi-agent search addresses the main limitation of single-agent search by allowing multiple players to correctly conduct search in the same game. The key idea is that agents replicate the search procedures of teammates who acted, to see what strategies those search procedures produced. The multi-agent search model assumes that all agents agree beforehand on both a blueprint policy and the search procedure that will be used. When an agent acts and conducts search, the other agents exactly replicate the search procedure conducted by that agent and compute its resulting policy accordingly.
SPARTA and Hanabi
Because Hanabi is such a new benchmark, there are not a lot of established baselines for evaluating performance on it. Discussions with highly experienced human players have suggested that top players might achieve perfect scores in 2-player Hanabi somewhere in the range of 60% to 70% of the time when optimizing for perfect scores. The SPARTA agent optimizes for expected value rather than perfect scores, and still achieves perfect scores 75.5% of the time in 2-player Hanabi, as shown in the following figure.
Hanabi provides a unique environment that requires the collaboration of agents using imperfect information. Some of the ideas that the FAIR team pioneered with SPARTA may be immediately applicable to many real-world scenarios. In addition to the paper, FAIR open sourced the code for the strategy on GitHub.

Source: https://medium.com/dataseries/the-facebook-neural-network-that-mastered-one-of-the-toughest-ai-benchmarks-cf90ae16d53c (Jesus Rodriguez, 2020-12-17)
Attempted Writing
A Poem
Photo by sydney Rae on Unsplash
The intention might have been there, the tools, the motive; absentee mind, colloidal atmospheres of fog, and other mists, atrophy, the retreat in the face of just another day.
The scars from other broken hooks, from pages that now only scatter in your equivocal memories of other apartment rooms, none like this one, so big, so unwilling to totally capitulate and open up to the field of the paper.
What I’m looking at is the attempt: the page, the screen was blank, was forcibly blinking away, waiting, instigating nothing. Or, the instigation was over, and now with a cracked coffee mug in the periphery, expected to deliver today’s sermon.
But where the other forces failed, the fingers got relentless, got criminal, decided to go against that particular grain, open up reverie, open-heart reverie, or something surgical and brave. And bright.
This was an attempt at writing something like that…

Source: https://medium.com/scrittura/attempted-writing-4ef5d16cab92 (J.D. Harms, 2020-12-18)
An Approach for a Hyper Local, Crowd Sourced, Data Driven Chat Bot for COVID

By Rajan Manickavasagam, Aug 11
Overview
The COVID-19 pandemic has captured everyone’s attention and impacted our daily lives for the past few months. It is likely to remain that way for the foreseeable future too, as we learn to navigate around it.
Several organizations and volunteers globally have created various kinds of dashboards, databases, etc. so that information is available to everyone. Some of them are:
COVID-19 database by Johns Hopkins: https://github.com/CSSEGISandData/COVID-19
COVID-19 India dashboards: https://github.com/covid19india/covid19india-react
COVID-19 India clusters: https://github.com/someshkar/covid19india-cluster
WHO app: https://github.com/WorldHealthOrganization/app
COVID-19 time series data: https://github.com/pomber/covid19
And many others
Approach
My inspiration for a chat bot came from the various open source applications already created for COVID. The idea is to empower local communities with data and insights so that they are better informed as they go about their daily tasks and routine. This might help people assess the “risk” where:
Some of them have to go to an office or public place for work
Children are going to schools/parks
People are heading out for chores/outdoors
And similar scenarios
Concept
Governments, NGOs, businesses and health officials all over the world are taking several initiatives to keep people safe. However, it is imperative that local communities also take steps to safeguard themselves. The idea is to create a chat bot that is set up and maintained by each local community, like a ham radio movement.
Such chat bots could provide both quantitative and qualitative data to the users. This chat bot could act as a digital sentinel for each local community. So, let’s call them DISCO (Distributed Information Sentinels for COvid) chat bots. Each of these DISCO bots will be maintained by each community.
Technology
The first step is to gather the relevant data. This can be done on a Google Spreadsheet, as it allows multiple people to update the sheet at the same time. In the example here, data from the Aarogya Setu app is used to capture the COVID-positive cases and at-risk cases for a given location. A sample spreadsheet is here. This is the kind of “quantitative” data that the DISCO bot could provide.
Often, many people have queries regarding the pandemic. An FAQ conversation model can be built that can also be integrated into the DISCO bot. A sample is provided here. This is the kind of qualitative data the DISCO bot could provide.
Next, we need a host computer to run the bot. It could be a desktop, laptop, server, Cloud or even a Raspberry Pi. This bot would download the above spreadsheet regularly and keep a local cache (for quicker responses to user queries). My copy of the DISCO bot for the local community is running on my Raspberry Pi 3 that has been idle for years.
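A minimal sketch of the download-and-cache step described above, assuming the Google Sheet is shared publicly so its CSV export is reachable. The export URL pattern, column names, and file name below are illustrative; adapt them to your own sheet.

```python
# Download the community sheet as CSV, keep a local cache for quick
# responses, and parse rows into dicts keyed by the header row.
import csv
import io
import urllib.request

def sheet_csv_url(sheet_id):
    return f"https://docs.google.com/spreadsheets/d/{sheet_id}/export?format=csv"

def parse_rows(csv_text):
    """Turn the sheet's CSV text into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def refresh_cache(sheet_id, cache_path="disco_cache.csv"):
    with urllib.request.urlopen(sheet_csv_url(sheet_id)) as resp:
        text = resp.read().decode("utf-8")
    with open(cache_path, "w", encoding="utf-8") as f:
        f.write(text)  # local cache for quicker responses
    return parse_rows(text)

# Offline example of just the parsing step:
sample = "Location,Positive,At Risk\nBlock A,2,5\nBlock B,0,1\n"
print(parse_rows(sample)[0]["Positive"])  # prints "2"
```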
Lastly, the community can use Telegram chat app on a compatible device to interact with the bot. Scroll below for the demos on how to set up and use the DISCO bot.
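A bare-bones sketch of the Telegram side, talking to the HTTP Bot API directly (getUpdates long polling plus sendMessage). The cached `stats` structure and the reply format are assumptions for illustration, not the actual DISCO code; see the linked GitHub repo for the real implementation.

```python
# Long-poll Telegram for messages and answer each one from cached data.
import json
import urllib.parse
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def build_reply(location, stats):
    """Format the cached numbers for one location into a chat reply."""
    if location not in stats:
        return f"No data recorded for {location} yet."
    row = stats[location]
    return f"{location}: {row['positive']} positive, {row['at_risk']} at risk."

def call(token, method, **params):
    url = API.format(token=token, method=method)
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(url, data=data) as resp:
        return json.load(resp)

def run(token, stats):
    offset = 0
    while True:  # long-poll for new messages and answer each one
        updates = call(token, "getUpdates", timeout=30, offset=offset)
        for u in updates["result"]:
            offset = u["update_id"] + 1
            msg = u.get("message") or {}
            reply = build_reply(msg.get("text", ""), stats)
            call(token, "sendMessage", chat_id=msg["chat"]["id"], text=reply)

# The formatting step, runnable offline:
stats = {"Block A": {"positive": 2, "at_risk": 5}}
print(build_reply("Block A", stats))  # Block A: 2 positive, 5 at risk.
```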
Drivers and Goals
The following drivers and goals have driven the design and implementation of this chat bot:
Local
Yet Global
Community/Volunteer Driven
Accessible
Simple and Cheap
Local
As we have seen so far, each community can set up their own little ‘database’ in the form of a Google spreadsheet. The idea is not to have a global or even a national instance of the chat bot, but for each community/locality/suburb to maintain one for themselves.
With many countries and communities in various stages of lockdown at some time or the other, many people — professionals, students, elderly and children (basically everyone) are home-bound most of the time. Hopefully, this idea motivates people to reuse this chat bot and build something similar for their local communities. This way, people can learn/share some knowledge on technology and help their communities at the same time.
Yet Global
Largely, people are limiting their travel and daily routines to their immediate and nearby areas. This approach can be adopted by communities and organizations around the world.
Use cases for this bot could be residential communities, educational institutions, workplaces (where remote working is not possible), etc.
Community/Volunteer Driven
The two key stakeholders for this chat bot are volunteers:
Data Volunteers: Their responsibility is not just to enter/maintain the data, but to also ensure that the data is “as true” as possible. Data volunteers are kind of playing the role of citizen journalists. One of the “journalistic ethics” is to have a story/lead/data verified by at least 2 independent sources. This is the reason the Google spreadsheet has 2 columns: “Data Source Verified By (1)” and “Data Source Verified By (2)”. It is also important to ensure that the data maintained is sufficiently protected and privacy maintained. For more details, please refer to Reuters Handbook.
Technology Volunteers: Their responsibility is to set up and run the chat bot. The source code and installation instructions are available here at Github.
Accessible
Creating chat bots using Telegram APIs is incredibly simple. Also, Telegram provides various free chat clients across all devices and form factors — web, desktop, mobile and tablet apps.
Unless someone is on a “feature phone”, virtually anyone in the world should be able to access their respective local COVID chat bot over a data connection (mobile or broadband or Wi-Fi) from a device of their choosing.
Simple and Cheap
Simplicity here has 4 connotations:
Simple to maintain the data
Simple and light-weight to set up and run the chat bot
Simple and actionable insights
Simple for anyone to use the chat bot
All the technologies used in this chat bot are either free or open source or cheap to buy.
Chat Bot in Action
Step 1: Updating data in a Google spreadsheet
Step 2: Training a sample FAQ for the chat bot
https://youtu.be/IWphF4t34Pk
Step 3: Downloading Google spreadsheet to a local file cache and running the chat bot on a Raspberry Pi
https://youtu.be/ft0uEUycBXc
Step 4: Testing an alpha version of the chat bot in the Telegram Web app
https://youtu.be/CPXuFHfRNLk
Step 5: Testing the current version of the chat bot in Telegram app for iPhone
How to set up your own Bot
The source code and installation instructions are available here at Github. Feel free to customize to your requirements. You can set up and run a bot for your local community/organization. It should take roughly 1–2 hours to set it all up from scratch.
Summary
In the example videos above, the health and contact tracing application from the Indian central/federal government — Aarogya Setu (rough English translation — Bridge to your Health) has been used to manually collect anonymous data over a period of a few weeks, for the location where I live.
Take care and stay safe.

Source: https://medium.com/engineered-publicis-sapient/an-approach-for-a-hyper-local-crowd-sourced-data-driven-chat-bot-for-covid-f994d6723731 (Rajan Manickavasagam, 2020-08-11)
My New Outlook on Life | My New Outlook on Life
Poetry
Photo by Lee Luis on Unsplash
A walk into fog
Wearing a thick layer of doubt
Changed me forever
It changed my outlook
I don’t fear the unknown
I don’t fear the fall
Fight your fears
Is the title of my new book
It is now my new attitude
Playing with fire
was my past
Swimming into the mystery of life
is now my favorite part
No one else holds
the strings of my kite
Flying with birds
is the motto of my life
I am not afraid of walking
down the streets alone at night
The darkness does not scare me
I hold the torch of hope
I can bring myself back to light

Source: https://medium.com/scribe/outlook-on-life-850e9b04a176 (Simran Kankas, 2020-12-08)
The One Thing Certain

The following is an excerpt from The Certainty of Uncertainty: The Way of Inescapable Doubt and Its Virtue by Mark Schaefer.
By the time you finish reading this book, you could be dead.
It’s not that long a book, but even so, a car accident, a slip and fall, a random crime, a plane crash, a sudden and devastating disease, a heart attack, a brain aneurysm, or any other random lethal misfortune could claim your life before you get to the final page. Or not. The problem is that you don’t know which fate awaits you.
We, perhaps alone among the creatures that inhabit the globe with us, can contemplate our own mortality. We are aware of the basic fact that one day we will cease to exist. We are conscious of the reality of our inevitable deaths, but we don’t know what it all means or what, if anything, lies beyond death.
Miguel de Unamuno, Agence de presse Meurisse
The Spanish philosopher and writer Miguel de Unamuno wrote that our fears and anxieties around death drove us to try to figure out what would become of us when we die. Would we “die utterly” and cease to exist? That would lead us to despair. Would we live on in some way? That would lead us to become resigned to our fate. But the fact that we can never really know one way or the other leads us to an uncomfortable in-between: a “resigned despair.”[1]
Unamuno refers to this “resigned despair” as “the tragic sense of life.” For Unamuno, this tragic sense of life created a drive to understand the “whys and wherefores” of existence, to understand the causes, but also the purposes, of life. The terror of extinction pushes us to try to make a name for ourselves and to seek glory as the only way to “escape being nothing.”[2]
There is an additional consequence to our mortality beyond this “resigned despair” and the “tragic sense of life.” Our awareness of our own mortality also creates a great deal of anxiety. Because we know neither the date nor the manner of our own deaths, we are left with unknowing and uncertainty, and are plagued by angst on an existential level.
There are two basic responses to that anxiety: acceptance and resistance. We could accept the reality of death, given that the mortality rate has remained unchanged at exactly one per person regardless of our attitudes toward death or attempts to deny it. But we seem to prefer resistance. This is not surprising; we have too many millions of years of evolutionary survival programming in us to surrender to non-existence without at least putting up something of a fight, even if we cannot ultimately win that fight. And when death does come, we bury and keep our dead, as if refusing to hand them over to the indifferent ground without one last act of defiant resistance.[3]
Some psychologists maintain that practically everything we do is a kind of resistance in reaction to our awareness of our mortality.[4] This terror management theory posits that our desire for self-preservation coupled with our cognitive awareness of our inevitable deaths leads to a “terror” that can only be mitigated in two ways. First, we mitigate this terror with self-esteem, the belief that each of us is an object of primary value in a meaningful universe. Second, we mitigate our terror by placing a good deal of faith in our cultural worldview. The faith we put in a cultural worldview gives us a feeling of calm in the midst of dread. Our commitment to an understanding of the world around us makes us feel safe and secure in the face of our looming mortality.
However, when those same worldviews are threatened, so too is that feeling of calm. For that reason, we have to defend our worldviews at all cost because they protect us from facing the terror of our mortal lives.[5] Preserving our worldviews is so central to staving off our existential dread that it turns out that the more we think about death and oblivion, the more invested we become in preserving those worldviews.[6]
It seems that one of our preferred methods of defending our worldviews and fending off this core terror is the attempt to establish as many certainties as possible, to know that there is something we can be certain of. In an effort to deny our mortality and the recognition that we are not ultimately in control of our own destinies, we try to control our world and one another and we seek to cling to as many certain truths as we can along the way.
We might be comfortable with uncertainties when they are restricted to trivial concerns or are unthreatening: the uncertainty of the solution to a crossword puzzle, or a sudoku, or a mystery novel are acceptable, and the resolution of those uncertainties with the solution to the puzzle or mystery brings a measure of emotional satisfaction. However, when the uncertainties involved deal with “real world” issues (whether we’ll have a long and healthy life, whether our beloved will be faithful to us, whether we’ll have job security, or whether we’ll find or maintain happiness), we are not as comfortable. In fact, we are more inclined to anxiety.[7]
This is especially true for the anxiety we feel about any of what psychotherapist Irvin Yalom calls the “four ultimate concerns”: death, freedom, existential isolation, and meaninglessness.[8] We’re anxious about death. We’re anxious about the choices we have to make. We’re anxious about the fact that we enter and leave the world alone. And we’re anxious because we fear that life has no intrinsic meaning. All of this creates in us a desire to obtain as much control and certainty as we can. We become increasingly concerned with getting “closure” and resolving our uncertainty.[9]
Even when we’re not consciously looking for certainty to resolve our anxieties, we seek it out. It’s not that we’re even always consciously aware of our need for certainty; much of the drive to be certain is deep in our psychology. We are driven to be certain as a consequence of the fact that our thought processes are divided into two basic domains. As psychologist Daniel Kahneman argues, there is a fast-moving, automatic “system” that we’re barely aware of (System 1), and a slower, effort-filled process that includes deliberative thought and complex calculation (System 2). [ 10] System 1 is designed for quick thinking and does not keep track of alternatives; conscious doubt is not a part of its functioning. System 2, on the other hand, embraces uncertainty and doubt, challenges assumptions, and is the source of critical thinking and the testing of hypotheses. However, System 2 requires a great deal more processing power and energy, and it can easily be derailed by distraction or competing demands on our brain power. Kahneman writes:
System 1 is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. . . . System 2 is capable of doubt, because it can maintain incompatible possibilities at the same time. However, sustaining doubt is harder work than sliding into certainty. [ 11]
In short, certainty is easier on the brain than uncertainty is; uncertainty requires more mental effort.
Even beyond this function of the way our brains work, the human need to be certain is reinforced by the expectations of others. Experts are not paid high salaries and speaking fees to be unsure. Physicians are expected to give diagnoses that are certain even when that certainty is counterproductive to their effectiveness. Even when a little uncertainty might be life-saving in many cases (for example, ICU clinicians who were completely certain of their diagnoses in cases where the patient died were wrong 40 percent of the time), it is generally considered a weakness for clinicians and other experts to appear uncertain or unsure.[12] Thus, expectations from others often drive us to be more certain than we have cause to be.
We are creatures seeking meaning in an unpredictable world of unclear choices and random, seemingly meaningless happenstance; we crave certainty. We want to be certain about something. And so, we find that in the meaning-making areas of our lives, such as religion or politics, we are tempted to edge closer and closer to absolute certainty in doctrine, belief, and ideology.
But is such certainty even possible? Can we really know anything with certainty?
Even when we are certain that we know something, do we really know it or just think we do? Imagine I say to you, “I am certain that Lisa will say yes to our business offer.” What am I certain of, really? That Lisa will, in fact, say yes or the fact that I believe she will say yes?
This conundrum is complicated by the fact that there is a sensation known as the feeling of knowing that can fool us into unmerited certainty. The feeling of knowing is best illustrated by that feeling of satisfaction you receive when you’ve figured out a crossword puzzle, suddenly understood the point your teacher was trying to make, or have had some other insight in which you finally “get” something. However, this feeling of knowing turns out to be just that, a feeling; it’s independent of any actual knowledge. It’s similar to the feeling associated with mystical states and religious experiences, which can often feel like having come to know something without actually having any specific knowledge. People who have had a mystical experience will testify to having “understanding” or “knowing” but often cannot tell you what it is they have come to know. These experiences make it clear that there are instances of the experience of the feeling of knowing that don’t involve any actual thought or knowledge. There has been no thought, no deliberation, no thought process; there is only that feeling you know something. Thus, the feeling of knowing is what cognitive psychologists refer to as a primary mental state (a basic emotional state like fear or anger) that is not dependent on any underlying state of knowledge.[13]
Now, the feeling of knowing may be an important part of how we learn; it’s that boost of dopamine we get when we’ve learned something. Without this pleasant feeling, our drive to learn and comprehend might not be as strong. The problem is, because it is a mechanism of encouragement (it’s a real confidence booster), it can drive us to rush to conclusions in our thinking, even before those thoughts have been worked out. We may be in such a rush to get that burst of good feeling that we ignore the fact that we haven’t actually come to learn anything beyond our own belief. This feeling, as physician and author Dr. Robert Burton puts it, is “essential for both confirming our thoughts and for motivating those thoughts that either haven’t yet or can’t be proven.”[14]
We human beings are often hindered in our ability to distinguish actually knowing something from the feeling of knowing something. We have a troublesome combination of a highly fallible memory and an overconfidence about our knowledge.[15] We frequently fail to understand the limits of what we can know. We often imagine that our conclusions are the result of deliberate, reasoned mental processes when the reality is that our feelings of certainty come not from rational, objective thought, but from an inner emotional state. As Burton concludes: “Feelings of knowing, correctness, conviction, and certainty aren’t deliberate conclusions and conscious choices. They are mental sensations that happen to us.”[16]
If this weren’t bad enough, we’re often blind to our ignorance because our brain likes covering the gaps in our knowledge. For example, you may not realize it, but you have blind spots in your visual field, which you can spot with a simple trick. If you hold your hands out with your index fingers pointing up, close your right eye and look at your right finger with your left eye, while you slowly move your left hand to the left-at a certain point the tip of your left index finger will just disappear. Keep moving your hand and the fingertip will reappear. We don’t normally notice this blind spot because our brain is really good at patching the holes in our sensory inputs and does so with our visual blind spot.
In the same way that our brain glosses over the blind spots in our visual field, it may be that we likewise cover the blind spots in our knowledge. When we are conscious and perceiving things, inconsistencies are more detrimental than inaccuracies, and so the holes in our knowledge are more troubling than the fact that the holes are filled with erroneous information.[17] As Kahneman noted above, System 1 of our mental processes is inclined to smooth over gaps in our understanding; it is a coherence-seeking machine designed for “jumping to conclusions.”[18]
The reality is, then, that very often our feelings of certainty are not grounded in any reality that is certain, but simply in our belief that we have truly come to know something. Our certainty often comes merely from feeling that we are certain.
Perhaps this false sense of certainty is a function of our human nature. As human beings, we often find a measure of ambiguity to be a motivator toward productivity in order to resolve the ambiguity, but we are uncomfortable with prolonged uncertainty and unknowing. And perhaps our desire for certainty arises out of a deeper desire to believe that things can, in fact, be known.
It was Arthur Conan Doyle who said, “Any truth is better than indefinite doubt.” In so doing, he articulated an attitude that has recently become an object of study. Scientific research into our reactions to ambiguity and uncertainty in the last decade has identified something known as the need for closure. This need for closure is a person’s measurable desire to have a definite answer on some topic: “any answer as opposed to confusion and ambiguity.” This impulse to resolve the tensions of uncertainty and ambiguity can be strong and may be what makes some people susceptible to extremism and claims of absolute certainty.[19] Even in those who are not tempted into extremism, there is nevertheless a discomfort with not knowing, with uncertainty.
It may also be the case that the desire for certainty is borne out of a feeling of powerlessness. We live in a time of increasing alienation and an increasing sense of disenfranchisement. As a result of technology and an ever more interconnected world, the world that people knew growing up is rapidly disappearing: the homogenous, ethnically privileged, culturally distinct society that many people knew is yielding to a diverse, multiethnic, multicultural society that no longer operates on the same assumptions as its predecessor. In addition, a predominantly rural way of life is yielding to a predominantly urban one. There is nothing inherently wrong with any of these changes but they can be unnerving to many. And many who find global changes terrifying feel powerless and out of control in their ability to stop them or at least to slow them to a comfortable pace. Being in command of the truth is at least being in command of something.
What does this all mean, then, for our quest for certainty as a way of coping with the anxieties of existence? Have we simply been going about the quest for certainty in the wrong way, relying too much on cognitive illusions and feelings instead of a more certain foundation, or should we abandon the quest for certainty altogether?
To answer that question, we must first undertake a journey, through our world, the language we use to communicate about the world and our experiences in it, and the great systems we construct that help us to give meaning to the experiences we have.
This passage is an excerpt from The Certainty of Uncertainty: The Way of Inescapable Doubt and Its Virtue by Mark Schaefer. © 2018 Mark Schaefer. All Rights Reserved.
Learn more about the book or learn about purchasing the book.
Notes
1. Unamuno, Tragic Sense of Life in Men and Nations, 38.
2. Ibid., 64.
3. Ibid., 46.
4. Greenberg et al., “Terror Management Theory,” 62.
5. Ibid., 71.
6. Ibid., 123.
7. Holmes, Nonsense, 9–10.
8. Yalom and Yalom, Yalom Reader, 172.
9. Holmes, Nonsense, 11.
10. Kahneman, Thinking, Fast and Slow, 20–21.
11. Ibid., 114.
12. Ibid., 263.
13. Burton, On Being Certain, 23.
14. Ibid., 89, 100.
15. Pinker, Sense of Style, 302.
16. Burton, On Being Certain, 218.
17. R. Abutarboush, personal communication.
18. Kahneman, Thinking, Fast and Slow, 87–88.
19. Holmes, Nonsense, 11–12.

| https://markschaefer.medium.com/the-one-thing-certain-e59c059d0398 | ['Mark Schaefer'] | 2020-04-25 21:10:10.147000+00:00 | ['Uncertainty', 'Certainty And Doubt', 'Doubt', 'Books', 'Certainty'] |
Web Scraping MQL5 Signals With Python

Python
First things first, you need to have at least Python 3.7 installed. You can check this guide if you don’t feel comfortable doing it all by yourself.
Using global dependencies over your python projects is not a good practice.
Imagine having several projects depending on the same files — in our case the dependencies. Whenever one of those dependencies is updated it could break all your other projects.
So instead, create a python virtual environment, this way all the dependencies will be locally in the project’s folder.
To do this, go to your terminal, cd to the project’s root folder, and type the code below.
python3 -m venv env
The virtual environment named env is created. Now let’s activate it!
source env/bin/activate
By doing this your terminal should have an (env) prefix indicating that you’re now in the virtual environment.
Let’s add some dependencies for the project.
pip install requests
pip install beautifulsoup4
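Once the dependencies are installed, a minimal parse sketch can confirm everything works. Everything below is hypothetical: the `div.signal`, `span.name`, and `span.growth` selectors and the sample HTML are stand-ins, not MQL5's real markup, which you would inspect in your browser before writing the actual scraper.

```python
from bs4 import BeautifulSoup

def parse_signals(html):
    """Extract (name, growth) pairs from signal-card markup."""
    soup = BeautifulSoup(html, 'html.parser')
    signals = []
    for card in soup.select('div.signal'):  # hypothetical selector
        name = card.select_one('span.name').text
        growth = card.select_one('span.growth').text
        signals.append((name, growth))
    return signals

# With requests, the real page would come from something like:
# html = requests.get('https://www.mql5.com/en/signals').text
# Here, a tiny stand-in document keeps the sketch self-contained:
html = """
<div class="signal"><span class="name">Alpha</span><span class="growth">+12%</span></div>
<div class="signal"><span class="name">Beta</span><span class="growth">+7%</span></div>
"""
print(parse_signals(html))  # → [('Alpha', '+12%'), ('Beta', '+7%')]
```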
Now we’re good!

| https://medium.com/swlh/web-scraping-mql5-signals-with-python-e2eb9bebe0f0 | ['Matheus V De Sousa'] | 2020-08-05 02:51:23.547000+00:00 | ['Python', 'Mql5', 'Vscode', 'Web Scraping'] |
Watts Up? Using Open Data to Save Energy at Harvard

Hi everyone! My name is Ike Jin Park. I am a freshman at Harvard College, considering Environmental Science and Public Policy as a concentration. Before Harvard, I had never done anything remotely related to open data. I’ve always been an environmentalist, not a computer scientist. So the Open Data Project has been quite a challenging but rewarding new experience for me.
My high school campus in Hong Kong used software that displayed energy consumption of individual classrooms in real time and would accumulate usage statistics over time. With this data, one of the divisions under the campus green team designed campaigns to encourage energy conservation. Real-time monitors were great because they could report back to the school and the students on how much energy was saved and what that meant in terms of money value. I feel like far too often, environmental campaigns merely urge people make changes to their lifestyles, without showing them their impact.
I want to do something similar here at Harvard. The university already keeps track of monthly consumption figures for oil, energy, and heating for every building on Harvard campus, dating back to 1999. We think we can use this data set to design a platform that students can use to visualize, understand, and play around with their energy usage data.
At the same time, we are looking into ways to obtain more data. For instance, we can gather data on how much electricity individual dorm rooms / halls / houses are using real-time. The possibilities to influence behavior with these statistics are endless:
What if we made an “if…then…” progress so when energy consumption falls below certain point, pizza is ordered automatically? What if we were to calculate monthly energy savings achieved by students and deposit a certain percentage of that dollar saving to their Crimson Cash (flexible student spending money) account?
We aren’t sure how these interventions will change behaviors, but we want to pilot this with these projects. At any rate, mitigating needless energy waste becomes easier when we have solid data visualization.
Meet the team
Our team is composed of the following members:
Cathy Wang, a freshman with programming experience in Java and C. She will be working on both policy and data visualization with the stats we already have.
Harshal Singh who is also a freshman, with a background in big data through machine learning. Harshal has experience working with algorithms such as Decision Trees, Bayes, and KNN, and will be handling the data analysis part of the project.
Kevin Yoon, a sophomore with CS50/51 background. He will be managing the app development aspect of the project.
Next steps
Our preliminary research reveals some interesting products in this industry. Companies such as TED, Sense and Neurio all offer home energy usage sensors for around $200-300 per unit. My high school purchased their software from a company called En-Trak.
We’re still brainstorming next steps, but we’re swinging for the fences. We have already applied for a $3,000 grant offered by the Harvard University Office for Sustainability for student projects. We’re open to any and all new ideas!

| https://medium.com/harvard-open-data-project/watts-up-using-open-data-to-save-energy-at-harvard-332cad927733 | ['Ike Jin Park'] | 2016-10-29 18:06:01.253000+00:00 | ['Data Science', 'Environment', 'Harvard', 'Open Data', 'Energy'] |
Basic Linear Regression Modeling in Python

For my course in Data Science, I built a linear regression model on the sales of houses in King County. Data for sales in 2014 and 2015 was provided, and Python was used to do the modeling. This is a rundown of my first linear regression modeling project. For the full code, see the GitHub repo: https://github.com/MullerAC/king-county-house-sales
Business Case
It is important to establish a business case when starting a project like this. This is to create a goal and help move towards a meaningful conclusion. My last blog post’s project didn’t have any real goal, so it just stopped when I no longer felt curious. It’s better to start with a firm goal in mind, and build towards that. Here is the business case I used:
We will predict how much a house should be sold for in order to determine whether a house on the market is being underpriced or overpriced. Our clients are homeowners looking to sell their house, but do not know how much to sell their house for.
Data Cleaning
Both the data itself and a description of the columns were supplied. I read the data into a pandas dataframe to start cleaning it. After turning all the data into numerical data types, I handle any NaN values and create a few new features: ‘yr_since_renovation’, ‘yr_since_built’, and ‘renovated’. I then drop unneeded features: ‘id’, ‘view’, ‘sqft_above’, ‘sqft_living15’, ‘sqft_lot15’, and ‘date’.
import pandas as pd
import numpy as np

df = pd.read_csv('data/kc_house_data.csv')
df['yr_sold'] = df.date.map(lambda x: int(x.split('/')[-1]))
df.replace({'sqft_basement': {'?': '0.0'}}, inplace=True)
df.sqft_basement = pd.to_numeric(df.sqft_basement)
df.fillna(0.0, inplace=True)
df['yr_since_renovation'] = np.where(df['yr_renovated'] == 0.0, df['yr_sold'] - df['yr_built'], df['yr_sold'] - df['yr_renovated'])
df['yr_since_built'] = df['yr_sold'] - df['yr_built']
df['renovated'] = df.yr_renovated.map(lambda x: 1 if x > 0 else 0)
df.drop(['id', 'view', 'sqft_above', 'sqft_living15', 'sqft_lot15', 'date'], axis=1, inplace=True)
df.to_csv('data/cleaned_data.csv', index=False)
Exploratory Data Analysis
With the data cleaned, I start to analyze it. Scatter plots of each variable against the target, price, show which variables have obvious linear relationships. Insignificant relationships will be taken care of when I start looking at the p-values of the coefficients, so they don’t need to be handled now. This also makes clearer which variables are categorical and which are continuous.
import matplotlib.pyplot as plt
plt.style.use('seaborn')
%matplotlib inline

df = pd.read_csv('data/cleaned_data.csv')
plt.figure(figsize=(15,30))
for i, col in enumerate(df.drop('price', axis=1).columns):
    ax = plt.subplot(6, 3, i+1)
    df.plot.scatter(x=col, y='price', ax=ax, legend=False)
plt.tight_layout()
plt.savefig('figures/scatter-plots.png')
plt.show()
Scatter Plots
Histograms show which of the variables are normally distributed. None of them are, so I will need to log-transform them later.
df.hist(figsize = (20,18))
plt.tight_layout()
plt.savefig('figures/histogram-plots.png')
plt.show()
Histograms
I look at collinear features using a heatmap and decide to remove the ‘yr_built’, ‘yr_renovated’, ‘yr_sold’, and ‘yr_since_renovation’ columns.
import seaborn as sns

corr = df.corr()  # correlation matrix (not shown in the original snippet)
plt.figure(figsize=(7, 7))
sns.heatmap(corr, center=0, annot=True)
plt.tight_layout()
plt.savefig('figures/heatmap-before.png')
plt.show()
Heatmap of Collinear Features
Feature Engineering
In order to prepare for modeling, I need to create dummy variables of the categorical data. Those features are turned into strings from numerics in order to make the get_dummies method work. I also remove any punctuation that may be in the column names, as it may stop our model from working correctly.
df = pd.read_csv('data/cleaned_data.csv')
categoricals = ['floors', 'condition', 'grade', 'zipcode']
df = df.astype({col: 'str' for col in categoricals})
df = pd.get_dummies(df, drop_first=True)

def col_formatting(col):
    for old, new in subs:
        col = col.replace(old, new)
    return col

subs = [(' ', '_'), ('.', ''), (',', ''), ("'", ''), ('™', ''), ('®', ''), ('+', 'plus'), ('½', 'half'), ('-', '_')]
df.columns = [col_formatting(col) for col in df.columns]
At this point, I create the train-test split, using the sklearn method. I keep the default of 75% train and 25% test, and set a random state for repeatability.
from sklearn.model_selection import train_test_split

train, test = train_test_split(df, random_state=7)
train.to_csv('data/train.csv', index=False)
test.to_csv('data/test.csv', index=False)
Baseline Model

With this, I can create the baseline model, using statsmodels.
from statsmodels.formula.api import ols

predictors = '+'.join(train.columns[1:])
formula = 'price' + '~' + predictors
model = ols(formula=formula, data=train).fit()
The baseline model has the following metrics:
R2 of 0.821
Test RMSE of 139700
81 significant features (p-value < 0.05) of 103 features total
Baseline Model Q-Q Plot
Iterative Modeling Process
I created the following function to print out the metrics for any model, in order to easily ensure all steps taken improved the model.
import statsmodels.api as sm
import scipy.stats as stats
from sklearn.metrics import mean_squared_error

train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')

def get_model_data(model, is_log_transformed=False):
    train_r2, train_r2_adj = model.rsquared, model.rsquared_adj

    y_hat_train = model.predict(train.drop('price', axis=1))
    y_train = train['price']
    if is_log_transformed:
        train_rmse = np.sqrt(mean_squared_error(np.exp(y_train), np.exp(y_hat_train)))
    else:
        train_rmse = np.sqrt(mean_squared_error(y_train, y_hat_train))

    y_hat_test = model.predict(test.drop('price', axis=1))
    y_test = test['price']
    if is_log_transformed:
        test_rmse = np.sqrt(mean_squared_error(np.exp(y_test), np.exp(y_hat_test)))
    else:
        test_rmse = np.sqrt(mean_squared_error(y_test, y_hat_test))

    pvalues = model.pvalues.to_dict()
    significant_items = {}
    for key, value in pvalues.items():
        if value < 0.05:
            significant_items[key] = value

    print('R2 =', train_r2)
    print('R2 adjusted =', train_r2_adj)
    print('RMSE (train) =', train_rmse)
    print('RMSE (test) =', test_rmse)
    print('number of significant features =', len(significant_items))
    sm.graphics.qqplot(model.resid, dist=stats.norm, line='45', fit=True)
    plt.title('Q-Q Plot')
To improve on our model, I took the following steps:
dropped collinear features, as determined during the EDA
train.drop(['yr_built', 'yr_renovated', 'yr_sold', 'yr_since_renovation'] , axis=1, inplace=True)
test.drop(['yr_built', 'yr_renovated', 'yr_sold', 'yr_since_renovation'] , axis=1, inplace=True)
removed outliers from our datasets, more than 3 standard deviations above the mean
mean = 5.402966e+05
std = 3.673681e+05
upper_cutoff = mean + (3*std)
train = train[train['price'] < upper_cutoff]
test = test[test['price'] < upper_cutoff]
log-transformed the applicable continuous features (those with no negative or zero values)
continuous = ['price', 'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot']
for col in continuous:
    train[col] = train[col].map(np.log)
    test[col] = test[col].map(np.log)
eliminated insignificant features (p-value > 0.05)
model_dict = list(dict(model.pvalues).items())
model_dict.sort(key=lambda x: x[1], reverse=True)
highest_pvalue = model_dict[0]
while highest_pvalue[1] > 0.05:
    print(f'Dropping "{highest_pvalue[0]}" with p-value {highest_pvalue[1]}')
    train.drop(highest_pvalue[0], inplace=True, axis=1)
    test.drop(highest_pvalue[0], inplace=True, axis=1)
    predictors = '+'.join(train.columns[1:])
    formula = 'price' + '~' + predictors
    model = ols(formula=formula, data=train).fit()
    model_dict = list(dict(model.pvalues).items())
    model_dict.sort(key=lambda x: x[1], reverse=True)
    highest_pvalue = model_dict[0]
The final model has the following metrics:
R2 of 0.857
Test RMSE of 99701
90 significant features (p-value < 0.05) of 90 features total
Final Model Q-Q Plot
In addition to these metrics, I also plotted the residuals of the final model.
plt.scatter(model.predict(train.drop('price', axis = 1)), model.resid)
plt.plot(model.predict(train.drop('price', axis = 1)), [0 for i in range(len(train))])
plt.title('Final Model Residuals')
plt.tight_layout()
plt.savefig('figures/final-residuals-plot.png')
plt.show()
Final Model Residuals Plot
Finally, I save the model coefficients, so they can be analyzed and used to create a prediction function.
model.params.to_csv('data/final_model.csv', header=False)
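The article stops short of showing the prediction function itself, so here is one possible sketch. The coefficient values below are made-up placeholders (a real version would read them from data/final_model.csv), and it assumes the final model's setup: a log-transformed target, so the raw prediction is exponentiated back into dollars.

```python
import math

# Made-up stand-ins for model.params; the real numbers live in
# data/final_model.csv saved above.
coefs = {
    'Intercept': 5.0,
    'sqft_living': 0.48,   # continuous feature, log-transformed
    'waterfront': 0.47,    # dummy feature
}

def predict_price(features, coefs):
    """Predict a sale price from a {feature: value} dict.

    Continuous inputs are assumed to already be log-transformed,
    matching the training pipeline; the result is exponentiated
    because the target was log-transformed too.
    """
    log_price = coefs['Intercept']
    for name, value in features.items():
        log_price += coefs.get(name, 0.0) * value
    return math.exp(log_price)

house = {'sqft_living': math.log(2000), 'waterfront': 0}
print(round(predict_price(house, coefs), 2))
```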
Final Model Analysis
By looking at the coefficients of the model, some interesting observations can be made.
each bedroom decreases the sale price of a house by 5%
each bathroom increases the sale price of a house by 6%
a 1% change in square footage living area increases the sale price of a house by .48%
a 1% change in square footage lot area increases the sale price of a house by .07%
if the house is on the waterfront, the sale price of a house increases by 60%
a 1% change in square footage basement area decreases the sale price of a house by .00005%
if you move north, a 1 degree increase in latitude increases the sale price of a house by 65%
if you move east, a 1 degree increase in longitude decreases the sale price of a house by 53%
a 1-year increase in the age of a house increases its sale price by .04%
a house that has been renovated has its sale price increased by 6%
using a one-floor house as a baseline, a 1.5-floor house has its price increased by 1.5%, while a 3-floor house has its price decreased by 6.4%. Other numbers of floors are approximately equal in price to a 1-floor house.
using a condition of 1 as a baseline, a condition of 2 increases the price by 19%, a condition of 3 increases the price of a house by 31%, a condition of 4 increases the price by 35%, and a condition of 5 increases the price by 40%
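Since both the target and several features were log-transformed, these percentage figures come from exponentiating the fitted coefficients: a coefficient beta on a dummy or unit-change feature corresponds to roughly a (e^beta - 1) * 100 percent change in price. A quick sketch with illustrative coefficients (chosen to roughly match the waterfront and bedroom figures above, not read from the actual model output):

```python
import math

def pct_effect(beta):
    """Percent change in price for a one-unit change in the feature,
    given a log-transformed target."""
    return (math.exp(beta) - 1) * 100

# Illustrative values, not the fitted coefficients from final_model.csv
for name, beta in [('waterfront', 0.47), ('bedrooms', -0.05)]:
    print(f'{name}: {pct_effect(beta):+.1f}%')  # ≈ +60.0% and -4.9%
```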
Conclusions
The model could be improved in the future by adding interaction features, polynomial features, and map data.
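As a sketch of what that future work might look like, scikit-learn's PolynomialFeatures can generate squared and interaction terms from existing columns. The two-column array below is illustrative (think sqft_living and bathrooms), not the actual King County data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Illustrative rows: [sqft_living, bathrooms]
X = np.array([[1500.0, 2.0],
              [2400.0, 3.0]])

# degree=2, no bias column -> columns are x1, x2, x1^2, x1*x2, x2^2
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)

print(X_poly.shape)  # → (2, 5)
```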
The final model will be useful in predicting sale prices of houses in King County. We can use these predictions to help our clients set the prices for their houses, and find houses that are currently underpriced.

| https://andrew-muller.medium.com/basic-linear-regression-modeling-in-python-7fc8a0843a3 | ['Andrew Muller'] | 2020-11-30 03:27:04.667000+00:00 | ['Data Science', 'Python', 'Linear Regression'] |
After the Fall

The rise
went almost entirely unnoticed;
gradual as it was.
Moment by moment,
Hour by hour,
Day by day.
Bright things grew back
And dead things fell off
And while most things didn’t seem
to be growing at all;
They were.
The fall
Had planted many scars
But the pain of healing
Drew focus away from
The delicate progress they enabled.
Moment by moment,
Hour by hour,
Day by day.
It can be hard to know
You’re rising
Whilst amongst
The thickest clouds,
‘Till suddenly, you see the sun, and
You are back where you belong.

| https://medium.com/a-cornered-gurl/after-the-fall-a46e6ae83577 | ['Adam Millett'] | 2020-02-24 11:16:01.312000+00:00 | ['Self-awareness', 'Healing', 'A Cornered Gurl', 'Self Love', 'Poetry'] |
UI/UX Design Is a Link in the Chain Between Art and Tech

Once an idea has proven to be viable and a clickable prototype is ready, it’s time to start thinking about what users should see in the future app — a beautiful and functional UI/UX (user interface/user experience) design. While no crucial decisions are typically made at this stage, the design can still play a role in the product’s fate.
Why design is a double-edged sword
Many philosophical perspectives suggest that everything in this world consists of two opposing aspects, and design fits this duality well. Why? Because software design is simultaneously both an easy and a difficult process. What’s easy about it? When most of the research and prototyping are finished, the team builds the basic patterns of the app and formulates its main goal, which means that everyone knows what to draw and what the app is supposed to look like.
On the other hand, designing an app or website is difficult because it’s awfully subjective. Everyone likes different things, and that doesn’t mean that anyone is wrong. Just like in our daily life! Some people like strawberry ice cream, some people are crazy about homemade pizza, and for some, both are amazing.
Moreover, even an individual’s tastes can change over time. Maybe when you were a kid, you just couldn’t stay away from cinnamon rolls, but as an adult you can’t even stand to look at them. The same thing happens with your concept of beauty. Where once you might have liked an acid pink background in an app, now you see that light-brown would be much more appropriate.
For design decisions like these, A/B testing can save the day. It’s a great way to gather user feedback on your design and choose the variant that best suits your project’s future audience. All you need is a huge sample that accurately represents the target users. Otherwise, the test won’t help much, since feedback from a smaller audience won’t be enough to support appropriate design decisions.
Even though UX/UI design can be complicated, it’s still an interesting and unique process that manages to connect two opposite poles — creative art and precise technology — into one concept. How does this happen? Let’s figure it out!
Design and Art
Despite the fact that both design and art encompass a similar aspect of creativity — visual perception — there are some crucial differences in their main goals.
When creating a painting, the artist wants it to be beautiful. The buyer will eventually hang it on a wall, where it should be pleasing to the eye and suit the room decor.
UI/UX design must also be beautiful, but designs are created beautiful not just for the sake of being beautiful. Beauty here must solve problems and highlight the main product features. The primary purpose of UI/UX design is to build a smooth and understandable user flow with the help of visual solutions. The user should be satisfied with the way the app looks and the simplicity of its usage. If this is the case, the app will meet the audience’s needs and attract a lot of users, which in turn will provide the app owner with a decent profit.
This is why excellent artists and illustrators sometimes turn out to be awful at creating software designs. They might be great at creating amazing book illustrations or making cute things in Photoshop, but fail miserably when it comes to building solutions for user problems. They lack understanding about how to combine beauty and functionality. This is why you should pay enormous attention to your UI/UX team.
Design and Tech
The other aspect of UI/UX design is deeply rooted in technology, just as the term “user interface/user experience” is connected to convenience and understandable navigation of different software types: desktop, mobile, and web.
Here we can also see a kind of bond between these two ideas, just as between design and art, and yet this connection is not as strong as it may seem. In truth, development basically requires far less soft skill and imagination than UI/UX design.
Let’s imagine a company that has recently hired a junior developer without much experience, but who has a strong desire to grow as a professional. All else being equal, that person would need about half a year to become a mature tech specialist and to acquire a more or less solid background along with enough expertise to work on big, serious projects.
The professional growth of designers is something different. It’s not enough for a good designer to know all the manuals for Adobe Illustrator or Sketch by heart. There is supposed to be something far beyond these bare tools. A good designer should have a fine inborn sense of beauty that has been polished many times by their visual experience. A proper set of UI/UX design skills can take years to form, and still each new project can require a unique approach. This means that mastering essential design tools like Invision or Figma is the easiest task for a future UI/UX designer.
At the same time, developers usually underestimate design. It’s easy to imagine a situation where the designer creates a beautiful layout for a web-page, then gives it to a front-end developer who pays almost no attention to button locations or column sizes, thereby making the whole thing as bad as can be. This totally upsets the designer, the developer has to start everything over, and time and money are wasted on nothing.
The same thing can happen at the highest level of management. In most cases, tech companies are founded by tech people and they have tech people as their CEOs. These people care a lot about technology, which is good, but they don’t really care about design, which is bad. The best way here is to combine a professional UI/UX team and competent developers — and the perfect recipe for a successful product is ready to go!
Wrapping it up
UI/UX design stands directly between two poles — art and development. Even though design takes some important features from each, it nevertheless remains a unique activity that plays a crucial role in your product’s success.
Now we can definitely point out one important thing which is often underestimated:
Design matters.
Originally published at https://yellow.systems.

| https://uxplanet.org/ui-ux-design-is-a-link-in-the-chain-between-art-and-tech-73c13f6126fe | [] | 2020-05-04 15:04:14.027000+00:00 | ['UI Design', 'Design Matters', 'Mobile App Development', 'UX Design', 'Web Development'] |
Start Machine Learning in 2020 — Become an expert from nothing, for free!

Who can become a machine learning expert in 2020?
This guide is intended for anyone having zero or a small background in programming, mathematics, and/or machine learning. There is no specific order to follow, but a classic path would be from top to bottom, following the order given in this article. If you don’t like reading books, skip the section, if you don’t want to follow an online course, you can skip this one as well. There is not a single way to become a machine learning expert, and with motivation, you can absolutely achieve it by creating your own steps.
But the goal of this article is to give a path for anyone wanting to get into machine learning and not knowing where to start. I know it can be hard to find where to start, or just what to do next when learning something new. Especially when you don’t have a teacher or someone to guide you. This is why I will list many important resources to consult ordered by “difficulty” with a linear learning curve. If you are more advanced, you can just skip some steps.
All resources listed here are free, except some online courses and books, which are certainly recommended for a better understanding, but it definitely possible to become an expert without it, with a little more time spent on online readings, videos and practice.
Don’t be afraid to replay videos or learn the same concepts from multiple sources. Repetition is the key to success in learning something new!
Find the complete list on GitHub
Tag me on Twitter @Whats_AI or LinkedIn @Louis (What’s AI) Bouchard if you share the list!

| https://medium.com/towards-artificial-intelligence/start-machine-learning-in-2020-become-an-expert-from-nothing-for-free-f31587630cf7 | ['Louis', 'What S Ai'] | 2020-12-28 12:27:49.860000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Online Learning', 'Tutorial'] |
You Don’t Like Me Because I’m Vegan?

For him to express his disdain for someone who has been in his wife’s life for longer than he has, one would think it would be for a decent reason.
Maybe we got in a fight about animal rights and I said some hurtful things? Or maybe I’m just one of those annoying, pushy vegans.
But I know that neither of those are the case because the last time I saw Jay, six years ago, was before I committed to this lifestyle.
There was never any confrontation around this issue because we’ve never even talked about it.
So why does he, or anybody else, find the need to criticize me because of what I eat? Not just criticize and judge me, but hate me for it. It’s just food, dude.
It’s not like I’m murdering sentient beings that I claim to love just for my taste buds. Sorry, a pretentious vegan couldn’t help herself.
I just don’t eat some of the stuff that he eats. That’s all.
When Beth told me her husband hates me because I’m vegan, she was concerned that I would be offended. Sorry, but I’m not.
If he has had any emotional effect on me, it’s flattery. I’m flattered that being vegan is the worst thing he could find wrong with me. Jay couldn’t find any other reason to not like me other than one of what I consider my greatest strengths? I’m honored indeed.
But then, after the laughter wore off, it did start to bother me.
The thought of many other people in the world just like him really started to get to me. People that are so insecure that they are threatened by anyone with an opinion opposite of theirs.
I’m flattered that being vegan is the worst thing he could find wrong with me.
This idea that there could possibly be a great portion of the population that is so bothered by my food choices that they will go to any lengths to prove me wrong is severely troublesome.
I’m not sitting five states over talking about what he eats for dinner. Why is he so concerned with what’s on my plate?
I know why the meat and dairy industry spends so much effort convincing us that consuming these things is “natural,” but why does the average Joe (or Jay) care so much?
Why is he so bothered by what I’m not eating?

| https://medium.com/tenderlymag/you-dont-like-me-because-i-m-vegan-918eef31c24 | ['Rosemary', 'Summers'] | 2020-03-12 18:14:17.407000+00:00 | ['Relationships', 'Vegan', 'Life Lessons', 'Lifestyle', 'This Happened To Me'] |
Are you having trouble figuring out what to capitalize in your title?
Here’s a useful tool that I use to double-check my titles.
Capitalize My Title is an easy straight forward tool.
Simply paste or type your title in the provided space and then select from seven different options.
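For a sense of what such tools do under the hood, here is a tiny Python sketch of one common style: title case with a minor-word list. This is a simplification for illustration, not Capitalize My Title's actual algorithm.

```python
MINOR_WORDS = {'a', 'an', 'the', 'and', 'but', 'or', 'for', 'nor',
               'on', 'at', 'to', 'by', 'of', 'in'}

def title_case(title):
    """Capitalize major words; keep short 'minor' words lowercase,
    except at the start and end of the title."""
    words = title.lower().split()
    out = []
    for i, word in enumerate(words):
        if 0 < i < len(words) - 1 and word in MINOR_WORDS:
            out.append(word)
        else:
            out.append(word.capitalize())
    return ' '.join(out)

print(title_case('a tale of two cities'))  # → A Tale of Two Cities
```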
Screenshot by Author — https://capitalizemytitle.com/
Happy writing.

| https://medium.com/illumination/are-you-having-trouble-figuring-out-what-to-capitalize-in-your-title-9d5b15d0004a | ['The Dozen'] | 2020-12-28 03:48:17.399000+00:00 | ['Short Form', 'Writing Tips', 'Illumination', 'Grammar', 'Writing'] |
UX Writing 101- Part 2
Who does a UX writer work with, and how can they help evaluate the design?
My journey continues. In my previous article, UX writing 101 — part 1, I looked at who UX writers are, what they do, and how they can craft copy that is clear, concise and useful.
Now let’s look at how UX writers can help build on the design, elevating it to new heights, helping you deliver an iconic online experience.
The words are important
Writing for product design has its quirks. The process is similar to cooking. You experiment with the ingredients and flavours. Adding a little of this, a touch of that and a sprinkle of magic to create something scrumptious.
With UX writing every word needs to have a purpose. Every word needs to count. The sentences you choose must be essential. The space and time you have are limited. You need to convey the right message at the right time.
A UX writer will need to:
Do the research
To write about your product well you need to understand it and how it works. You also need to know how your users are using it and how fits into their daily life.
To write about your product well you need to understand it and how it works. You also need to know how your users are using it and how fits into their daily life. Check out the competition
All brands have some sort of competition. Continually review competitor activity, use this intel as insights to help elevate your brand’s copy and designs. As they say, imitation is the best form of flattery.
All brands have some sort of competition. Continually review competitor activity, use this intel as insights to help elevate your brand’s copy and designs. As they say, imitation is the best form of flattery. Know the unknown
You’re having a conversation with your user. And so naturally they’ll have questions. Make the conversation run through the design, so it feels natural to the user answering any questions.
You’re having a conversation with your user. And so naturally they’ll have questions. Make the conversation run through the design, so it feels natural to the user answering any questions. Break things down
Engage with your users. Make your product desirable to them. Help the user understand what your product can do for them. And how it benefits them.
“Words are really important because the graphics don’t make sense sometimes.” — John Maeda
Working together
While it’s the UX writers job to create the content for the product. I’ve found that the content can be much more fruitful when they collaborate with a wide range of teams.
UX designers
I’ve hugely benefited from working closely with UX designers so that we can craft and build designs together and define the scope of the problem.
I’ve hugely benefited from working closely with UX designers so that we can craft and build designs together and define the scope of the problem. Stakeholders
They provide us with a better understanding of the metrics we’re trying to influence. And it’s an excellent opportunity to educate them about what good experience looks like.
They provide us with a better understanding of the metrics we’re trying to influence. And it’s an excellent opportunity to educate them about what good experience looks like. Product managers
Help us to understand the roadmap and set the priorities. We, in turn, provide them with the context of the user journey.
Help us to understand the roadmap and set the priorities. We, in turn, provide them with the context of the user journey. Developers
As they bring our designs and concepts to life, we work with them as small tweaks and changes may be required.
As they bring our designs and concepts to life, we work with them as small tweaks and changes may be required. Optimisation team
We’re always learning and discovering. Running A/B tests on designs and copy changes allow us to see if we’re on the right path.
Collaborating with these partners is key to the design process. We want to bring these folks on this journey with us, and not take them by surprise when revised designs or copy are put in front of them.
Designing with words
Yes, writing is tough. But to be a good designer, you need to have a good understanding of the content.
When you design or create user flow, you’re telling a story. And to tell a good story, it helps to have a UX writer beside you. As they can help narrate the customer journey from screen to screen.
The UX writer is an advocate for the customer along with the designer so that they consider every element of the user experience from the customer’s perspective.
Dot to dot
UX writers tend to juggle several projects, as they are needed in multiple areas. While this can be daunting, it can be an advantage: with a good overview of what’s going on, they’re able to connect all the dots, bringing more insight into your design and helping to elevate it to the next level.
UX writers are the champions of consistency throughout the product language. And consistent brands are worth as much as 20% more than those with inconsistent messaging.
So there you have it. Hopefully, a better understanding of who UX writers are, what they bring to the table, and how they can help lift your designs. So invite them to the party, because it’s all about the experience, make it count. | https://medium.com/nyc-design/ux-writing-101-part-2-21c6c014fe24 | ['Jas Deogan'] | 2020-05-14 04:47:48.694000+00:00 | ['New York', 'UX Design', 'Content Writing', 'Content Creation', 'UX'] |
Python Django: The Simple Web Application Framework for Your Next Big Project | Python Django: The Simple Web Application Framework for Your Next Big Project StefanBStreet · Nov 22
Django! Can the advantages of using this framework be exhausted? It’s seriously in doubt, as we will soon see.
Creating a good-looking, fast website is one of the most important tasks for a business in this day and age. High scalability, ease of creation, and dynamism are also factors considered important in creating a website. The Django framework comes in handy for meeting all of these targets: with Django, a well-designed website capable of scaling to millions of users can be set up quickly. Whether it’s booking engines, shopping sites, content management systems, or financial platforms, Django is suitable for the project. Another wonderful thing about Django is that it is superbly cost-effective, given that APIs built in Django often require less developer time than in other stacks such as Ruby on Rails, PHP, or .NET.
Django follows a model-view-controller (MVC) style of architecture, which Django’s own documentation describes as model-template-view (MTV). Settings, data models, and URL routing are all written in plain Python. Django grew out of the intense deadlines of a newsroom and the rigorous requirements of experienced web developers; it emphasizes non-repetition, quick development, and reusability of components, while letting you leverage your existing Python skills.
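Django’s documentation calls this split model-template-view (MTV): the model holds data, the view chooses what to show, and the template renders it. The division of labor can be sketched without the framework at all, using only the Python standard library (the Article class and template string below are illustrative stand-ins, not Django APIs):

```python
from dataclasses import dataclass
from string import Template

# "Model": holds the data (in Django, a models.Model subclass).
@dataclass
class Article:
    title: str
    body: str

# "Template": describes presentation (in Django, an .html template file).
ARTICLE_TEMPLATE = Template("<h1>$title</h1><p>$body</p>")

# "View": fetches the model instance and hands it to the template
# (in Django, a function in views.py returning an HttpResponse).
def article_view(article):
    return ARTICLE_TEMPLATE.substitute(title=article.title, body=article.body)

html = article_view(Article("Hello", "Django separates data from presentation."))
print(html)
```

Because each role is a separate piece, any one of them can change (a new template, a different data source) without touching the others, which is the point of the pattern.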
Since Django is written with Python, it’s also easy to learn, just like Python. Developers just coming into the framework arena can easily learn Django, especially with some basic knowledge of Python already acquired.
Popular Websites Using Python Django
Companies, organizations, and even governments (yes, governments; you read that right!) have used Django to build a lot of things. Many successful tech companies today, especially those with large, fast-growing user bases, choose Django. Here we take a look at some of them and what they did with Django (and, in several cases, with Python more broadly).
Instagram
Currently, Instagram is acknowledged as the largest online photo-sharing app, with over a billion users all over the world. According to its engineering team, Instagram’s backend features the world’s largest deployment of the Django web framework, written entirely in Python.
Youtube
Just as Instagram is the largest photo-sharing app, YouTube stands at the apex when it comes to video sharing. YouTube was first built with PHP but moved much of its backend to Python due to audience growth and the need to implement new features quickly. That growth continues, and new features are still being delivered with Python.
Google
Google’s services reach billions of users all over the world. Google’s founders famously decided to use Python “where we can” and other languages “where we must.” Python helps handle Google’s large traffic, connecting apps, and computing needs, and Django is one of the frameworks long supported on Google’s own App Engine platform. Put directly, Python is part of the power behind the world’s most popular and most used search engine.
NASA
Earlier, it was mentioned that governments also use Django; here is a popular example. The website of the US National Aeronautics and Space Administration features Django in handling its views and traffic as users explore pictures, news, and current space exploration.
Facebook
Facebook is the biggest online community in the world. Python is used in handling parts of the infrastructure behind the numerous posts, status updates, and picture uploads shared on the platform.
Netflix
All over the world, Netflix is known for movie streaming, downloads, and TV production. The analytics that recommend shows and movies for Netflix users are developed largely with Python. Currently, Netflix has over 195 million subscribers all over the world, and Python helps serve this large audience effectively.
The NY Times and The Washington Post
These two aren’t the only popular news outlets that use Django; they’re just two well-known examples. Django is crucial for their scalability and for handling the massive amount of data generated by their audiences daily.
Spotify
Spotify provides digital music services, giving users access to millions of songs and letting them reach their music library anywhere. Spotify’s developers chose Django because it lets them use the full range of Python’s features, including fast backend development and machine learning options. Spotify currently has over 320 million users all over the world.
Mozilla
Mozilla makes Firefox, one of the most popular web browsers in the world. The queries Mozilla receives over its APIs each month run into the hundreds of millions, which led it to move from PHP + CakePHP to Python Django. Now, the add-ons site for the Mozilla browser and its support websites are powered by Django.
Django and Python
Django and Python are quite interwoven, and their descriptions share some common terms, but from the information so far it’s obvious the two aren’t the same.
While Python is a high-level, open-source, dynamic, interpreted, general-purpose, object-oriented programming language used for machine learning, artificial intelligence, desktop apps, and more, Django is a Python framework for server development and full-stack applications. In Django, apps are assembled from prewritten components, whereas in plain Python a website must be built up from lower-level pieces such as the Flask microframework or the Werkzeug library for low-level HTTP functions. A significant advantage of learning Python, then, is the ability it gives you to use Django.
Django Features and Uses
The features and uses of Django aren’t really separable, so here they are together:
Rapid Development
The primary intention behind Django’s design is a framework that takes less time to build web applications with, and that aim was achieved: project implementation in Django is fast. There is no need for an external backend to develop a functional website, no need for extra files for each task, and no need to create separate server files to design the database, connect it, and move data to and from the server. Django takes care of all of this.
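As a sketch of how little is needed, a working page takes only a view function and a URL route. This is a fragment of a Django project rather than a standalone script; it assumes Django is installed, that the usual startproject scaffolding exists, and “myapp” is an illustrative app name:

```python
# views.py -- the entire "backend" for a simple page
from django.http import HttpResponse

def home(request):
    return HttpResponse("Hello from Django")

# urls.py -- point a URL at that view; Django's bundled dev server,
# ORM, and template engine mean no separate server files are written
from django.urls import path
from myapp.views import home  # "myapp" is an illustrative app name

urlpatterns = [
    path("", home),
]
```

Running `python manage.py runserver` from the generated project then serves the page; the database is configured in the same project’s settings.py rather than in hand-written server code.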
Full-Fledged Python Web Framework
To a large extent, Python is the major reason people start learning Django in the first place. It’s a tool capable of solving all kinds of problems, from web development to machine learning and everything in between.
Also, it’s quite simple and easy to use.
Some basic knowledge of Python is required before a developer can work with Django. There’s a possibility of learning Django without knowing Python, actually, but that will be a sort of tight squeeze. Why go down the difficult road when you can have it easy? Learning Python is a basic requirement that’s needed to ease the Django journey.
High Scalability
When we mention scalability, we are talking about how far a technology can grow. Scalability means having modular, decoupled components that can be replaced if they turn out to be bottlenecks of the application. It also implies the ability to progressively scale from a low number of users to a high number of users, so new users can be absorbed easily. Scalability also involves having long-term support for things such as bug fixes and security patches.
One of the major concerns of people aiming to build applications is scalability. Even though an application may start small, there’s the possibility of it growing larger in size as time goes on, and not being ready for this growth may lead to a need for a rewrite. Therefore, there is a need to pick a scalable framework to avoid this kind of trouble as the app use grows.
With Django, the Scalability issue is easily put to bed, and the big apps have proven this with a large number of users worldwide that use Django.
There are two types of scaling, and they are vertical and horizontal scaling.
In vertical scaling, an application is scaled by upgrading the machine it runs on and hoping that machine can process the number of requests the application receives. This approach has limitations: it is difficult to build auto-scaling infrastructure around it, the machine cannot be upgraded or downgraded while the application is running, and it is less efficient and costs more.
Horizontal scaling means scaling by spawning more machines to serve the application side by side, splitting the workload of requests received by a single machine across many. For example, instead of a single machine receiving 1,500 requests, 10 machines might each receive 150. This is preferable, as more machines can be added as needs dictate, and scaling can continue without a hitch as the application’s user base grows.
Django is quite great when it comes to horizontal scaling.
Some tools that help with scaling Django deployments are DigitalOcean Spaces, Azure Blob Storage, Google Cloud Storage, and similar object stores.
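A common concrete step is moving user-uploaded files off the web servers and into one of these object stores, so every identical app server sees the same data. In Django this is usually just configuration; the sketch below assumes the third-party django-storages package and an S3-compatible store, and the bucket name and endpoint are illustrative, not real:

```python
# settings.py (fragment) -- keep uploaded media in an S3-compatible
# object store instead of on any single machine's disk
INSTALLED_APPS = [
    # ... the usual Django apps ...
    "storages",  # provided by the third-party django-storages package
]

DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-app-media"  # illustrative bucket name
AWS_S3_ENDPOINT_URL = "https://nyc3.digitaloceanspaces.com"  # e.g. DigitalOcean Spaces
```

With storage externalized, app servers hold no state of their own and can be added or removed freely, which is exactly what horizontal scaling needs.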
Security
It’s a generally accepted fact that when it comes to security, Django is quite superb, as backend developers can testify. Django guards against common vulnerabilities (such as cross-site scripting, clickjacking, CRLF injection, SQL injection, and cross-site request forgery) by default, and supports encrypted connections. It provides a secure way of managing usernames and passwords. Part of what accounts for Django’s security is that these protections are built into the framework itself rather than bolted on afterward.
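One of those defaults is that Django’s ORM parameterizes SQL for you. The principle can be demonstrated with Python’s standard-library sqlite3 driver (this is plain Python, not Django code; the table and input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: string interpolation lets the input rewrite the query,
# so the attack matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a placeholder sends the input as data, never as SQL --
# the same idea Django's ORM (e.g. filter(name=...)) applies for you.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- input treated literally, no match
```

Because Django developers rarely write raw SQL at all, this whole class of bug is hard to introduce by accident.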
Ease of Usage
This is made possible because Django uses the Python programming language, which in recent years has become one of the most chosen languages among developers joining the coding train. It offers better code readability, which puts beginning developers at great ease. The fact that Django is free, open-source, and maintained by a large community of users adds to its ease of use.
Time- and crowd-tested
Part of what a developer looks for when learning a new technology is its ability to withstand the dynamic changes happening from time to time in the tech world. Django has consistently been at the front of responding to new vulnerabilities and issues, and recent releases focus on polish and edge-case fixes.
Also, Django has been around for more than 13 years now and is still very popular (many thanks to the Python factor), with the number of developers choosing it for web development growing by the day. This growing patronage is good proof of Django’s capability and stability.
Great documentation for real-world application
Compared to other open-source frameworks, Django offers some of the best documentation for developing different kinds of real-world applications. For any developer, good documentation is like a well-stocked library. When Django came around, most other frameworks documented themselves as an alphabetical list of modules, methods, and attributes — good for quick reference, but not so good for someone who has just entered the framework arena. This is another area where Django stands out, and Django’s developers do a great job maintaining documentation quality.
The Django Community
One of the best aspects of the Python world is the community, which is also true of Django. Governed by the DSF (Django Software Foundation), all Django events have a code of conduct. The community is quite supportive and is a good channel to share and connect, with lots of good groups flourishing there. Like the Python community, the Django community contributes numerous utilities and packages for the wider world.
Search engine optimization (SEO)
Search engine optimization is about getting a website to appear near the top of search results.
Quite often, SEO and the developer’s job seem to be at cross purposes, because search engine algorithms don’t cooperate much with how developers would naturally structure things. Websites are created in human-understandable form, but their pages are reached through URLs, which is what the search engine sees. Django makes this problem go away by advocating the use of human-readable URLs, which is of immense help in search engine site rankings.
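Concretely, a human-readable URL usually embeds a “slug” derived from the title. Django ships a helper for this (django.utils.text.slugify); here is a rough, plain-Python approximation of what it does for ASCII input:

```python
import re

def slugify(title):
    """Rough approximation of django.utils.text.slugify for ASCII input."""
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)  # drop punctuation
    slug = re.sub(r"[\s-]+", "-", slug)       # collapse spaces/hyphens into one dash
    return slug.strip("-")

# A URLconf can then expose /articles/why-django-scales/
# instead of an opaque /articles?id=42
print(slugify("Why Django Scales!"))  # why-django-scales
```

In a real project the slug is stored on the model and matched by a `<slug:slug>` path converter in urls.py, so both humans and crawlers see meaningful addresses.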
Versatility
Django is capable of working with any client-side framework and can deliver content in almost any format, including HTML, JSON, XML, and RSS feeds. It can be used to build any type of website, be it a news site, a social networking site, or a content management site. Just name it, and Django is capable of it.
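In Django this is often a one-line choice between response classes (for example, JsonResponse for JSON). The underlying idea — one data source feeding several output formats — can be sketched with the standard library alone (the article dict and helper names here are illustrative):

```python
import json
import xml.etree.ElementTree as ET

article = {"title": "Hello", "body": "Same data, many formats."}

def as_json(data):
    # In Django: return JsonResponse(data)
    return json.dumps(data)

def as_xml(data):
    # In Django: render an XML template, or use a syndication feed class
    root = ET.Element("article")
    for key, value in data.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

print(as_json(article))
print(as_xml(article))
```

The view logic and the data stay the same; only the final rendering step changes per format.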
DRY (Don’t Repeat Yourself) principles
Django concentrates on getting the most out of every line of code, so less time is spent on rework and debugging; this is where the DRY principle comes in. DRY code means a piece of knowledge lives in one place, so everything that uses it changes simultaneously rather than needing to be updated in several copies — it’s the fundamental reason for using variables and functions in all programming.
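The principle is simply that each fact lives in one place. A minimal, framework-free illustration (the tax-rate example is made up):

```python
# Not DRY: the tax rate is repeated, so a change must be made twice
def price_with_tax_bad(net):
    return net * 1.20

def invoice_total_bad(nets):
    return sum(n * 1.20 for n in nets)

# DRY: the rate is defined once and reused everywhere -- the same idea
# behind Django deriving forms and admin pages from a single model
TAX_RATE = 0.20

def price_with_tax(net):
    return net * (1 + TAX_RATE)

def invoice_total(nets):
    return sum(price_with_tax(n) for n in nets)

print(invoice_total([10.0, 20.0]))  # 36.0
```

Change TAX_RATE once and every calculation follows, which is exactly how Django treats a model definition: forms, admin screens, and queries all derive from it.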
Apart from all mentioned so far, Django can also simplify form creation, collection, and processing. It includes a robust administrative site and a robust authentication and permission system for security purposes.
Django…. The future
Thinking of Python for a project automatically means thinking Django. With the fast growth in the number of Python users all over the world and the caliber of organizations and companies choosing it for their projects, it’s obvious the future of the Django frameworks is very bright, and it can only grow larger. This, therefore, makes Django the smart choice framework. The benefits are just too numerous, too good, too appealing, to ignore. | https://medium.com/swlh/python-django-the-simple-web-application-framework-for-your-next-big-project-7153892c4277 | [] | 2020-11-23 08:21:23.556000+00:00 | ['Python', 'Api Development', 'Django Rest Framework', 'Django', 'Web App Development'] |
So you want to serve your country: A (biased) guide to tech jobs in federal government | A bunch of excited (and confused!) folks have reached out for advice as they think through possibly joining government service. I’m juggling a few things at the moment, so wanted to share my high-level advice. If you need more advice, search some of these terms on twitter to find lots of folks working on these issues who are super friendly and helpful!
What do you care about?
This work is… not easy. The best way to prepare to make it through a day where you realize that a single person is the only thing keeping a critical system serving Veterans running is to make sure the end goal you’re working toward is close to your heart. To know what you care about, you don’t need to know anything about the structure of government. Are you worried about housing? Poverty? Corporate criminals running amok?
What the job can feel like on the best days. CC0 by me
What kind of work do you like?
Some tech work in government is talking procurement folks out of being swindled by deeply goofy blockchain schemes, and some of it is working with Veterans experiencing homelessness to learn how they access the internet to fill out forms. It can be overwhelming to figure out where you should apply and what each team does. There are a couple of well-known ways to join, but they’re all acronyms, so I typically explain it like this:
18F — Build it / Buy it
USDS — Fix it
PIF — Try it
OSTP — Fuel it
Agencies — Own it
Edited to add: It’s of course more complex than that, and you can follow along with conversations like “is 18F building/buying or coaching/training?” by following folks who are living it.
But how do you even apply?
This is so confusing! Sorry in advance. For the first three groups, 18F, USDS, and PIF, there’s a link to apply and the instructions are clear and great. They all take a much, much longer time than many private sector folks are used to, but are actually some of the fastest hirers in government.
For OSTP, my understanding is that most people are asked to join by the existing teams, and are “detailed” from other agencies or academia. For example, when I started at OSTP, I was working for another agency but reported to OSTP. The best way to get a job at OSTP is probably to already work in government and/or hunt down Kumar Garg for advice.
Agencies (there are roughly 280 of them) all have tech and/or design needs in some respect. Sometimes it intersects with policy issues, sometimes it’s building things. The link I shared above is the best place to apply for those jobs.
IMPORTANT NOTE: To apply successfully to agency jobs, you’ll need to submit what’s called a “government resume.” This means it must be multiple pages, should include every job you’ve ever had, every responsibility you’ve ever had, and address all of the skills asked about in the job posting. So if the job posting says “Candidate must have experience in widgets” you *must* say “I have been interested in widgets for 3 years” (or whatever your experience with widgets is).
To illustrate, here’s a real example: While trying to hire technical talent at an agency, I’d recruited a bunch of smart folks to apply, including an engineer who was working at GitHub. When I got the list of eligible people I could choose from to hire, the GitHub engineer was not on the list. I asked the folks who sent me the list why she hadn’t made the cut. It was because, and I quote, “she didn’t have ‘version control’ listed on her resume, and it was in the job description.”
If you’ve worked in the private sector, it may be unthinkable to you to produce a 5+ page resume, let alone list random things that you might have a ton of experience in (version control!!!!!!) even if it’s obvious (SHE WORKED AT GITHUB!!!!!!!!), and describe random things (widgets!) that you’ve had questionable experience in (being interested!), but if you don’t, the actual hiring manager will not even see your resume. You may be asking why the system works like this — it’s because the Federal government is both the largest employer on earth and also is one of the most committed to fairness. This excellent short agency video explains why “government resumes” exist. It’s frustrating, but it’s not a horrible idea.
I’m overwhelmed. Help!
Fair enough! First, remember that you don’t have to jump right this very second. Lots of folks are excited right now, but that doesn’t mean it’s now or never for you. Waiting also means you can find out who you’d be working with before you show up.
Second, if 280+ agencies is a little intimidating of a starting point, here are some recommendations of agencies that are known for treating tech and design seriously and having excellent talent:
All of the ones listed above
Consumer Financial Protection Bureau — Fight for a fair deal on everything from student loans to debt collection!
DARPA — Develop novel treatments for Covid-19!
General Services Administration (includes 18F, but has much more) — Nurture platforms that make everything else in government possible!
To reduce overwhelm, I also heartily recommend getting Cyd’s excellent book.
What about states and counties and cities and contracting roles and fellowships and internships and other things???
Those are important too! If you’re open to these roles, make sure to check out the Code for America Public Interest Job Board. | https://eriemeyer.medium.com/so-you-want-to-serve-your-country-a-biased-guide-to-tech-jobs-in-federal-government-c2d3fd567af | [] | 2020-11-15 13:12:23.438000+00:00 | ['Government', 'Technology', 'Design', 'Jobs'] |
Who Will Lead HHS and CDC Under Joe Biden? | Who Will Lead HHS and CDC Under Joe Biden?
A speculative but informed list
Credit: Pool/Getty Images
Joe Biden will be president of the United States, inaugurating what could be a major shift in how Covid-19 is being tackled. As they say in Washington, personnel is policy. So who is going to be tapped for the top health jobs under the new administration? Below is a list of potential candidates for a variety of roles that cover public health, administration, and health care policy. It’s speculative but informed based on interviews and other public reporting. (If you think I have this totally wrong, feel free to email me at alexandra@medium.com or suggest names in the comments.)
The roles of director of the Centers for Disease Control and Prevention and Secretary of the Department of Health and Human Services are unique in that not only do they need to have a wide breadth of public health knowledge (including, likely, infectious disease experience, given that the nation is experiencing its worst pandemic in a century), but they also have to run large bureaucracies and are seen as some of the most important communicators on health and health care issues. The CDC is an agency within HHS, the latter of which has over 80,000 employees. Politico has speculated, based on reporting, that the Biden team may lean toward governors and former governors for the HHS job, due to the natural political skills elected officials have and the fact that governors oversee health care as a large part of their state budgets. However, that’s not an absolute directive, and there are many people who seem like contenders who don’t fit that bill.
Given the massive health inequities in the United States, which have only become even more apparent during the pandemic, there’s hope (at least from this blog) that the public health leadership of the U.S. will be more representative of the country as a whole.
Here’s some informed conjecture.
Folks on the radar
On Monday, the Biden-Harris transition team announced its Covid-19 advisory board. These are public health experts who are tapped to advise the transition team on how the incoming administration should tackle Covid-19 (it’s unclear if this is through policy guidance alone or other measures). I’m listing the names and blurbs from the press release below. It’s reasonable to assume that some folks on this list could eventually be tapped for other formal health administration titles as has happened in the past, especially Vivek Murthy, MD, who has HHS experience and served as U.S. surgeon general from 2014 to 2017.
Co-Chairs and Advisory Board Members
CO-CHAIRS
Dr. David Kessler
David A. Kessler, MD, is Professor of Pediatrics and Epidemiology and Biostatistics at UCSF. Dr. Kessler served as FDA Commissioner from 1990 to 1997, appointed by President George H.W. Bush and reappointed by President Bill Clinton.
Dr. Vivek Murthy
Vivek Murthy, MD, MBA, served as the 19th Surgeon General of the United States from 2014–2017. As the Vice Admiral of the US Public Health Service Commissioned Corps, he commanded a uniformed service of 6,600 public health officers globally. The officers focused on helping underserved populations, protecting the nation from Ebola and Zika, responding to the Flint water crisis, and natural disasters such as hurricanes.
Dr. Marcella Nunez-Smith
Marcella Nunez-Smith, MD, MHS, is an Associate Professor of Internal Medicine, Public Health, and Management at Yale University and the Associate Dean for Health Equity Research at the Yale School of Medicine. Dr. Nunez-Smith’s research focuses on promoting health and healthcare equity for structurally marginalized populations.
MEMBERS
Dr. Luciana Borio
Luciana Borio, MD, is VP, Technical Staff at In-Q-Tel. She is also a senior fellow for global health at the Council on Foreign Relations. Dr. Borio specializes in biodefense, emerging infectious diseases, medical product development, and complex public health emergencies. She served in senior leadership positions at the FDA and National Security Council, including as Assistant Commissioner for Counterterrorism Policy and Acting Chief Scientist at the FDA, and Director of FDA’s Office of Counterterrorism and Emerging Threats.
Dr. Rick Bright
Rick Bright, PhD, is an American immunologist, virologist, and former public health official. Dr. Bright was the director of the Biomedical Advanced Research and Development Authority (BARDA) from 2016 to 2020 and the Deputy Assistant Secretary for Preparedness and Response at the Department of Health and Human Services. He also previously served as an advisor to the World Health Organization and the United States Department of Defense. His career has focused on the development of vaccines, drugs, and diagnostics to address emerging infectious diseases and national security threats.
Dr. Ezekiel Emanuel
Ezekiel J. Emanuel, MD, PhD, is an oncologist and Vice Provost for Global Initiatives and chair of the Department of Medical Ethics and Health Policy at the University of Pennsylvania. From January 2009 to January 2011, he served as special advisor for health policy to the director of the White House Office of Management and Budget. Since 1997, he has served as chair of the Department of Bioethics at The Clinical Center of the National Institutes of Health (NIH).
Dr. Atul Gawande
Atul Gawande, MD, MPH, is the Cyndy and John Fish Distinguished Professor of Surgery at Brigham and Women’s Hospital, Samuel O. Thier Professor of Surgery at Harvard Medical School, and Professor of Health Policy and Management at Harvard T.H. Chan School of Public Health. Dr. Gawande is also the founder and chair of Ariadne Labs, a joint center between Brigham and Women’s Hospital and the Harvard T.H. Chan School of Public Health for health systems innovation, and of Lifebox, a nonprofit organization making surgery safer globally. He previously served as a senior advisor in the Department of Health and Human Services in the Clinton Administration.
Dr. Celine Gounder
Celine Gounder, MD, ScM, FIDSA is a Clinical Assistant Professor at the NYU Grossman School of Medicine and cares for patients at Bellevue Hospital Center. From 1998 to 2012, Dr. Gounder studied TB and HIV in South Africa, Lesotho, Malawi, Ethiopia and Brazil. While on faculty at Johns Hopkins, Dr. Gounder was the Director for Delivery for the Gates Foundation-funded Consortium to Respond Effectively to the AIDS/TB Epidemic. She later served as Assistant Commissioner and Director of the Bureau of Tuberculosis Control at the NYC Department of Health and Mental Hygiene.
Dr. Julie Morita
Julie Morita, MD, is Executive Vice President of the Robert Wood Johnson Foundation (RWJF). Morita previously served as the Health Commissioner for the City of Chicago for nearly two decades. She is a member of the American Academy of Pediatrics and has served on many state, local, and national health committees, including the CDC’s Advisory Committee on Immunization Practices, and the National Academy of Sciences’ Committee on Community Based Solutions to Promote Health Equity in the United States.
Dr. Michael Osterholm
Michael Osterholm, PhD, MPH, is Regents Professor, McKnight Presidential Endowed Chair in Public Health and the director of the Center for Infectious Disease Research and Policy (CIDRAP) at the University of Minnesota. Dr. Osterholm previously served as a Science Envoy for Health Security on behalf of the State Department. For 24 years (1975 to 1999), he worked in the Minnesota Department of Health; the last 15 years as state epidemiologist.
Ms. Loyce Pace
Loyce Pace, MPH, is the Executive Director and President of Global Health Council. Over the course of her career, Loyce has championed policies for access to essential medicines and health services worldwide. Ms. Pace has worked with Physicians for Human Rights and Catholic Relief Services, and previously served in leadership positions at the LIVESTRONG Foundation and the American Cancer Society.
Dr. Robert Rodriguez
Dr. Robert Rodriguez graduated from Harvard Medical School and currently serves as a Professor of Emergency Medicine at the UCSF School of Medicine, where he works on the frontline in the emergency department and ICU of two major trauma centers. He has authored over 100 scientific publications and has led national research teams examining a range of topics in medicine, including the impact of the COVID-19 pandemic on the mental health of frontline providers. In July 2020, Dr. Rodriguez volunteered to help with a critical surge of COVID-19 patients in the ICU in his hometown of Brownsville, Texas.
Dr. Eric Goosby
Eric Goosby, MD, is an internationally recognized expert on infectious diseases and Professor of Medicine at the UCSF School of Medicine. During the Clinton Administration, Dr. Goosby was the founding director of the Ryan White CARE Act, the largest federally funded HIV/AIDS program. He went on to become the interim Director of the White House’s Office of National AIDS Policy. In the Obama Administration, Dr. Goosby was appointed Ambassador-at-Large and implemented the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR). After serving as the U.S. Global AIDS Coordinator, he was appointed by the UN Secretary General as the Special Envoy for TB.
The composition of the committee feels strategic in terms of getting quality voices in the room for a science-backed approach to the pandemic and for signaling what’s to come. The Biden administration likely wants to find people who can provide the best strategies for getting the pandemic under control as well as boost the morale of the health agency employees who have felt sidelined, silenced, and compromised. Take, for example, Rick Bright, the former top vaccine official in the Trump administration who became a whistleblower. His inclusion on the Biden-Harris Covid-19 advisory board sends a signal of restoration to the health agencies. Promoting people within the organizations who have stuck through the four years will likely also happen or at least be considered (see the first person suggested for CDC below). It should be noted that jobs that manage entire agencies require Senate confirmation; however, the CDC director position does not.
Here’s who else is likely on the radar
CDC
Anne Schuchat, MD: As the principal deputy director of the CDC — and the interim CDC director from January to July 2017 and February to March 2018 — Schuchat is viewed by many in public health to be an obvious contender for the director job. She has extensive knowledge of the agency, and she’s been hands-on during prior health emergencies including H1N1, SARS, and Ebola. In prior pandemics, she led daily briefings, which many in the media are hoping will pick up again. An October ProPublica feature revealed that Schuchat has clashed with the Trump administration, including Covid-19 task force leader Deborah Birx, and that she’s viewed by CDC insiders as “the defender of the agency’s principles.”
Mary Bassett, MD: The current director of the FXB Center for Health and Human Rights at Harvard University was formerly the commissioner of the New York City Department of Health and Mental Hygiene from January 2014 until August 2018. There's precedent for a former New York City health commissioner going on to lead the CDC: Tom Frieden, MD, made exactly that move. She has a great TED Talk on why doctors should believe in social justice.
Richard Besser, MD: The president and CEO of the Robert Wood Johnson Foundation was a former acting director for the CDC and ABC News' former chief health and medical editor. He was the acting director of the CDC during the H1N1 pandemic in the U.S. During the Zika epidemic, I ran into Besser while reporting in Brazil, and he had shrewd advice about what the major questions of the response were. His media and communications savvy could work in his favor for sharing Covid-19 messaging. He could also be under consideration for HHS roles.
HHS/Surgeon General
Mary Katherine Wakefield: She’s a nurse and health care administrator who served as the acting Deputy Secretary of Health and Human Services in the Obama administration from 2015 to 2017 and as head of the Health Resources and Services Administration from 2009 to 2015. Given the immense work of nurses during the pandemic, giving Wakefield the top job would send a good message of recognition to a large but often overlooked group of health care workers.
Peter Kilmarx, MD: Kilmarx is an expert in infectious disease research and HIV/AIDS prevention and is currently the deputy director of the Fogarty International Center at the National Institutes of Health. He led the CDC efforts to quell the Ebola epidemic in Sierra Leone. He's been in touch with a wide range of contact tracing and epidemiology experts during the pandemic.
Margaret Ann “Peggy” Hamburg, MD: Hamburg — yet another former New York City health commissioner — served as the commissioner of the Food and Drug Administration under the Obama administration and is currently the chair of the Board of the American Association for the Advancement of Science. She’s also worked at HHS and the National Institutes of Health. If the administration is looking for someone with U.S. health agency knowledge, Hamburg is an obvious name to consider.
Tom Frieden, MD: The former head of the CDC before Robert Redfield and the current president and CEO of Resolve to Save Lives, an initiative of Vital Strategies, has become a prolific science communicator during the pandemic. His name is likely in the mix.
Scott Gottlieb: Whether or not he actually makes a final shortlist, Gottlieb — widely regarded as an effective former head of the FDA — is likely a name being considered.
Michelle Lujan Grisham: The governor of New Mexico is on Politico’s list of contenders for the top HHS spot. She ran New Mexico’s health agency in the past, and she currently co-chairs Biden’s transition team.
Mandy Cohen, MD, MPH: The North Carolina Health Secretary is also on Politico’s list and was a top official at the Centers for Medicare and Medicaid Services under the Obama administration.
Beth Cameron: The vice president for global biological policy and programs at the Nuclear Threat Initiative previously served as the senior director for global health security and biodefense on the White House National Security Council. She wrote a sharp critique of Trump's pandemic response in March as she had helped write the nation's "pandemic playbook" under Obama.

Source: https://coronavirus.medium.com/who-will-lead-hhs-and-cdc-under-joe-biden-bc758aae9df4 (Alexandra Sifferlin, 2020-11-10). Tags: Coronavirus, Covid 19, Politics.
I Am The One Who Claps Once

Photo by Thomas Jörn on Unsplash
Your Medium story amuses me. I shall clap once.
You gave me “Four Habits I Must Adopt to Be More Productive.” That’s helpful. I clap once.
You explained, “How 3D Printed Houses Can Fight Climate Change.” Intriguing. I clap once.
You revealed, “The Six Things That Only Dogs Know About Happiness.” Touching. Here’s a clap for that.
Do I clap more than once? No.
I clap just once, to let you know you have given me the smallest, non-zero amount of pleasure from reading a story.
You showed me so many things: “How to Write Like A Rockstar,” “How to Read Like An Autodidact,” and “How To Travel to 50 Countries Without a Suitcase.”
In a mere 750–2,000 words, you distilled a useful life skill, an overlooked piece of the human experience, or a droll anecdote from the one time you had a threesome. And for that, I clap once.
I don’t want to give you the idea that I’m easily impressed.
When I was a child my father told me, “never show too much appreciation, kid. Keep ’em hungry for your approval. Keep ’em guessing. That’s what drives the ladies wild.”
Then he patted me on the head and handed me a penny for Christmas, the one day of the year he agreed to spend time with my mother and me. He was my hero.
Once, I nearly ignored his advice.
My (then) fiancé took me to see the master: Yo-Yo Ma, live in concert. He rendered Bach's Cello Suites in sublime perfection. The notes transported me to a place where I glimpsed love. A tear welled in my eye. I allowed it. Then a second tear. I held it back.
The audience witnessed the heavens part that night. A standing ovation. And another. And another. The audience clapped until their hands were raw. I clapped once.
My fiancé said, “What the fuck is wrong with you!? Do you ever really love anything?” We broke up. Was I sad? I felt one unit of sadness.
But on Medium, I clap once, and it is okay. Here, I am home. Every day, I read.
Stories about how Instagram is making us miserable. I clap once.
Stories about how to deal with the narcissists in your life. I clap once.
Stories about nanotechnology, cryptocurrency, and sex robotics. I clap once.
“Nine Hacks to Make More Money on Medium.” I clap twice… Oh no, I have made a mistake. I spend a full day searching Medium’s FAQs to see if I can “unclap.” No dice.
I email you to let you know that my second clap was unintentional, then I delete my account, throw my laptop in a dumpster, shave my head, change my name, move to a small town in Oregon, and start my life from scratch.
A month passes. I return with a new Medium account. Your writing grows in power. You write stories about biotech, urban farming, transpersonal psychology, speed reading, drone bombing, single parenting, dating, breakups, adoption, abortion, internet marketing, beekeeping, and how to write the perfect Tinder profile.
I clap once, once, once, once, once, once. Once. O-N-C-E.
You unleash a tour-de-force: The complete story of how humanity will destroy Earth and colonize Mars. Your article is a masterstroke, the kind of accessible science writing that makes the intelligent layman feel like a god. You get a book deal. You get a tweet from Obama. Elon Musk gives you a Tesla.
So, that’s pretty good, for a free article. I clap once.
You transcend Tolstoy. I clap once.
You outclass Austen. I clap once.
You eclipse Shakespeare. I clap once.
You get 1 million views, 1 million claps, 1 million adoring fans. So cool. I clap once.
You take Mescaline and Absinthe and enter the most lucid depths of human consciousness. You summon Borges’ Library of Babel, constructing an infinite collection of every possible Medium article, in your mind’s eye.
Riding a wave of unmitigated genius, you create the Most. Perfect. Story. Ever. Written.
A tale of Love and Loss; Of Solitude and Communion; Of Birth and Re-birth.
Humanity weeps. Dictatorships collapse. Babies speak three months early.
Sic Itur Ad Astra!
Your Medium story is fiction. It is nonfiction. It is a duality. It is Everything.
In one story, you solve war, poverty, prejudice, illiteracy, infidelity, terrorism, and insufficient legroom in economy class.
Something stirs in my soul.
On the distant shores of my childhood memory, some flame sparks, ignites, and shows me a path through the darkness. I truly feel that what you wrote is not half bad!
I clap once.

Source: https://medium.com/slackjaw/i-am-the-one-who-claps-once-e1f8b1039f82 (Alex Baia, 2020-07-24). Tags: Humor, Clap, Medium, Writing, Satire.
On the Horizon: The Future of Marketing — and my test to see if it’s here. | It’s my humble opinion that all of the marketing strategies, tactics, and methodologies we marketers hear today — “Account Based Marketing”, “Inbound Marketing”, “Growth Hacking” — are simply a means to describe a what’s yet to come in the world of marketing. Each of the aforementioned marketing terms — with practice, dedication, and the right talent — can be achieved by any organization. Each offering valuable opportunities to grow your business. This article isn’t to “dethrone” any of these types of marketing and growth tactics…
However, I believe these various marketing methodologies, tactics, etc. are simply inching us closer to the future. Little by little they are getting us to think more deeply about the way we impact the businesses and customers we interact with. And that’s a really good thing.
I'll admit that I don't have much research (yet) to substantiate my prediction except for what I observe and hear through great marketers and brands all across the web but — as you'll see below — the future of marketing is already poking its head out in the world of social media.
The future (in my humble opinion) is about the intersection of data, creativity and strategy. It’s about providing customers with a uniquely valuable experience or insight that only you can provide.
I think the future of marketing will unfold with the convergence of three key elements…
1. The future of marketing starts with technology.
The proliferation of data and the unprecedented access we have to it are beginning to shape the future of marketing. And, as the cost of cloud-based processing decreases while Artificial Intelligence (AI) and data science rapidly advance, the future of marketing is just starting to appear on the horizon.
2. The future of marketing depends on key players.
If you don’t have a data scientist or a data analyst (I’m certain the difference in roles is substantive but I don’t know what it is yet), then you better get in line. It’s a hot topic and a hard role to fill. In my opinion it’s key to the future of marketing.
Don’t worry I’ll get to the part where I tell you what I think future of marketing is…just keep reading.
Data Science and the employees that perform this critical role aren’t the only key players in the future of marketing. You’ll still need creative and strategic marketing roles too. Without creative and marketing strategy, data is just a bunch of 1's and 0's.
3. The future of marketing requires “must have” products.
Sean Ellis and many other entrepreneurs will tell you that you won’t have anything to market unless you first establish product market fit and ideally create a “must have” product.
Note: The beauty of the future of marketing I see is that it will help perpetuate the stickiness and likelihood your product/service becomes “must have”.
_________ Marketing
I believe the convergence of these three elements means we’re headed toward a new marketing methodology. I don’t know how to describe this new method of marketing but for now, I’ll refer to it as “insight-based” or “feedback” marketing.
As promised in my intro, the future of marketing is already poking its head out in the world of social media.
Here’s an example from Facebook:
We’ve all seen the notifications or content in our feeds.
“Here’s what you did this time last year!”
Or
“Happy Friendversary, Mike!”
btw I never log in to Facebook anymore and I deleted the app from my phone, but sure enough while writing this post I logged in to see if I could find an old reference to illustrate my point and this is what came up…
It’s an absolute joy to see those videos and that type of content produced for us. It’s insightful and pleasant and everything Facebook needs to increase virality and stickiness. — Although I don’t use Facebook on my phone anymore I still see the value ;)
So, the real question is how do you apply insights like this to your B2B or SaaS business? How can we use our data and creativity to increase the stickiness and ultimately the effectiveness of our marketing campaigns?
The answer is data, creativity, and marketing strategy.
What’s next?
I’m on a mission to provide value and insight to customers powered by data. I want to scientifically, strategically and creatively adapt our data to create a feedback loop of transparency and valuable insight to benefit our customers in a way they would otherwise not see from competing products. But I can’t do it alone. I truly believe the future of marketing is about data, creative, and strategic marketing teams working together.
I’ll be testing my theory — that insight-based/feedback marketing is the key to helping our customers and our company achieve new levels of success — at AerServ.
Here’s what I’ll be doing…
I’m not going to give you the specifics because I don’t want to spoil the test (or let my competition in on my plans too early).
I’ll leverage data to provide our customers with insightful feedback on their performance and I’ll make it extremely easy to take advantage of the insights we uncover. By doing so, I believe both our customers and our business will see positive financial results and I hope that we’ll be seen as a trusted and valued partner.
I think a lot of businesses can do what I’m planning to do, but it’s a matter of time before they all try it. I’m sure someone else is already testing this theory, but in the coming year I hope to begin this test and report back on how it went.
If you're doing this or you've seen a company do this, please provide a comment and description of what you saw and what you thought of it!

Source: https://towardsdatascience.com/on-the-horizon-the-future-of-marketing-and-my-test-to-see-if-its-here-58e89ea48c7a (Mike Rizzo, 2017-07-21). Tags: Data Science, Marketing Strategies, Marketing, Creative Marketing, Data Science Marketer.
The Results of the Nazi IQ Tests | The Results of the Nazi IQ Tests
The Nuremberg psychological examinations of Nazi leaders were surprising, but certainly not shocking.
Source: pic via Wikimedia Commons
Retribution was expected in the wake of World War II. Too many horrors had been revealed. More than a hundred Nazis stood trial between 1945 and 1949. Nuremberg was chosen as the location because of its symbolic value. It’s where many of the initial Nazi protests and marches were held:
Rally in Nuremberg, pic via Wikimedia Commons
The trials included graphic pictures and testimonials of the atrocities committed. You may know of the commonly cited defense “I was just following orders.”
Among those on trial were a “Nuremberg 21”. They were the highest level officials of the group, a who’s who list of Nazi leaders.
Prior to their trials, there was a push to conduct psychological examinations on these leaders. It was driven by the scientific community’s interest to understand what drives a person to such acts. Their war crimes called into question the very nature of man, and of good and evil.
A gifted psychiatrist, Dr. Kelley, and a prominent psychologist, Dr. Gilbert, would conduct the tests, through a series of interviews with each leader.
Dr. Kelley, himself a renowned genius, approached the tests with intellectual curiosity. Gilbert, also smart, but Jewish, made no secret of his distaste for some of the men he spoke with. Julius Streicher, in particular, left a damning impression on him. The man was deeply anti-Semitic, one of the vocal and unashamed advocates of genocide. His actions would deservedly bring him to the executioner's noose.
As a whole, both doctors came to relatively similar conclusions about the twenty-one personalities, noting the men were (mostly) sane, though prone to deep character flaws:
Dr. Kelley: "Strong, dominant, aggressive, egocentric personalities. Their lack of conscience is not rare. They can be found anywhere in the country, behind big desks deciding the fate of their nations."
Dr. Gilbert: “Ruthlessly aggressive, emotional insensitivity, presented with a front of utter amiability (likeability). Narcissistic sociopaths.”¹
The first was the Rorschach test. This is card #2 that was presented to the Nazi leaders. They were asked to elaborate on what they saw.
Source: pic by Discover Magazine.
Frank (senior Nazi): Those are my darling bears. They're holding a bottle. Beautiful prima ballerina dancing in white dresses with red light shining from below.
Rudolf Hess (Deputy Führer): Two men talking about a crime. Blood is on their mind.
Hermann Göring (Hitler’s #2): [laughs] Those are two dancing figures, very clear, shoulder her and face there, clapping hands. [cuts off the bottom part with hand] Top red is head and hat; the face is partially white.
The doctors found that the men, though sharing common character flaws, were all very different from each other.
The analysis also presented challenging philosophical questions. Prior to this, society mostly looked at evil as a black and white concept. The tests reinforced the idea of gradated morality, that we are profoundly shaped by personality and circumstance.

Source: https://medium.com/history-of-yesterday/the-results-of-the-nazi-iq-tests-c3a5e442f37c (Sean Kernan, 2020-07-20). Tags: Philosophy, History, Science, Life Lessons, Life.
The Philosophy of Physics | In this excerpt from In Search of a Theory of Everything author Demetris Nicolaides discusses the importance of philosophy in physics.
Episteme means "knowledge" in Greek. For Aristotle, we have episteme if we know the cause of something.[1] Science is episteme in Latin. Hence, strictly speaking, science includes all fields of knowledge (theology, philosophy, physics, history, etc.). Physics is a particular type of science: it is the study of physis (nature in Greek). With time, however, especially nowadays, the word science evolved to have a narrower focus, and it, too, means the study of nature. Because of this, physics and science have basically become synonymous, but also so because all subfields of science that study nature are really reducible to the natural laws of physics. For example, biology studies cells, which are made of molecules; chemistry studies molecules, which are made of atoms; and physics studies atoms. Therefore, generally all the sciences about physis are ultimately built from (and are branches of) physics.[2] The natural philosophers[3] of antiquity that we'll consider were philosophers but also physicists. Philosophy and physics were then closely related fields.
According to legend, Pythagoras probably coined the term philosophy, "love of wisdom," in Greek. So to define philosophy we need to know what wisdom is. Recall that the Oracle of Delphi prophesied that Socrates was the wisest, but he humbly doubted it.[4] Using the art of dialectic (his famous questioning style, the maieutic), he set out to prove the Oracle wrong (by engaging in meaningful discussions with his fellow Athenians, politicians, poets, craftsmen, and farmers, in search of the truth) only to find out that the prophecy was spot-on. He was the wisest indeed for, unlike the others, who confused their skill with wisdom, their empeiria (in Greek, experience or talent) with episteme,[5] he at least knew well one thing: that he knew nothing.[6] Confessing ignorance gives healthy scepticism hope to advance to the truth and become wise. And the road toward wisdom begins for Socrates with the "know thyself,"[7] by recognizing our mind's limitations and the uncertainties of our methods of inquiry.
I want to say, parenthetically, that knowing nothing is inconceivable. For the notion of “nothingness” is not allowed; it’s an impossibility!
Anyhow, I'm not sure what wisdom might be, but I'm sure that the journey to it begins with a question and continues with courage, determination, and an honest effort to eliminate age-old, false beliefs and prejudices and seek rational (not dogmatic) answers. Now, although we are born imaginative creatures with insatiable curiosity and desire for adventure and knowledge, with age we usually settle down both physically and mentally (our body and mind) and inquire less and less. Asking is a trait of youth but of philosophy, too. Neither solar nor entropic[8] time ever ages the inquiring mind.
In a more concrete example, philosophy for me is this: I read a book about physics (of Albert Einstein, Stephen Hawking, Michio Kaku), and I usually understand it well. Then I read a book about philosophy (of Aristotle, Bertrand Russell, Karl Popper), and I often don't understand it that well. So then I reread it, and again, only to find out that I now understand better the physics book. And so sadly I keep on reaffirming that "the man of science is a poor philosopher."[9]
Philosophia[10] is She who managed to escape from the darkness of superstition and ignorance (from Plato's cave) and dared to ascend into the world of light and knowledge. It is "the vision of truth,"[11] it is the ability to "teach people to talk sense,"[12] it is sameness in dissimilarity, or a subtle immutability in conspicuous change. It is part Apollo (reason) and part Dionysus (passion) but never any one alone, it "is something intermediate between theology and science,"[13] or, put simply, philosophy might be what the Greeks called "the gift of wonder"[14] — of imagining, searching, discovering, and learning all you can by wondering.
Adding to such gift is science, the systematic study of nature and the organization of acquired knowledge into "timeless, universal," causal, and, most important, testable laws that are derived from observation and rational consideration. A good scientific theory, therefore, makes experimentally verifiable and falsifiable predictions, which must be tested by experiment. Science is evidence-based knowledge; it is not knowledge based on opinion or dogma. The defining characteristic of science is its unique way of study, the scientific method. It can be summarized in five steps.
(1) Observe nature. For example, things fall. (2) Formulate a question, a problem (based on observation). Why do things fall? (3) Answer the question with a hypothesis (an educated guess) that makes testable predictions (the cardinal rule of science). They fall because the earth is pulling them: dropped from rest, they should fall 4.9 meters in 1 second. (4) Perform properly designed reproducible experiments to collect data that will be used in order to verify or falsify the predictions of the hypothesis. Drop various objects from rest and measure the distance they fall in 1 second. (5) Draw conclusions by comparing the predictions of the hypothesis against the data of the experiment: (a) if the predictions are observed (i.e., they agree with the data, so things would indeed fall 4.9 meters in 1 second), the hypothesis is verified and it transitions into a scientific fact, a law of nature; (b) if the predictions are not observed (they are in disagreement with the data, things would not fall 4.9 meters in 1 second), the original hypothesis is falsified and thus replaced by a modified one or by something completely new.
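The 4.9-meter figure in steps (3)–(5) follows from the standard free-fall relation d = (1/2) * g * t^2. A minimal numeric sketch, assuming g ≈ 9.8 m/s² and no air resistance (both standard textbook assumptions, not stated explicitly in the excerpt):

```python
# Distance fallen from rest after t seconds, ignoring air resistance.
def fall_distance(t_seconds, g=9.8):
    return 0.5 * g * t_seconds ** 2

print(fall_distance(1.0))  # -> 4.9 (meters), matching the prediction above
```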
This is the general scientific method as was formulated by philosopher Francis Bacon. A clarification: as Einstein, Popper, Richard Feynman, and Hawking argued, we can only verify a hypothesis; we can't prove it — we can never be absolutely certain if the laws we discover are truly timeless and universal. For example, while experiment after experiment keep on verifying Einstein's relativity, "No number of experiments can prove me right; a single experiment can prove me wrong," said he.[15] That is, a new type of experiment may find a flaw in the theory (not previously detected) that will prompt scientists to revise or completely change it with a new vision.
In the quest for truth about nature, science without (the wisdom of) philosophy is practical and rational but (arguably) dull, and philosophy without (the empirical facts of) science is abstract and wise but (experimentally) unverified.[16] For Einstein, "science without epistemology is — in so far as it is thinkable at all — primitive and muddled."[17] The road to truth, I believe, is paved by science and philosophy, but certainly by other fields of knowledge, too. The view, otherwise, is of mere copies, shadows of truth.
In Plato's parable of the cave,[18] prisoners constrained in a cave since childhood and for many years thereafter can see only shadows and mistake these shadows for the only reality. The prisoners therefore are utterly ignorant of the objects that project the shadows. But the sight and insight of the prisoner who eventually manages to escape start to gradually improve. At last, outside the cave and in the light of the sun, she begins to have a better perception of reality. She now sees how things resemble their shadows, and she realizes that shadows are deceptive; they are only a mere copy of the real objects (and of the grander truth in general). Nature, she now knows, is much more than what She appears, but Her divine secrets can ultimately be untied by the curious, willing mind. Eager to share her newfound sense with her prisoner-friends, she descends once more into the cave. But passing suddenly into the darkness from the light makes her sight briefly feeble. Her friends are tricked by it and think, up she went with her eyes and down she came blind; thus, sadly, they believe that the cave is the only safe place.

Source: https://medium.com/science-uncovered/the-philosophy-of-physics-c6bb73ca31a2 (Oxford Academic, 2020-07-24). Tags: Oxford University Press, Physics, Philosophy, Science.
Has the type of health care system or type of government mattered during the coronavirus pandemic?

By Kent R. Kroeger (April 9, 2020)
Key Takeaways:

- There is no systematic evidence that the overall quality of a country's health care system has had an impact on the spread (morbidity rate) and lethality (mortality rate) of the coronavirus.
- Instead, a country's per capita wealth and exposure to the international economy (particularly international tourism) significantly increase the spread of the virus within a country. This latter finding may be partly a function of wealthier populations being more likely to have their coronavirus-related illnesses diagnosed and treated. But it is also likely that international travel is spreading the virus worldwide.
- As for the mortality rate, the story is more complicated: The single biggest driver of the mortality rate, so far, is simply the time since the country's first coronavirus-related death. Once the virus has found a vulnerable host, the final outcome may be difficult to change (at least for now).
- As for the charge by the US intelligence community that China has under-reported the coronavirus' severity in their country, the model reported here suggests China, given its size and characteristics, should have so far experienced 10 times the coronavirus cases they have reported and a case fatality rate twice their current estimate. If they are under-reporting, as charged by the US, China may have between 33,600 and 70,000 deaths related to the coronavirus, not the 3,339 they are currently claiming.
- To the contrary, it is also plausible that their aggressive suppression and mitigation efforts have successfully limited the spread and lethality of the coronavirus. The model reported here cannot determine which conclusion about China is true, or whether both conclusions contain some truth.
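The China death-toll range can be reproduced as back-of-envelope arithmetic. A sketch only: it assumes the model's multipliers (10x the cases; 2x the case fatality rate) apply directly to the reported death count, and since the author's exact inputs aren't stated, the rounded endpoints differ slightly from the 33,600 and 70,000 quoted above:

```python
# Back-of-envelope check of the China estimate from the takeaways.
reported_deaths = 3_339  # China's reported COVID-19 deaths, per the takeaways

# 10x the cases at the reported case fatality rate implies 10x the deaths.
low_estimate = reported_deaths * 10
# 10x the cases AND 2x the case fatality rate implies 20x the deaths.
high_estimate = reported_deaths * 10 * 2

print(low_estimate, high_estimate)  # -> 33390 66780
```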
_________________________________________________________________
It's OK to feel some tentative optimism about the coronavirus pandemic. It does appear, finally, that the virus and its associated illness — COVID-19 — are peaking in many of the countries hardest hit by the virus (see Figure 1).
Figure 1: New daily COVID-19 cases in Italy, South Korea, Iran and Spain
Data Source: World Health Organization (as of 7 APR 2020)
Almost a month-and-a-half after the coronavirus reached its peak in new daily cases in South Korea (around 900 cases a day), the virus peaked in Italy around March 22nd, and in Spain and Iran around April 1st.
If President Donald Trump’s advisers were correct in Monday’s White House daily coronavirus update, the U.S. may also witness its peak in new daily cases within the week.
This weekend, New York, the current locus of the US outbreak, saw a significant decline in the number of new infections and deaths.
“In the days ahead, America will endure the peak of this pandemic,” Trump said Monday.
In fact, from April 6th to 7th, the aggregate US data showed its first day-to-day drop in the number of new COVID-19 cases since late March (see Figure 2).
Figure 2: Cumulative and new daily COVID-19 cases in the U.S.
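The "new daily cases" series in a chart like Figure 2 is just the day-over-day difference of the cumulative series, and a "day-to-day drop" is a decrease in that differenced series. A sketch with made-up totals (not the actual WHO counts):

```python
# Illustrative cumulative case totals by day (not real data).
cumulative = [300_000, 330_000, 363_000, 393_000, 421_000]

# New daily cases = today's total minus yesterday's total.
new_daily = [today - yesterday for yesterday, today in zip(cumulative, cumulative[1:])]
print(new_daily)  # -> [30000, 33000, 30000, 28000]

# First day-to-day drop: first index where new cases fall below the prior day.
first_drop = next(i for i in range(1, len(new_daily)) if new_daily[i] < new_daily[i - 1])
print(first_drop)  # -> 2
```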
In many of the US states hardest hit by the coronavirus — such as New York, Washington, and California — the number of new cases each day has leveled off or declined in the past week.
These are genuine reasons for optimism. While Trump’s hope for an economic return to near-normal by Easter was overly optimistic, the possibility it could happen in early May is not.
Europe and the U.S. were caught flat-footed by the coronavirus, but it is looking increasingly like they will escape with far fewer cases and deaths than originally anticipated by many epidemiological models.
[Of course, additional waves of this virus may still occur and we may never see a true return to normal until a coronavirus vaccine is made widely available — and by widely available I mean free to everyone.]
________________________________________________________________
In this moment of cautious cheer, my questions increasingly focus on how the world measured (and mismeasured) this pandemic and what national-level factors may have suppressed and, conversely, aided the spread of the coronavirus?
Everyone has theories. Some are convinced autocratic countries (i.e., China, Iran, Venezuela, Russia) have hidden the true impact of the coronavirus on their countries. Others have declared the coronavirus proves the importance of universal health care in containing such viruses. Still others have conjectured the number of COVID-19-related deaths has been over-reported by anti-Trump forces, most likely to make Trump look bad. Conversely, the national media has unofficially declared (without conclusive evidence, as usual) that the US government has been under-counting COVID-19 deaths (presumably to make the Trump administration look more effective in its coronavirus response than justified).
It is speculation at this point. It will be many months — probably years — before we know what actually happened during the 2019–20 Coronavirus Pandemic. The coronavirus pandemic is still ongoing, after all, and the reality is: counting the number of people with any disease or virus is genuinely hard and prone to human error.
But we can start to address some of the controversies, if only tentatively.
If we assume that the majority of countries have exercised a fairly high level of due diligence in measuring the presence of the coronavirus within their jurisdiction, we may be able to identify those countries that have been much less than honest.
Moreover, after controlling for suspected dishonest coronavirus measurement, we may also see hints at the impact of national health care systems and containment policies on the spread and lethality of the coronavirus.
________________________________________________________________
Let us start our inquiry with this premise — there are two fundamental measures of the coronavirus: (1) the number of confirmed coronavirus cases relative to the total population (morbidity rate), and (2) the number of coronavirus-related deaths as a percent of those confirmed to have the virus (mortality rate).
For simplicity’s sake, what I am calling the mortality rate is actually the case fatality rate. In reality, the coronavirus’ true mortality rate is much lower than the case fatality rate, since its denominator includes undiagnosed cases experiencing only minor or no symptoms.
If universal health care were ever to show its value, now is the time. The logic is simple: in countries where citizens do not need to worry about the cost of a doctor visit, the probability that they get tested and treated early for the coronavirus is significantly higher.
Countries with universal health care may also be more likely to institute broad-based coronavirus testing, thereby identifying asymptomatic super-spreaders of the virus. Subsequently, when diagnosed with the virus, these citizens will be isolated sooner from the healthy population. Furthermore, early diagnoses of the coronavirus may also improve the chances infected individuals survive the virus.
Can we see this in the data?
________________________________________________________________
Figure 3 (below) is produced directly from World Health Organization (WHO) data. The chart shows the morbidity rate of COVID-19 (i.e., frequency of COVID-19 cases per 100K people) compared to its mortality rate (i.e., deaths per confirmed case).
I’ve segmented the chart in Figure 3 into four quadrants, each defined by countries’ morbidity and mortality rates. Countries with high morbidity and mortality rates are in the upper right-hand quadrant of Figure 3 (e.g., Italy, France, Spain, the Netherlands, the UK, and Iran), while countries with low morbidity and mortality rates are in the lower left-hand quadrant (e.g., Russia, Japan, Pakistan, Nigeria, and India).
Figure 3: COVID-19 Cases per 100K persons versus Number of Deaths per Confirmed Case.
What does Figure 3 tell us? In truth, not much.
Ideally, a country would want to be in the lower left-hand quadrant (Low/Low) of Figure 3, right? But a simple inspection of the quadrant reveals it is occupied mainly by countries in eastern Europe, Africa, South America and southern Asia (Russia, Ukraine, Pakistan, India, Nigeria, among others) — few of which find themselves ranked by the WHO among the countries with the best health care systems. One reason for their favorable performance so far may be that the coronavirus hasn’t significantly spread to those countries yet — after all, many are in the southern hemisphere.
Here are two fair questions to ask: Are these countries performing relatively well with the coronavirus due to favorable circumstances (fewer people traveling to and from coronavirus sources like China; climatic context; stronger containment policies — an area where authoritarian governments may have an advantage; and/or better health care systems)?
Or, are some of these countries simply not deploying the resources and expertise necessary to measure the impact of the coronavirus? Do they even have the capacity to do so?
________________________________________________________________
Figure 3 raises more questions than it answers, but it still may hint at some tentative conclusions. For example, experience tells me countries clustered around the intersection of the average country-level morbidity (34 cases per 100K people) and mortality rates (3.4%) are in the accuracy ballpark. If I am feeling generous, that list includes the US and China, along with countries like South Korea, Poland and Turkey.
The countries that raise my eyebrows are the major outliers from the center cluster: Italy, Spain, UK, France, Bangladesh, Nigeria, Indonesia and India.
The variation in the coronavirus mortality rate ranges from 12 percent in Italy to near zero percent for New Zealand (a country with 1,239 confirmed cases and only one death). What could possibly explain this difference in the coronavirus mortality rate between two advanced economies? Could it be their health care systems? WHO ranks Italy’s health care system 2nd in the world, while New Zealand’s is only 41st. Russia has a reported coronavirus mortality rate of 0.8 percent and has the 130th best health care system in the world, according to the WHO.
More in line with expectations, Germany, a country given significant positive coverage for its coronavirus response — plaudits comparable to perhaps only South Korea’s — has a reported 2.1 percent mortality rate on a base of 113,296 confirmed cases.
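These headline rates are simple ratios, so they can be sanity-checked directly. Here is a minimal Python sketch using only the figures quoted above; note that the implied German death count is my own derivation from the stated rate, not a reported number:

```python
def case_fatality_rate(deaths, confirmed_cases):
    """Deaths as a share of confirmed cases -- what this piece calls the mortality rate."""
    return deaths / confirmed_cases

# New Zealand: 1,239 confirmed cases and one death (figures from the text)
nz_cfr = case_fatality_rate(1, 1_239)
print(f"New Zealand CFR: {nz_cfr:.2%}")  # 0.08% -- effectively "near zero"

# Germany: a 2.1% rate on 113,296 confirmed cases implies roughly this many deaths
implied_german_deaths = round(0.021 * 113_296)
print(f"Implied German deaths: {implied_german_deaths:,}")  # ~2,379
```

The two-orders-of-magnitude gap between New Zealand's and Italy's case fatality rates is exactly the kind of spread that suggests measurement differences rather than purely medical ones.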
Why such discrepancies in reported mortality rates?
Dietrich Rothenbacher, director of the Institute of Epidemiology and Medical Biometry at the University of Ulm in Germany, credits Germany’s broad-based, systematic testing as being the reason his country’s mortality figures are hard to compare to other countries.
“Currently we have a huge bias in the numbers coming from different countries — therefore the data are not directly comparable,” Dr. Rothenbacher recently told the BBC. “What we need to really have valid and comparable numbers would be a defined and systematic way to choose a representative sampling frame.”
This is where statistics — my profession — becomes critical. As Dr. Rothenbacher asserts, Germany would not have understood the extent of the coronavirus crisis without testing both symptomatic and asymptomatic cases, just as South Korea and, sadly, only a few other countries have done.
Systematic random sampling needed to be a component of every nation’s coronavirus testing program.
It wasn’t.
In New Jersey, where I live, the office of the state’s Health Commissioner told me I couldn’t get tested for the coronavirus without meeting one of the following qualifications (…it felt like a job application):
Already being hospitalized and showing symptoms of COVID-19.
A health care worker showing symptoms who has been exposed to others known to have the virus
Anyone known to be part of a cluster outbreak (one example being a recent Princeton, NJ dinner party where multiple attendees were diagnosed with the coronavirus)
And vulnerable populations (e.g., nursing home residents).
Someone like me, a 55-year-old male with no underlying health problems but showing mild flu symptoms — low-grade fever, persistent cough, and chest congestion — cannot get tested in New Jersey.
The New Jersey testing protocol is common across the U.S. given the relative scarcity of testing kits.
________________________________________________________________
Anytime the anecdotal evidence is contradictory or unclear, I turn to data modeling — even if crude — to test some of the initial hypotheses surrounding a controversy.
The challenge with the coronavirus is the availability and data quality of the key causal factors we’d like to test in a coronavirus model for morbidity and mortality rates. In the following linear models, I tested these independent variables:
Out of necessity, I limited the data analysis to countries with reliable data on all key independent measures and with populations over 3 million people, leaving the analysis with 76 countries.
[Note: The linear models, however, were not weighted by country population size. For example, China weighted the same as Serbia in the following models.]
The estimated linear models for morbidity and mortality rates are reported in the Appendix below.
Figures 4 and 5 show the model predictions for each country versus the actual morbidity and mortality rates. In the morbidity model graphic (Figure 4), I only show a selection of key countries in order to simplify the data presentation.
Figure 4: Predicted versus Actual COVID-19 Cases per 100K Persons for Selected Countries (as of 4 APR 2020).
Figure 5: Predicted versus Actual COVID-19 Deaths per Confirmed Cases (as of 4 APR 2020).
On the issue of autocratic countries (who are also U.S. adversaries), there is circumstantial evidence that Venezuela, China and Russia have fewer COVID-19 cases than we would expect given their key characteristics, even while their deviance as a group is not statistically significant.
For example, China may have 10 times the coronavirus cases they have officially reported and a mortality rate twice their current estimate. If true, China may have between 33,600 to 70,000 deaths related to the coronavirus, not the 3,339 they are currently claiming.
Likewise, Russia may have 19,500 coronavirus cases, not the 10,031 they have reported to the WHO and Venezuela may have 1,625 cases, not 167 cases.
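The range cited for China follows from simple multiplication. As a back-of-envelope check (the 10x case factor and 2x mortality factor are the hypotheticals stated above, not measurements):

```python
reported_deaths = 3_339   # China's official figure, as cited above
cases_factor = 10         # hypothetical: ten times the reported cases
cfr_factor = 2            # hypothetical: a mortality rate twice the current estimate

# If cases are 10x but the case fatality rate is unchanged, deaths scale by 10x;
# if the rate is also doubled, deaths scale by 20x.
low = reported_deaths * cases_factor
high = reported_deaths * cases_factor * cfr_factor
print(low, high)  # 33390 66780 -- roughly the 33,600-to-70,000 range in the text
```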
Even if, according to the model, the reported numbers for China, Venezuela and Russia are low, we can’t rule out the possibility they are low because these countries have done a superior job containing the virus.
Perhaps the most puzzling (and saddest) case is Iran. Our model suggests Iran has experienced far more COVID-19 cases than we would expect given its characteristics. The most recent WHO numbers for Iran are 66,220 confirmed cases and 4,110 deaths.
Has Iran done an especially poor job of containing the virus or are they measuring more comprehensively than other countries? Unfortunately, my model can’t settle that point.
Final thoughts
I anticipated when I started looking at the coronavirus in 76 countries that the quality of their health care systems — starting with affordable, universal health care — would show up as a significant factor in distinguishing between countries that successfully took on the coronavirus pandemic (e.g., South Korea, Germany, Singapore, and Japan) and those less successful (e.g., Italy, Spain, France, UK and Iran).
While the number of hospital beds per 1,000 people does correlate significantly with lower mortality rates (see Appendix, Figure A.2), the overall quality of a country’s health care system did not. In fact, countries with the best compensated medical professionals actually have higher coronavirus mortality rates.
The coronavirus has hit Europe (and China) the hardest. In Italy, the high percentage of elderly helps explain its high volume of cases, but that can’t be the only explanation. And it isn’t just that advanced economies have put more effort into measuring the occurrence of the virus in their communities. The coronavirus has found disproportionately more friendly hosts in these societies. We may have to accept that the coronavirus is one of the evolving risks associated with high disposable incomes and deep global connections through trade and tourism.
I know this: I will never go on a cruise ship ever again.
Theories on why some countries handled the pandemic better than others are also plentiful. The most compelling analysis may have occurred while the pandemic was just starting.
Writing in early March, Chandran Nair, founder and CEO of the Global Institute for Tomorrow, may have come up with the best explanation still. “Strict and centralized enforcement of lockdowns, quarantines, and closures are the most effective way to contain the virus,” wrote Nair. “What’s emerged from the coronavirus crisis is the fact that some states are equipped to handle this type of action, and some are not — and it has little to do with development status.”
Or, more cynically, could we conclude that one of the costs of emphasizing individual freedom is that when collective action is necessary — including a strong, central state response — Europeans and Americans answer the call by hoarding toilet paper and Jim Beam?
I’m not quite there yet. For one, I don’t believe Nair fully appreciates how the modern state and elites consolidate their power during these uncertain times, and how this can leave even more people vulnerable economically and physically to the next pandemic — and there will be another one. Second, for every example of state power getting this done quickly and efficiently, there are dozens more where greed, incompetence, and arrogance lead the state to do more damage than good. Before we give the modern state more power, let us think this through some more first.
Here is what our governments and scientific community should be doing...
If this global pandemic ends relatively soon — as it appears it might — our governments and health researchers must immediately resolve to understand how many people really were infected by the coronavirus and how many actually died from its consequences.
Currently, we have a global mish-mash of epidemiological data of unknown quality or generalizability. Only probability-based sample studies can give us the real numbers and it is only with those numbers that we can really sit down and decide: What worked and what was a total waste of time and resources?
K.R.K.
Data used in this article are available by request to: kroeger98@yahoo.com
APPENDIX: The Linear Models
Figure A.1: Linear Model for Confirmed COVID-19 Case per 100K Persons (Morbidity)
Figure A.2: Linear Model for COVID-19 Deaths per Confirmed Cases (Mortality) | https://kentkroeger.medium.com/has-the-type-of-health-care-system-or-type-of-government-mattered-during-the-coronavirus-pandemic-e063dd64377 | ['Kent Kroeger'] | 2020-04-09 21:46:28.906000+00:00 | ['Statistical Analysis', 'Coronavirus Update', 'Coronavirus', 'Epidemiology', 'Covid 19'] |
Why I’m Open About Being Mentally Ill | It’s not the right choice for everyone but it is for me
Photo by Brooke Cagle on Unsplash
Talking about mental health can be tricky. Mental health issues, such as bipolar disorder, have always been stigmatized in our society. Because mental illness is not something that you can see, people don’t always believe that it’s there. There is no black-and-white test that can definitively state that someone has a mental illness, the way there is for a physical disorder such as diabetes.
Thus, when you are diagnosed with a mental illness, it can be extremely difficult to “come out” with it so to speak. Because of the stigma, many people keep it to themselves or only tell very close family members. I completely understand why. But, I’ve made a different choice for myself. I also have the privilege to do so.
Approximately 4.4% of the population of the United States struggles with bipolar disorder. It’s fairly even between men and women. I received my bipolar diagnosis in 2009 after I had what I call my bipolar breakdown. The symptoms hit me rather suddenly and life changed instantly.
Being open about my bipolar disorder, along with other mental health issues such as generalized anxiety disorder and ADHD, is the right decision for me. It didn’t start out that way. I had to ease into the diagnosis, how I felt as a mentally ill person, and get used to that particular label. It can be a hard thing to come to terms with.
One of my main motivations for being open about my bipolar disorder in real life, as well as here on Medium, is that the media often shows a very skewed picture of what bipolar disorder is actually like.
The way it’s dramatized and used as fodder in the media, it’s just not super realistic nor does it really represent most of us with bipolar disorder.
I want people to know that bipolar does not always look like that. If you didn’t know that I have bipolar disorder, you’d never really know. I’m a white lady in my early 40s, living in Austin, with my husband, daughter, and two crazy cats. I’ve been stable on medication for approximately 5 years now.
I’m a writer, a friend, a mother, a daughter, a wife — just trying to live my best life.
I’m open about being mentally ill with friends and potential friends, people that I know. I don’t go around announcing it, of course. I feel that if someone is going to judge me, not want to be my friend, or be around me because of my disorder, good riddance. I just don’t have time for those people and don’t want them in my life, at all. Luckily, I have amazing friends. Many of them are also bipolar and I met them through the bipolar support group that I co-organize.
I’ve written using a pen name throughout my writing career. When I signed up for Medium, I really wasn’t sure whether I should continue writing with a pen name, or sign up using my real name. I knew that I wanted to mostly write about mental health and this was kind of a new level of openness. I finally decided to write under my real name and I’m so glad that I did.
Of course, I realize that part of the reason that I’m able to be open is due to privilege. People have different situations in their lives and it isn’t so black and white. That’s why I’d never tell anyone else that they should be open about having bipolar disorder. What I’ve learned at my bipolar group is that there are vast cultural differences in how bipolar disorder or mental health issues are perceived. For some, being open about their mental illness can lead to real strife within a family or within their community.
Another issue that a lot of people deal with is whether or not to be open at work. The truth is that being open at work can have a number of consequences. The stigma is real. However, if you need special accommodations due to having a mental illness, it might be the correct choice to share with your workplace.
I personally had to leave my career behind when I had what I call my bipolar breakdown. I went on short-term disability and was never able to go back. That’s when I pivoted and started my writing career. So, I’ve personally not had to make the decision as to whether to be open at work as I’ve been self-employed for years now.
I hope that over time, as the years go on, mental health stigma will lessen and that more people will be comfortable being open about being mentally ill. The more we humanize the issue, the better. All I know is that I will continue to write about this illness openly, share my struggles with friends and family, and be true to myself. | https://medium.com/live-your-life-on-purpose/why-im-open-about-being-mentally-ill-e05681a58499 | ['Yvonne Handy'] | 2020-08-03 14:01:01.275000+00:00 | ['Mental Illness', 'Mental Health', 'Self', 'Advocacy', 'Bipolar'] |
The Switch | Photo by ThisIsEngineering from Pexels
As I move from engineering into data science, there are some key aspects of data science that I’m excited about:
Using my math/statistics training more. Continuing to be creative, but in a different way. Working in a field where technology is rapidly changing and expanding the limits of what we can do.
Photo by RODNAE Productions from Pexels
Math/Statistics:
A building is similar to a 3D puzzle — and a building typically leaks because a puzzle piece is missing, or puzzle pieces were not correctly put together (usually several pieces). So the problem-solving typically focuses on figuring out what pieces were missing or put together incorrectly. This problem can be hard to solve (and we didn’t always solve it); however, this problem is more spatial in nature. While I enjoy spatial problems, I missed more quantitative problems.
As I’ve learned about data science, learning how the math relates to it has jogged my memory, and I’m recalling pretty much every math course I’ve ever taken — calculus, differential equations, linear algebra, statistics, probability, numerical analysis… you name it (I’m even recalling sitting in my high-school classroom while learning specific aspects of linear algebra…stuff I didn’t even know I remembered). I’ve really enjoyed seeing how each type of math is relevant in data science — how the analysis is driven by statistics and probability, parts requiring optimization use calculus, and the computer needs to think in discrete chunks, which requires linear algebra. I’m reminded how much I’ve loved math over the years and how important (and fun) it is.
Photo by Digital Buggu from Pexels
Creativity:
In buildings, the creative side of me would come out when I had to figure out how to repair something that was wrong. For instance — if a puzzle piece was missing, I would figure out how to add a new piece to the existing puzzle that would take the place of the missing piece. However, in an existing building, this isn’t as easy as just replacing the missing piece, because the configuration of the other components usually limited our ability to modify the building. So we would need to get creative and figure out a solution that met the goal of addressing the leak while also accommodating the constraints set in place from the existing building.
In data science, we’re often given data to analyze, with a goal or question in mind; however, that doesn’t have to limit the analysis. As I analyze the data, I can continue to ask questions — and see what’s interesting. For example, on one project where I analyzed republican and democrat subreddit posts (which I’ll go into further in a future blog post) — my initial goal was to predict which subreddit a particular post was in based on the language in the post. However, in the process of analyzing the data, I found that many posts referenced URLs. I ended up spending a good amount of time and effort looking into which URLs were unique and common to each subreddit because it was interesting and I found some unexpected results (get excited for the future blog post). While this analysis took time, the ability to explore all aspects of the data that seem interesting — or at least do an initial analysis to see if it’s worth exploring further — is very freeing and some of my favorite part of the data science work that I’ve done so far.
Changing Toolset
Part of the reason that I loved studying for my master's degree so much is that I got to explore a field that is constantly changing through research and technology. Sustainability has a ton of resources being poured into it, and the types of energy models that can be done now are much more complex compared to even 5 years ago (let alone 20 years ago). In addition, there’s a growing amount of data in this field that’s increased substantially in the past few years, for instance as manufacturers become more willing to share about what materials and processes go into making their products (through environmental product declarations), we can better understand the carbon footprint of a building (through life-cycle analysis).
Data science is changing in a similar way — the creation of data from the internet and other sources is so much more than it was 10 or 20 years ago, and the computing power that gives us the ability to analyze this data is increasing at a breakneck speed. I love how this requires constant learning to stay aware of the ways in which I can analyze data — including new algorithms or methods for analysis (such as GPT-3 for natural language processing — https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html — or GANs (generative adversarial networks) to generate images — https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html)
So, overall, as I make the switch I’m really excited — to get back to my roots in math, use my creativity in a different way, and work with a constantly changing toolset! | https://jenniferlwilliamson.medium.com/the-switch-d545a396f32c | ['Jenn Williamson'] | 2020-11-30 03:10:13.287000+00:00 | ['Data Science', 'Sustainability', 'Building Construction'] |
Dating at Forty-Something: a Tale of Fifty Bad Dates and One Good One | The guy I thought had died
This one is a ghosting case so incredible I think it deserves to be investigated by the team from the TV show Ghost Hunters. A lawyer in his 40s, decent looking, we went on three dates. I have to say I was making an effort to give him a chance because there were no significant red flags about him, except perhaps he seemed somewhat bitter about his divorce.
In any case, I was not completely relaxed about him. His body language was that of an insecure person who has trained to cover it up; maybe he had learned the tricks of looking more confident than you feel during his time in Law school.
We were located about an hour away from each other, so for the fourth date, I asked if he could come my way. The plan was to go wine tasting close to my place. He agreed. We talked around noon on a Saturday, and he said he would make a couple of stops before getting to my home, so I calculated it would be 2 to 3 hours before he will arrive. Just in case, I got ready in one hour and went about my business until the time came. Then three hours went by. And four, and five.
Finally, around 8 pm I texted him to ask if he was ok. I mean, when a person says they are about to go on the highway and never show up the first thing to do is try to make sure they didn’t have an accident, right? I got no answer.
The next Monday I told the story to my friend, and she agreed it was terrifying to think that something serious might have happened to the person who was supposed to meet you, but in this era of dating apps you have no way to know what happened to someone you just met.
I am happy to report that he is alive — a few weeks after the incident, he changed his profile picture on the dating site. And if you haven’t guessed yet, he never contacted me again.
The guy who didn’t care what I was talking about
This one I have to say fills the stereotype of the man that feminist ranters hate: conservative, mansplainer, one hundred percent focused on my looks. To be fair, though, he was not a bad guy. He was the proud father of a beautiful adult daughter, a hard-working man with a stable job with benefits and very polite. And he did pay for dinner. But it was 2016, and political opinions were the kiss of death for many a relationship that year and the years after “The Election.” And he gave me an earful on his position and how excited he was with the prospect of the “genius man” about to take over the most important office in the country.
When I said that as the single mother of a queer child of Latino descent I was unable to agree with him, he looked at me with googly eyes and said, “Your accent is so sexy, and that hair, so black and beautiful.”
Now, let me be clear — I think the phrase, “Your accent is so sexy, and that hair, so black and beautiful” is a compliment if taken in a neutral context. I like knowing that my hair looks nice and that people find my accent cute. But come on, man, can’t you see that my dark hair and charming accent are the reasons why your political views horrify me? Well, long story short, he didn’t understand that even after I explained it in a series of emails we exchanged the next day.
He kept asking for a second date and I kept saying we were not a good match. In the end, I had to do what I was trying to avoid out of politeness: I said, “I will not go out with you because I do not like you, not even a little bit, not at all.” And I give him kudos for his polite response, which was, “well, that’s closure.”
The guy who needed some time to get over his divorce
Brilliant, charming, dorky-cute, I was fascinated by this Harvard graduate and Theology professor. He spoke fluent Italian, could read Latin, Coptic, and Aramaic, was an avid reader and writer, and we never ran out of conversation topics. But he had jumped on the dating pool while his divorce was not yet final, so he had lots of unresolved issues, both legal and emotional, although he was very much done with the marriage and living in a different state from the ex-wife.
We dated longish distance (a two-hour drive) for three weeks. During that time we had some wonderful dates, and when we were apart, we FaceTimed almost every day. Then his divorce sentence came, and it was a messy, confusing mandate that included 50–50 shared custody while the parents lived in two different states. So he asked for some time to pick up the pieces of his life and figure out how to work this out.
I thought it was fair and said to myself maybe six months would be a reasonable amount of time to give him. I unfollowed his Facebook to avoid temptations. At six months on the dot, I re-followed him. He was married, and I do not mean remarried to the ex. He was married to a new one — end of the story.
The guy that moved far away for a long time
This title drips with sarcasm because the guy did move, to a place one hour away (so far!) for a month (such a long time!). And yet, apparently, that is enough to never talk again to the person with whom you had five quite lovely dates — talking about feeling unappreciated.
I am grateful this happened to me after I was already forty-plus because in my twenties I would have thought it was my fault. I would have probably thought something along the lines of, I am too fat, and he is surrounded by supermodels every day. But nowadays, I know better.
If a person spends some pleasant time with me and then chooses not to call again, it is their problem. There seems to be a mentality going around in this age of dating apps where “dating” means “go out with as many people as you want,” so I can imagine that people wired to switch gears constantly from one prospect to another may feel like they accomplish a lot. To each their own.
I kissed a girl, and I liked it
I talk in more detail about this adventure in my piece I kissed a girl, and I liked it: reflections of a heteroflexible woman. Suffice to say that I had a lot of fun in those dates, and at least the connection and chemistry of the first few dates were much more comfortable and relaxed than in heteronormative dates, but alas, they did not last very long.
The one good date
I have known JJ for a long time, and we had very flirty interactions years ago, but the timing was never right. One of us was always dating someone else or not in the right headspace to get serious with anyone. I had not heard from him in years, as our circle of friends had dissolved with a lot of people moving away. I had been thinking about him again for about six months, but I didn’t dare to make contact. Maybe he won’t be receptive, just as I had not been very receptive to his advances years ago, I thought.
But the Universe brings us what we wish for, so one fateful summer night he contacted me on Instagram, which is the modern equivalent of leaving a note on somebody’s school locker. We engaged in witty banter just like old times, and I’m happy to say that that first conversation ended in a lunch date. It’s been over a year, and we are still making each other laugh.
Oh, yeah, and in love.
© Adriana M 2019 | https://medium.com/an-injustice/forty-something-and-dating-a-tale-of-fifty-bad-dates-and-one-good-one-df578ef968fa | ['Adriana M'] | 2019-09-26 21:45:43.151000+00:00 | ['Dating', 'Love', 'Nonfiction', 'Relationships'] |
Is My Inner Buddha A Secret Fascist? | Buddhism teaches that crises and hard times build our patience and resilience. That we learn to control our negative thoughts and words and cultivate compassion even for those whom we find unlovable.
I’m backsliding.
It also teaches every human being is born good, with an inherent ‘Buddha nature’, the purest and highest manifestation of ourselves. Buddha nature refers to the potential to attain awakening and enlightenment, filled with compassion and wisdom. Your inner Jesus, if you actually did everything He’d do.
While I don’t aspire to become a Buddha, I always aspire to be a better human than I was yesterday.
To put it mildly, it ain’t been workin’ for me lately.
If Buddhists could see what’s going on in my head sometimes they’d stage an intervention and pack me off to Jerkwads Anonymous. In chains.
As the COVID-19 pandemic spreads like a toxic miasma, infecting new hosts and uncaring what condition it leaves them in (“Shit, this one’s dead, hey Nursie, come on over here a sec!”), I try not to spend too much time on the news and social media.
I’ve even stopped watching late-night TV comedy shows because Trevor Noah, Stephen Colbert and Seth Meyers dig into Our Fearless Loser and his happy band of M̶A̶G̶A̶t̶s̶ T̶r̶u̶m̶p̶a̶n̶z̶e̶e̶s̶ supporters like laser-focused surgeons. Their efforts to excise a political cancer with their razor-sharp commentary further depress me by including too much Real News. I’m not against facts, I’m a huge fan. I simply don’t need to know a lot of them or I’ll toss myself off the balcony.
But still. One needs to catch up every morning so’s not to break laws that weren’t illegal yesterday. I tried to enter a bus through the front door. Hey, now we only board through the back door! And starting today buses are limited to ten passengers at a time. Next week we’ll be riding on the bike racks.
I’ve been fairly Buddhisty about this whole thing, because pandemics happen, viruses gonna propagate. Canada lags behind the rest of the world with COVID cases and deaths. Also, I don’t have much of a life anyway.
Then I watch what’s happening in my birth country and my spirit turns into a Left-Wing Deplorable. A veritable Vengeance Nazi.
Image by Chrissy on Flickr
That Buddhisty shit flies out the window, and Nicole’s Inner Fascist argues with Nicole’s Inner Buddha.
“I want to send COVID-infected blankets to the Red States.”
Oh, niiiiiice Ms. Manifest Destiny!
Fascist Nicole ignores her.
“Wait, I’ve got a better idea! We let Trump hold one more racist rally and we pass out free COVID-infected MAGA hats!”
It’s TERRIBLE to even think that. What about all the others they’d infect, including children? What happened to your compassion project? What happened to trying to understand, if not excuse, Trump supporters? What happened to, like, basic human decency?
“But they’re hurting others! They’re killing people with misinformation and lies! CNN & MSNBC cut Trump off during the daily COVID briefing! What does it say about one’s orange conspiracy-tweeting ass when MSNBC, the journos who mistook horny crickets for a Russian top-secret weapon, cuts you off for being unfactual?”
Don’t Trump and his followers have a Buddha nature too?
“Theoretically.”
There’s any question?
“Trump? His Buddha nature is as shriveled and near-nonexistent as his frenemies Putin & the North Korean fat kid!”
Donald Trump was born evil? He wasn’t, maybe disadvantaged spiritually and morally by being born into a rich family with twisted values and a mega-super-sized sense of entitlement?
“Okay, Baby Donald probably wasn’t a psycho. If he wasn’t already wired for it.”
Baby Trump (not to be confused with a popular balloon). Public domain photo from Wikimedia Commons.
Thank you.
“But his followers continue to support him as people are dying! They claim it’s no worse than the flu when they’re not calling it a liberal hoax. I’ll bet they think the deaths are no big deal just because their intellectual betters are dying in droves in New York — ”
Intellectual betters?
“Hillary Clinton was right. They’re a basket of deplorables. We need to do something about them!”
Like what, Herr Himmler? Buddha Nicole is sounding a bit edgy.
“Well, not kill them or anything. I was kidding.”
Ha ha.
“But, we could build a wall around Texas and put all our deplorables there.”
And then what?
“Give them all the guns they want. Problem solved.”
Fascist Nicole, what the hell is wrong with you??? Did a zombie Dementor suck out your brain or something?
“To hell with them,” an obstinate Fascist Nicole says. “We need our own Australia.”
So, you want to build a huge concentration camp for Trump supporters.
“A Deplorables concentration camp wouldn’t kill them, although that would clean the stupid out of the electorate.”
Fascist Nicole, KNOCK IT OFF!!! I’ve had about enough of your backsliding bullshit! When Buddha Nicole swears, she’s pissed.
This isn’t you. Not anymore. You’re about to undo all your spiritual hard work. I’ll agree Trump’s people are morally compromised. I understand why you’re angry but dehumanizing them reduces you to their level.
“But — ”
No buts! You are NO BETTER than a Trump voter when you say things like that. How can you even think of justifying concentration camps at all after what they’ve done to Mexican families? After what the Nazis did in Germany? That’s not even remotely funny!
“Yes, yes, you have a point.”
Now TAKE IT BACK!
“They’re little better than animals and I don’t need to be an animal too.”
THAT’S NOT WHAT I MEAN! That backhanded self-aggrandizing personal validation demonstrates just how little your spirit differs from theirs. And by the way, none of you are animals. Or deplorables.
“But Trump’s withholding much-needed medical equipment for New York and Michigan because their governors won’t kiss his greasy orange ass enough. And dammit, we have family in both places! MOM is in Michigan!”
Now we’re getting somewhere. Now we’re addressing the source of that putrid hatred. This is where you came from, Fascist Nicole. A trending story on Twitter, a more wretched hive of scum and villainy than Infowars.
“The way things are going, Trump will be lucky if he has any supporters still alive in November.”
Have you seen these?
Buddha Nicole pulls out her mobile and Google searches.
COVID-19 threatens to rip apart Southern states in a way that isn’t happening anywhere else.
And also The coronavirus’s unique threat to the South.
Fascist Nicole reads both for several minutes, her face growing graver.
“Well….shit.”
Yep.
“It says COVID-19 is killing far more young people and those in their prime in the South than anywhere else in the country or the world, including Italy.”
That’s right. You’re getting your wish. Congratulations.
“Well wait a minute! I don’t want them to all, just like, die!”
You did a moment ago.
“I was just saying stuff. I’m pissed.”
Fascist Nicole is silent for several long moments. “No. No. This is bad. Dr. Fauci says 100,000–240,000 deaths is the best case scenario if everyone keeps practicing isolation and social distancing. But this says Southerners aren’t doing that. That means — the national death toll could be much, much higher.”
Yes. The voters in states with Republican governors prepared less and were slower to adopt social distancing measures. They may well have voted for their own destruction.
“And prolonged unemployment.”
Yes.
“Which they deserve.”
Fascist Nicole. Listen to me. These people weren’t born any more evil than Baby Donald. You’ll never understand why they are as they are because you don’t know them. You didn’t grow up in their families, their dogmatic churches, their lousy education system. You don’t know the fear and uncertainty many have only ever known. You’re getting a taste of it now and big surprise, you’re not handling it very well.
You’re saying these things out of fear, just like they do. All you see on Twitter is a MAGA hat profile and denial of something too terrible to acknowledge. Deep down that person knows she’s at risk. She’s afraid for herself or her immuno-compromised sister. In times of fear, people blame others. Twentieth-century Germans blamed the Jews and I don’t think I have to explain to you why even joking to yourself about MAGA concentration camps to blow off a little steam is unhealthy. This is how it starts.
Rwanda. ‘Tutsis are cockroaches.’
Let’s not forget these people, like all others, have choice. There’s repentance. Redemption. The chance to change. Sorry to go all Christian on you considering we haven’t recited the Lord’s Prayer since the Reagan years but the point is you made a conscious choice to become a better person when you were in the throes of angry, hostile depression and they might too. People can choose.
“Okay, yeah.”
It’s about to get very, very real. Their empty Christianity, excised of any real compassion or kindness, may well not get them through this crisis. Whole families could be wiped out. We haven’t seen that since the Spanish Flu epidemic. Individuals can react many different ways, including realizing the deadly consequences of careless voting, substance-free religion and refusing science and medical opinion.
“Okay, yeah. I — I don’t want to see pine boxes stacked up like we did after 9/11 — ”
The Dalai Lama, you can be certain, feels compassion for Trump’s followers. Remember, he’s always been forgiving toward the Chinese government and you know what they did to the Tibetans when they invaded in 1950. You read his autobiography, and you won’t re-read it because it’s so disturbing.
Fascist Nicole gets contemplative for a few moments, brow furrowed.
“I want to feel compassion for Trumpers,” she says, “but I don’t want to excuse their execrable behavior and hatred. I don’t want to be one of those squishy brainless liberal types who’s too tolerant, too understanding.”
Buddha Nicole dips her head in assent. Yes, that’s what Buddhists call ‘idiot compassion’. Tolerating intolerance out of misplaced cultural relativism, an unwillingness to judge inarguably abhorrent behaviors and values, refusing to challenge people to be better or grow because you’re afraid of hurting them.
“That’s it. There needs to be accountability. These COVIDiots aren’t just hurting themselves, they’re putting their families, neighbors, friends and coworkers at risk. The Atlantic article points out that the reasons they’re at higher risk for COVID infection and fatality are policy and poor healthcare. And who voted for that?”
Yes, they did.
“The difference between them and myself is they detest others for stupid, irrational reasons. Hating people for how they were born, whether it’s skin colour or gender or sexual preference. I dislike people on the basis of their values and morality, the things they can change. Even if you’re born into a certain culture, like a racist white one or a misogynist religion, you still have the capacity to question those values and reject them. It boils down to one question: Are you hurting others? If so, that value or belief needs to go.”
They have to know it’s a problem, which they may not. Perhaps no one has ever challenged their Wrong Perceptions before. When you’re taught from birth others are inferior to you, how can you expect them to understand otherwise? These lessons come from parents and caretakers they love. They live in blind ignorance. Maybe they’ve been told otherwise and rejected the message, but maybe the message needs a different frame. Or just repeated enough. With patience.
It starts with exploring and asking questions. Understanding your enemy is the beginning of detente and the end of suffering. For both sides.
Fascist Nicole sighs.
Relieve their suffering, and you relieve your own.
“I wouldn’t want to live in their heads. Especially Trump’s. I remember what deep depression and hopelessness feel like. I believe he hates himself.”
Wouldn’t you end their suffering if you could?
Fascist Nicole nods. She wouldn’t wish that psychic pain on anyone. People lash out, like wounded animals at those trying to help them.
You know you feel much worse when you give your hatred free rein. Fascist Nicole nods again. Let’s remember one very important fact.
Hatred is a choice to make yourself feel worse.
“I do feel like crap when I give in to it. Triggered and angry. Later, guilty.”
Do you need any help feeling like crap?
“No, I’ve got the news and ever-evolving restrictions on our lives to feed my anxiety.”
Yes, and just imagine how afraid Trump voters are. They’ve got more to fear than you. They live in the country they voted for. They don’t understand any better than others how they create their own suffering. They’re not going to get the help they need, and many of them will descend into madness and suicide without any better coping skills than blame and hatred. There’s a very ugly mental health crisis brewing in the United States, worse than what it had before. They’ve got guns. There may be much pointless suffering and deaths without the virus.
“That sounds, uh, kind of ominous, Buddha Nicole.”
The good news is Trump seems finally to be giving a little more ground to scientists like Dr. Fauci and Dr. Birx.
“Fear makes people act crazy and do crazy things. They need help. They might be more amenable to real soul-searching once this is over.”
Yes. Buddha Nicole gives her an expectant look.
“And me too.”
That’s right. You can’t fix the world, Nicole, but you can choose to be part of the solution rather than the problem.
“Hey…you didn’t call me Fascist Nicole.” | https://medium.com/interfaith-now/is-my-inner-buddha-a-secret-fascist-f8dea42eab50 | ['Nicole Chardenet'] | 2020-04-06 21:07:09.386000+00:00 | ['Compassion', 'Buddhism', 'Covid 19', 'Psychology', 'Government'] |
What If the Biggest Threat to Your Relationship Is You? | You might disagree with what I’m about to say, but that doesn’t make it less real. Most relationships don’t end up in ruins for the reasons you think. The culprit is typically not bad spending habits, sexlessness, or cheating. The biggest threat to your relationship is you.
Before you scroll to the comment section to share your scathing opinion, hear me out.
Relationships bring up your shit.
You have been surrounded by relationships your entire life. You learned how to interact and engage by watching others. The ideas and beliefs that govern how you show up to your connections were formed long before you could string words into a sentence. But your relationship reality is deeper than that.
The relationship you imagine is mostly fictionalized. Pixelated images pulled from movies, television, and YouTube. You visualize the kind of relationship that has a solid storyline, plot points, and happily ever after. *cue credits*
That’s not how things work in real life.
On this side of reality, relationships look different. Problems are not solved in 120-minutes or less. There are no speed-through-the-rough-patch montages. Fairy godmothers and pumpkins are sold separately.
In your actual life, your partner often grates on your last nerve. They have terrible habits, unruly behavior, and trouble listening. They push your buttons and know all the shortcuts to your triggers.
Your partner will not always respond the way you want them to. They may not always share your interests or your enthusiasm. And yes, they get tired of your bullshit because they’ve got plenty of their own.
But they’ve also seen you looking a hot mess and acting like a complete ass, and they’ve stuck around. At least for as long as they could. Your partner loves your dirty knickers, which scares the bejeezus out of you, which might explain why you have gone out of your way to fuck things up, accidentally on purpose time and time again.
This relationship trips all your wires. Your fear of abandonment. Mommy issues. Daddy issues. Fear of rejection. Unworthiness. Imposter syndrome. Problems trusting. Resistance to intimacy. Sexual anxiety. Not-enoughness.
And while your partner might precipitate your uncomfortable feelings, they did not create them.
Own your triggers
Your partner is not responsible for how you respond to the things that trigger you, which doesn’t give them the right to push your buttons intentionally. But you are the only person who can control your response.
I understand that your partner’s behavior, or lack thereof, may upset you. But that doesn’t make you a victim of circumstance. You are solely accountable for your triggers, and you are also responsible for healing them.
Triggers are survival responses.
The human brain is like a file cabinet, where the painful files are stored at the front. So whenever something hurts you, your brain connects the pain to whatever is happening at that moment. Then it saves the pain and the incident in the file labeled trigger(s).
Recognizing your triggers is essential. But just because you know what sets you off, doesn’t give a pass to cock your loaded trigger and aim it at others. Once you know something, you become responsible for what you know. Triggers are no exception.
Self-compassion is medicine.
This might sound simple, but the people who are the least responsive to triggers are those who are kind to themselves. Conversely, those who tend to talk down to others are usually just as mean (if not more) to themselves.
But for most of us, self-compassion takes practice. You weren’t born being unkind to yourself; it’s a habit you developed over time, which is how you will relearn self-kindness.
Self-compassion grants you access to the tools to disengage the trigger. It allows you to advocate for yourself. The feelings of helplessness that bubble to the surface when your trigger is engaged will begin to dissipate when you’re kind to you. In turn, you will become less prone to lashing out and misbehaving with your partner and others.
Self-awareness makes for better relationships.
The guy I mentioned at the start of this post was not aware of himself. Self-awareness would have stopped him from making up a story. He would have recognized that his idle thoughts set off his insecurity trigger. And rather than trolling through her things, he might have opted to have a conversation instead.
Recognizing and dissolving your triggers is a part of self-awareness. The more self-aware you are, the better you’ll be at relationship-ing. And with practice, your triggers will no longer create a negative response. But instead, they’ll create space for deeper intimacy with yourself and others.
If you liked this article, you might also enjoy:
Stacey Herrera is a relationship-ing practitioner, jalapeño junkie, and chronic library fine payer. She’s also an Intimacy + REALationship coach residing in the Port of Los Angeles. Sign-up to her newsletter for updates.
Follow Relationship-ing, so you don’t miss a post. Do you have a relationship-ing story to share? Write with us. | https://medium.com/relationship-ing/what-if-the-biggest-threat-to-your-relationship-is-you-c4de2f274cb2 | ['Stacey Herrera'] | 2020-05-16 17:23:25.798000+00:00 | ['Self-awareness', 'Self', 'Relationships', 'Love', 'Couples'] |
The best design process is to not follow a design process | The best design process is to not follow a design process
Why is it practically impossible to stick to a single standard design process?
Double diamond design process (Source- Pinterest)
We have all been fans of evergreen double diamond and humble iteration loops of design thinking. These are undoubtedly great frameworks to follow but they’re often too broad to lead us towards the right solution. There are a plethora of books and articles telling us about different design processes. It is overwhelming to pick a particular process for our design project. It begins with a lot of enthusiasm but there comes a point when the process just won’t align with your work. We reach a point that is not mentioned in any book or article we’ve come across. The process that looked immaculate and streamlined in the beginning has come to a complex stage where it’s all messy. If this is relatable then please continue reading why following a single design process is never enough. It is because-
Every project is unique
The intent of every design project varies depending on multiple factors. There is a different scope for each project. The result may turn out to be similar but each project is still unique in its way. A design project is conceived with a goal in mind. This goal can simply be to just explore different possibilities. This goal and the whole journey to reach the destination is unique for every project. Try to break down a design project into multiple pieces and see how these pieces of one project differ from the pieces of another. Let us simplify it in further detail.
Every project has different constraints
Different projects have different limitations for designers to work with. The constraints can be of any type. Sometimes research is a constraint, sometimes it’s the budget. Others can be data privacy, internal organizational structure, legal and policy constraints, etc. When there are so many different types of obstacles a designer may come across, how can a single design process achieve the best results? The standard design processes are there to guide us through projects, but they are not enough to overcome all the unique challenges that come with different projects.
Every stage of a project is different
A design project has multiple stages. These stages look different for different projects. For instance, problem identification in one project may be very easy while the other project may take way too long to find the right problem. The same goes for user research, ideation, prototyping, etc. Each of these stages will act differently for different projects. Having a single process every time may not be sufficient to manage different components of the project.
Every stakeholder has a different understanding of design
The non-design aspects of a design project come with their challenges. Another variable for design projects is the stakeholders involved in the work. The stakeholders play an important role in driving the project and designers need to establish a shared understanding among all the stakeholders. This may vary depending on the involvement and position of stakeholders, the time they can invest, their knowledge on the project, etc. The standard design process rarely helps us in managing such situations in the project.
Every project has different timelines
The timelines are a big deal in most design projects. Each one has a different scope of work concerning the available time at hand. In some projects, the research can take its own sweet time while in others it has to be super quick and agile. In some cases, there is less time to iterate and the only option is to improve after the launch. Time indeed is a luxury in most of the projects and it has to be planned smartly with the process. The variations in timelines force designers to adopt different processes for the project.
Every project has different resources
Yet another variable in the design projects is the availability of resources. The project runs on various resources like tools, tech, and most important of all, people. The design process depends largely on the available resources as designers cannot proceed without them. The designers may try to be extra resourceful and take the project ahead in unique ways. While the standard processes may look smooth on the whiteboards, the resources at hand show otherwise.
Every mistake is unique
Any design process is incomplete without failure. The key is to fail early and iterate fast. There are many mistakes that designers make and every mistake leads to a different learning curve. A standard design process leaves less room for error and new lessons. I believe we learn the best from our mistakes and our process should be flexible enough to try new ways without the fear of going wrong. Every mistake is unique and we should embrace it with a determination to not repeat the same.
The bottom line is that a single process is not enough to guide us through design projects. The frameworks look enticing at first glance but there is more to a project than expected. One of the best ways to make use of these frameworks is to mix and match them according to the context of the project. It is completely fine to deviate from one and adopt another method in the interest of the project. The scope of work should guide the process, not the other way around. Design processes and frameworks are great for reference, but the actual implementation of these may look different. A great project is one that optimizes different frameworks, and leverages the best resources and processes at hand. | https://medium.com/design-bootcamp/the-best-design-process-is-to-not-follow-a-design-process-3e303ec850b7 | ['Saksham Panda'] | 2020-12-24 05:15:56.280000+00:00 | ['Design Process', 'Design', 'Design Thinking', 'Design Methods', 'Double Diamond'] |
Protecting Yourself and Your Bitcoin: Two Important Tips for Staying Safe from Cryptocurrency Scams | The world of Bitcoin, and cryptocurrencies, in general, can be intimidating and confusing. As Bitcoin becomes more mainstream, it is essential that we be aware of potential dangers that may be lurking out there.
Since the beginning of commerce and bartering, there have always been bad actors looking to make a quick and easy buck off of those who are unaware. Because Bitcoin has value and there is a demand for it, unfortunately, it is no exception, and there are many cryptocurrency-related scams to be aware of.
IF YOU ARE ASKED TO SEND BITCOIN FOR ANY OF THE FOLLOWING IT MAY BE A SCAM:
IRS Payment
Utility company payment
Airbnb Reservation
Job Posting
Vehicle purchase via eBay or Craigslist
I’m here to share how to protect yourself and your Bitcoin from scammers who are trying to take advantage and to help you educate yourself on how to identify and avoid these scams. Below are my main tips for staying safe when it comes to cryptocurrency scams:
1. Know Exactly Who You Are Sending Cryptocurrency To
If you’re at all skeptical, don’t send the money until you are confident.
Due to Bitcoin having a certain level of anonymity, it is tough to find out who owns a specific wallet address. Add to this the fact that Bitcoin transactions by nature are irreversible, and you can begin to see why it is imperative to know who you’re dealing with.
Most scams you will encounter involve sending Bitcoin or cryptocurrency to someone you don’t know, often in exchange for goods or services. If you are not dealing with someone that you know to be trustworthy, it is essential to be very skeptical and err on the side of caution. A good rule of thumb is to not send Bitcoin to anyone you are not 100% sure is on the up-and-up.
As an example, one scam that we see often is for automobiles or vacation rentals. You may find these ads on Craigslist or other similar sites offering up deals that seem almost too good to be true. When you think something sounds too good to be true, you’re probably right. Once you initiate contact with the vendor, you will be asked to send cryptocurrency to pay for the item, and after you do, you will never hear from the ‘seller’ again. This is why the most important piece of advice I can give to protect yourself is to know who you’re dealing with.
2. Don’t Take Action Until You Verify Who is Calling and Requesting Cryptocurrency.
Generally, you shouldn’t be receiving calls with legal threats and cryptocurrency payment requests.
Another threat to be aware of is ‘IRS’ scams. These ‘IRS’ scams are typically phone calls you may receive threatening you with legal action due to unpaid back taxes. The caller on the other end will threaten to have you arrested and thrown in jail unless you pay them the amount of money they say you owe.
These scammers have used all sorts of payment methods, including gift cards, but have now started asking for cryptocurrencies. If anyone calls you claiming to be from the IRS and threatening you with legal action, these are without a doubt scams. Do not send any money to these individuals. The IRS will not contact you by phone with threats of legal action.
If you’re ever unsure about a transaction, our Customer Success team is available seven days a week from 9am-7pm PST to answer any questions you may have, and we will help you make sure that you’re safe with your coins. Don’t hesitate to reach out to us! | https://medium.com/coinme/protecting-yourself-and-your-bitcoin-two-important-tips-for-staying-safe-from-cryptocurrency-scams-83acb2c6d675 | [] | 2018-07-20 16:55:24.352000+00:00 | ['Fraud', 'Cryptocurrency', 'Startup', 'Scam', 'Bitcoin'] |
#101 | Writing Daily For 100 Days Straight | What
I do believe that I’ve achieved my initial goal of improving my writing. But I have stopped my daily routine a couple of days back. While the enforcement of daily writing has pushed my creativity and grit, it’s starting to get way too disruptive to be beneficial now. That’s what it has become.
100 days. I think it’s good enough. I still have a lot of things I want to share, a lot of thoughts that go through my mind every day, but I would like to take a little more time now to put them to words.
My question is should I split it out from Medium to a personal blog? My posts are also all over the place. I write about learning to code, I write about interesting things I’ve come across, I write my thoughts and opinions. Should I continue forth or segregate my articles into their different categories? | https://medium.com/footprints-on-the-sand/101-writing-daily-for-100-days-straight-185a30aad711 | ['Kelvin Zhao'] | 2020-04-13 04:59:21.570000+00:00 | ['Musings', 'Daily Blog', 'Thoughts', 'Writing', 'Daily Thoughts'] |
How to Use Workbox With Next.js | Introducing next-with-workbox
next-with-workbox is a Next.js plugin that aims to encapsulate such configuration and enable them to be reusable. It provides a fully customizable interface so you can implement whatever you need to through Workbox.
To use the library, let’s first install it to our Next.js app:
yarn add next-with-workbox workbox-window
The next step would be to enable the plugin in your next.config.js :
// at next.config.js

const withWorkbox = require("next-with-workbox");

module.exports = withWorkbox({
  workbox: {
    // .
    // ..
    // ... any workbox-webpack-plugin.GenerateSW option
  },
  // .
  // ..
  // ... other Next.js config values
});
Then, ignore the autogenerated files through .gitignore :
public/sw.js
public/sw.js.map
public/worker-*.js
public/worker-*.js.map
That’s all there is to configure your Workbox service-worker setup. You can then register your Workbox instance at pages/_app.js :
import React, { useEffect } from "react";
import { Workbox } from "workbox-window";

function App({ Component, pageProps }) {
  useEffect(() => {
    if (
      !("serviceWorker" in navigator) ||
      process.env.NODE_ENV !== "production"
    ) {
      console.warn("Progressive Web App support is disabled");
      return;
    }

    const wb = new Workbox("sw.js", { scope: "/" });
    wb.register();
  }, []);

  return <Component {...pageProps} />;
}

export default App;
And you’re done! Congratulations, you just set up Workbox for your Next.js application. Under the hood, the next-with-workbox plugin will configure things like making sure cache busting is only enabled for nonhashed filenames, adding public/**/* folder content to your precache, modifying URL prefixes for _next/static files, and more!
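Since the workbox key accepts any GenerateSW option, you can also layer runtime caching on top of the precaching defaults. Here is a hedged sketch; the URL pattern and cache name are illustrative choices, not plugin defaults:

```javascript
// next.config.js
const withWorkbox = require("next-with-workbox");

module.exports = withWorkbox({
  workbox: {
    // runtimeCaching is a standard GenerateSW option, passed straight through:
    // cache image requests with a cache-first strategy
    runtimeCaching: [
      {
        urlPattern: /\.(?:png|jpg|jpeg|svg|gif)$/,
        handler: "CacheFirst",
        options: {
          cacheName: "images",
          expiration: { maxEntries: 64 },
        },
      },
    ],
  },
});
```

With CacheFirst, matching images are served from the cache when available, and the network is hit only on a miss.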
Using workbox-webpack-plugin.InjectManifest
The above introduction assumes you were trying to use the GenerateSW plugin. If you instead want to use the InjectManifest plugin to have more control over your service worker, it’s also straightforward to use with the next-with-workbox plugin.
First, install one more dependency that you’ll need to rely on:
yarn add workbox-precaching
Then, simply add the swSrc option to your next.config.js as:
// at next.config.js

const withWorkbox = require("next-with-workbox");

module.exports = withWorkbox({
  workbox: {
    swSrc: "worker.js",
    // .
    // ..
    // ... any other workbox-webpack-plugin.InjectManifest option
  },
  // .
  // ..
  // ... other Next.js config values
});
And create a worker.js file at the root of your project:
import { precacheAndRoute } from "workbox-precaching";

precacheAndRoute(self.__WB_MANIFEST); | https://medium.com/better-programming/using-workbox-with-next-js-a-step-towards-progressive-web-apps-a3f985f5f864 | ['Cansın Yıldız'] | 2020-05-14 14:01:40.804000+00:00 | ['Nextjs', 'Programming', 'React', 'Progressive Web App', 'Pwa'] |
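Because you now own the worker file, you can combine precaching with Workbox's other modules. A hedged sketch, assuming workbox-routing and workbox-strategies are also installed; the /api/ path and cache name are illustrative:

```javascript
// worker.js
import { precacheAndRoute } from "workbox-precaching";
import { registerRoute } from "workbox-routing";
import { StaleWhileRevalidate } from "workbox-strategies";

// Precache everything InjectManifest discovered at build time
precacheAndRoute(self.__WB_MANIFEST);

// Answer same-origin API calls from cache immediately,
// refreshing the cached copy in the background
registerRoute(
  ({ url }) => url.pathname.startsWith("/api/"),
  new StaleWhileRevalidate({ cacheName: "api-cache" })
);
```

This runs only inside the service-worker context, so test it in a production build where the worker is actually registered.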
Simple word cloud in Python. 💡 Wordcloud is a technique for… | 2. Word cloud ☁️
Firstly, let’s prepare a function that plots our word cloud:
# Import packages
import matplotlib.pyplot as plt
%matplotlib inline

# Define a function to plot word cloud
def plot_cloud(wordcloud):
    # Set figure size
    plt.figure(figsize=(40, 30))
    # Display image
    plt.imshow(wordcloud)
    # No axis details
    plt.axis("off");
Secondly, let’s create our first word cloud and plot it:
# Import package
from wordcloud import WordCloud, STOPWORDS

# Generate word cloud
wordcloud = WordCloud(width = 3000, height = 2000, random_state=1, background_color='salmon', colormap='Pastel1', collocations=False, stopwords = STOPWORDS).generate(text)

# Plot
plot_cloud(wordcloud)
Ta-da❕ We just built a word cloud! Here are some notes regarding the arguments for WordCloud function:
◼️ width/height: You can change the word cloud dimension to your preferred width and height with these.
◼️ random_state: If you don’t set this to a number of your choice, you are likely to get a slightly different word cloud every time you run the same script on the same input data. By setting this parameter, you ensure reproducibility of the exact same word cloud. You could play around with random numbers until you find the one that results in the word cloud you like.
◼️ background_color: ‘white’ and ‘black’ are common background colours. If you would like to explore more colours, this may come in handy. Please note that some colour names may not work. Hope you will find something you fancy.
◼️ colormap: With this argument, you can set the colour theme that the words are displayed in. There are many beautiful Matplotlib colormaps to choose from. Some of my favourites are ‘rainbow’, ‘seismic’, ‘Pastel1’ and ‘Pastel2’.
◼️ collocations: Set this to False so that the word cloud doesn’t appear to contain duplicate words. Otherwise, you may see ‘web’, ‘scraping’ and ‘web scraping’ as a collocation in the word cloud, giving the impression that words have been duplicated.
◼️ stopwords: Stopwords are common words which provide little to no value to the meaning of the text. ‘We’, ‘are’ and ‘the’ are examples of stopwords. I have explained stopwords in more detail here (scroll to the ‘STEP 3. REMOVE STOPWORDS’ section). Here, we used STOPWORDS from the wordcloud package. To see the set of stopwords, use print(STOPWORDS), and to add custom stopwords to this set, use the template STOPWORDS.update(['word1', 'word2']), replacing word1 and word2 with your custom stopwords before generating a word cloud.
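Under the hood, a word cloud is built from word frequencies computed after stopword removal. As a rough pure-Python sketch of that preprocessing step (the stopword set below is a tiny hypothetical sample, not the full STOPWORDS set from the wordcloud package):

```python
import re
from collections import Counter

# A tiny sample stopword set; the wordcloud package ships a much larger one
SAMPLE_STOPWORDS = {"we", "are", "the", "a", "is", "of", "to"}

def word_frequencies(text, stopwords=SAMPLE_STOPWORDS):
    # Lowercase the text, split it into words, and drop the stopwords
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(word for word in words if word not in stopwords)

freqs = word_frequencies("We are scraping the web. Web scraping is fun!")
print(freqs.most_common(2))  # [('scraping', 2), ('web', 2)]
```

These frequencies are what determine each word’s size in the cloud; the wordcloud package also exposes this route directly via WordCloud.generate_from_frequencies.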
There are other arguments that you can also customise. Check out the documentation for more information.
Let’s generate another word cloud with a different background_color and colormap 🎨. You can play with different combinations until you find one that you like. I find the following combination quite nice:
# Generate word cloud
wordcloud = WordCloud(width=3000, height=2000, random_state=1,
                      background_color='black', colormap='Set2',
                      collocations=False, stopwords=STOPWORDS).generate(text)

# Plot
plot_cloud(wordcloud)
If we are happy with the word cloud and would like to save it as a .png file, we can do so using the code below:
| https://towardsdatascience.com/simple-wordcloud-in-python-2ae54a9f58e5 | ['Zolzaya Luvsandorj'] | 2020-12-08 08:53:46.262000+00:00 | ['Python', 'NLP', 'Visualisation', 'Word Cloud'] |
How to use clustering performance to improve the architecture of a variational autoencoder | Photo by Johannes Plenio on Unsplash
An autoencoder is one of many special neural network designs. Its main objective is to learn to reconstruct the same data it was trained on. The basic structure of an autoencoder can be split into two networks: an encoder and a decoder. The encoder compresses the data into a low-dimensional space, while the decoder reconstructs the training data. Between those two networks lies a bottleneck representation of the data. This bottleneck, or latent space representation, can be helpful for data compression, non-linear dimensionality reduction, or feature extraction. In a traditional autoencoder the latent space can take any form, as there is no constraint controlling the distribution of the latent variables. A variational autoencoder, rather than learning a single value for each latent attribute, learns a probability distribution for it. The following post shows a simple method to optimize the architecture of a variational autoencoder using different performance measurements.
Thinking fashion
A classic dataset used in machine learning is the MNIST dataset, which is composed of a variety of images of handwritten digits from zero to nine. With the rise in popularity of the MNIST dataset, similar datasets have been created. One of those is the Fashion MNIST dataset, which consists of images of several clothing items divided into ten different categories. Each image is a simple 28x28 grayscale image.
Before defining the variational autoencoder, we need to define the custom layers needed to train it. In Keras, a custom layer can be created by defining a new class that inherits from the Keras Layer class. For this particular autoencoder, two custom layers are needed: a sampling layer and a wrapper layer. The sampling layer takes as input the layer before the bottleneck representation and is used to constrain the values that the latent space can take. By sampling from a normal distribution, the variational autoencoder learns a latent representation that is normally distributed. That characteristic can be useful for creating new data that does not exist in the training data: this can be done with the decoder by using a sample from the same distribution as an input. Because of this, variational autoencoders are also classified as generative models.
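The idea behind the sampling layer is usually the so-called reparameterization trick: draw ε from a standard normal and compute z = μ + exp(½·log σ²)·ε. A minimal pure-Python sketch of that computation (the actual Keras layer would do the same with tensors):

```python
import math
import random

def sample_latent(z_mean, z_log_var, rng):
    # Reparameterization trick: z = mu + sigma * epsilon, with epsilon ~ N(0, 1)
    eps = [rng.gauss(0.0, 1.0) for _ in z_mean]
    return [m + math.exp(0.5 * lv) * e
            for m, lv, e in zip(z_mean, z_log_var, eps)]

# With a very negative log-variance, sigma ~ 0 and z collapses to the mean
z = sample_latent([2.0, -1.0], [-100.0, -100.0], random.Random(1))
```

Because the randomness lives in ε rather than in the layer weights, gradients can flow through μ and log σ² during training, which is the whole point of the trick.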
With the sampling layer defined, the wrapper layer is defined similarly. However, the objective of this layer is to add a custom term to the model loss. A variational autoencoder loss is composed of two main terms. The first is the reconstruction loss, which measures the similarity between the input and the output. The second is the distribution loss, which constrains the learned latent distribution to be similar to a Gaussian distribution. This second term is attached to a layer before the bottleneck representation and adds the Kullback–Leibler divergence as a dissimilarity measure between the learned distribution and the Gaussian distribution.
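For a Gaussian posterior with mean μ and log-variance log σ², the KL divergence from a standard normal has a closed form, KL = −½ Σ (1 + log σ² − μ² − σ²), which is the kind of term the wrapper layer would add to the loss. A small sketch:

```python
import math

def kl_divergence(z_mean, z_log_var):
    # KL(N(mu, sigma^2) || N(0, 1)), summed over the latent dimensions
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(z_mean, z_log_var))

assert kl_divergence([0.0], [0.0]) == 0.0  # standard normal: no divergence
assert kl_divergence([1.0], [0.0]) == 0.5  # shifting the mean is penalized
```

In Keras this value would typically be registered on the layer with add_loss, so that it is minimized alongside the reconstruction loss.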
Creating the autoencoder
With all the custom layers defined, the autoencoder can be created. In an autoencoder, a sequence of densely connected layers or convolutional blocks gradually down-samples the data to the latent space size (the encoder) and then gradually returns it to the original data size (the decoder). If the same layer sizes are used for the encoder and the decoder, the two networks are mirror images of each other. We can exploit that characteristic to create both the encoder and the decoder with the same function, simply by reversing the order of the layer sizes, and then adding the custom layers to the encoder.
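The mirror-image property means a single list of layer sizes describes both halves. A small illustration (the 784, 256, 64 and 2-unit sizes are hypothetical, chosen for a 28x28 input):

```python
def mirror_architecture(input_dim, hidden_units, latent_dim):
    # The encoder narrows toward the latent space; the decoder is its mirror
    encoder = [input_dim] + list(hidden_units) + [latent_dim]
    decoder = list(reversed(encoder))
    return encoder, decoder

enc, dec = mirror_architecture(784, [256, 64], 2)
print(enc)  # [784, 256, 64, 2]
print(dec)  # [2, 64, 256, 784]
```

Because the decoder’s last size equals the encoder’s first, the reconstruction automatically has the same shape as the input.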
Variational autoencoder performance can be measured in a variety of ways; the simplest is to use the model loss as the performance measure. When a neural network is trained, the stochastic gradient descent algorithm is used to minimize the loss function and to calculate the layer weights and biases.
However, minimizing the model loss only shows us how well the model fits the data; it doesn’t tell us how useful the latent representation could be. Since the variational autoencoder can be used for dimensionality reduction, and the number of different item classes is known, another performance measurement can be the quality of the clusters formed in the latent space of the trained network. We can apply k-means clustering to the latent space, calculate the silhouette coefficient of the resulting clusters, and use it as a performance measurement of the network.
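The silhouette coefficient compares each point’s mean distance to its own cluster (a) with its mean distance to the nearest other cluster (b), as s = (b − a) / max(a, b). In practice you would use scikit-learn’s silhouette_score on the latent vectors; the sketch below implements the idea for 1-D points just to make the computation concrete:

```python
def silhouette_score_1d(points, labels):
    # Group point indices by cluster label
    clusters = {}
    for i, label in enumerate(labels):
        clusters.setdefault(label, []).append(i)
    scores = []
    for i, label in enumerate(labels):
        same = [j for j in clusters[label] if j != i]
        if not same:  # singleton clusters score 0 by convention
            scores.append(0.0)
            continue
        a = sum(abs(points[i] - points[j]) for j in same) / len(same)
        b = min(sum(abs(points[i] - points[j]) for j in idx) / len(idx)
                for other, idx in clusters.items() if other != label)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Well-separated clusters score close to 1; mixed-up labels score poorly
good = silhouette_score_1d([0.0, 0.1, 10.0, 10.1], [0, 0, 1, 1])
bad = silhouette_score_1d([0.0, 0.1, 10.0, 10.1], [0, 1, 0, 1])
```

A higher mean silhouette therefore signals a latent space whose clusters line up better with the item classes.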
With the different performance measurements defined, we can start to optimize the architecture of the variational autoencoder. First, a random set of possible architectures is created. Each architecture follows a single constraint: the first layer has the same size as the input, which ensures that the decoder will have the same output shape as the input data. Then each architecture is modified following one simple rule: if the number of layers in the current architecture is greater than 4, one random layer is removed; otherwise, k units are added to each layer, where k is the index of the layer. For example, the first layer has zero units added, as its index is zero.
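That mutation rule is simple enough to state directly in code. A pure-Python sketch of the rule as described above (which layer gets dropped is left to the random generator):

```python
import random

def mutate(architecture, rng):
    # More than 4 layers: remove one layer at random
    if len(architecture) > 4:
        arch = list(architecture)
        del arch[rng.randrange(len(arch))]
        return arch
    # Otherwise: layer at index k gains k units, so the input layer is unchanged
    return [units + k for k, units in enumerate(architecture)]

print(mutate([784, 100, 50, 10], random.Random(0)))  # [784, 101, 52, 13]
```

Returning a new list instead of mutating the input keeps the rest of the population unchanged between rounds.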
On each round of optimization, each individual in the architecture population is updated if its performance is better than its previous performance.
Under the loss-based performance measurement, all the trained models obtained a similar loss value; however, some variational autoencoders with fewer layers showed performance similar to those with a higher number of layers.
The cluster-based performance shows a little more variation between results. Although variational autoencoders with both high and low numbers of layers show high performance, the best-performing variational autoencoder happens to sit in the middle in terms of the number of layers.
Now you have an example of how to define and use custom layers in Keras, how to add custom losses to a neural network, and how to merge everything using the functional API from Keras, as well as how to develop a basic evolutionary algorithm using different performance measurements or different optimization objectives. The complete code for this post can be found in my GitHub by clicking here and the complete dataset by clicking here. See you in the next one.
| https://tavoglc.medium.com/how-to-use-clustering-performance-to-improve-the-architecture-of-a-variational-autoencoder-ca71c9bb0aaf | ['Octavio Gonzalez-Lugo'] | 2020-09-19 18:37:29.463000+00:00 | ['Machine Learning', 'Data Science', 'STEM', 'Python'] |
Equal Opportunities = the right to be treated fairly and with respect | The subject of equal opportunities might be viewed by employers as a minefield, but prevention of a problem is always better than a cure. The Equality Act 2010 came into force on 1 October 2010. It replaces, and in some areas extends, existing legislation on discrimination and equality.
There are compelling reasons for putting an Equal Opportunities policy in place: protecting the company and our employees, compliance with the Equality Act 2010, and winning business from the public sector — not to mention making a better environment in which to work.
The principles of fair treatment and respect need to be applied to everyone, regardless of any of the following, so called “protected characteristics”
gender
marital or civil partnership status
gender reassignment
pregnancy and maternity leave
sexual orientation
age
disability
race
colour
ethnic background
nationality
religion or belief
Discrimination takes place if an employer treats someone less favourably than others on any of the grounds above. We used the Business Link and the ACAS websites for helpful guidance on the scope of the Act and how to set about our policy.
So what is the scope of the Act?
In the simplest terms, being an equal opportunities employer means treating everyone fairly and with respect. Our policy needs to reflect this as it applies not only our employees but also to the way we treat customers, visitors and job applicants. In short, anyone with whom we come into contact. It should ensure a shared responsibility by all in the company to make our workplace a fair environment — and compliant within the law.
Understanding the scope of the act and the meaning of equal opportunities was the first step to putting our Equal Opportunities policy in place. It sets out our commitment to recognising the rights of all individuals to fair treatment. This covers fair and equal treatment at recruitment stage, treating all applicants in an unbiased manner, offering equal opportunities to employees for training and career development and making any reasonable adjustments for disabilities should they arise.
The 2010 Equality Act extends previous legislation to include complaints of indirect harassment (i.e. harassment not directed at the complainant) if it can be demonstrated that it makes the workplace an offensive environment. There is also a new provision for “associative discrimination”. This is direct discrimination against someone because they associate with someone who has one of the protected characteristics listed above.
Who is responsible?
The responsibility for implementing good practice should be driven by management but shared by all. In the area of equality that is particularly true. We all have a shared and individual responsibility to treat one another with the respect that we would wish for ourselves, regardless of our differences.
Raising the awareness of managers and employees of the principles of equality and fairness, and building a culture of mutual respect, should go a long way towards avoiding problems. Our written policy is the company’s statement of intent. It is putting those standards of behaviour into practice, together with routine monitoring and review processes, that will build and maintain a culture of respect and hopefully avoid disputes.
If you’d like to discuss your startup or project, get in touch with Simpleweb today.
| https://medium.com/simpleweb/equal-opportunities-the-right-to-be-treated-fairly-and-with-respect-de77d2d45cc9 | [] | 2018-04-19 09:44:24.988000+00:00 | ['Equal Rights', 'Employee Engagement', 'Equality', 'Business Strategy', 'Startup'] |