| title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|
Machine Learning Up-To-Date — Life With Data | There have been breakthroughs in understanding COVID-19, such as how soon an exposed person will develop symptoms and how many people on average will contract the disease after contact with an infected individual. The wider research community is actively working on accurately predicting the percentage of the population who are exposed, recovered, or have built immunity. Researchers currently build epidemiology models and simulators using available data from agencies and institutions, as well as historical data from similar diseases such as influenza, SARS, and MERS. It’s an uphill task for any model to accurately capture all the complexities of the real world. Challenges in building these models include learning the parameters that influence variations in disease spread across multiple countries or populations, being able to combine various intervention strategies (such as school closures and stay-at-home orders), and running what-if scenarios by incorporating trends from diseases similar to COVID-19. COVID-19 remains a relatively unknown disease with no historical data to predict trends.
We are now open-sourcing a toolset for researchers and data scientists to better model and understand the progression of COVID-19 in a given community over time. This toolset comprises a disease progression simulator and several machine learning (ML) models to test the impact of various interventions. First, the ML models help bootstrap the system by estimating the disease progression and comparing the outcomes to historical data. Next, you can run the simulator with the learned parameters to play out what-if scenarios for various interventions. In the following diagram, we illustrate the interactions among the extensible building blocks in the toolset.
… keep reading | https://medium.com/the-innovation/ml-utd-23-machine-learning-up-to-date-life-with-data-99300a0d3ee0 | ['Anthony Agnone'] | 2020-11-20 15:45:15.936000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Technology', 'Data Science', 'Programming'] |
Analyzing The Relationship Between 3 Science and Buddhist Teachings | 3. Cosmology and Agganna Sutta
Cosmology refers to the origin, evolution, and fate of the universe. According to modern science, the origin of the universe is described by the Big Bang theory: all the matter in the universe was concentrated into a single point with extremely high temperature and pressure, and then it exploded (Gorbunov and Rubakov 2011).
According to Lopez (2008), most of the Buddha’s teachings related to cosmology can be found in the Agganna Sutta. The Buddha did not refer to the origin of the universe, and there is no reason to suppose that the world had a ‘beginning’, because the idea that everything must have a ‘beginning’ is a concept created by humans (Lopez 2008). The similarity between Science and Buddhist teachings lies in what the Buddha claimed about the evolution of the universe.
According to Venerable Ajahn Brahmali (2015), in the Agganna Sutta the Buddha mentions ‘world systems’ composed of the Earth, sun, and moon. These ‘world systems’ are what we now know as solar systems, which consist of a sun, planets, and their moons. The Buddha went on to say that there is not just one but millions or even billions of ‘world systems’ in the universe. This is consistent with NASA’s (National Aeronautics and Space Administration) estimated number of solar systems in our galaxy alone (the Milky Way) — 3 billion (NASA 2003).
Perhaps a more noteworthy claim made by the Buddha in the Agganna Sutta is the existence of ‘beings’ in other solar systems. The term ‘being’ is open to interpretation, since there is no word for aliens in Buddhist teachings. When the Buddha said ‘beings’, he could have referred to humans (like us) in other solar systems, or perhaps to beings who are superior or inferior to us.
Regardless of how one defines ‘beings’, in 1961 the scientist Frank Drake proposed an equation (Drake’s equation; refer to Fig. 3) that estimates the number of ‘civilizations’ per solar system. By entering values into Drake’s equation, he arrived at an answer of approximately 1.13 — around 1 civilization per solar system. This value supports the Buddha’s claim about the existence of ‘beings’ in other solar systems.
Drake’s equation. Via: Wikicommons
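As a rough illustration, Drake's equation is simply the product of its factors. The sketch below evaluates it in Python; the parameter values are purely illustrative assumptions (not the estimates Drake himself used), chosen to land near one civilization:

```python
# Drake's equation: N = R* * fp * ne * fl * fi * fc * L
# The parameter values used below are illustrative assumptions only.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate the number of detectable civilizations."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Example: 1 star formed per year, half with planets, 2 habitable
# planets each, and optimistic fractions for life and intelligence.
n = drake(r_star=1.0, f_p=0.5, n_e=2.0, f_l=1.0, f_i=0.1, f_c=0.1, lifetime=100)
print(round(n, 6))  # 1.0
```

With different (equally debatable) inputs the result swings by orders of magnitude, which is why the equation is best read as a framework for discussion rather than a precise prediction.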
Conclusion
On the whole, both Buddhism and Science have a common goal in that both of them use knowledge to enrich people and society. This can be thought of as an apple tree, where Buddhism and Science are separate branches of the same tree with the same roots.
A common goal is not to say that Buddhism and Science are similar, because the ultimate aim of Buddhism is to attain Nirvana (being released from the cycle of rebirth), and it places more emphasis on the inner world and the mind of the individual. Conversely, science’s main goal is to explain the physical world with the highest accuracy possible, and it places more emphasis on the material world (macroscopic and microscopic).
Science is constantly changing, with new theories being proposed and old theories getting debunked. The relationship between science and Buddhism could become more intimate or more distant depending on the direction science heads in the future.
Subscribe to my newsletter to receive articles about science, healthcare, technology, and happiness! | https://medium.com/predict/analyzing-the-relationship-between-3-science-and-buddhist-teachings-9d0852c534e0 | ['Eshan Samaranayake'] | 2020-12-21 19:52:18.466000+00:00 | ['Religion', 'History', 'Philosophy', 'Science', 'Culture'] |
Evaluation Metrics in Machine Learning Models using Python | We will try to evaluate our machine learning models on different error metrics. We need to remember while evaluating models that it should be immune to class imbalance if our data set is a classical example to imbalance data set. We will deal with typical imbalance dataset examples in upcoming blogs.
A few popular evaluation metrics are listed below.
For Classification Problem:
1. Confusion Matrix
2. Precision / Recall
3. F1 Score
4. Area Under the ROC curve (AUC — ROC)
5. Cohen’s Kappa
For Regression Problem:
1. Root Mean Squared Error(RMSE)
2. R-Squared/Adjusted R-Squared
Let’s try to understand them sequentially.
Confusion Matrix:
This matrix represents the accuracy of the model. A confusion matrix is an N x N matrix, where N is the number of classes being predicted. Let’s take the example of a two-class problem for a credit card company that wants to detect fraud using an algorithm you have built. The two possible classes would be Fraud and Not Fraud.
Fig: Confusion Matrix
Here,
True Positive (TP)= 1 i.e. actual Fraud and also predicted Fraud
False Positive(FP) = 1 i.e. actual Not Fraud but predicted Fraud
False Negative(FN) = 2 i.e. actual Fraud but predicted Not Fraud
True Negative(TN) = 996 i.e. actual Not Fraud and predicted Not Fraud
Accuracy of the model is given by
accuracy = (TP+TN) / (TP+TN+FP+FN)
In the above matrix, we have calculated the proportion of true positives and true negatives among all evaluated cases.
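The accuracy computation can be reproduced directly from the four counts in the example matrix (a minimal sketch):

```python
# Confusion-matrix counts from the fraud example above.
TP, FP, FN, TN = 1, 1, 2, 996

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.997
```

Note how misleading accuracy is here: the model misses 2 of the 3 actual frauds, yet accuracy is 99.7% because the Not Fraud class dominates. This is exactly why imbalanced datasets need the metrics that follow.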
Precision / Recall:
Precision: It is the ratio of correct predictions to the total number of predictions for that class. It answers the question: Of all the values predicted as belonging to the class “Fraud”, what percentage is correct?
Precision(P)=TP/ (TP+FP)
The precision in our above matrix example would be =(1)/(1 + 1)=1/2=0.5
Recall: It is the ratio of the number of correct predictions to the total number of actual instances of the class. It answers the question: Of all the actual instances of the class “Fraud”, what percentage did we predict correctly?
Recall(R)=TP/ (TP+FN)
The recall in our above matrix example is = (1)/(1+2) = 1/3 ≈ 0.33
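A quick sketch of both calculations, using the counts from the example matrix:

```python
# Counts from the fraud confusion matrix above.
TP, FP, FN = 1, 1, 2

precision = TP / (TP + FP)  # of everything flagged as Fraud, how much was right
recall = TP / (TP + FN)     # of all actual Fraud, how much did we catch
print(round(precision, 2), round(recall, 2))  # 0.5 0.33
```

Here the model catches only a third of the actual frauds, which is the number a fraud team would care about most.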
Suppose two models, Model 1 and Model 2, are implemented on the credit fraud dataset, where 0 indicates Not Fraud and 1 indicates Fraud. The following are the observations:
Model 1 is able to correctly predict 0 [Not Fraud] better than Model 2. Thereby Model 1 has fewer False Positives and therefore higher Precision.
Model 2 is able to correctly predict 1 [Fraud] better than Model 1. Thereby Model 2 has fewer False Negatives and therefore higher Recall.
This is called the Precision-Recall Tradeoff. It completely depends on the business requirement whether to choose precision over recall or vice versa. In this case, we should be more concerned about recall: if a fraud goes undetected, it costs the business money, while the impact of predicting Not Fraud as Fraud can be minimized by extra monitoring or manual verification.
Examples where the precision metric is more important are recommendation systems (YouTube playlist recommendations, TEDx talk recommendations), stock prediction, house price prediction, etc.
Examples where the recall metric is more important are disease prediction, terrorist detection, fraud cases, and loan defaulters.
F1 Score:
What if we wish to have both good precisions and recall in a model?
In such cases, we use the F1 score. It is nothing but the harmonic mean of precision and recall.
F1 score= 2PR / (P+R)
The F1 score in our above matrix example is:
F1 score = (2 * 0.5 * 0.33) / (0.5 + 0.33) = 0.33 / 0.83 ≈ 0.40
It is more useful when you have uneven class distribution.
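Continuing the running example, the F1 score follows directly from the precision and recall computed above:

```python
# Precision and recall from the fraud example above.
P, R = 0.5, 1 / 3

f1 = 2 * P * R / (P + R)  # harmonic mean of precision and recall
print(round(f1, 2))  # 0.4
```

Because the harmonic mean is dragged toward the smaller value, a model cannot score a high F1 by being good at only one of precision or recall.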
Area Under the ROC curve(AUC — ROC):
The ROC curve is the plot of sensitivity, also called the True Positive Rate, against (1 - specificity), also called the False Positive Rate.
True Positive Rate(TPR) = TP / (TP+FN)
False Positive Rate(FPR) = FP / (FP+TN)
Fig: Confusion Matrix
Fig: Roc curve
The greater the area under the curve, the better the model. The random line represents a random prediction by a model, which has an AUC of 0.5 and is considered the worst case. So our curve should be above that random-model line for the model to be good.
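One common way to compute the AUC, equivalent to the area under the ROC curve, is as the fraction of (positive, negative) pairs that the model ranks correctly (ties counting half). A minimal sketch on toy scores; the labels and scores below are made up for illustration:

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.75 means a randomly chosen positive example outscores a randomly chosen negative example 75% of the time, comfortably above the 0.5 random baseline.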
The Python implementation for the classification metrics above can be explored here: Github_link_for_classification_matrix
Cohen’s Kappa:
It is a metric that compares an Observed Accuracy with an Expected Accuracy (random chance). Let’s try to understand it with the help of a matrix.
Fig: Cohen’s kappa Matrix
Kappa is defined as K = (Po - Pe) / (1 - Pe), where:
Po = Observed Accuracy
Pe = Expected Accuracy
Po is simply the proportion of instances that were classified correctly. From the above matrix, we can see that ‘a’ and ‘d’ were classified correctly.
Po = (a+d) / (a+b+c+d)
Pe is the probability of agreement by random chance:
Pe = P(yes) + P(no)
= (proportion the classifier classifies as yes) * (proportion of actual yes) + (proportion the classifier classifies as no) * (proportion of actual no)
Pe = [(a+b)/(a+b+c+d)] * [(a+c)/(a+b+c+d)] + [(c+d)/(a+b+c+d)] * [(b+d)/(a+b+c+d)]
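Putting the formulas together, kappa can be computed from the four cell counts. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical 2x2 matrix counts; a and d are the correct classifications.
a, b, c, d = 10, 5, 3, 12
n = a + b + c + d

po = (a + d) / n  # observed accuracy
pe = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)  # chance
kappa = (po - pe) / (1 - pe)
print(round(kappa, 4))  # 0.4667
```

A kappa of about 0.47 says the classifier does meaningfully better than chance agreement, though far from the perfect score of 1.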
If the observed accuracy is better than the expected accuracy, then the model is said to be good.
Root Mean Squared Error (RMSE):
It is used in regression problems. It assumes that the error is unbiased and follows a normal distribution. It is given by the following formula.
where N is the total number of observations.
fig: RMSE
Note:
1) With more samples, reconstructing the error distribution using RMSE is considered more reliable.
2) It is highly affected by outlier values.
3) Compared to mean absolute error, RMSE gives higher weight to, and thus punishes, large errors.
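Note 3 can be seen numerically: on the same residuals, RMSE exceeds MAE whenever a large error is present. A small sketch with made-up values, where one prediction is far off:

```python
import math

actual    = [3.0, 5.0, 7.0, 9.0]
predicted = [4.0, 4.0, 8.0, 19.0]  # the last prediction is an outlier error

errors = [a - p for a, p in zip(actual, predicted)]
mae  = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(round(mae, 2), round(rmse, 2))  # 3.25 5.07
```

Squaring before averaging makes the single error of 10 dominate the RMSE, while the MAE treats it linearly; this is the "punishes large errors" behavior in action.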
R-Squared / Adjusted R-Squared:
In AUC — ROC we learned that our model should be better than accuracy by random chance, so in classification problems we have a benchmark against which to compare our model. In regression, when RMSE decreases, the model's performance improves, but we still do not have a benchmark to compare it to. This is why we need R-Squared / Adjusted R-Squared. It measures the goodness of fit of a straight line in the regression model.
R-squared is always between 0 and 100%.
Let’s understand a little more terminology to better understand R-Squared.
Error Sum of Squares (SSE): It is nothing but the sum of squared residuals.
Fig: SSE
yi : Actual Values
y^i: Predicted value
Total Sum of Squares (SST): It is nothing but the Squared difference between the Actual Values (yi) and the Mean of our dataset (yˉi).
Fig: SST
Regression Sum of Squares ( SS(Regression) ): It is the squared difference between the Predicted values (y^i) and the Mean (yˉi).
Fig: SS(regression)
We can observe the following,
SS(Total)=SS(Regression) + SSE
R-squared is now defined as,
Fig: R-Squared Formula
The higher the value of R-squared, the better the model. A perfect model with all correct predictions would give an R-Squared of 1. The problem with R-squared is that it can be made artificially high by adding more independent variables (features), even though they might be irrelevant. On adding new features to the model, the R-Squared value either increases or remains the same; R-Squared does not penalize features that add no value to the model. So an improved version of R-Squared is the Adjusted R-Squared.
Fig: Adjusted R-Squared Formula
where,
n = Number of data points
p = Number of features/independent variables
Note: R-squared tells you how well your model fits the data points, whereas Adjusted R-squared tells you how important a particular feature is to your model.
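The formulas above can be combined into a short computation. The y values below are made-up toy data, assuming a single-feature model:

```python
y     = [3.0, 5.0, 7.0, 9.0]  # actual values
y_hat = [2.8, 5.2, 7.1, 8.9]  # predicted values
p = 1                         # number of features in the model
n = len(y)
mean_y = sum(y) / n

sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # error sum of squares
sst = sum((yi - mean_y) ** 2 for yi in y)               # total sum of squares
r2 = 1 - sse / sst
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(r2, 4), round(adj_r2, 4))  # 0.995 0.9925
```

Adjusted R-squared is slightly below R-squared here; the gap widens as you add features that do not reduce SSE, which is exactly the penalty plain R-squared lacks.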
The Python implementation for the linear regression metrics above can be explored here: github_link_for_regression
So, whenever you build a model, this article should help you figure out how these evaluation metrics work and how well your model has performed.
I hope this blog was useful to you. Please leave comments or send me an email if you think I missed any important details or if you have any other questions or feedback about this topic. | https://medium.com/analytics-vidhya/evaluation-metrics-in-machine-learning-models-using-python-fb6199450fba | ['Manoj Singh'] | 2020-02-19 08:18:28.447000+00:00 | ['Machine Learning', 'Statistics', 'Confusion Matrix', 'Python', 'Data Science'] |
Why Artificial Stupidity Could Be Crucial To AI Self-Driving Cars | Dr. Lance Eliot, AI Insider
[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
We all generally seem to know what it means to say that someone is intelligent.
In contrast, when you label someone as “stupid,” the question arises as to what exactly that means. For example, does stupidity imply the lack of intelligence in a zero-sum fashion, or does stupidity occupy its own space and sit adjacent to intelligence as a parallel equal?
Let’s do a thought experiment on this weighty matter.
Suppose we somehow had a bucket filled with intelligence. We are going to pretend that intelligence is akin to something tangible and that we can essentially pour it into and possibly out of a bucket that we happen to have handy. Upon pouring this bucket filled with intelligence onto say the floor, what do you have left?
One answer is that the bucket is now entirely empty and there is nothing left inside the bucket at all. The bucket has become vacuous and contains absolutely nothing. Another answer is that the bucket upon being emptied of intelligence has a leftover that consists of stupidity. In other words, once you’ve removed so-called intelligence, the thing that you have remaining is stupidity.
I realize this is a seemingly esoteric discussion but, in a moment, you’ll see that the point being made has a rather significant ramification for many important things, including and particularly for the development and rise of Artificial Intelligence (AI).
Can intelligence exist without stupidity, or in a practical sense is there always some amount of stupidity that must exist if intelligence exists?
Some assert that intelligence and stupidity are Zen-like yin and yang. In this perspective, you cannot grasp the nature of intelligence unless you also have a semblance of stupidity as a kind of measuring stick.
It is said that humans become increasingly intelligent over time, and thus are reducing their levels of stupidity. You might suggest that intelligence and stupidity are playing a zero-sum game, namely that as your intelligence rises you are simultaneously reducing your level of stupidity (similarly, if your stupidity rises, this implies that your intelligence lowers).
Can humans arrive at a 100% intelligence and a zero amount of stupidity, or are we fated to always have some amount of stupidity, no matter how hard we might try to become fully intelligent?
Returning to the bucket metaphor, some would claim that there will never be the case that you are completely and exclusively intelligent and have expunged stupidity. There will always be some amount of stupidity that’s sitting in that bucket.
If you are clever and try hard, you might be able to narrow down how much stupidity you have, though there is still some amount of stupidity in that bucket.
Does having stupidity help intelligence or is it harmful to intelligence?
You might be tempted to assume that any amount of stupidity is a bad thing and that therefore we must always strive to keep it caged or otherwise avoid its appearance. But we need to ask whether that simplistic view, tossing stupidity into the "bad" category and placing intelligence into the "good" category, is missing something more complex. You could argue that being stupid, at times and in limited ways, offers a means for intelligence to get even better.
When you were a child, suppose you stupidly tripped over your own feet, and after doing so, you came to the realization that you were not carefully lifting your feet. Henceforth, you became more mindful of how to walk and thus became intelligent at the act of walking. Maybe later in life, while walking on a thin curb, you managed to save yourself from falling off the edge of the curb, partially due to the earlier in life lesson that was sparked by stupidity and became part of your intelligence.
Of course, stupidity can also get us into trouble.
Despite having learned via stupidity to be careful as you walk, one day you decide to strut on the edge of the Grand Canyon. While doing so, oops, you fall off and plunge into the chasm.
Was it an intelligent act to perch yourself on the edge like that? Apparently not.
As such, we might want to note that stupidity can be a friend or a foe, and it is up to the intelligence portion to figure out which is which in any given circumstance and any given moment.
You might envision that there is an eternal struggle going on between the intelligence side and the stupidity side.
On the other hand, you might equally envision that the intelligence side and stupidity side are pals, each of which tugs at the other, and therefore it is not especially a fight as it is a delicate dance and form of tension about which should prevail (at times) and how they can each moderate or even aid the other.
This preamble provides a foundation to discuss something increasingly becoming worthy of attention, namely the role of Artificial Intelligence and (surprisingly) the role of Artificial Stupidity.
Thinking Seriously About Artificial Stupidity
We hear every day about how our lives are being changed via the advent of Artificial Intelligence.
AI is being infused into our smartphones, and into our refrigerators, and into our cars, and so on.
If we are intending to place AI into the things we use, it begs the question as to whether we need to consider the yang of the yin, specifically do we need to be cognizant of Artificial Stupidity?
Most people snicker upon hearing or seeing the phrase "Artificial Stupidity," and they assume it must be some kind of insider joke to refer to such a thing.
Admittedly, the conjoining of the words artificial and stupidity seems, well, perhaps stupid in of itself.
But, by going back to the earlier discussion about the role of intelligence and the role of stupidity as it exists in humans, you can recast your viewpoint and likely see that whenever you carry on a discussion about intelligence, one way or another you inevitably need to also be considering the role of stupidity.
Some suggest that we ought to use another way of expressing Artificial Stupidity to lessen the amount of snickering that happens. Floated phrases include Artificial Unintelligence, Artificial Humanity, Artificial Dumbness, and others, none of which have caught hold as yet.
Please bear with me and accept the phrasing of Artificial Stupidity and also go along with the belief that it isn’t stupid to be discussing Artificial Stupidity.
Indeed, you could make the case that the act of not discussing Artificial Stupidity is the stupid approach, since you are unwilling or unaccepting of the realization that stupidity exists in the real world, and therefore in the artificial world of computer systems, in which we are attempting to recreate intelligence, you would be ignoring or blind to what is essentially the other half of the overall equation.
In short, some say that true Artificial Intelligence requires a combination of the “smart” or good AI that we think of today and the inclusion of Artificial Stupidity (warts and all), though the inclusion must be done in a smart way.
Indeed, let’s deal with the immediate knee jerk reaction that many have of this notion by dispelling the argument that by including Artificial Stupidity into Artificial Intelligence you are inherently and irrevocably introducing stupidity and presumably, therefore, aiming to make AI stupid.
Sure, if you stupidly add stupidity, you have a solid chance of undermining the AI and rendering it stupid.
On the other hand, in recognition of how humans operate, the inclusion of stupidity, when done thoughtfully, could ultimately aid the AI (think about the story of tripping over your own feet as a child).
Here’s something that might really get your goat.
Perhaps the only means to achieve true and full AI, which is not anywhere near to human intelligence levels to-date, consists of infusing Artificial Stupidity into AI; thus, as long as we keep Artificial Stupidity at arm’s length or as a pariah, we trap ourselves into never reaching the nirvana of utter and complete AI that is able to seemingly be as intelligent as humans are.
Ouch, by excluding Artificial Stupidity from our thinking, we might be damning ourselves to never arriving at the pinnacle of AI.
That’s a punch to the gut and so counterintuitive that it often stops people in their tracks.
There are emerging signs that the significance of revealing and harnessing artificial stupidity (or whatever it ought to be called), can be quite useful.
One such area, I assert, involves the inclusion of artificial stupidity into the advent of true self-driving driverless autonomous cars.
Shocking?
Maybe so.
Let’s unpack the matter.
Exploiting Artificial Stupidity For Gain
When referring to true self-driving cars, I’m focusing on Level 4 and Level 5 of the standard scale used to gauge autonomous cars. These are self-driving cars that have an AI system doing the driving and there is no need and typically no provision for a human driver.
The AI does all the driving and any and all occupants are considered passengers.
On the topic of Artificial Stupidity, it is worthwhile to quickly review the history of how the terminology came about.
In the 1950s, the famous mathematician and pioneering computer scientist Alan Turing proposed what has become known as the Turing test for AI.
Simply stated, if you were presented with a situation whereby you could interact with a computer system imbued with AI, and at the same time separately interact with a human too, and you weren’t told beforehand which was which (let’s assume they are both hidden from view), upon your making inquiries of each, you are tasked with deciding which one is the AI and which one is the human.
We could then declare the AI a winner as exhibiting intelligence if you could not distinguish between the two contestants. In that sense, the AI is indistinguishable from the human contestant and must ergo be considered equal in intelligent interaction.
There is a twist to the original Turing test that many don’t know about.
One qualm expressed was that you might be smarmy and ask the two contestants to calculate say pi to the thousandth digit.
Presumably, the AI would do so wonderfully and readily tell you the answer in the blink of an eye, doing so precisely and abundantly correctly. Meanwhile, the human would struggle to do so, taking quite a while to answer if using paper and pencil to make the laborious calculation, and ultimately would be likely to introduce errors into the answer.
Turing realized this aspect and acknowledged that the AI could be essentially unmasked by asking such arithmetic questions.
He then took the added step, one that some believe opened a Pandora’s box, and suggested that the AI ought to avoid giving the right answers to arithmetic problems.
In short, the AI could try to fool the inquirer by appearing to answer as a human might, including incorporating errors into the answers given and perhaps taking the same length of time that doing the calculations by hand would take.
Starting in the early 1990s, a competition was launched that is akin to the Turing test, offering a modest cash prize and has become known as the Loebner Prize, and in this competition, the AI systems are typically infused with human-like errors to aid in fooling the inquirers into believing the AI is the human. There is controversy underlying this, but I won’t go into that herein. A now-classic article appeared in 1991 in The Economist about the competition.
Notice that once again we have a bit of irony that the introduction of stupidity is being done to essentially portray that something is intelligent.
This brief history lesson provides a handy launching pad for the next elements of this discussion.
Let’s boil down the topic of Artificial Stupidity into two main facets or definitions:
1) Artificial Stupidity is the purposeful incorporation of human-like stupidity into an AI system, doing so to make the AI seem more human-like, and being done not to improve the AI per se but instead to shape the perception of humans about the AI as being seemingly intelligent.
2) Artificial Stupidity is an acknowledgment of the myriad of human foibles and the potential inclusion of such “stupidity” into or alongside the AI in a conjoined manner that can potentially improve the AI when properly managed.
One common misnomer that I’d like to dispel about the first part of the definition involves a somewhat false assumption that the computer potentially is going to purposefully miscalculate something.
There are some that shriek in horror and disdain that there might be a suggestion that the computer would intentionally seek to incorrectly do a calculation, such as figuring out pi but doing so in a manner that is inaccurate.
That’s not what the definition necessarily implies.
It could be that the computer might correctly calculate pi to the thousandth digit, and then opt to tweak some of the digits, which it would say keep track of, and do this in a blink of the eye, and then wait to display the result after an equivalent of the human-by-hand amount of time.
In that manner, the computer has the correct answer internally and has only displayed something that seems to have errors.
Now, that certainly could be bad for the humans that are relying upon what the computer has reported, but note that this is decidedly not the same as though the computer has in fact miscalculated the number.
There’s more that can be said about such nuances, but for now, let’s continue forward.
Both of those variants of Artificial Stupidity can be applied to true self-driving cars.
Doing so carries a certain amount of angst and will be worthwhile to consider.
Artificial Stupidity And True Self-Driving Cars
Today’s self-driving cars that are being tried out on our public roadways have already gotten a reputation for their driving prowess. Overall, driverless cars to-date are akin to a novice teenage driver that is timid and somewhat hesitant about the driving task.
When you encounter a self-driving car, it will often try to create a large buffer zone between it and the car ahead, attempting to abide by the car lengths rule-of-thumb that you were taught when first learning to drive.
Human drivers generally don’t care about the car lengths safety zone and edge up on other cars, doing so to their own endangerment.
Here’s another example of driving practices.
Upon reaching a stop sign, a driverless car will usually come to a full and complete stop. It will wait to see that the coast is clear, and then cautiously proceed. I don’t know about you, but I can say that where I drive, nobody makes complete stops anymore at stop signs. A rolling stop is the norm nowadays.
You could assert that humans are driving in a reckless and somewhat stupid manner. By not having enough car lengths between your car and the car ahead, you are increasing your chances of a rear-end crash. By not fully stopping at a stop sign, you are increasing your risks of colliding with another car or a pedestrian.
In a Turing test manner, you could stand on the sidewalk and watch cars going past you, and by their driving behavior alone you could likely ascertain which are the self-driving cars and which are the human-driven cars.
Does that sound familiar?
It should, since this is roughly the same as the arithmetic precision issue earlier raised.
How to solve this?
One approach would be to introduce Artificial Stupidity as defined above.
First, you could have the on-board AI purposely shorten the car’s length buffer to appear as though it is driving in the same manner as humans. Likewise, the AI could be modified to roll through stop signs. This is all rather easily arranged.
Humans watching a driverless car and a human-driven car would no longer be able to discern one such car from the other since they both would be driving in the same error-laden way.
That seems to solve one problem as it relates to the perception that we humans might have about whether the AI of self-driving cars is intelligent or not.
But, wait for a second, aren’t we then making the AI into a riskier driver?
Do we want to replicate and promulgate these crash-causing, risky human driving behaviors?
Sensibly, no.
Thus, we ought to move to the second definitional portion of Artificial Stupidity, namely by incorporating these “stupid” ways of driving into the AI system in a substantive way that allows the AI to leverage those aspects when applicable and yet also be aware enough to avoid them or mitigate them when needed.
Rather than having the AI drive in human error-laden ways and do so blindly, the AI should be developed so that it is well-equipped enough to cope with human driving foibles, detecting those foibles and being a proper defensive driver, along with leveraging those foibles when the circumstances make sense to do so (for more on this, see my posting here).
Conclusion
One of the most unspoken secrets about today’s AI is that it does not have any semblance of common-sense reasoning and in no manner whatsoever has the capabilities of overall human reasoning (many refer to such AI as Artificial General Intelligence or AGI).
As such, some would suggest that today’s AI is closer to the Artificial Stupidity side of things than it is to the true Artificial Intelligence side of things.
If there is a duality of intelligence and stupidity in humans, presumably you will need a similar duality in an AI system if it is to be able to exhibit human intelligence (though, some say that AI might not have to be so duplicative).
On our roads today, we are unleashing so-called AI self-driving cars, yet the AI is not sentient and not anywhere close to being sentient.
Will self-driving cars only be successful if they can climb further up the intelligence ladder?
No one yet knows, and it’s certainly not a stupid question to be asked.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot
For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/
For his AI Trends blog, see: www.aitrends.com/ai-insider/
For his Medium blog, see: https://medium.com/@lance.eliot
For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
Copyright © 2020 Dr. Lance B. Eliot | https://lance-eliot.medium.com/why-artificial-stupidity-could-be-crucial-to-ai-self-driving-cars-2b4955f92321 | ['Lance Eliot'] | 2020-05-22 17:28:14.173000+00:00 | ['Autonomous Vehicles', 'Driverless Cars', 'Artificial Intelligence', 'Autonomous Cars', 'Self Driving Cars'] |
Inception Network and Its Derivatives | In a traditional image classification model, each layer extracts information from the previous layers in order to get useful information. However, each layer type extracts different kinds of information. The output of a 5x5 convolutional kernel tells us something different from the output of a 3x3 convolutional kernel, which tells us something different from the output of a max-pooling kernel, and so on and so on. But, how can you be sure whether the transformation is giving useful information or not?
We mostly try deeper network models. But what if we tried a wider network instead?
Let’s dive in to discover the answers.
The answer is the Inception network. The Inception network has played an important role in the world of ML, and it is heavily engineered to deliver both speed and accuracy.
In traditional neural networks, working only with the output of the previous layer can cause a loss of useful information. Inception models learn from the same input by computing multiple different transformations in parallel and concatenating them into a single output. Simply put, Inception applies 1✕1, 3✕3, and 5✕5 convolutions along with pooling layers, then stacks all the transformation outputs and lets the model choose how to use that information.
Before diving into the types of inception, let’s gather some knowledge about 1✕1 convolution.
1✕1 convolution is simply used to reduce the computation to an extent. Let’s see an example for better understanding.
Assume you need to compute a 3✕3 convolution operation with and without using 1✕1 convolution.
Without 1✕1 convolution, the total number of operations involved is (3✕3✕300)✕(14✕14✕30) = 15,876,000.
Using 1✕1 convolution, the total number of operations involved is (14✕14✕10)✕(1✕1✕300) + (14✕14✕30)✕(3✕3✕10) = 1,117,200.
By using 1✕1 convolution, roughly 14.8 million operations are saved, about a 14✕ reduction. Thus, 1×1 convolution helps reduce model size, which in turn can help reduce the overfitting problem.
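The arithmetic above can be checked with a short script. This is a minimal sketch; the layer sizes (a 14×14×300 input mapped to 14×14×30, with a hypothetical 10-channel bottleneck) are taken from the example above:

```python
# Cost of a 3x3 convolution mapping a 14x14x300 volume to 14x14x30,
# with and without a 1x1 "bottleneck" that first reduces depth to 10.
def conv_cost(out_h, out_w, out_c, k_h, k_w, in_c):
    # multiplications = number of output positions x kernel volume
    return (out_h * out_w * out_c) * (k_h * k_w * in_c)

direct = conv_cost(14, 14, 30, 3, 3, 300)
bottleneck = conv_cost(14, 14, 10, 1, 1, 300) + conv_cost(14, 14, 30, 3, 3, 10)

print(direct)      # 15876000
print(bottleneck)  # 1117200
print(round(direct / bottleneck, 1))  # 14.2
```

The ratio confirms the roughly 14✕ saving claimed above.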
Another term I have used is the Inception module, so let's see what the Inception module really is.
Inception Module:
Inception modules are used in Convolutional Neural Networks to allow for more efficient computation and deeper networks through dimensionality reduction with stacked 1×1 convolutions. The modules were designed to solve the problem of computational expense, as well as overfitting, among other issues. In the Inception module, 1×1, 3×3, and 5×5 convolutions and 3×3 max-pooling are performed in parallel on the input, and their outputs are stacked together to generate the final output. The idea is that convolution filters of different sizes will handle objects at multiple scales better.
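At the shape level, the module is just parallel branches whose outputs share the same spatial size and are stacked along the channel axis. A rough numpy sketch follows; the branch depths are illustrative placeholders, not GoogLeNet's actual filter counts:

```python
import numpy as np

h, w = 28, 28  # spatial size, preserved by "same" padding in every branch
# Illustrative per-branch output depths for the 1x1, 3x3, 5x5 conv and pooling paths
branch_depths = {"conv1x1": 64, "conv3x3": 128, "conv5x5": 32, "pool_proj": 32}

# Stand-in activations: each branch yields an (h, w, depth) feature map
branches = [np.zeros((h, w, d)) for d in branch_depths.values()]

# The Inception module concatenates the branch outputs along the channel axis
output = np.concatenate(branches, axis=-1)
print(output.shape)  # (28, 28, 256) -- the branch depths simply add up
```

The model's later layers then learn how much weight to give each branch's channels.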
Inception Network consists of multiple versions.
Inception v1
Using this Inception module with dimensionality reduction, a neural network is architected. The simplest network built this way is known as Inception v1, or GoogLeNet. The architecture is shown below.
GoogLeNet has 9 such Inception modules stacked linearly. It is 22 layers deep (27, including the pooling layers). It uses global average pooling at the end of the last Inception module. Because it loses less information, its classification ability is very good, and it helps mitigate the vanishing-gradient problem. GoogLeNet, or Inception v1, was the winner of ILSVRC (ImageNet Large Scale Visual Recognition Competition) 2014, an image classification competition.
Thus, the Inception network emerged victorious over previous CNN models. It achieved a top-5 error rate of 6.67% on ImageNet, and it reduced the computational cost to a great extent without compromising speed or accuracy.
Problems in Inception v1: Inception v1 uses 5✕5 convolutions, which are costly; complexity decreases when bigger convolutions like 5×5 are replaced with 3×3 ones. We can go further in terms of factorization, i.e., we can divide a 3×3 convolution into an asymmetric convolution of 1×3 followed by a 3×1 convolution. This is equivalent to a two-layer network with the same receptive field as a 3×3 convolution, but 33% cheaper. This factorization does not work well for early layers when input dimensions are big, but only when the input size is m×m, where m is between 12 and 20.
Inception v2: In Inception v2, two 3⨯3 convolutions are used in place of a 5⨯5, which boosts performance. This also decreases computational time and thus increases computational speed, because a 5×5 convolution is 2.78 times more expensive than a 3×3 convolution. So, using two 3×3 layers instead of one 5×5 increases the performance of the architecture.
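The cost figures in the last two sections follow directly from kernel sizes; here is a quick check of the weights per input-channel/output-channel pair for each kernel choice:

```python
# Weights per (input-channel, output-channel) pair for each kernel choice
k5 = 5 * 5                # one 5x5 conv
k3_stacked = 2 * (3 * 3)  # two stacked 3x3 convs: same receptive field as a 5x5
k3 = 3 * 3                # one 3x3 conv
k_asym = 1 * 3 + 3 * 1    # asymmetric 1x3 followed by 3x1

print(k5, k3_stacked)             # 25 18 -> two 3x3 are ~28% cheaper
print(k3, k_asym)                 # 9 6   -> asymmetric pair is 33% cheaper
print(round(k5 / k3, 2))          # 2.78  -> the 2.78x figure above
print(round(1 - k_asym / k3, 2))  # 0.33  -> the 33% saving above
```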
figure1
The architecture also factorizes n×n convolutions into 1×n and n×1 convolutions, as shown below.
figure2
To deal with the problem of the representational bottleneck, the filter banks of the module were made wider instead of deeper. This prevents the loss of information that occurs when the module is made deeper.
figure3
Using the three ideas above, the Inception v1 architecture is updated into Inception v2. Below are the layer-by-layer details of Inception v2:
Notice that in the above architecture, figures 5, 6, and 7 refer to figures 1, 2, and 3 in this article.
Inception v3: Inception v3 is almost identical to Inception v2, except for some updates, listed below:
Use of RMSprop optimizer.
Batch Normalization in the fully connected layer of the Auxiliary classifier.
Use of 7×7 factorized Convolution.
Label Smoothing Regularization (a method to regularize the classifier by estimating the effect of label dropout during training). It prevents the classifier from predicting any class too confidently. The addition of label smoothing gives a 0.2% improvement in the error rate.
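Label smoothing itself is a one-liner on the one-hot targets. A minimal numpy sketch follows; ε = 0.1 is a commonly used value, and the four-class target here is just an illustration:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Replace hard 0/1 targets with (eps/K, ..., 1 - eps + eps/K)."""
    k = one_hot.shape[-1]                  # number of classes K
    return one_hot * (1.0 - eps) + eps / k

y = np.array([0.0, 0.0, 1.0, 0.0])         # hard one-hot target, K = 4 classes
print(smooth_labels(y))                     # [0.025 0.025 0.925 0.025]
```

The smoothed target still sums to 1, so it remains a valid probability distribution for the cross-entropy loss.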
Inception V4:
The main motivation behind creating Inception v4 was to reduce the complexity of Inception v3 and make the module more uniform.
This is a pure Inception variant without any residual connections. It can be trained without partitioning the replicas, with memory optimization to backpropagation.
Inception-ResNet v1 and v2: Inspired by the success of ResNet, a combination of inception and the residual module was proposed. There are two models in this combination: Inception ResNet v1 and v2.
Using the Inception module in the above architecture decreases the computational cost. Each Inception block is followed by a 1×1 convolution without activation, called filter expansion. This is done to scale up the dimensionality of the filter bank to match the depth of the input to the next layer. In Inception-ResNet models, batch normalization is not used after summations. This is done to reduce the model size and make it trainable on a single GPU. Both Inception-ResNet architectures have the same Reduction Blocks but different stem structures. They also differ in the hyperparameters used for training. It is found that Inception-ResNet v1 has a computational cost similar to that of Inception v3, and Inception-ResNet v2 has a computational cost similar to that of Inception v4. The overall structure of Inception-ResNet v2 is shown below:
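The "filter expansion" step exists purely to make the residual addition type-check: the Inception branch usually produces fewer channels than the block's input, so a 1×1 projection widens it back before the sum. A shape-level numpy sketch (the sizes 256 and 96 are illustrative, and plain matrix multiplication stands in for the 1×1 convolution, which is exactly a per-pixel linear map):

```python
import numpy as np

x = np.random.rand(28, 28, 256)       # block input
branch = np.random.rand(28, 28, 96)   # Inception branch output: fewer channels

# 1x1 "filter expansion": per-pixel linear map from 96 -> 256 channels
w = np.random.rand(96, 256)
expanded = branch @ w                  # matmul broadcasts over the 28x28 grid

out = x + expanded                     # residual sum now has matching shapes
print(out.shape)                       # (28, 28, 256)
```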
Results: With Single-Crop Single-Model the top-5 and top-1 error rate on the ILSVRC 2012 validation sets are below:
With 10/12-Crop Single-Model evaluation, Inception-v4 and Inception-ResNet-v2 again have the best performance, with similar results.
The top-5 and top-1 error rate of 144-crop single-model evaluation of different architectures on the ILSVRC 2012 validation sets are below:
Resources: | https://medium.com/analytics-vidhya/inception-network-and-its-derivatives-e31b14388bf9 | ['Ritacheta Das'] | 2020-09-30 12:38:09.953000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence'] |
Eighth Law of the Interface: Interfaces are subject to the laws of complexity | Complexity
Talking about complexity is not simple at all. A system is complex when it is composed of interrelated elements that exhibit general properties not evident in the sum of the individual parts (Waldrop, 1992). The complexity can be disorganized, for example when millions of elements have random relationships with each other, or organized, when the number of relationships between elements is reduced. A system composed of gas molecules in a container is an example of disorganized complexity; organized complexity manifests itself in biological or economic systems. According to Ricard Solé “complexity surrounds us and is part of us” (2009:20). Stuart Kauffman (1995) believes that the general principles of coevolution and self-organization govern the biological (biosphere), economic (economosphere) and technological (technosphere) domains. Perhaps, as Kauffman suspects, the same law applies to all the biospheres of the cosmos.
Kauffman and other scientists from the Santa Fe Institute consider that technological evolution is based on laws similar to those that govern the biological domain. After 2.5 million years of evolution it could be said that the culture of Homo sapiens is a network of designers, users, design strategies, use tactics, skills, practices, processes, institutions, texts, hardware, software, artifacts, proposals and interaction contracts, and, above all, a network of interfaces (First and Third Law) whose complexity has nothing to envy of biological ecosystems. According to Kauffman “tissue and terracotta may evolve by deeply similar laws” (1995:194).
This migration of analytical models from biological to technological research is fundamental for understanding the interface ecosystem because, among other advantages, it definitively takes us away from any form of determinism. The complexity theory facilitates understanding phenomena such as the Cambrian explosion of technological species or the proliferation of variations in certain moments of the evolution of an interface (Fourth Law).
Technology, complexity and increasing returns
The best interfaces generate virtuous circles. Brian Arthur, a researcher from the Santa Fe Institute and author of The Nature of Technology (2009), argues that technologies improve with their use and adoption, which leads to greater use and adoption, thus generating positive feedback (increasing returns). He also affirms that new technologies are nothing else than combinations of previously existing technologies. However, this “combination principle” was not enough for Arthur, so he decided to complete his model with a second principle: technologies are recursive systems, that is, assemblies of other technologies similar to building blocks. If the content of an interface is always another interface (Third Law), Arthur complements this idea stating that “each component of technology is itself in miniature a technology”.
“Each component of technology is itself in miniature a technology”.
For Arthur the assemblages that form a technology communicate with each other and generate an evolutionary process that he defines as “combinatorial evolution”. This process, which resembles a conversation between components, never ends. The technologies, adds Arthur, are fluid: they are always in motion. As the number of actors and interfaces increases, the possibility of new emerging combinations also increases. However, not all combinations are possible or make sense (Third Law). Some combinations work; others, don’t.
According to complexity principles, we cannot know what combinations will emerge from a socio-technical network. The digital simulations that Arthur and his colleagues developed to analyze the evolution of logic circuits showed that combinations can create very complicated products. They also identified Cambrian explosions of new models after long periods without innovations and “avalanches of destruction”. Like in any other complex system, technological evolution depends on small events.
Impossible predictions
The configurations adopted by the socio-technological network and the properties that emerge cannot be explained by the individual properties of each actor. Original black and white television included color television within its possible evolutions but nobody in the 1950s could have imagined that it would be possible to watch TV on demand on a phone. Nor would anyone have thought that emails would become one of the main contents of the first Internet, or that SMS would be the killer content of the first generation of mobile phones. These developments existed in nuce, almost imperceptible, but until they connected to other actors in a new interface, they remained hidden from the observer. In a complex system the whole is much more than the sum of its parts, and what we can come to know about an interface or its actors is never enough to understand the entire ecosystem.
According to Kauffman, it is impossible to predict the evolution of the biosphere. Something similar can be said about the socio-technological network: new variables (mutations, variations, symbiosis and interactions) emerge both in the biosphere and in the technosphere without a solution for continuity.
In the socio-technological network, unpredictability increases due to overuses, misunderstandings, redesigns, negotiations, confrontations, deviant interpretations and many other amazing activities of human actors (Second Law).
In this context, the evolution of interfaces is an open and unpredictable process (Fourth Law). Kauffman argues that, in “chaotic systems we can not predict long-term behaviour” but he adds: “not predicting does not mean failing to understand or explain” (1995:17). | https://uxdesign.cc/eighth-law-of-the-interface-interfaces-are-subject-to-the-laws-of-complexity-95e1ba773edb | ['Carlos A. Scolari'] | 2020-02-01 11:23:04.896000+00:00 | ['Design', 'Interfaces', 'Complexity', 'Technology Evolution', 'Interface Design'] |
A-Z Of Exploratory Data Analysis Under 10 mins | 3. UNIVARIATE ANALYSIS
Univariate analysis, as the name says, simply means analysis using a single variable. This analysis gives the frequency/count of occurrences of the variable and lets us understand the distribution of that variable at various values.
3.1. PROBABILITY DENSITY FUNCTION (PDF) :
In PDF plot, X-axis is the feature on which analysis is done and the Y-axis is the count/frequency of occurrence of that particular X-axis value in the data. Hence the term “Density” in PDF.
import seaborn as sns
sns.set_style("whitegrid")
Seaborn is the library that provides various types of plots for analysis.
sns.FacetGrid(haberman_data,hue='surv_status',height=5).map(sns.distplot,'age').add_legend()
Output :
PDF of Age
Observations :
Major overlapping is observed, so we cannot clearly state the dependency of survival on age. A rough estimate is that patients aged 20–50 have a slightly higher rate of survival and patients aged 75–90 have a lower rate of survival. Age can be considered a dependent variable.
sns.FacetGrid(haberman_data,hue='surv_status',height=5).map(sns.distplot,'op_year').add_legend()
Output :
PDF of Operation Year
Observations:
The overlap is huge. Operation year alone is not a highly dependent variable.
sns.FacetGrid(haberman_data,hue='surv_status',height=5).map(sns.distplot,'axil_nodes').add_legend()
Output :
PDF of Axillary nodes
Observations:
Patients with 0 nodes have a high probability of survival. Axillary nodes can be used as a dependent variable.
The disadvantage of PDF: with a PDF, we can't say exactly how many data points lie within a range, below a value, or above a particular value.
3.2. CUMULATIVE DENSITY FUNCTION (CDF) :
To know the number of data points below/above a particular value, CDF is very useful.
Let’s start with segregating data according to the class of survival rate.
survival_yes = haberman_data[haberman_data['surv_status']==1]
survival_no = haberman_data[haberman_data['surv_status']==2]
Now, let us analyze these segregated data sets.
import numpy as np
import matplotlib.pyplot as plt

count, bin_edges = np.histogram(survival_no['age'], bins=10, density=True)
# count : the number of data points in each age bucket
# bin_edges : the separation values of the X-axis (the feature under analysis)
# bins : the number of buckets of separation
pdf = count/sum(count)
print(pdf)
# To get the cdf, we want cumulative values of the count. In numpy, cumsum() does a cumulative sum
cdf = np.cumsum(pdf)
print(cdf)

count2, bin_edges2 = np.histogram(survival_yes['age'], bins=10, density=True)
pdf2 = count2/sum(count2)
cdf2 = np.cumsum(pdf2)

# survival_no is the class that did not survive 5 years, so it gets label 'no';
# each curve is plotted against its own bin edges
plt.plot(bin_edges[1:], pdf, label='no')
plt.plot(bin_edges[1:], cdf, label='no')
plt.plot(bin_edges2[1:], pdf2, label='yes')
plt.plot(bin_edges2[1:], cdf2, label='yes')
plt.legend()
# adding axis labels
plt.xlabel("AGE")
plt.ylabel("FREQUENCY")
Output :
[0.03703704 0.12345679 0.19753086 0.19753086 0.13580247 0.12345679
0.09876543 0.04938272 0.02469136 0.01234568]
[0.03703704 0.16049383 0.35802469 0.55555556 0.69135802 0.81481481
0.91358025 0.96296296 0.98765432 1. ] Text(0, 0.5, 'FREQUENCY')
CDF, PDF of segregated data on age
Observations:
Around 80% of the data points have age values less than or equal to 60.
count, bin_edges = np.histogram(survival_no['axil_nodes'], bins=10, density=True)
pdf = count/sum(count)
print(pdf)
cdf = np.cumsum(pdf)
print(cdf)

count2, bin_edges2 = np.histogram(survival_yes['axil_nodes'], bins=10, density=True)
pdf2 = count2/sum(count2)
cdf2 = np.cumsum(pdf2)

plt.plot(bin_edges[1:], pdf, label='no')
plt.plot(bin_edges[1:], cdf, label='no')
plt.plot(bin_edges2[1:], pdf2, label='yes')
plt.plot(bin_edges2[1:], cdf2, label='yes')
plt.legend()
plt.xlabel("AXIL_NODES")
plt.ylabel("FREQUENCY")
Output :
[0.56790123 0.14814815 0.13580247 0.04938272 0.07407407 0.
0.01234568 0. 0. 0.01234568]
[0.56790123 0.71604938 0.85185185 0.90123457 0.97530864 0.97530864
0.98765432 0.98765432 0.98765432 1. ] Text(0, 0.5, 'FREQUENCY')
CDF, PDF of segregated data on axillary nodes
Observations:
Around 90% of the data points have axil_node values less than or equal to 10.
3.3. BOX PLOTS
Before exploring box plots, few commonly used statistics terms are,
the median (50th percentile) is the middlemost value of the sorted data
the 25th percentile is the value in the sorted data which has 25% of the data below it and 75% of the data above it
the 75th percentile is the value in the sorted data which has 75% of the data below it and 25% of the data above it.
In the box plot, the lower line represents the 25th percentile, the middle line represents the median (50th percentile), and the upper line represents the 75th percentile. The whiskers represent the minimum and maximum in many plots, or more complex statistical values; when seaborn is used, the whiskers are not the min and max values (by default they extend to 1.5 times the interquartile range).
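The three lines of the box described above are just percentiles, which numpy can compute directly. A small check on toy data (the age values here are made up for illustration, not taken from the Haberman dataset):

```python
import numpy as np

ages = np.array([30, 34, 38, 42, 47, 53, 61, 65, 70, 77, 83])

# The box plot's lower edge, middle line, and upper edge
q25, q50, q75 = np.percentile(ages, [25, 50, 75])
print(q25, q50, q75)   # 40.0 53.0 67.5
assert q50 == np.median(ages)  # the middle line is exactly the median
```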
sns.boxplot(x='surv_status',y='age', data=haberman_data)
sns.boxplot(x='surv_status',y='axil_nodes', data=haberman_data)
sns.boxplot(x='surv_status',y='op_year', data=haberman_data)
Output : | https://towardsdatascience.com/a-z-of-exploratory-data-analysis-under-10-mins-aae0f598dfff | ['Ramya Vidiyala'] | 2020-05-14 14:39:14.813000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Education', 'Data Science'] |
What Verizon’s Move to Shut Down FIOS 1 Means for Local News | Credit: @kitsada via Twenty20
A few weeks ago, Verizon announced that it would shutter FIOS 1, its local news station in New Jersey and New York. The creation of FIOS 1 was aimed at capturing new subscribers by providing an alternative to Cablevision’s News 12. Over its ten years in existence, FIOS 1 provided vital local news coverage in communities across New Jersey and New York that would otherwise be neglected. Verizon has since announced that it will add News 12 into its packages. However, adding News 12 will not fill the void caused by FIOS 1 leaving the marketplace. Consumers will now have less choice when it comes to local news and 150 reporters, at least for the time being, will no longer be telling the important local news stories that need to be told.
More broadly, the move could not have come at a more unwelcome time. Many print and digital outlets continue to consolidate, downsize or shut down in communities across the country because of financial pressures, contributing to the expansion of "news deserts." If this trend continues, and it shows few signs of slowing, the prognosis for the health of our civic life will continue to dim. If the reaction to the impending closure of FIOS 1 tells us anything, it's that policymakers are beginning to take notice.
Several Members of Congress from New York and New Jersey sent a letter to Verizon asking them to reconsider their stance. They wrote:
“Ending this contract would shut down FIOS 1, a major contributor to original programming, and threaten the robust free press that drives our democracy in New York and New Jersey…As we have seen from the cuts and consolidation in print media, local journalism is irreplaceable, and these cuts would do great harm.”
In this sentiment, they are entirely correct. Local journalism is irreplaceable and so are the people who are on the ground doing the reporting. So what does Verizon’s decision to close FIOS 1 mean in the broader context of our local news ecosystem?
Attention from policymakers is a great start. By recognizing that we are truly facing a local journalism crisis, we are taking the first steps towards finding a solution that includes discovering ways to innovate in the public sector to preserve and expand local news coverage. We in local news should embrace this and actively work to find areas where government can support a robust local news ecosystem.
The second part is a lack of innovation. We need to continue to find ways to bring engaging, grassroots reporting to communities across the United States in a sustainable way.
The third broad takeaway is that we need continued, genuine buy-in from tech platforms in supporting local news. The decline of print and, yes, even digital outlets, has at least in part been a symptom of a shift of attention to big tech platforms. A concerted effort on the part of big tech to develop fruitful partnerships that will be mutually beneficial and not lopsided will be key.
As I’ve written before, local news isn’t nice to have — it’s essential. It’s essential because of the impact it has on the functioning of our democracy, from the federal level on down to our school boards. The data is clear. More people get engaged with the political process when there is robust local news coverage — including higher voter turnout. It is unequivocally bad news that Verizon is shuttering FIOS 1, but I do hope that it will serve as a wake up call that we need to do more to protect, preserve and expand local news coverage. | https://mikeshapirotapinto.medium.com/what-verizons-move-to-shut-down-fios-1-means-for-local-news-dd143f038056 | ['Michael Shapiro'] | 2019-09-17 16:25:54.838000+00:00 | ['Journalism', 'News', 'Technology', 'Media', 'Local News'] |
Which Framework is Best For 2020 — Ruby on Rails vs. Django? | Ruby on Rails and Django are both best web development frameworks–but how do you choose one over the other. The aspects they have in common range from programming language similarities where both the Ruby and Rails are both object-oriented and dynamically typed, as well as their output that is unique to each task.
There are various web development frameworks available to programmers, but two stand out, as we all know: Django and Ruby on Rails. They have emerged as the most popular web frameworks, and this popularity is expected to continue into 2020.
If you are trying to choose between the two, both Django and Rails are great options; here are a few things to consider that will help you make the right choice.
What Do Both The Frameworks Have in Common?
It is safe to say that Ruby on Rails and Django are like twins, differing only in the vocabulary used under the hood and the philosophies applied, each born in a different environment.
Languages
Python and Ruby, as described above, are object-oriented and dynamically typed languages; they are very different from statically typed enterprise languages like Java.
One major difference is that Python and Ruby are open-source, and their respective communities are very active and robust. This means that if you stick to using these tools, you will not struggle to find answers or information.
Performance
Ruby and Python have almost the same performance level; the differences for a typical CRUD app are not noticeable. In larger applications, the difference is just as negligible, but if raw performance is high on your priority list and you need to support thousands of simultaneous users, then neither may be the right choice.
Let us put it this way: both are fine at costly CPU operations such as image manipulation and can support thousands of users (not blazing performance, but acceptable), but configuring Ruby and Python for that scale requires a lot more effort.
Architecture
Based on the MVC pattern, the architecture of both frameworks is well-structured: the app will be properly organized and will have simple divisions between layers such as routes, controllers, models, and views.
Items are arranged a little differently in Rails and Django, but that is just formatting, so there is little to worry about.
What Django is All About?
Launched in 2005, Django is a web framework based on Python and a primary option for building Python apps. What makes it so popular is that it is open-source, general-purpose, and free, which makes it easy to adopt. Django's features are highly praised by developers. It was developed to simplify the creation of complex, database-driven websites. With a clean and practical design, this framework promotes rapid development. In addition, Python is one of the easiest languages to learn and write. Many different types of applications can be built with Python.
Cons of Django Framework
Lacks the capability to handle multiple requests simultaneously
Highly reliable on ORM system
Makes web app components tightly-coupled
Too Monolithic
Pros of Django Framework
Scalable
Django has the Representational State Transfer (REST) framework
Mature Software with numerous Plug-ins
Highly Customizable
Effective Admin Panel
High compatibility with databases and operating systems
Adopts Battery-included approach
Supports MVC programming
What Ruby on Rails is All About?
Ruby on Rails, released under the MIT License and abbreviated as RoR, is an open-source, server-side web application framework. As a model-view-controller framework, Rails offers excellent default structures for databases, web services, and web pages, encouraging developers to write less code and save time.
The framework mainly works on two concepts: DRY (Don't Repeat Yourself) and Convention over Configuration. The former, which is self-explanatory, removes the need to write the same code repeatedly, while the latter means that the framework defines sensible defaults for the world in which you operate, such as structures, databases, languages and more. Instead of developing your own rules every time, you can adapt to these conventions, making the entire programming process much easier.
Pros of Ruby on Rails
Easy to modify and migrate
Superior testing environment
Active RoR community
High-speed development
Diverse tools and presets
Cons of Ruby on Rails
Varying quality and standard of documentation
Low runtime speed
Tricky to create API
Lack of flexibility
Major Difference between Rails and Django Framework
Language
While Django uses Python, Rails is built using Ruby, a language launched back in 1995. Python is one of the top programming languages, known for emphasizing code simplicity and readability, while Ruby is known for qualities such as flexibility, as well as its understandable syntax.
Ruby, on the other hand, was designed to make writing code enjoyable, so using it is essentially fun. Although applications developed using either will look and function the same, under the covers you can see the main differences.
Architecture
One aspect that both web development systems have in common is that both MVC (Model-View-Controller) have implemented. It’s named MVT (Model-View-Template) for Django, though. For the most part, both MVC and MVT are identical and so slightly different.
In Django, the Model represents the database and defines the data structure; the View is the regular-expression-based URL dispatcher that controls what users should see. Last but not least, the Template refers to a web template system that merges with the Django Template Language (DTL). Django itself handles the controller part.
In RoR, the Model handles server data such as comments, images, messages, etc.; Active Record takes care of all this. The View wraps the information in an HTML template and returns it, which Active View manages. The Action Controller links the Model and View, handling requests and managing responses to the web browser.
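The division of labour described above can be caricatured in a few lines of plain Python. This is a framework-free pseudostructure, not actual Django or Rails code; all names here are made up for illustration:

```python
# Model: owns the data
DATABASE = {1: {"title": "Hello MVC"}}

def model_get_post(post_id):
    return DATABASE.get(post_id)

# View/Template: owns the presentation
def render(post):
    return f"<h1>{post['title']}</h1>"

# Controller (the layer Django itself handles, and calls the "view"):
# glues the model and the template together for a request
def controller(request_path):
    post_id = int(request_path.rstrip("/").split("/")[-1])
    post = model_get_post(post_id)
    return render(post) if post else "404 Not Found"

print(controller("/posts/1/"))  # <h1>Hello MVC</h1>
```

Both frameworks enforce exactly this separation; they differ mainly in which layer gets which name.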
User Interface
When comparing Django vs. Rails on user-interface grounds, both are winners, because both are designed to offer a high-class experience. These web-centric frameworks allow developers to create highly functional websites packed with polished add-ons and plugins.
Speed and Performance
In the fight between RoR and Django, Rails is found to be 0.7 percent faster. This is because Rails benefits from a rich repository of great libraries and plugins that boost the framework's speed and, ultimately, its performance. Nonetheless, Django also supports rapid development and is an excellent choice for a web application.
Nonetheless, both Django and Rails perform well, because they use modern programming languages while providing the tools to optimize the software.
Stability
In development, innovation and stability are two parallel concerns; only a framework that delivers both can be called a winner. Ruby on Rails can juggle both because it allows users to reuse code and eliminate dependencies. It also uses the Convention over Configuration technique, freeing coders from extra effort.
On the other side of the table, Python follows a more conventional approach, sticking to proven methods to solve the problems at hand, which provides stability.
Installation
On the installation front, comparing Django vs. Ruby on Rails is not a hard nut to crack. Django's installation process is very simple, taking only about a minute to complete.
It can’t be said the same thing about RoR though. First you need to understand what bundle and Gems are, as they are required to install Ruby’s packages. Such two are first downloaded and then run the Command Gem Install Rails to eventually install the Rails framework’s latest version.
Security
In the Django vs. Rails comparison, we had to include security, as it is an essential part of any website or application.
Django has certainly inherited this strength from Python. NASA uses Django, which is proof enough of how stable it is. Django is supported by middleware, while Rails is supported by Active Record. Django has tools to protect an app from SQL injection, cross-site scripting (XSS), and so on.
Overall, both frameworks for web development are a reliable option and can be trusted for security.
Scalability
While Python’s scalability has inherited by the Django web platform, it still lags a little behind Rails. It has better scalability resulting from its attributes like freedom and code flexibility. Both are heavyweight web development frameworks, so both designed to keep scalability in mind, but the winner’s title here is Ruby on Rails development.
Syntax
It is a well-known fact that the syntax of Ruby is very versatile. Well, Ruby on Rails ‘ advantages can’t always be associated with this. It can cause problems and make it more difficult for the task to be passed on to other team members as one feature can be done in many different ways, creating confusion.
Whereas, Python argues that there should be only one obvious way to do something, making debugging and reading the code easier.
Principles of Development
Principles are like the glue that holds together the whole web app development process. Django has two notable principles–DRY (which we had already discussed) and “Explicit is better than Implicit.” This idea helps developers to create easy-to-understand software that managed by many people.
Ruby on Rails is also not short of the principles of design. It also uses DRY and Convention on Configuration, which suggests that, instead of making your own configurations, you have to follow conventions to be successful. This increases pace and effectiveness
Documentation of Frameworks
In this regard, to be clear, it is a connection between the design frameworks of Django vs. Rails. All frameworks are well known, making the most common FAQs and answers to questions easy to find. The documentation language of both is very simple, understandable, and clear, without placing the reader in the mayhem state.
Maturity of Platform
Django has first released in 2005 and has since served on the list of the best frameworks for web development. His recent release, with many new features and improved usability, was released in April 2019. First released in 2003, Ruby on Rails has officially declared as an open-source application in 2004. As the latest version of it has released in August 2018, it also regularly updated.
HTML Template
Although the core feature of both Django and Ruby on Rails framework is template, Django uses a simple template language to allow developers to create templates with minimal programming and HTML capabilities. On the contrary, the views of Rails tend to be more complex (individual page templates).
Usage
If you are looking for a framework that helps to create complex database-driven websites and web apps in less time, with system management, scientific programming, data analytics, and manipulation performance, then Django is the way forward.
Rails, on the other hand, also helps develop database-backend web apps by providing developers with improved functionality and autonomy. It is an ideal choice for Meta programming and building fun codes because Ruby is versatile in design.
Community Support and Ecosystem
As an open-source platform, Django also has an open-source ecosystem, which means that developers have access to a sea of libraries and software, both paid and free. In addition, Django’s official documentation is more than adequate for reference if you need a response to any question.
Django group has more than 11,000 users to use for developers with more than 4,000 ready-made products. In addition, Ruby on Rails also has a highly active group of 5,000 committed people who have contributed a number of Gems with reusable software already
To get in-depth knowledge on Ruby, Enroll for live free demo on Ruby On Rails Online Course
Learning Curve
It is well known that Python is a very easy language for programming to learn among its competitors, which also makes Django’s learning curve small. Numerous offline and online resources are open, making it easier to answer queries. On the opposite, because of individual principles, Rails has a very steep learning curve that a developer needs to refine to become professional in Rails. So, only experienced programmers and developers are recommended.
Use Cases of Django
Disqus
Instagram
Spotify
Youtube
Use Cases of Ruby on Rails
Basecamp
SlideShare
Crunchbase
Hulu
AirBnb
Which Framework to Choose and When
Overall, on Rails web frameworks, both Django and Ruby are at the top of their class, giving each other a tough competition. Nonetheless, there are some places where one overrides the other.
For example, you should go with Django if you want a highly detailed app packed with remarkable features. However, if you are thinking about a fast release and then focusing on the website specifics or a web app, then Ruby on Rails is your perfect choice. It’s because it has shortcuts and configuration features that make it simpler for web applications to incorporate complex features.
KEY TAKEAWAY
So, who on Rails or Django is Better Ruby? Okay, in one word, it’s hard to answer this question. There are many similarities and differences between Django and Rails. Both are successful in various tasks. Inefficiency, pace, community support, scalability, safety and more, they both give each other a tough competition.
Yet learning Ruby on Rails is absolutely worth it as it is one of developers and programmers ‘ favorite options to create websites and web applications. In reality, it has predicted that the success of RoR will continue in 2020, making learning it imperative. | https://medium.com/quick-code/which-framework-is-best-for-2020-ruby-on-rails-vs-django-a73290ffc625 | ['Sravan Cynixit'] | 2019-12-24 20:53:01.218000+00:00 | ['Framework', 'Python', 'Ruby on Rails', 'Django', 'Ruby'] |
Player Types, Domination, and the Core Question of Your Game’s Design | For the sake of clarity, let’s analyze two games (a multi-player, and a single-player) under the lens of the Bartle Player Types.
Photo by Ravi Palwe on Unsplash
Mario Kart is a famous Nintendo racing game, with a roster of beloved characters from the Super Mario franchise. The recent versions of the game have both online and offline multiplayer modes. The following analysis focus on the multi-player mode, be it online or offline.
Achiever — The achiever wants to win the race and the championships. They want to win tournaments and unlock new characters and circuits. To some extent, they are also interested in the leaderboard, as it is a powerful bragging tool to show-off their feats.
Explorer — The explorer wants to find the hidden spots and short cuts in the tracks. They might win the race once in a while, as a consequence of their thorough testing of the game. Unlocking new circuits and characters are also exciting activities, but only as long as it opens up more possibilities to be explored and tested within the game boundaries.
Socializers — The socializer wants to have fun with the other players, be it in whatever mode is being played. Winning and losing are only consequences, but might be of interest if it furthers the game’s social aspect. For instance, a socializer will be inclined to participate (or just watching) in a tournament between friends, just for the sake of being with them.
Killers — The killer also wants to win, but not as an in-game objective. Winning in the game is not its goal. A killer wants to win over other players. They want to assert domination. They will be more thrilled when playing against achievers because the latter cares so much about winning that they became easy targets for distress. A killer will gladly take second or worst place in a race, as long as it drags the achiever to lose it too.
Photo by the Author.
Stardew Valley is an open-ended farming-RPG, first released for single-player mode only. The multi-player mode was later added to the game, but for the sake of the discussion on Bartle Player types, let’s reduce our discussion on the 4 types using only the single-player mode of the game.
It bears reminding, though, that Bartle’s theory was developed under the assumption of a multi-player environment, and he believes that using the theory in different contexts might not work or lead to wrong/biased results. However, the following analysis is an attempt to cover the possibilities of the 4 player types into a single-player game.
Achiever — The achiever wants to complete the in-game tasks, such as to finish the Community Center or to construct all different farm buildings. There are also in-game achievements, such as to earn 1 million in-game money or to cook every recipe. The achiever is willing even to make friends with the townsfolk to get friendship related achievements, although there is probably not a lot of fun in the process for them.
Explorer — The explorer wants to explore possibilities within the game and its economy. What is more lucrative or if you can profit from completely alternative methods (such as filling the Quarry with Bee Houses). Explorers will also try to break the game in its details that will often go missing under other players’ attention (the "Out of Bounds bug" is a good example). Completing the Community Center is not an objective for explorers, but might be seen as a necessary step to unlock more places to explore.
Socializers — The socializer wants social interaction, and this might be hard to find in a single-player game. But for Stardew Valley, there is a considerable fandom around the player-NPC interaction, and it might be something to entertain socializers. Also, these same interactions and stories are devices to engage in player-player interaction over online fora. From a Bartle’s Taxonomy standpoint, the game still fits as a tool for social interaction, even if the social element happens in an outside-game experience.
Killers — Similar to the socializer, the killers rely on social interaction to satisfy their needs. Stardew Valley does not have many opportunities for these types of players, though. There are few occasions to assert domination, but, as for socializers, this interaction might happen outside the game. Moreover, the game has a few mechanics to bother the NPCs, such as using the slingshot to shoot them or by following the Joja Mart progression. Ultimately, killing monsters in the mines could be seen as a sort of domination-related activity, but it might hardly fulfill a killer’s urge for chaos.
Even though not all types are broadly covered, it is interesting to look at the games from this perspective. The lack of place for a killer type in Stardew Valley might be a game design decision to make the game more approachable. As Eric Barone, Stardew Valley creator, said in an interview: the game aims to spark joy and wonder at every moment. That would be hard to achieve having space for dominant-aggressive behavior. | https://medium.com/the-innovation/player-types-domination-and-the-core-question-of-your-games-design-111c2990fcf9 | ['Yvens Serpa'] | 2020-09-13 09:41:01.319000+00:00 | ['Bartle', 'Game Development', 'Games', 'Game Design', 'Psychology'] |
Playpen | Photo by Les Anderson on Unsplash
Maya’s jittery hands clumsily clanged perfume bottles and makeup compacts on the bureau. Glancing at her cell phone for the time, Maya realized she had waited too long for her sister Nora to show up to watch the baby. Anatoly, Maya’s boss, was impatient. If she kept stalling, he might call another one of his “girlfriends.”
Anatoly had too many chicks floating around. All those damn perfectly made-up faces floating around the island made it hard for a girl to negotiate with him. Maya convinced herself that it wasn’t Anatoly’s brutish charm that kept the ladies around, kept them working.
Cash was his talisman. When a girl needed something, Anatoly played it real cool. Girls who didn’t know, the naïve ones who thought they could push him around with their pussies, would claim to fall in love with him. In turn, Anatoly would make the chick a favorite, lavishing second hand furs and public housing penthouses complete with balconies overlooking the Russian Riviera that is Coney Island. The chicks always ended up forgetting their own dreams, only believing in what Anatoly could do for them.
Eventually, the favors became few and far between until eventually the girls were left to fend for themselves. Anybody with two eyes could see that there weren’t too many opportunities for advancement in Coney Island. Girls always ended up calling Anatoly for help, desperate, begging. When they reached to hug him as they pled, a stiff, distancing arm would extend, holding them back. Too often they’d gasp, “I’ll do anything.” Too often they did. That’s when the real work began.
“I don’t have time for love,” Maya always told herself, defiantly. Love means opening up, sharing things, especially all the sticky details about one’s self. She steadied her nerves and took one last look in the mirror. She could see lines under her eyes. Her trademark, wide baby doll eyes had ceased to be cute. They just looked frozen in fear.
“Ice, ice,” she mumbled to herself. Maya used ice to make her face look relaxed — her quick fix. Four strides was all it took for her to get from the dressing area of her studio to the refrigerator. As she stepped across the floor, she carefully skirted the playpen that took most of the space in the middle of the floor, as not to wake the baby. Absentminded, Maya reached into the freezer, grabbed some ice and applied it to the puffy circles under her eyes. Looking over the ice, Maya’s eyes wandered around the room and finally rested on the sleeping infant in the playpen. Curled up, shaped like a kidney bean, his pink lips worked on a dream nipple.
Maya’s countenance clouded with disdain. She knew it was wrong but she couldn’t help hating that baby. Every time she looked into its face and it grinned charmingly at her, all she could see was Nicolas’ malevolent sneer when he raped her in a drunken fit of anger. Nicolas’ veneer of sincerity when he begged her for forgiveness, promising to be a fantastic father. His easy grin when he said he was going to work, which ended up being code for “I’m going to disappear.” How could she have bought that bullshit? Nicolas never went to work!
“Fuck,” Maya whispered, cursing her naiveté and her current situation. Some digital jingle blared from her cell phone. “Shit,” she hissed. The baby stirred, and, after some struggle, flipped onto his back, moaning, threatening to cry.
“Shush, shush, sh… Shut up! You’re gonna fuck this up for me.” Maya flipped open the cell phone. Through the phone, a gravelly voice speaking Russian was heard.
“What?” Maya asked.
“My God. Вы глупая России я когда-либо встречал,” Anatoly sighed, exasperated, “You don’t even know your mother tongue. What’s that crying?”
“Oh, nothing. My sister’s baby.”
“You sure that one’s not yours?”
“No. Why do you ask?”
“Because this is the fifth time I’ve called you and heard a baby over there. Some of my associates told me they’ve seen you on the street pushing a stroller. Just tell me the truth.”
“No, he’s not mine, for real, Anatoly. I just take him out sometimes cuz my sister needs somebody to watch him while she works. You sure you don’t have no phone girl job for her or nothin?”
“You know I don’t hire moms, Maya. I can’t have a mother living this kind of life. Having mothers do that kind of work drags the kids into it. It makes the kids fucked up.”
“You’re so moral, Anatoly,” Maya sneered, “You know, mothers need money too.”
“Well let her go be a school teacher or a secretary or something,” Anatoly replied dismissively, “Have her sue the daddy for child support. Where’s he at? Shit, I’ll help her find the mother fucker.”
“He was a john.”
“Well, she should have been more careful. Look, I’ll be in front The Sea Garden Restaurant in five minutes. Be there.” Click!
Maya leaned against the fridge and began picking at the cuticle on her left thumb. She had half a mind to call Anatoly back to tell him her sister hadn’t shown, but she knew that was a bad move. For one thing, she’d lose her job at the strip club and be blacklisted in the neighborhood. Everybody knew Anatoly. Even if she could scrounge up the money to pay the next month’s rent, the landlord wouldn’t accept it. On top of that, Maya didn’t want to stop working at the club. Working in the club was like a mini vacation for her. It gave her a chance to have all the champagne she could drink and all the coke she could snort, just like the high class models did. With the disco lights blinding her, the men whispering flattery in her ears, and the money filling her pockets, it was easy for Maya to forget she had a baby at home.
In that instant, counting the imaginary money on her pocket helped Maya forget that she had a responsibility to the sleeping child in front of her.
“I called Nora. I’ll call her again. She always checks her messages,” Maya rationalized, “Nora has keys to get in. She’ll be here in no time. The baby will sleep for at least another hour.”
The mental lie jumped into her mouth and out into the atmosphere, “Nora will be here in no time. The baby won’t even notice, he’ll sleep.” She let the ice slide over her fingers and into the sink. On tiptoe, she maneuvered around the playpen like a teenager sneaking out after their parents had forbidden it. She picked up the shabby sequined purse that looked like it was a designer brand under the club’s disco ball. Using all the grace she could muster, Maya eased the usually creaky door open a crack.
Click! The door latched behind her.
The baby only stirred.
***
Nora’s computer clock said 4:30 pm. “Geeze, Toshi’s late today,” Nora laughed to herself. The a sadder thought struck, “Maybe he’s not going to ask anymore.” Nora frowned inwardly. That would be just her luck. Right when she decided she would give it a try. Just then, the unmistakable muffled sounds of footsteps caressed Nora’s ears.
Toshi, Nora’s coworker made his way down the office hall towards Nora’s desk, shuffling his feet on the grey industrial grade carpeting. Nora stared at her computer, steadily typing and pretending she didn’t hear Toshi’s footfalls approaching.
“Um, hey,” Toshi smiled.
“Hey. What’s going on?”
“Nora, you know what’s going on.”
“Still trying to get me out for drinks?”
“Well, yeah.”
“After two months? I can’t believe the level of persistence.”
“Well, when I know what I want, I don’t settle for less.”
“That’s an admirable quality in a man. You know, for once I don’t have to babysit.”
“I think you’ve been making that whole babysitting thing up.”
Nora giggled, “No really. I wouldn’t lie to you. I’ve been wanting to go out with you for a while now. Today, I’m free. Let’s make the most of it.”
Toshi’s face registered shock, then happiness, “You’d like dinner too?”
“Yeah. I know a good Thai place around the corner. Listen, I’ll be done with my work in about fifteen minutes. Let’s meet in front of the building.”
“Alright,” Toshi walked away beaming at her, and she held his gaze, smiling until the end. She was glad to see him happy for once. Typically, he walked around the office looking frustrated. People in the office tried to down Toshi for being a foreigner, by discouraging him from speaking up at meetings because of his Japanese accent, but Nora knew they did that because they were scared of his intelligence.
To counter the office politics, Nora always tried tell him little jokes to cheer him up, offered him help with his projects. Not that he needed much help. Toshi was so attentive, diligent and reliable. If Toshi treats women like he treats his work, Nora thought, he is the kind of guy I want around when I’m not working.
Here was Nora’s chance to find out what kind of person Toshi really was. Five minutes was all Nora actually needed to finish up her work. Her secretarial position was a no brainer. She was just working the gig until she got done with law school. Nora used the remaining ten minutes to primp in the ladies room.
Downstairs, in front of the building, Toshi was waiting patiently. Shyly, he grasped Nora’s hand. In return, she tightened her grip and led him to the Thai restaurant. At the table, Nora kept smiling, even though they were talking about a whole bunch of nothing — who annoyed who at work, how Toshi was embarrassed for a coworker when they put their foot in their mouth. Toshi broke the conversation to say, “I love the way you look right now. You have a happy glow.”
“I’m just glad to have some time away from the office with you, that’s all. You know, we could hang out on Satur…” Nora’s phone vibrated in her pocket. “Who is this calling me now?” Nora pulled the phone out and looked at the number. It was Maya’s cell.
“Do you want to take that?” Toshi offered.
“No, I really don’t want to take it,” Nora said with a tinge of annoyance. Nora didn’t feel like getting wrapped up in another one of her sister’s drama sagas. Since they were teenagers, Maya had a way of stringing problems together and wearing them for the whole world to see, like a child making popcorn tinsel for the Christmas tree in front of the living room window. Whenever some guy would beat Maya up, Nora and their mother had been there to put ice on her face and kisses on her cheek. “Maya’s unlucky,” their mother used to confide in Nora, like Nora was a friend she had grown up with instead of her daughter, “Maya keeps meeting the wrong guys. I hope she finds a good one soon.” There used to be a tinge of pity in their mother’s voice.
In the past, Nora had joined their mother in the chorus of sympathy for Maya. But after years of seeing Maya continue to chase jerks and refuse honest jobs, Nora realized, Maya was not unlucky, she was throwing herself against the rocks. Now that their mother was dead and Nora was making something out of her life, she didn’t have time to play medic in an unbeatable war. That would just put her in the position to get battle scarred as well. Why didn’t Maya hire a baby sitter, she wanted to know. All that money Maya made at the club, where does it go? Nora knew it wasn’t being used on that rag-tag room that looked like it was furnished in the seventies. The sofa had a plastic sheath ,for goodness sake. Nora spent her own money on Dominic’s diapers. The sisters shopped together at Goodwill for his clothes and toys. It didn’t add up.
Especially since Anatoly and Maya had been spending more time together outside of the club. He had to be giving her money for her to give him any of her free time. That worried Nora. Maya had always managed to escape the guys before Anatoly, but this time she wasn’t so sure her sister would make it. All the girls who went around that creep thought they would be different, that they would be smart enough to outwit him, get what they could and leave. The girls who were really smart didn’t even cross paths with him. He was just a low rung goon for the real bosses. Once a girl started working for those guys, they couldn’t quit if they wanted to. Anatoly was just one of the millions of ways they enslaved women.
Toshi stared at Nora with a concerned expression, “You ok?”
“I’m sorry,” Nora giggled nervously, “Do I look like something is wrong?”
“You zoned out there for a minute. You sure you don’t need to take that call?”
“No. My sister called earlier, probably to wrangle me into babysitting again. I figure if I just ignore her, she’ll get the point.”
“And the point is?”
“I need to have some fun for once.”
“Well, we have all night to make sure you do that.”
***
Maya was right. The baby did sleep for about an hour — an hour and ten minutes to be precise. Dominic probably would have slept longer if a gunfight hadn’t broken out across the street from their building. The sound of the bullets pumping out of the gun didn’t sound like the pop! pop! pop! of a revolver. It was the tata!tata!tata! of an AK-47. Dominic’s instinct was to awaken, crying, to his aunt’s sympathetic face. This time when he awoke, all he saw through his watery eyes was the pressboard paneling of the walls.
Grasping the mesh walls of the playpen, Dominic pulled himself up to stand on his feet. Looking around, he realized no one else was in the room. This wasn’t the first time this had happened to him. On two of the occasions, he screamed louder and someone appeared from behind a door, out of nowhere, just like peek-a-boo. On the other seven, no one came. He raised his voice a few octaves but no one appeared.
Tata!Tata!Tata! Hollow tipped bullets ricocheted off the sides of the walls and through a window. Like termites, two ate their way through the cheap sheetrock walls, ripped the mesh of the playpen, and tore the back of the Dominic’s onesie.
Dominic shrieked as he lost his grasp on the wall of the playpen and fell backwards onto his blankets. The impact of the sudden fall pushed the breath from his lungs. He flailed his arms and legs, gasping. After hiccoughing a few times, his bronchioles opened up enough to allow him a moan then sobbing. Turning over onto his stomach, Dominic left thick smears of blood on the duck printed blankets. His little body quaked from incessant bawling, Dominic crawled a bit then collapsed onto the floor of the playpen. Dominic’s cries rang out like a police siren, filled the room, pierced the walls, traveled down the street and into the darkness of the night.
***
Nora cocked her head as if she could hear something under the loud music that played in the bar she and Toshi had gone to after the restaurant. Putting her drink back on the bar she said, “Something’s wrong. I can feel it.”
Toshi looked at her with concern. “What is it? Do you feel sick?”
“No, it’s not me. Something else. I just have this horrible feeling.”
“Maybe it has something to do with that call you ignored.”
“I hope not. I would feel so bad if it did.” Nora whipped out her phone, turned it back on and dialed her voice mail.
“You have two new messages,” the automated voice droned through the phone. Maya’s voice came through on the first message sounding nervous. “Nora, Anatoly is taking me out in an hour. Come watch the baby as soon as you get off from work.”
Demanding bitch, doesn’t even call Dominic by his name, Nora thought angrily. Then the next message came on.
Toshi watched as Nora’s face changed color from blood red to ash grey. “What is it?”
“My sister! She left my nephew in the house alone.”
“What?! How long ago?”
Nora babbled frantically, “About an hour and a half ago! I have to go check on him. I need a cab. It’ll take too long to get from here to Coney. You know how the trains are when everybody is leaving Midtown.”
“Of course. Let’s go.”
Toshi threw down some cash on the bar to cover the drinks and grabbed their coats with one deft motion. Clasping Nora’s hand in his, Toshi led her to the street. With his free hand, he hailed a cab. As Nora got in, she said, “Thank you. You don’t have to come. I’ll be ok.”
“Let me come with you,” Toshi said firmly, “I know we’re not close, and this is a family situation, but at least let me keep you company until your sister gets home.”
“Um. Ok,” Nora said, reluctantly.
As the cab sped towards Brooklyn, both Nora and Toshi retreated into silence. Toshi peered at Nora, who was gazing out of the window at the darkening city. Nora’s hand scratched at the car seat. Toshi put his hand over hers to quell her fidgeting.
Nora laughed softly, “I know I’m a mess.” Her eyes locked with his. The wan smile on Nora face melted into an anguished frown.
“You’re not crazy, you’re normal,” Toshi said, “If you weren’t worried, I’d be worried about you. Just try not to give yourself a heart attack. He was sleeping right? Then he must be still in the crib.”
“He sleeps in his playpen,” Nora chimed in. “He had a crib but one day I came over and it wasn’t there anymore…”
“Well then, he’s ok. Playpens keep babies safe. I’m sure he’ll be ok, locked inside.”
“Toshi, Coney Island isn’t exactly Pleasantville,” Nora took a deep breath to calm herself, “You’re probably right. At most, he’s just crying now. But I feel like he could be dead. I must be crazy.”
“It must be your feelings telling you something. If you’re just worried, at least your heart is in the right place. Maybe we should meditate, to send some good energy to your nephew.”
Nora guffawed, “I don’t believe in all that new age stuff.”
“It’s not new age. It’s old. Just like praying. Don’t you ever pray?”
“I used to, before my mom died.”
“Want to try it now?”
“Ok. It could make me feel better. It used to make me feel better sometimes.”
The cab hit a pothole, jarring them, giving them an excuse to take hold of one another. Eyes closed, currents and eddies of aura whirled and twinkled from their foreheads in the taxi driver’s rear view mirror. The driver recorded this phenomenon with a blink, then turned his attention back to the road.
The cab pulled up to the dark row house. One of the streetlights was half dim. A couple of guys sat on the porch of the neighboring house in dark shirts and thick leather jackets, even though the night air was dead still with the approaching summer humidity. The men didn’t speak to each other, just glanced at the streets and each other with coal dim eyes and mirthless lips. There was a porch light, but the men reclined on their lawn chairs in absolute darkness. Toshi looked apprehensive. “Yeah,” Nora whispered, “This is where she lives.”
Slowly, they stepped out of the cab, made their way up the sidewalk and opened the chain link fence. As they walked up the path to the back door, they averted their eyes from the glances of the men on the porch.
“Hey,” one of the men called gruffly.
Nora cringed at the man’s call. “Yeah?” she asked timidly.
“We heard a baby crying in there for a long time.”
“Oh my god,” Nora moaned dashing to the door.
Another one called behind her, “We tried calling the chick that lives there, but she didn’t answer. We left a message.”
Nora opened the door of the room, dashed in, and peered into the playpen. “Oh my god,” she cried, “He’s bleeding!”
Toshi reached into the playpen and picked the baby up. Toshi examined the baby’s back. “He lost some blood, but it doesn’t look that bad, like a bad scratch. But he’s in shock from the pain. He passed out. Call 911.” Toshi used a blanket to gently compress Dominic’s wounds while Nora dialed.
A voice came from the doorway behind them, “Oh shit. Oh shit.”
Nora and Toshi turned around to face Maya, “I can’t believe this shit. The guys next door left a message saying the baby was crying after a gunfight, but they didn’t say some bullets came into my room!” She gasped, “Oh shit. Oh shit, what is Anatoly gonna do when he hears I left this baby alone to work! He’s gonna fuckin…”
“Who cares what he’s gonna do!” Nora yelled, “I feel like I want to kill you! I can’t believe you left Dominic alone!”
“I left you a message! Your spoiled ass ignored me all day!” Nora blushed with guilt. Maya jabbed her fingers in Nora’s chest, “I knew it! I knew you were ignoring me! If you had answered the phone and come over here, none of this would have happened.”
“I will not let you blame this on me. It’s not my fault you were irresponsible!”
Maya’s face crumpled into tears. “What was I supposed to do? While you were out with your little boyfriend, I was I was…” Maya’s chest heaved, “You don’t understand Nora! You just think I can wave a magic wand or say a little prayer and things will magically change? Things aren’t like that for me! Bad shit like this is going to keep happening to me! This is just the beginning now that Anatoly is gonna know that baby is mine. Don’t you get it?” she shrieked. “Things have always been easy for you and hard for me! You always had good luck!”
The call of sirens got nearer. Maya backed out of the door, Nora following. Maya stared Nora in the eye and whispered, “Since you’re Little Miss Perfect and you know what to do, you take that baby. He’s your problem now.” Maya sniffled, turned, stumbled in her heels and dashed past the half dimmed streetlight into the oceanic darkness of the night. | https://medium.com/literally-literary/playpen-eb9a2a1ac9e8 | ['Dr. Kenya Mitchell'] | 2019-04-09 03:09:25.565000+00:00 | ['Fiction', 'Poverty', 'Literally Literary', 'New York'] |
6 Top Use Cases for Neural Networks | Customer Experience Enhancement
(Example: Sephora)
Customer data is collected by different organisations for commercial or analytical use. It can contain information like demographic data, economic status, and purchase patterns. Using this data, ANNs can be applied to segment customers into multiple groups. Sephora, one of the biggest cosmetics companies in the world, has already started to implement ANNs in their marketing campaigns on their way to becoming a speciality beauty retailer. They are using Augmented Reality with an ANN at its core to let customers try makeup virtually. They are also implementing ANNs to produce virtual fragrance samples so that customers can get a feel for their products. These marketing strategies are powered by ANNs and are boosting Sephora's sales.
Digital Marketing Enhancement
(Example: Starbucks)
Nowadays, digital marketing is a huge thing, as many customers prefer to shop online, and browsing is one of the most sought-after methods for a very large number of customers. ANNs can be implemented in digital marketing strategies to increase the customer base.
Starbucks, one of the leading coffee shop brands in the world, has used ANNs in their marketing campaigns. They have used supervised and unsupervised Machine Learning methods along with ANNs to segment their customer groups.
Based on these, they have been able to send customers targeted content over email and other social media and have been able to improve their sales.
Their Starbucks Rewards program has been successful, and they have been able to provide customers with a great personalised experience; it has increased their revenue by $2.56 billion.
Photo by Alexas_Fotos on Unsplash
Improve Search Engine Functionality
(Example: Google)
Search Engine Optimisation is a huge factor for search engine companies. To improve their search engine capabilities, search engine providers are now leaning towards ANNs. Google has already implemented a 30-layer deep ANN in their system to allow the search engine to process complicated searches, such as shapes and colours.
There has already been significant improvement in the search experience and, as a result, the error rate has dropped from 23% to 8%, according to the company's report.
Customer Loyalty
(Example: FedEx)
Deep Learning can be used to determine whether or not an existing customer is likely to switch to a competitor.
Neural networks can be used to predict which customers are most likely to switch to another organisation and how to retain them using a tailored service. FedEx has implemented a neural network to enhance and personalise their consumers' experiences.
With the help of neural nets, FedEx has been able to predict which customers are likely to switch to their competitors with an accuracy of 60–90%.
Market Movement Forecasting
(Example: LBS Capital Management)
Predicting the prices of assets is one of the oldest professions, as traders through time have tried to take goods from one market and sell them in another. Nowadays, this is sped up a lot by hedge funds, which can trade electronically from their offices in London, Tokyo, or even the middle of nowhere.
Given this technological jump, it's also possible to introduce more complicated models into the mix. Statistics is the foundation of Financial Mathematics, so it paved the way for developments in Machine Learning. This has been greatly beneficial, as companies like LBS Capital Management have implemented neural nets to get positive results. Based on neural network models that utilise 6 financial indicators, they've been able to predict the average directional movement over the last 18 days.
Note: This isn’t financial advice and nor do I recommend anyone to take these approaches.
Photo by Nick Fewings on Unsplash
Insurance Provisions
(Example: Allianz Travel Insurance)
Insurance providers often have to provide customised travel insurance, and that's no easy task. People generally have differing holiday plans in terms of cost, age, reason, and length of trip.
Given these variables, it's important to give the right recommendation because, otherwise, the responsibility ultimately falls on the insurance provider.
Neural networks are being used by Allianz to help segregate different types of policyholders and to customise and offer appropriate pricing plans and provisions.
Allianz Travel Insurance has implemented a neural network in their travel insurance system which analyses many factors, like the length of the trip, the cost of the trip, the traveller's age, the reason for travelling, and the air miles used, and then comes up with the best policy for the customer.
This has helped customers to get most relevant coverage for the trip and it also reduces the time to research for the trip, making the holiday planning a less worried item. | https://towardsdatascience.com/6-top-use-cases-for-neural-networks-197669b21b27 | ['Mohammad Ahmad'] | 2020-11-23 14:50:23.920000+00:00 | ['Machine Learning', 'Technology', 'Tech', 'Artificial Intelligence', 'Data Science'] |
Blockport Crowdsale Announcement | In this article we would like to share some important changes regarding our Crowdsale start date, individual cap and rates & caps. Additionally, we updated our roadmap with more specifics for the coming year.
Crowdsale starts on 24th of January!
In the past few weeks we have gained tremendous traction from the crypto community, and therefore, we decided to pull the start time of the Crowdsale forward!
The new Crowdsale start time is set on: 24th of January 2018, 15:00 CET
Moving the Crowdsale start time forward enables us to spread the influx of incoming participants, giving committed community members more time to attain Blockport tokens.
Individual cap is 50 ETH
As a result of the huge interest from our community, we realised that there is a significant number of people who want to participate with a large amount of ETH. Therefore, we have also decided to lower the individual cap from 100 ETH to 50 ETH in order to ensure that everybody has a fair chance to attain Blockport tokens during the Crowdsale.
Crowdsale rates & caps
In the past weeks we learned that we should not have pegged the price of BPT to EUR but to ETH. It is (almost) impossible to account for Ether’s price fluctuations during a month’s period and still maintain the same terms & conditions for every token sale participant. Therefore, to ensure clarity towards our community we are maintaining our current rates & caps in ETH.
This is for several reasons:
Changing the BPT rates has multiple implications since it affects the bonus structure and exposure to risk of early pre-sale participants. For example, if we would increase the BPT rates — to account for ETH price increase — this means that pre-sale participants will not receive the bonus as a reward for taking the risk to participate one month earlier than Crowdsale participants.
Secondly, we will not be able to lower the ETH cap, because this would mean a lower total supply of BPT for all Crowdsale participants. Additionally, lowering the hardcap will also lower the company’s absolute BPT allocation, which will be mostly used for rewarding our community members for supporting us (bounty + referral program). As a result, we would not meet our planned BPT budget and the community will have less room to obtain BPT tokens in the Crowdsale.
Roadmap updates
We made some small updates in the specification of our roadmap. Since we pulled the start date of our Crowdsale forward we have decided to release our Blockport 1.0 Beta in March 2018.
Blockport Roadmap
Summarized Crowdsale announcements: | https://medium.com/blockport/blockport-crowdsale-announcement-9ea245eaa418 | ['Kai Kaïn Bennink'] | 2018-01-23 17:04:34.864000+00:00 | ['Startup', 'ICO', 'Blockchain', 'Ethereum', 'Bitcoin'] |
Training Tensorflow Object Detection API with custom dataset for working in Javascript and Vue.js | Generate the TFRecords for training
In order to create the TFRecords, we will use two scripts from Dat Tran's raccoon detector: the xml_to_csv.py and generate_tfrecord.py files.
Download them and place them in the object_detection folder.
You can go to the References section below to see where I downloaded them from.
We now need to modify the xml_to_csv.py script so that it transforms the created xml files to csv correctly.
# Old:
def main():
    image_path = os.path.join(os.getcwd(), 'annotations')
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('raccoon_labels.csv', index=None)
    print('Successfully converted xml to csv.')

# New:
def main():
    for folder in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), ('images/' + folder))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv(('images/' + folder + '_labels.csv'), index=None)
        print('Successfully converted xml to csv.')
Then we can use the script by opening the command line and typing:
python xml_to_csv.py
As you can observe, two files have been created in the images directory: one called test_labels.csv and another called train_labels.csv.
Note: if you get a "No module named 'pandas'" error, just do a conda install pandas.
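For context, all the heavy lifting in xml_to_csv.py is just reading each labelImg-generated Pascal VOC file with Python's standard XML parser. Here is a minimal, self-contained sketch of that idea (simplified columns and a hypothetical helper name, not the script's exact code):

```python
import xml.etree.ElementTree as ET

def annotation_to_rows(xml_string):
    """Extract (filename, class, xmin, ymin, xmax, ymax) rows from one annotation."""
    root = ET.fromstring(xml_string)
    filename = root.find('filename').text
    rows = []
    for obj in root.findall('object'):
        box = obj.find('bndbox')
        rows.append((
            filename,
            obj.find('name').text,
            int(box.find('xmin').text),
            int(box.find('ymin').text),
            int(box.find('xmax').text),
            int(box.find('ymax').text),
        ))
    return rows

example = """
<annotation>
  <filename>bottle_01.jpg</filename>
  <object>
    <name>jagermeister bottle</name>
    <bndbox><xmin>48</xmin><ymin>21</ymin><xmax>160</xmax><ymax>305</ymax></bndbox>
  </object>
</annotation>
"""
print(annotation_to_rows(example))
# → [('bottle_01.jpg', 'jagermeister bottle', 48, 21, 160, 305)]
```

The real script collects these rows from every xml file in a folder and writes them out with pandas, which is why the missing-pandas error above can appear.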
Before we can transform these csv files to TFRecords, we need to change the generate_tfrecord.py script.
From:
# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'raccoon':
        return 1
    else:
        None
To:
def class_text_to_int(row_label):
    if row_label == 'jagermeister bottle':
        return 1
    else:
        None
Now the TFRecords can be generated by typing:
# train tfrecord
python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record

# test tfrecord
python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
These two commands generate a train.record and a test.record file, which can be used to train our object detector.
Note: if you get an error like "module tensorflow has no attribute app", it is because you are using Tensorflow 2.0, so we need to change a line in the generate_tfrecord.py file.
From:
# line 17
import tensorflow as tf
To:
# line 17
import tensorflow.compat.v1 as tf
Or, if you prefer, you can roll back to Tensorflow 1.0 instead by doing:
conda remove tensorflow
conda install tensorflow=1
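Before moving on, the .record files can be sanity-checked without loading TensorFlow at all: the TFRecord container is just a sequence of length-prefixed records (an 8-byte little-endian length, a 4-byte CRC, the payload, and another 4-byte CRC). This small helper of my own (not part of any API) counts the records:

```python
import struct

def count_tfrecords(path):
    """Count records by walking the TFRecord framing:
    uint64 length, uint32 length-crc, payload bytes, uint32 payload-crc."""
    count = 0
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file
            length, = struct.unpack('<Q', header)
            f.seek(4 + length + 4, 1)  # skip length-crc, payload, payload-crc
            count += 1
    return count

# e.g. count_tfrecords('train.record') should match the number of annotated training images
```

If the count doesn't match your number of labelled images, something went wrong in the csv or TFRecord step.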
Configure training
Before we start training we need to create a label map and a training configuration file.
Creating a label map
The label map maps an id to a name. We will put it in a folder called training, located in the object_detection directory, with the name labelmap.pbtxt
item {
    id: 1
    name: 'jagermeister bottle'
}
The id number of each item should match the id of the specified item in the generate_tfrecord.py file.
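If you ever extend the dataset beyond one class, the label map and class_text_to_int must stay in sync. One way to make that easier to maintain is a single dict; this is a sketch of the idea (the commented-out second class is a made-up placeholder), not the tutorial's exact code:

```python
# One mapping, mirrored by labelmap.pbtxt entries with the same ids.
LABELS = {
    'jagermeister bottle': 1,
    # 'some other bottle': 2,  # hypothetical extra class
}

def class_text_to_int(row_label):
    # Unknown labels map to None, just like the original else-branch.
    return LABELS.get(row_label)

print(class_text_to_int('jagermeister bottle'))  # → 1
print(class_text_to_int('beer bottle'))          # → None
```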
Creating a training configuration
Now we need to create a training configuration file.
We are going to use the faster_rcnn_inception_v2_coco model, which can be downloaded from:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Download it, uncompress the file, and place it into the object_detection folder.
The folder name will look like faster_rcnn_inception_v2_coco_2018_01_28
And we are going to start with a sample config file named faster_rcnn_inception_v2_pets.config, which can be found in the samples folder.
You can download it from:
https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs
Keep the same name, save it into the training folder, and open it with a text editor in order to change a few lines of code.
Line 9: change the number of classes to the number of objects you want to detect (1 in our case).
Line 106: change fine_tune_checkpoint to the path of the model.ckpt file:
fine_tune_checkpoint: "/Users/<username>/projects/tensorflow/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
Line 123: change input_path to the path of the train.record file:
input_path: "/Users/<username>/projects/tensorflow/models/research/object_detection/train.record"
Line 135: change input_path to the path of the test.record file:
input_path: "/Users/<username>/projects/tensorflow/models/research/object_detection/test.record"
Line 125-137: change label_map_path to the path of the label map file:
label_map_path: "/Users/<username>/projects/tensorflow/models/research/object_detection/training/labelmap.pbtxt"
Line 130: change num_examples to the number of images in your test folder.
num_examples: 10
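Rather than counting the test images by hand, a throwaway snippet (my own helper, not part of the API) can report the number to put in num_examples:

```python
import os

def count_images(folder, exts=('.jpg', '.jpeg', '.png')):
    """Count image files in a folder, ignoring the xml annotations."""
    return sum(1 for name in os.listdir(folder) if name.lower().endswith(exts))

# num_examples should equal count_images('images/test')
```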
Training model
We are going to use the train.py file, which is located in the object_detection/legacy folder. We will copy it into the object_detection folder and then type the following into the command line:
Update: or we can use the model_main.py file in the object_detection folder instead.
python model_main.py --logtostderr --model_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
If everything was setup correctly the training should begin shortly.
Note: if you are getting an error like "module tensorflow has no attribute contrib", it is because you are using Tensorflow 2.0.
There are two ways of solving this:
1. update the scripts to work with Tensorflow 2.0
2. roll back to Tensorflow 1.0
The easy way is doing a rollback, so do the following:
# uninstall Tensorflow 2.0
conda remove tensorflow

# install Tensorflow 1.X
conda install tensorflow=1
Training model progress
Once the training is running, you will see the current loss get logged to Tensorboard every 5 minutes or so (depending on your hardware). We can open Tensorboard by opening a second command line, navigating to the object_detection folder, and typing:
tensorboard --logdir=training
A web page will now open at localhost:6006
You should train the model until it reaches a satisfying loss.
The training process can be then terminated by pressing Ctrl+C. | https://towardsdatascience.com/training-tensorflow-object-detection-api-with-custom-dataset-for-working-in-javascript-and-vue-js-6634e0f33e03 | ['Adrià Gil'] | 2020-02-25 14:46:27.446000+00:00 | ['JavaScript', 'Artificial Intelligence', 'Guides And Tutorials', 'TensorFlow', 'Machine Learning'] |
Three Medium Writers who educate, entertain, and entice me to keep reading (and writing). | When I first joined the Medium Partner Program, there was an awesome criteria in place for you to earn money: You had to both contribute your writing and be an active reader. You had to be part of the community supporting fellow writers. This meant you had to spend time reading, highlighting, commenting, and sharing others’ posts. It was a wonderful thing and I’m sorry that it disappeared somewhere in the last two years.
What this guideline did was force writers consumed with putting words down to pick something up from someone else and learn. This is a habit I developed and continue to use to meet new writers, discover a different point of view, and broaden what I read in a day.
Here are three writers I’ve met on Medium whose writing touches, educates, and connects. Enjoy:
Hair to Dye For has this 60-year-old laughing and nodding in agreement and understanding. I let my hair do its thing a few years ago. Marie describes a woman's connection to her hair: "I had turned fifty years old that summer and no longer felt I could tolerate trying to make myself in an image that pleased others while leaving me miserable."
I’ve exchanged many comments with Florian telling him: You made me laugh, cry, and learn all in the same article. He writes so much that touches me in one way or another, it was difficult to narrow my pick. I chose this article because we both lost our fathers to wasting diseases — his to cancer, mine to ALS. Even within the time of dying, humor can be found.
Agnes and I have bonded over shared family stories. But don’t stop your reading there. Agnes writes intimate poetry, yoga-focused health articles, and socially attuned stories. If you are new to Medium (or been here forever!), connect with Agnes. She shares her support like a godsend carrying an umbrella on a rainy day.
Three — it’s a good thing we are limited to three because I have many great writers bookmarked and could easy write about six, nine, twelve … you get the idea. Enjoy! | https://medium.com/top-3/three-medium-writers-who-educate-entertain-and-entice-me-to-keep-reading-and-writing-c7bcda68a43f | ['Rose Mary Griffith'] | 2019-12-12 17:25:42.569000+00:00 | ['Grief', 'Aging', 'Writing', 'Life', 'Top 3'] |
Hexagram 49: Revolution. A Short Story | I dreamed that night. After a journey along roads fringed by tall maize and sorghum, after I climbed the steps of the old temple and made obeisance in the incense-fragrant hall occupied only by myself and a few sparrows, after I looked him in the eye and he stared back at me from the shadowed wall, I dreamed of old Fu Xi. He was hairy, wild-eyed, a man of the soil, dressed in leaves and animal skins, his toes sturdy, his big hands capable, grasping at everything.
It was a dream. What can I say? Without the logic of brute things, my mind constructed a slapdash simulacrum of the world. I dreamed of Fu Xi, but he was also my boss from work. His tie, now I think of it, was made of leaves. The Freudians will say that he was also my father; and perhaps they are right. But there is more than one story in the world.
So there he was, leaf-clad Fu Xi, and he was restless and everywhere. He fashioned a bow and arrow, and with it felled a deer; with the pelt of the deer he made a coat; with the meat of the deer he made dinner. And the dinner was good. But afterwards, he did not sleep. Instead he leapt up to invent writing and music and the arts of divination.
The people — timid people with soft skins that bruised easily — peered from their leaf-shelters, small, nervous creatures on the brink of a world in which everything would soon be different. ‘Teach us,’ they said. ‘Teach us.’
Fu Xi looked at me. This is the part of the dream that I am sure I am remembering with precision. He looked at me, as he had in the temple the day before, but with real, not painted, eyes. Then he was off again. He invented a robot rabbit that played the drums, and he invented a battery to ensure the rabbit could continue playing for eternity without cease, and he invented a pill to make erections last, and he invented a novel way of curing bacon, and he invented a bomb capable of destroying the entire world.
At this point I formed a clear intention. Freudians, say what you will. I am playing into your hands. I formed the intention to kill him. No bomb, no cured bacon, no erection pills, no robot rabbits, no divination or music or writing.
But just then I must have cried out, because my wife shook my shoulder. ‘What?’ she was saying.
‘What?’ I replied, my eyes now open, the shadow of the dream still upon me.
‘You were raving about rabbits,’ she murmured.
‘I was?’
‘You said something about erections.’
‘Did I?’
‘Hmm…’ Then I heard her sigh, and her breathing slowed. The following day she would not remember this. I fell asleep just in time to see Fu Xi — hairy, wild-eyed old Fu Xi — disappearing away over the hill. His back was turned, his vast shoulders hunched in sadness, his large hands empty. | https://medium.com/wayward-things/hexagram-49-revolution-a-short-story-f63261ef7491 | ['Will Buckingham'] | 2019-12-18 15:16:42.303000+00:00 | ['China', 'Writing', 'Fiction', 'Short Story', 'Philosophy'] |
The Deep Learning Tool We Wish We Had In Grad School | Deep Learning
The Deep Learning Tool We Wish We Had In Grad School
How Determined could have fixed our deep learning infrastructure problems
Author(s): Angela Jiang, Liam Li
Machine learning PhD students are in a unique position: they often need to run large-scale experiments to conduct state-of-the-art research but they don’t have the support of the platform teams that industrial ML engineers can rely on. As a result, PhD students waste countless hours writing boilerplate code, ad-hoc scripts, and hacking together infrastructure — rather than doing research. As former PhD students ourselves, we recount our hands-on experience with these challenges and explain how open-source tools like Determined would have made grad school a lot less painful.
How we conducted deep learning research in grad school
Photo by Angela Jiang
When we started graduate school as PhD students at Carnegie Mellon University (CMU), we thought the challenge laid in having novel ideas, testing hypotheses, and presenting research. Instead, the most difficult part was building out the tooling and infrastructure needed to run deep learning experiments. While industry labs like Google Brain and FAIR have teams of engineers to provide this kind of support, independent researchers and graduate students are left to manage on their own. This meant that during our PhDs, the majority of our attention was spent wrangling hundreds of models, dozens of experiments and hyperparameter searches, and a fleet of machines. Our ad-hoc workflows prevented us from doing better research, as tasks like starting new experiments and distributing training would cause increasingly more strain on the existing workflows and infrastructure.
Every project started the same, with a lone graduate student tasked with implementing a research prototype and performing a virtually endless number of experiments to test its promise. There was little infrastructure, scarce resources, and no process. So we would start writing one-off scripts: scripts to create the prototype, scripts to kick off dozens of experiments, and even more scripts to interpret the logs from these experiments. These scripts were run on whatever machines we could find: the labs’ machines, friends’ lab’s machines, AWS spot instances, or even our professors’ personal machines. As a result, we’d have gigabytes of logs in various drummed up formats, model checkpoints, and PDFs of graphs showcasing our results, scattered about the file systems of the machines we used. We quickly learned that to survive as ML graduate students, becoming well-versed in engineering, system administration, and infrastructure management was table-stakes.
Photo by Ferenc Horvath on Unsplash
The first time each of us realized that it didn’t have to be this way was when we did industry internships. As interns in a place like Google Brain, we had access to Google’s internal training infrastructure that allowed us to focus on research as opposed to operations.
It was daunting to leave a place like Google, knowing that as independent researchers, we would be back to managing our own infrastructure and that this would come at the cost of doing our research.
Fortunately, you don’t have to do grad school the way we did. Open-source tools for deep learning training have matured and can empower individual researchers to spend less time wrangling machines, managing files, and writing boilerplate code, and spend more of their time forming hypotheses, designing experiments, interpreting results, and sharing their findings with the community. But in the throes of conducting research and surviving grad school, it is difficult to invest time to learn a new tool without the guarantee that it will increase your productivity. To help future graduate students get over that hurdle, we share the ML research pain points that Determined AI would have alleviated for us.
How Determined can transform the research experience
Throughout the life cycle of a deep learning research project, you’re bound to run into several common pain points. Today, many of these can be alleviated with foresight and the right tooling. In this section, we share the pain points we commonly encountered and how tooling like Determined can help.
Photo by Lauren Mancke on Unsplash
Monitoring experiments
A single deep learning experiment can run for days or weeks and requires constant monitoring. In grad school, we would typically monitor experiments by tailing a log file or SSH’ing into the cluster and using tmux to monitor the job’s console output. This required remembering to start the experiment in a tmux session, to log key metrics, and to manage output log file naming and organization. When running several experiments at the same time, this also required tracking which experiment was running on which machine. Tools like Determined shed this overhead by automatically tracking and persisting key metrics and by logging them to TensorBoard. These results and more are available in a web UI for users to monitor their experiments in real-time and can be shared with peers, advisors, and community members with a single link.
Dealing with failures
An experiment can also crash due to transient errors that are out of our control. This issue is exacerbated when running on preemptible cloud instances as a cost-saving measure. In these situations, we could easily lose hours or days worth of work and would then need to relaunch an experiment manually by passing a command via SSH. With Determined, the system automatically retries failed jobs for you, so no time is wasted when an error occurs. Automated experiment logging helps you diagnose and track where failures are happening across machines. Checkpoint saving also ensures that little progress is lost when a failure occurs. Determined manages checkpoints automatically: users can specify policies to control how often checkpoints are taken and which checkpoints should be preserved for future use.
Managing experiment results
The result of days and hours of experimentation are artifacts like log files, model checkpoints, and results from subsequent analyses. It’s necessary to persist results in all stages of the project to retroactively report them to the community. Initially, managing this data is straightforward to do on the file system with careful naming and folder organization. But as a project progresses, it becomes an unwieldy way to track the gigs and gigs of emerging data. For us, it was common to have to redo a long-running and resource-intensive experiment because we lost track of a particular experiment graph or the script and model checkpoint to reproduce an earlier result. Instead, it’s better to start with an experiment tracking platform early into a project’s lifecycle, so that all experiment data is managed for you. Using Determined, model source code, library dependencies, hyperparameters, and configuration settings are automatically persisted to allow you to easily reproduce an earlier experiment. The built-in model registry can be used to track your trained models and identify model versions that are promising or significant.
Distributing training
Deep learning training requires a huge amount of resources, with state-of-the-art results sometimes requiring tens of thousands of GPU hours. Almost inevitably, independent researchers need to scale their own training experiments to more than a single GPU. By relying on native PyTorch or TF distribution strategies, we were still left with tasks like setting up networking between machines and alleviating errors from machine failures and stragglers. At Determined, distributed training is rolled out for you by infrastructure experts. By writing your model code in Determined’s Trial API format, you can distribute your code with a single config file change. Determined takes care of things like provisioning machines, setting up networking, communicating between machines, efficient distributed data loading, and fault tolerance.
Managing experiment costs
Many deep learning practitioners rely on cloud platforms like AWS and GCP to run resource-heavy experiments. However, when operating within tight academic budgets, cloud platforms were often prohibitively expensive. Instead, we would run on cheaper spot instances without guaranteed uptime. Consequently, we had to manually restart stopped instances, checkpoint experiments constantly, or in absence of this, suffer lost results. To make the most use of available resources, Determined manages cloud compute resources automatically for the user depending on what jobs are queued. When using AWS spot instances or GCP preemptible instances to reduce cost, Determined maintains reproducibility with fault-tolerant checkpointing.
Hyperparameter tuning
Hyperparameter tuning is a necessary step to achieve state-of-the-art model performance. However, these were one of the most difficult experiments to run as they scale up all of the pain points previously discussed. Running a grid search is simple in theory, but ends up being orders of magnitude more costly and longer to run than traditional training. Algorithms that employ early-stopping like SHA and ASHA can be dramatically more efficient but are difficult to implement. (Well, not for Liam who invented these algorithms, but it’s difficult for the rest of us!) Hyperparameter searches also generate a lot more experimental metadata to manage and are harder to rerun when things go wrong. With Determined AI, you can run hyperparameter searches with state-of-the-art algorithms by changing a config file. And just like in regular training, you get experiment tracking, distributed training, and resource management out of the box. You can also pause, resume, or restart hyperparameter tuning jobs on-the-fly.
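For intuition, the early-stopping idea behind SHA fits in a few lines: evaluate many configurations on a small budget, keep the best fraction, and repeat with more budget for the survivors. This toy sketch (a made-up objective and simplified mechanics, nothing like Determined's actual implementation) shows the core loop:

```python
def successive_halving(configs, score_fn, budget=1, eta=2):
    """Toy successive halving: evaluate, keep the top 1/eta, multiply the budget."""
    while len(configs) > 1:
        # Score every surviving configuration at the current budget.
        ranked = sorted(configs, key=lambda c: score_fn(c, budget), reverse=True)
        # Keep only the best fraction and give them a larger budget next round.
        configs = ranked[:max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Made-up objective: scores improve with budget, peaking at lr = 0.5.
def score(lr, budget):
    return budget * lr * (1 - lr)

best = successive_halving([0.1, 0.3, 0.5, 0.7, 0.9], score)
print(best)  # → 0.5
```

The value of a system like Determined here is that each `score_fn` call is a real (expensive, distributed, checkpointed) training run, and the promotion logic, scheduling, and fault tolerance are handled for you.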
Ensuring reproducibility
Building upon empirical results, either by ourselves or the broader community, requires being able to reliably reproduce said results. Reproducibility is becoming a first-class goal of the ML research community, with initiatives like the Reproducibility Challenges or Artifact Evaluations. During our PhD, we found it difficult to anticipate all the data needed to ensure full reproducibility of our results. For instance, we may start by saving the experimentation script and the code (via git SHA) that led to a particular result, only to come to find that we could not reliably reproduce the result without knowing the machine the experiment ran on. With Determined AI, the data you need for reproducibility is automatically persisted for you, including the code used to create the model, the model weights, the full environment used to train the model, and the data preprocessing code.
If you are like us, you may find yourself spending most of your time on operations, not research. Fortunately, with the emergence of ML infrastructure tools, you don’t have to do ML research the way we did. Tools like Determined provide researchers the foundation to build state-of-the-art and even production grade models. If you feel like you could benefit from the backing of a training platform, we encourage you to give Determined a spin. To get started, check out our quick start guide. If you have any questions along the way, hop on our community Slack or visit our GitHub repository — we’d love to help! | https://medium.com/towards-artificial-intelligence/the-deep-learning-tool-we-wish-we-had-in-grad-school-3107f9a39efa | ['Angela Jiang'] | 2020-11-19 17:13:47.853000+00:00 | ['Machine Learning', 'Deep Learning', 'Technology', 'Software Engineering', 'Education'] |
The Sharing Economy + AI Assistants in China — my short trip in May, 2017 | Trips to China
I take quarterly weekend trips to China to stay in the loop of the tech scene there. It’s substantially different every time I visit.
My last trip was in Q4, 2016. Thanks to Kai-Fu’s invitation, I had the pleasure to visit the Foxconn factory in Shenzhen, and it was one of the best trips I’ve had. Needless to say, everything was confidential on that trip so I can’t really talk about it.
One takeaway is that Foxconn is one of the top robotics companies in the world, I was very impressed by the operational efficiency and scale. I’ve also decided to spend more time looking into factory AI and robotics going forward (more on that later).
Recently, I took another trip to Shanghai and Beijing; here are some observations.
The sharing economy
The biggest change this time compared to the last was that almost everyone on the streets of Beijing was using a “bike-share.” Ofo and Mobike were the most popular, followed by Bluegogo on the streets of Beijing. Even my mom uses them. In fact, she already has 3 apps. She calls them the yellow, orange and blue bikes.
The American news media has just started to pick up on this phenomenon (a little bit late). At the same time, at least 5–6 bike sharing companies have entered the U.S. market to my personal knowledge. It’s easy to think this might be the biggest thing in the U.S. as well, following the success of the Chinese biking unicorns, but the two markets are substantially different.
Here are some of the big ones for American investors to think about:
Do people’s behaviors need to change? Bikes as transportation have a long and illustrious history in China. There are bike-only lanes in most cities. In places where the bike lanes aren’t so obvious, people ride them on sidewalks all the time. Bikes as transportation and last mile delivery are natural and organic for Chinese citizens. How many people in the United States use bikes as transportation? Where do they ride the bikes, in lanes shared with cars?
Sharing helmets? No one wears helmets in China (masks and umbrellas are in fact more common for bikers than helmets). How many cities in the United States do not require helmets? How would you feel about using a helmet dozens of others have sweated into?
Parking? You can practically park bikes anywhere on the street (technically, it’s not allowed, but I’ve seen bikes literally everywhere in my short trip to Beijing including in the middle of a highway). Biking companies are expected to put major work into negotiating with cities to solve the parking problems. Will US cities live with these problems — even in the short-term?
Chinese investors invest in biking companies for the payment entry. One of the main reasons Chinese tech giants backed the bike share companies in China is that it’s a relatively low-friction entry point for new wallet customers. Users only pay a negligible amount of money to use a bike, but the account creation and linking of bank accounts is what these payments giants sought. This makes a lot of sense, because it’s getting harder and harder to convince investors that they’ll break even based on the fees charged by the bike alone — the competition is so intense that the bike share companies are almost paying users to use the bikes. I paid zero to ride hours of bikes switching across multiple companies. Does this logic make any sense in the U.S. context?
Exit strategy. Tech innovators in China expect to get acquired by one of the tech giants (all the giants want to enter the same hot market); if a company does well, the chances of them getting to a handsome exit are pretty high. This might be a positive sign for Chinese and U.S. investors looking to invest in bike share companies. Does anyone think this way here in the U.S.?
As the bike share market starts to peak in China, other kinds of sharing economy concepts have been introduced; for example — sharing phone chargers, umbrellas, gym passes, you name it.
It seems like everything is becoming cheaper and more accessible to anyone anywhere. This is a picture of a single-person KTV booth for people who need a fix on-the-go.
The powerful personal assistants
I’m a fan of human-powered AI and AI assistants. If you live in the Silicon Valley, you’ve probably heard of or have used one of the personal AI assistants (Amy, Clara, Magic or Fin).
Before this trip, a friend recommended a Chinese version called Lai Ye, backed by Sequoia China, which made my life much easier (I have no relationship with Lai Ye).
Lai Ye is different from the U.S. AI assistants. It does pretty much everything I can think of: on-demand coffee, scheduling via WeChat, booking cars, finding maids, sending packages, procuring on-demand massages, booking plane and train tickets etc. etc.
I used Lai Ye 3 times and was impressed each time. Lai Ye integrates with popular services so when I asked for coffee, it showed me a menu of different coffee options from Starbucks and asked me to pick one. Lai Ye assigned the task to a worker who delivered the coffee to me in person. The whole experience end-to-end took 19 mins; the transaction on my side took less than a minute.
As a consumer, it made a big difference for me because…
It removed the friction of setting up the app and the payment method for every app I need to use. I now have one interface and one payment channel and I can enjoy every service. This is particularly useful given the ever-evolving landscape of apps.
It was super-efficient and reduced the time I needed to spend on things like calling a provider, finding a delivery window that worked for me, putting it on my calendar, etc. With the app, products and services come to me.
The question, of course, is whether they’ll be able to provide a consistently impressive level of service when they’re trying to run a profitable business. I only used it 3 times.
I suspect they have significant operational challenges, especially on the delivery side during rush hour. Doing one task well (e.g. scheduling) is already hard, let alone doing 5–10 of them well. One thing they do very well is that they have integrated with many products/services and standardized a big portion of each task (the picture below shows their partners).
Much like Slack, the more integrations they have, the more useful they become. | https://medium.com/startup-grind/the-sharing-economy-ai-assistants-in-china-my-short-trip-in-may-2017-1dc8815fdc90 | ['Lan Xuezhao'] | 2017-06-13 09:03:12.617000+00:00 | ['China', 'Tech', 'Artificial Intelligence', 'Transportation', 'Sharing Economy'] |
What Surveillance Misses | What Surveillance Misses
Why jumping to conclusions is risky
Mark Bartholomew’s book, Adcreep: The Case Against Modern Marketing, highlights Neil Richards’ idea of “intellectual privacy,” which should change how marketers approach their data and targeting.
“Richards makes a normative claim that processes of thinking and making up one’s mind about a subject fare best when performed free from outside exposure. To be truly valuable, thoughts need time to incubate in private before they are able to be expressed publicly. When other entities can see the books we read, music we listen to, and the websites we consult, this intellectual refuge evaporates.”
For the larger part of ad targeting and data gathering, the entities that collect our information blindly skip over the “But,” significantly altering what makes each of us and our decisions unique.
You bought a business book on Amazon, BUT _____ .
The answer to “But”, completely changes what this bit of data means to Amazon, your credit card, and potential marketers.
Just because you bought a business book on Amazon via Prime doesn’t make you an eager, savvy business student. There are countless contexts and variables which are not considered when attempting to understand You.
For example, you bought a business book on Amazon, BUT the shipping address was not your home’s. (The book was a gift for someone else.)
Or, you bought a business book on Amazon, BUT you absolutely despised it. (You’re actually not interested in business literature.)
When we consider the culmination of all of our online activity and consumer behaviors, many of our actions lack the invaluable description and motivation which accurately paint who we are, and why we’re doing what we’re doing.
As Neil Richards points out, we need time to incubate reliable opinions and tastes free from onlookers. A Like, Save, Forward, Buy, Watch or Listen does not reveal the full picture of who we are, and if anything paints an unreliable and questionable one… one that neglects to answer the most valuable piece of data: But, what?
For Once | For Once
A poem about Freedom and Men
Photo by Edmund Lou on Unsplash
While humanity shivers in fear
Locked up in homes
The leaves quiver without hesitation
The roses bloom and no one plucks them
Finally, the animals have a retreat
from the long-lasting fear and busy schedules
of running and hiding from human
For once, the earth took a deep breath
Closed her eyes and slept well.
While humanity looks on in horror
The gifts of nature, such as air
Free to all men, yet choked from some
The children frightened and confused
We are not promised tomorrow
But hope remains the little fairy tale
To help us sleep through the night
Looting gold and looting rights
I could excuse them
in the name of the evil plight
But to take away
that which are only abstracts
such as hope and love —
I cannot see an acceptable argument
For once let him breathe
For once let the children play. | https://medium.com/the-pom/for-once-33e594174e77 | [] | 2020-12-21 16:48:11.398000+00:00 | ['Poetry', 'Humanity First', 'Freedom', 'Environment', 'Racism'] |
Terminology For Feminists and Anti-Feminists | Terminology For Feminists and Anti-Feminists
Some strong opinions you may, or may not, agree upon
The less known female mouse with her earnings. christyl rivers
Some terms that get people worked up: Rape Culture, Patriarchy, Toxic Men
To be nice, instead of being outraged, is sometimes a great challenge to writers and readers. It’s a word weary world.
It is for me, because I tend to get very animated, and sometimes, overly passionate, about my so-called ‘far leftist’ agenda. My agenda, to get people to appreciate one another and our planet, does not seem so far out to me. However, I do understand how it gets lumped in there with attitudes about man hating, ‘post-racial’ cultures, cultural relativism, and the upcoming elections.
‘Man hating’ is a phrase right up there with ‘toxic masculinity’. Do women hate men so much? Why is masculinity being called toxic? Why is victim blaming a thing, or, for that matter reflexively saying women want to ‘play the victim.’
Crying ‘victim’ is also heard ad nauseam about those of us who have talked about the dangers of climate change (now climate crisis) for decades. People feel that the environmentalists are just doom and gloom-y whiners. The Earth is not going to end, so lighten up, lady!
Virtuous Victory
Here is the thing, though. Victory is the opposite of victimhood. Reducing sexism, or racism, or domination hierarchy, which hurts people and planet, is about victory. It’s about celebration, social healing with grace and strength, and appreciation of one another.
Victimhood is more the purview of those few put-upon white males who never got to be president, or boss, go on a consensual date, go to college, or drive a babe magnet. Whole people do not earn their self-esteem through those things alone.
We hear a lot about victim blaming and claimed ‘victimhood’ when we talk about rape culture. Isn’t rape culture just another feminist myth? Isn’t patriarchy, and male privilege just something made up, presumably so women can whine about stuff? It’s not, but when your mind is already made up that women ‘just make stuff up.’ No one wins. No one gains. We all become victims in a futile game of not hearing one another. In this sense, it is true, that victimhood, as a frame of mind, is self-reinforcing.
Let’s look at some other problematic terminology. The word feminism has been attacked from all sides more than a super hero in the Marvel cinematic universe. It is even reproached by some ‘lefties’. I looked at the word from all sides myself, and I agree that it has a serious branding problem. However, we have to sometimes look at words according to what the universal dictionary meaning is:
Feminism: The belief in social, economic, and political equality of the sexes.
Of course, the definition is somewhat slightly different depending upon which dictionary is consulted. Some include the word, ‘advocacy’ for women’s rights. But, instantly, this starts word wars, because, why not advocacy for men AND women?
Equality is not a better term, because it ignores, some say, that it is females, who have been historically disadvantaged. And, the fight is on.
Define it first
But we have to start with some definitions, to get a common grasp of what people mean when they use words. To many people, feminism has become a prejudice favoring women, not equality. To some, it is an enormous threat to our civilization, because they sense it challenges traditional roles.
Here is the truth: Feminism is always about choice. Want to stay home and care for children? That’s your choice. Feminists just want to extend the same choice to men who may want to stay home and care for children. That, and every other privilege.
Boys will be boys. This phrase is problematic. The dictionary tells us that boys are boys. If we must honor dictionary meanings, as we do above, what else are ‘boys’ to be?
But the phrase ‘boys will be boys’ has three words, and it seems to me that only disingenuous people will ignore that those three words together describe males being destructive, and yet excused for being so.
Are males all toxic? Destructive? No. Most are not, the problem is we have all been taught to tolerate a lot of rotters, especially if they are powerful, wealthy, predators like Cosby or Epstein.
In many of my articles, I use the phrase ‘magnificent masculinity’, to describe male behavior that is not toxic, and to be more encouraged. Putting adjectives in front of words should make them more, not less, descriptive.
No word gets more hate than the word abortion. Having studied it for years, my view is that abortion is extremely rare, and far less dangerous, in places where women are afforded easy access to it. More importantly, access to education and contraception lessens the need for abortion. Some people hate it that Planned Parenthood prevents more abortions than perhaps any other entity on earth (except maybe education). Why is this so?
Some people believe it is used as birth control. Some do not. Most people do agree that the choice should be up to those most affected by outcomes. It is on this common ground that we should all stand.
Most of us believe that the number of abortions should be reduced. We disagree about how to do that.
Coming to terms with equality
Creating equality, providing healthcare, encouraging education, and compassion is always a good way to go with any issue. With true equality, male birth control and male pleasure depends on all parties sharing consent, compassion, and caution.
Abortion will always be a difficult problem until equality makes it a human rights issue, and not a ‘women’s issue.’ A human being should have safe and legal access to all necessary healthcare for undisputed human beings. Being judgmental is not helpful.
The term ‘pay gap’ is thrown around, and often out, by those who object to modern feminism. Maybe it’s just me, but I think the term ‘earnings gap’ is far more accurate. Women are very, very often simply not paid at all for the housework and childcare they do. Basically, work that women do is devalued, until men become more feminist.
The phrase ‘women who work’ drives me nuts. I have never met a woman who does not work. The problem, for them and often their male spouse and/or household, is that they don’t earn enough. And when they do earn, yes, we still have a gap to repair. A waiter is of higher status than a waitress. Yet, in some nations, most doctors are female. They earn less because a physician is considered to be a care-giving type of occupation. But in places where it is considered science, male doctors earn more.
People fight about terms like ‘rape culture’, because just as with ‘toxic masculinity,’ ‘battered women’ and lots of phrases, we again run into putting an adjective in front of a noun. Is all culture rapey? No. Is all masculinity toxic? No. Are only women battered? No. Are all slut shamed people female? No. Does every job have a pay gap? No.
We have to learn to listen to one another, and solve this adjective problem. Ignoring how all of these kinds of terms became as feminized as Ru Paul’s Drag Race, doesn’t help.
Emotional intelligence
How about the term ‘emotional female’? With feminism and true equality, expression allows men to be less pent up and violent. Emotionality is human, not female. And anger, it turns out, is also an emotion. Don’t tell human beings not to cry; we are better people when we allow true feelings.
Cultures, and socialization changes over time. Many women suffragettes wanted the vote because domestic violence (a better term than wife battering) motivated women to advocate for an end to alcoholic rage. They preached sobriety. That didn’t work out. But, instead, eventually women — even respectable ones — were welcomed into saloons and bars, and hooray (!?), even alcoholism became more gender neutral and less judgmental.
We live in a climate crisis world now. Feminism, is seldom accused of causing that, but it also is not invoked nearly enough as a solution. In a greener world, equality would see to it that resources are used wisely and fairly. Racism and sexism, always tend to slight one group in order to privilege the powerful. Smart men don’t want this any more than women do. Domination hierarchy, a term I prefer to ‘patriarchy’ is a big reason for this.
In one thousand years, I think the world, if we protect it, will be less unjust than today. The term feminism will hopefully be used in a historical context. Should we survive, and save, not destroy civilization, it will be because more men and women embrace feminism as simple fairness. | https://medium.com/the-ascent/terminology-for-feminists-and-anti-feminists-92b48d5a1b04 | ['Christyl Rivers'] | 2019-07-16 13:41:35.002000+00:00 | ['Language', 'Self-awareness', 'Politics', 'Feminism', 'Culture'] |
Next Tech will be at the 2020 ASU GSV Summit! | I am thrilled to announce that Next Tech has been selected as one of the “Elite 200” companies that will present at the 2020 ASU GSV Summit in the GSV Cup competition! ASU GSV is widely regarded as one of the top education conferences in the world, so we are looking forward to attending this event and being a first-time participant in the competition.
Over the past year, Next Tech has experienced a massive surge of growth from all levels of the technology education industry. Industry leaders are realizing that mastering computer programming, web development, and data science requires a hands-on approach.
The future is, without question, brighter for anyone learning in-demand 21st century skills due to this industry-wide mindset shift. As a longtime proponent for teaching technology skills using hands-on learning, I am encouraged by the current progress and very excited for what the future holds.
If you are in town March 30th to April 1st, come see what Next Tech has accomplished to date and see what the future holds for us and the technology education industry as a whole!
About Next Tech
Next Tech is the provider of the leading platform for teaching and learning computer programming, web development, data science, and other technology skills. It pairs instructional material with interactive browser-based coding environments, where students can master new skills through hands-on learning.
About ASU GSV
The ASU GSV Summit gathers leaders in government, education, and the workforce advancing social and economic mobility by bending the arc of human potential through innovation in education. The GSV Cup itself is a unique cross-sector competition spanning “pre-K to gray” sectors, including technologies in corporate learning and talent management, workforce analytics, early childhood, K-12, HireED and postsecondary education.
About the Summit
The 2020 ASU GSV Summit will be held from March 30 to April 1 in San Diego, CA. The 2019 Summit drew 5,000 attendees from over 45 countries, including more than 300 investors representing $7+ trillion of capital. | https://medium.com/nexttech/next-tech-will-be-at-the-2020-asu-gsv-summit-ff2ad2ab8134 | ['Saul Costa'] | 2020-02-11 23:40:55.050000+00:00 | ['San Diego', 'Startup', 'Education Technology'] |
A city, a jungle and a cave with fresh waters | A city, a jungle and a cave with fresh waters
There and back in a short urban tale
Photo by Boston Public Library on Unsplash
The other day I was in Tokyo, city centre. Or maybe Singapore – I don’t remember.
What I do remember was a feeling of ok-ish sickness of the mind. And of the stomach. I was OK, but somewhere in my depths there was the faintest feeling that I was not.
Let me explain.
I was walking around the city, in mind I had a deal that I was making with a potential new client. I was hungry and out of the office, through the streets, I was wading through people to find a place of restoration. To satisfy my hunger and to relieve my mind.
The deal I had in mind was exciting, as was the thought of biting some all-star sandwich so big that it could temporarily cease starvation in some less fortunate parts of the world. I was firing at all cylinders – tired but energised, focused but also a bit overwhelmed by it all.
Shut.
They’re on holiday?? They’re my best grub in town! What the hell?!
The sandwich deli that my body had been craving for at least 15 minutes was shut. What a punch in the guts. Okay, where next?
As I was mindlessly going through the motions of jumping from take-out to take-out like a bee on several garden flowers my mind started to think.
Not sure you’ve got the gravitas to swing this deal that you want.
Wait, what? Why not? What’s the problem?
Am I suddenly not interested enough to will this deal into the world? I don’t feel the energy, there’s no wave to ride! Gosh, I have no gravitas – that’s true!
Ah-ha! Here’s a burger place, I never get burgers at work – they normally devour me after I devour them. They make me snooze all afternoon as I try to digest them. But what the hell – I’m having a tough day, let’s dive in.
You know, I just need to regain some composure. The deal is there, I can structure it the right way, I can create a win-win for both. He’ll understand that he’s about to make the best purchase available to him. I’ve done it before a number of times, and I can do it again.
In other words, I felt OK. But I was not. From my depths, signals were going all over the place. Up here, superficially, I was experiencing all this as an unfortunate, mild wobble, caused by the world not going the way I wanted.
There she is
Psssst. Hey, you!
My attention is gently, but firmly grabbed. A woman, standing by the side, among the chaos and mayhem of lunch-time city centre, looks at me.
Like a cork in a whirlpool, I was bobbing up and down pretending to be fine. She’d just created a new space and I stepped into it, entirely, unknowingly.
It felt kind of like a waiting room. A small protective space, just by the side of the chaotic, roaring vortex I was in.
Who are you? She asked.
That’s a loaded question, I thought. And quite an important one too actually. It’s so important – why don’t I think of it more often?
Who are you? She asked again.
I need to think. I need to stop and think.
And so I did.
Firmly focused on this thought, the noise of the street started to blur into the background. The images of the street absorbed away. It’s like I took a step forward, and the daily madness became a muffled, retreating entity.
Who are you? She asked once more.
I am what I feel. Or I feel what I am? Confusion! Distraction! Fragmentation!
I need to cling onto this small heaven. I need to find the tethering. Where’s the tethering?
Focus! That’s it. I can’t allow to be distracted, or pulled away. I need focus!
I quickly grabbed the only thing that I felt possible: my breath, and its pattern.
Who are you? Again.
The more she asked, the more I tethered to my breath. Gradually, it was revealed to me that the waiting room was not a waiting room. It slowly became apparent that it was a departure lounge.
I was gradually being ushered to a departure. Guided by this ever so pragmatic form of tethering: my breath – a calm, powerful thing, once you observe it.
Off you go
Photo by Yoal Desurmont on Unsplash
She showed me the way ahead.
In the distance, I saw a glass wall. Beyond that there was a lot of green stuff. Leafy things, trees…it was a jungle.
I was being led to a jungle?!
Strangely, it felt like I knew this jungle. The map of it was in my knowledge. And awareness was my compass.
I was led into a place I wasn’t familiar with, except that I strangely felt I was. It’s like the jungle was, in fact, the map of me.
I don’t know how long I’d been in that place for. But by then, with quite a bit of perseverance and focus, those pre-existing thoughts and wants and preoccupations felt like they were in a far away place – buoying at the top.
Not me.
As I was stepping into the jungle, I also felt a sense of descent: slow, gradual, meaningful. Every step, led me down a notch, and it was something to savour slowly & attentively.
The jungle has its fiends, too
Quietness started to really pervade the space. I could only hear something far away in the jungle – it was splashing water. It was a solemn sound, whole, and I felt a strange attraction to it.
More immediately, I was dealing with the surroundings of the jungle.
In front of me some sort of windy path figured, blending in the dark through the green leaves and the resonating noises of the wilderness. It was clear that it was the way to go, it was also not possible for me to go anywhere else. The only other path was backwards, into the chaos and instability.
Thank you, but no thank you.
Tethered to this strange quality that is observant focus, and I kept descending through the jungle.
It was strange, the more I was trying to get acquainted with the place and its motions, the more I felt nagged by something. It was annoying and childish-like. Almost like a whiney presence that pulls your jacket, to get your attention and to persuade you into other plans.
Here is the spiel: you feel uncomfortable, you’ve been here for too long. You really don’t want to do this. You’re forcing it, it’s clearly not you and not a spontaneous act.
You must check your phone, by the way. You’ve been waiting for that email for so long. You know you need to action it as soon as you get it. Also, you can do the jungle another time – you got all the time you want. But you really want to check your phone now. And scratch that itch on the leg. And get up and get a piece of chocolate – you know you love the taste and the way it makes you feel.
Its force was so strong and it nearly got me. It nearly pulled me back up in no time – back into that endless flow of symbols and representations, of constructs, of precepts made of plastic. Of stimuli to live by as they get you, one by one.
I don’t know what was in me that day, but I did manage to overcome those temptations – letting go, sinking in, packaging those thoughts and seeing them flow away like mailboxes.
They became external concepts that I wasn’t allowing to infiltrate me.
Gradually, the focus turned onto the quietness of the jungle, its mysterious features and reflections of me that I didn’t quite understand.
Gradually, there was the calling of those fresh waters again. Those solemn sounds of truth. A wholesome attraction, innate, bearer of vitality.
Prepare for landing
In awareness, more or less, I landed precariously on the other side: a deep, low, slow cave of oneness. Where I could bathe in peace and in tranquillity. I had found the waters.
Photo by Michael Behrens on Unsplash
These waters had a quality that I immediately sensed. They were quietly strong – they didn’t bother anybody but also they could not be denied. I guess that’s the power of truth. They simply were.
Familiarising with these waters was like becoming them. A sense of vitality and quiet strength was unfurling in me. Slowly, gradually. The more I gave up control the more they found ways to resonate in me.
You’ve landed, sir. Now it’s time to go back up.
Unsustainable. What it is that makes this unsustainable I don’t know. Maybe lack of practice, but at some point you have to get back.
My only hope was to take this awareness with me and carry it – even if just for a little bit – in the day-to-day of my chaotic life.
If anything, sometimes, I was hoping for memories of this experience to bubble to the surface to provide some guidance, some relief, some caring voice letting me know that another way is possible. That another way is there.
As I snapped out of the cave. I could see the car crash of the world that was before me.
It was exactly the same place, at the same time (more or less) as before. It was the same scenes, the same needs and wants. The same tortuous path to achieving something that maybe I didn’t really want or understand. There was the chaos that I thought was my friend, my natural habitat.
This time though, I waved at it from a distance.
I see you, chaos, life of plastic. I see you and I salute you. I salute you and respect your presence. I respect you but I am not you.
I am me, I have my strength and wisdom. And I use these tools to navigate you, chaos. To blend with you and to understand you, but never to be you.
You and I are not one and the same. | https://medium.com/anima-sana/a-city-a-jungle-and-a-cave-with-fresh-waters-c84156ff09eb | ['Abraxas Monk'] | 2020-05-01 09:42:26.643000+00:00 | ['Self-awareness', 'Tales', 'Self', 'Awareness', 'Stories'] |
Writing Dialogue (not a tutorial) | .
Just my own experience.
I’ve heard that talking to oneself was once considered a sign that you need taking away and put in a straightjacket. You’ve only to stand on a street corner and just about everybody who passes you will appear as someone talking to themselves. Of course, this is not true, they are simply talking into a cell phone. But did you ever listen in? I can’t count the amount of times I’ve walked alongside a person talking on a cell phone, not to be rude, or intrude on their conversation, but just to listen to the speech.
I write as an old man, having had no tuition, not even from a book. Had I read more as a child, I’d be a more complete writer today, though I don’t know if I’d be a better writer.
Basically, dialogue is conversation and nothing more. My characters talk to each other, so shouldn’t they sound like real people? I like this idea because then all I’d have to do is transcribe a person’s conversation, and there’s my real person on the page. I’m done.
Unfortunately, I’ve since learned (it was a Thursday, about two in the afternoon, I was twenty-seven, when it dawned on me) that it’s not that simple — or rather, fortunately, that it’s not that complicated to write dialogue.
Real conversation of a teenager may sound like this:
“Er, Jim, like, have you heard the latest thing, like, on, what’sis name, you know, er, I mean the guy, you know, who’s like in the news all the time — “
Even in direct transcription resembling this one, I can’t accurately indicate where vowels drag, consonants double and so on. Moreover, in real speech, I get to hear a person’s melody of voice, watch his body language, and so I might suffer all the hesitations and indirectness and irrelevancies much better than when I read the transcript in print.
I cannot reproduce real speech. End of story!
I should maybe have given up writing then, at twenty-seven, about two in the afternoon, that Thursday.
Instead, well instead, I learned what any good writer had learned long before me: I can approximate real speech, and in doing so I learned that dialogue should be quicker and more direct than real speech.
Moreover, I found there was no need for talk unless there’s a point to be conveyed. Then another dawning, weeks later, sat in a coffee shop: how could I find the right balance between realism and economy of speech?
To make realistic dialogue, create a distinctive level of diction. Allow some characters to speak in fragments, others in complete sentences; some in slang, some in professional jargon, others in standard English. Dialogue should convey a sense of spontaneity but eliminate the repetitiveness of real talk.
I was in the shower (I have a lot of ideas when taking a shower) when I began to work out that most often the important part of dialogue takes place in body language and in the exposition between the lines. Good dramatic dialogue is multi-layered, so that in addition to body language and direct meaning, there’s another parallel meaning to what’s being said.
Every sentence spoken in a piece of fiction ought to convey some kind of information. Dialogue brings us closer to the characters and their conflicts. When I need history, philosophy, biology, and most other sorts of information, I put them in the narrative; unless my character happens to be a bloody historian. Then I’m screwed!
Getting it right.
Another lesson I learned (don’t even ask at what age for this one): when characters in a scene began talking, after some discourse, I lost freaking track of which character was talking, where he was, and what the hell he was doing.
This usually occurred because I became aware of repeats in dialogue attribution, so I compensated by cutting away tags — resulting in lots of “fluffy floater” quotes. I’m not kidding: I was having my hair cut (I don’t have a lot) when I started to do away with adverbial modifiers for attributions, such as: he said hotly, she said coolly; anyway, you get my drift. Used in moderation, I guess these aren’t so bad (although many will argue that a stronger verb choice is better than the verb/adverb construction), but when we start seeing several per page, their effect becomes diluted and annoys the hell out of me. More importantly, while they might describe how something is spoken, they are more telling in nature than showing.
When every attribution I wrote was: he snarled, she snapped, he interjected, she declared, he asserted, she affirmed, he announced, well, I drove myself fucking nuts!
I kept one little rule in mind: There’s nothing wrong with the word ‘Said.’
It’s all right — really.
I’m bored now, so I’ll stop. But just wait till I tell you about my experiences with punctuating speech. That’ll send you over the edge, which is probably where you’ll find me.
I’m taking the dogs for a walk. I need some air. | https://harryhogg-com.medium.com/writing-dialogue-not-a-tutorial-899c7d55355f | ['Harry Hogg'] | 2018-10-25 09:42:34.643000+00:00 | ['Writing', 'Dialogue', 'Drive', 'Fiction', 'Menuts'] |
Prince’s Female Mirrors | Prince’s Female Mirrors
The Svengali and his muse.
The following is excerpted from Ben Greenman’s forthcoming book-length study of Prince, Dig If You Will The Picture (Henry Holt), set to be published on April 11.
In pop music, female artists are often controlled by male artists — they are given songs to sing, costumes to wear, sometimes even new names to replace their real names. There’s a long tradition of this, or rather, two of them: the Pygmalion tradition on one hand and the Svengali tradition on the other. Pygmalion stories, which began in Ovid and crystallized into their modern formulation in George Bernard Shaw’s 1913 comedy of the same name, usually end with the older male figure falling in love with his protégée. Svengali stories, rooted in George du Maurier’s 1895 novel Trilby, tended to be more exploitative and predatory: du Maurier’s own illustrations depicted the character as a spider in a web, trapping and devouring his prey. Historically, the music industry has lent itself more to the latter model. Rebecca Haithcoat, writing in Vice Media’s Broadly just a few weeks before Prince died, explored the history of the male Svengali in pop music, from Phil Spector to Kim Fowley, with a special emphasis on the economics of the arrangement. Women were permitted to write and perform songs, while men tended to occupy producer roles, in large part because production was needed (and compensated) whether or not songs were released. Similarly, upward mobility was distributed unevenly among the sexes; women had plenty of male mentors (older industry figures who helped them find their way to more work) but not nearly as many male sponsors (older industry figures who taught those younger women how to create work themselves). In these terms, Prince was a Svengali, though his special relationship with female identity ensured that he was just as snared in the web.
Backstage at the American Music Awards, he met a young model and actress named Denise Matthews. Matthews, in her early twenties, had been born on the Canadian side of Niagara Falls, the product of an ethnic crazy quilt that included Afro-Canadian, Native American, Hawaiian, Polish, German, and Jewish blood. She had gone to New York to model, but she was too short, and an agent suggested Los Angeles and acting roles instead. She found some, including a small part in Terror Train (a slasher film set aboard a moving train that starred Jamie Lee Curtis) and a larger one in Tanya’s Island (a romantic adventure, of a sort, about a castaway juggling relationships with both her brother and an ape-man). In both films, Matthews was billed as “D. D. Winters.” In neither film was she especially memorable.
But she made an impression on Prince. The two of them began a relationship, and when he returned to Minneapolis, he invited her to come stay with him. Once she arrived, he began to involve her in a new idea he had — a highly sexualized girl group performing songs he would create especially for them. The initial incarnation of the group, called the Hookers, was built around Susan Moonsie, a friend of Prince’s from high school. The Hookers petered out around the time of Controversy, but Prince revisited the concept with Denise, who he felt would be a perfect frontwoman for the band. He tried to convince her to change her name to Vagina. She refused, understandably. They compromised on Vanity.
The name had at least two meanings, both perfect: Prince liked to think of Vanity as his female mirror image, and she was also about to become the center of his first true vanity project (though you could make the argument that Morris Day, with his mirror, was also all about vanity). Vanity, Susan Moonsie, and a third singer, Brenda Bennett, entered the recording studio in March of 1982 to add vocals to a set of tracks that Prince had already prepared. Five months later, Vanity 6 released its debut. “Nasty Girl,” the opening song, established the formula: spare synthpop grooves, mostly friendly, with salacious lyrics and vocals that were more coy than powerful. “Do you think I’m a nasty girl?” Vanity asked, and answered a few lines later: “I need seven inches or more/ Get it up, get it up — I can’t wait anymore.”* Elsewhere on the record, Prince gave the band more outré electronics (“Drive Me Wild,” “Make-Up”), risqué songs (“Bite the Beat,” a celebration of oral sex which could have been handed off to Blondie or the Go-Go’s), and relatively innocent songs with risqué titles (“Wet Dream”). The ballad “3 x 2 = 6” had an attractive melody, but it exposed Vanity’s limitations as a vocalist. The funkiest, funniest moment was the Time-like “If a Girl Answers (Don’t Hang Up),” a skit-song combo in which Vanity tangled with the new girlfriend of an ex (played, hilariously and unconvincingly, by Prince).
Vanity 6 ran its course — or rather, ran off course. Vanity was cast as the romantic lead in Purple Rain but departed before shooting started. Replacement auditions for the film were hastily arranged. The part went to Patricia Kotero, a Mexican-American model and actress who had appeared in various television shows and a few music videos, including Ray Parker Jr.’s “The Other Woman.” Prince renamed her Apollonia — this time, he only had to look as far as her middle name — and cast her in both the film and a new version of Vanity 6. Apollonia 6 featured mostly Kotero and Brenda Bennett, along with backing vocals by Lisa Coleman and Jill Jones. Though Apollonia was a better singer than Vanity, the music had little of the punkish insistence of its antecedent. The songs were more fully realized, which ironically worked to their disadvantage — mostly they just sounded like less energetic Prince songs, and in fact several tracks demoed by Apollonia 6 and left off the album ultimately entered the Prince canon via other routes: “Manic Monday” (which ended up with the Bangles), “The Glamorous Life” (which found a home with Sheila E.), and “17 Days” (which became the B side of “When Doves Cry”). The two keepers were “Sex Shooter,” a close cousin of “17 Days” that the band performed onscreen in Purple Rain, and “Happy Birthday, Mr. Christian,” which reversed the plot of the Police’s “Don’t Stand So Close to Me” and added a deadbeat-dad twist: Mr. Christian, the high school principal, impregnated a student but wouldn’t support her and her child. Apollonia left after the band’s sole album. | https://medium.com/the-awl/princes-female-mirrors-4401171f521 | ['Ben Greenman'] | 2017-04-03 18:18:35.773000+00:00 | ['Excerpt', 'Books', 'Biography', 'Prince', 'Ben Greenman'] |
Top Cryptocurrencies Rated by White Paper Complexity | Top Cryptocurrencies Rated by White Paper Complexity
Studying the correlation between the readability of white papers and the money raised.
I recently discovered the Hemingway Editor App, and it’s slowly changing how I view “good” writing.
Hemingway’s most popular novels are written at a 4th-6th grade reading level.
If you inspect other genres, you’ll find that, as a rule of thumb, writing is more successful the simpler it is. (source)
The Hemingway Editor scores your writing based on how simple it is to understand. Your score reflects the “lowest education needed to understand your prose”.
They explain why you want to score as low as possible in this excerpt:
Writing that scores at a 15th grade level is not better than writing at an 8th grade level. In fact, a high grade level often means it is confusing and tedious for any reader. Worse, it’s likely filled with jargon. After all, unless you’re writing a textbook (and even then) you don’t want it to sound like a textbook.[source]
Articles from GQ clock in at around a 6th grade reading level, whereas Buzzfeed articles — which are shared more than GQ articles — are at a 3rd grade level.
It makes sense when you think about the fact that only 50% of Americans can read at an 8th grade level. Even more sense when you notice a mere 15–20% of Americans are able to comprehend writing at a 12th grade level.
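For the curious, grade-level scores like these can be approximated in a few lines of code. The sketch below uses the Automated Readability Index (ARI), which is reportedly the basis for the Hemingway Editor's score — that attribution, and the sample sentence, are my assumptions, not something stated in this article:

```python
import re

def automated_readability_index(text: str) -> float:
    """Rough U.S. grade level needed to read `text`, via the
    Automated Readability Index (ARI): lower scores mean simpler prose."""
    # Sentences: split on runs of terminal punctuation (a crude heuristic).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Words: whitespace-separated tokens.
    words = text.split()
    # Characters: letters and digits only; punctuation and spaces are ignored.
    chars = sum(1 for c in text if c.isalnum())
    if not sentences or not words:
        return 0.0
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

# Short, plain sentences score low; long, polysyllabic ones score high.
print(automated_readability_index("The old man fished alone. He had gone many days without a fish."))
```

Scoring a real white paper this way would also need the cleanup step described later — stripping lists, equations, and team pages so only prose is counted.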
If we want mass adoption of blockchain technology, we need to make crypto more simple.
Readability vs. Dollars Raised
This made me wonder if there’s any correlation here between the readability of white papers and dollars raised.
What is an ICO White Paper?
A “White Paper” is essentially a consumer-facing business plan for blockchain projects. Most coins will allow potential investors to download their white paper from their official website. White papers are also one of the first elements of a project you should look at when deciding if it’s a solid investment.
I’m not saying that one ICO is better than another based on their score. There are countless ingredients to the perfect ICO recipe.
Yet some ICOs hide behind technical white papers and esoteric terminology, as if they can trick readers out of their pockets with fancy words.
A couple months ago, I embarked on a mission to determine “ICO Launch Best Practices” — ranging from white paper design to token supply. I work with BlockchainWarehouse and a part of what I do is vetting projects that approach us for support. We’re an accelerator for blockchain based projects, so as a crypto nerd it’s a pretty great gig. I get to help companies go from “napkin idea” all the way to their Token Generation Event, and have had the chance to meet some of the most talented individuals I’ve ever come across. Both on my team at BlockchainWarehouse, and with the companies we’ve helped launch.
I wanted to determine quantifiable metrics to help measure the chances of an ICO or Token Sales success, so I could apply them to the companies we work with. Less positively, I was also keen to learn if there was a method to the madness because, if there wasn’t, that might suggest that we are indeed in a bubble.
Let’s see how some of the biggest ICOs rank with good ol’ Ernest.
To ensure fairness in ranking, I removed all text from the white papers except for prose: no lists, team member names, mathematical equations, or anything that isn’t written in proper sentence structure.
The HemingwayApp Experiment
aka Operation Old Man in the Sea
Bitcoin
Score: 4th Grade Reading Level
Difficulty: 20% of Sentences are hard or very hard to read.
Length: 9 Pages
I understand that Bitcoin never had an Initial Coin Offering (ICO). I’m including it here as a frame of reference. As the father of all cryptocurrency it’s important to remember where we came from. | https://medium.com/swlh/top-cryptocurrencies-rated-by-white-paper-complexity-42457f7c0ac7 | ['Reza Jafery'] | 2019-07-04 16:01:01.303000+00:00 | ['Writing', 'Blockchain', 'Cryptocurrency', 'Ethereum', 'Bitcoin'] |
About Me — Lori Mann | About Me — Lori Mann
Or you can call me Dr. Miss.
Author’s Photo
Big Picture
I write and I teach. I live on the south side of Chicago.
Up until recently, I spent 90% of my professional time teaching high school Spanish and then crammed in some writing on the side. However, I recently had to take a leave of absence from teaching — thanks, Covid! — and I now spend the majority of my time writing. Both endeavors are hard and humbling.
My teaching days often ended with me feeling too mentally exhausted to even have a conversation with my family. And while my brain is sometimes just as tired now from writing, I am at my happiest because I’m creating something new every day.
This new pandemic life is very quiet.
I have named the four stray cats who periodically wander through our yard Choco, Oreo, Midnight, and Gus. If I haven’t seen one of them in a few days, I worry a bit because sometimes I’m ridiculous like that. We recently got new sod installed and I wondered how the sprinkler would impact their usual route through the gaps in the fence. You will be glad to hear that they adjusted beautifully.
In addition to the four stray cats, I have three sons who are ages 18, 15, and 12. The eighteen-year-old is suddenly doing mannish things like attending university online and shaving regularly and working full time as a delivery driver. It’s all quite strange considering I was pregnant with him fifteen minutes ago. The other two refuse to stay nascent as well and instead are juggling remote learning and sports and eating everything in the house. They are all hilarious people and I’m lucky to be their mom.
I keep by my front door the two trekking poles that I grab when I go hiking every week. I’m currently training for a ten-day trip to Philmont, New Mexico with my sons’ Boy Scout troop. There are many days when I have self-doubt about my ability to adequately prepare, but I push myself to keep training. I don’t want to be “that lady” on the trip who just couldn’t cut it.
There are four cats, three sons, two trekking poles, and…one husband. I call him Mr. Mann and we got married in September 2019. All things considered, going through a pandemic as a newlywed has been quite nice. Mr. Mann is the best part of my life.
Writing Things
I write every day and strive to submit to a Medium publication several times a week. My cuticles bear the brunt of the waiting-related stress. I have contributed to the following publications so far:
Noteworthy-The Journal Blog
Teachers on Fire Magazine
P.S. I Love You
Heart Affairs
The Startup
Self, Inspired
Climate Conscious
Fellowship Writers
Age of Awareness
The Ascent
The Synapse
My goal is to contribute to Forge and Better Humans.
Newsletter
I have a newsletter that I send out weekly-ish. It’s a place to discuss our current writing projects and good books and any other cool stuff that pops up. I’d love to have you as a member of the Dr. Miss Community.
This is the subscription link.
My Topics
I enjoy writing about teaching, intermittent fasting, parenting, relationships, feminism, and any other nonsense that floats through my head.
Here are a few pieces that will give you a sense of my writing style.
Contact Information
Email: lorimann921@gmail.com
Newsletter
Twitter
Instagram
Facebook | https://medium.com/about-me-stories/about-me-lori-mann-e73fac6a6fd9 | ['Dr. Miss'] | 2020-11-03 10:46:54.097000+00:00 | ['Writing', 'Blogging', 'About Me', 'Medium', 'Introduction'] |
Noble Silence | Photo by Jen Loong on Unsplash
I retreated this weekend into myself.
I meditated and practiced yoga.
I lit the candles and listened to The Universe.
I rested.
I spoke out loud to as few people as possible, and when I did, I used as few words as necessary.
I thought about the story I tell myself regarding my life and for this weekend — I suspended it. Whenever I found myself thinking about it — I would pull myself back to the present moment.
For Nirvana is not necessarily a reward you spend your whole life striving for — to be attained only at the very end. Nirvana can be reached moment to moment — a piece of joy dropped into your life in the now — but only if you’re willing to let go of it as the moment passes. Hand and heart open — allowing what needs to go — to go, and thus being open for the next moment of joy to enter your life. Letting what needs to come — to come.
Each Noble Silence carries with it the promise of a moment of Nirvana — if we can quiet our lives and our minds to the ‘noise’ of our worlds. If we can listen to the stillness, we can feel The Divine reach out to us and gift us with joy. Not everlasting joy — but joy in the moment. Every moment has that potential. Over and over and over again in our lives.
Everlasting joy might be religion’s biggest fake news story of all. But moments of joy are attainable. Right here. Right now. In every single moment of our lives.
And a moment becomes a day becomes a week becomes a life — when you live your life in the now, letting go of the stories, listening for the Noble Silence.
Namaste. | https://annlitts.medium.com/noble-silence-147ebb69e590 | ['Ann Litts'] | 2018-08-23 01:10:30.261000+00:00 | ['Self-awareness', 'Joy', 'Spirituality', 'Life', 'Meditation'] |
What‘re We Escaping When We Read? | What‘re We Escaping When We Read?
What Chabon’s magnum opus teaches us.
“The Amazing Adventures of Kavalier & Clay.” From BookRhapsody.
When I left Canada for college in America, I felt like Harry Potter arriving at Hogwarts. There’s something otherworldly about the act of escaping, especially when you’ve known the thing you’re escaping from for decades.
It’s not the same as going on vacation. When you’re a tourist, you’re using the escape as a mental reprieve, a psychic refill that prepares you to continue your existence at home.
The kind of escape I’m talking about involves a few self-realizations. First, that the reality you’re used to is in some way oppressive. There are no opportunities for work, you feel like you’re not “where it’s at,” there’s something about yourself that you hate — maybe you’re just bored.
Second, you have this weird need to adapt to something new. People like novelty, learning new rules and integrating different cultures into a unique identity. We like comfort but we also like to fight for survival. War, both physical and internal, gives people purpose.
Third, you want connection. We’re protean creatures, constantly reassessing and regretting and resolving, as we get older. It feels like the people at home are always going to stay the same. Yet there’s a sea of different worlds of others waiting for you to meet them. You have a craving for empathy, to put yourself in the shoes of the moody New Yorkers, or phlegmatic Californians. You want to live multiple lives.
Let me ask you this: what I’ve been talking about is escape in the physical sense, but couldn’t the same be said of the reader of fiction? | https://medium.com/literally-literary/what-re-we-escaping-when-we-read-d33413137b8a | ['Xi Chen'] | 2018-09-11 10:18:28.391000+00:00 | ['Essay', 'Books', 'Reading', 'Literally Literary', 'Culture'] |
How to get published on Coinmonks Publication? | Process of publishing a story on Coinmonks
Send me your draft/story at gaurav@coincodecap.com
We will add you as a writer on the publication
Once added, you can submit your draft/story using the following steps.
1. Go to your story.
2. Bottom right on your story there will be 3 dots. Click on them.
3. You will see “Add to publication”.
4. Select Coinmonks and submit.
You can also reach out to me on our Telegram group.
Explore Coinmonks Medium publication + RSS Feed
Motivation behind Coinmonks
We started the Coinmonks publication in Feb 2018. The purpose of the publication is to create a knowledge repository for decentralized technologies and its new economy.
Coinmonks is a non-profit publication that thrives on its writers and readers. If you ❤️ reading Coinmonks, you can donate to us.
Coinmonks is read by more than half a million crypto-fans every month.
What we publish?
In one short line, we only publish crypto-related educational content.
Tech Tutorials, development stories (Related to decentralized tech only)
Ideas, Insights and futuristic view on Decentralized technologies
Crypto/Token economics
Project insights and analysis (Educational content only)
Crypto trading strategies (Educational content only)
Opinion pieces related to above
❌ No news and flashy articles❌
✔️ Original content (Written by you) ✔️
Wrote something useful around crypto? mail us, do not hesitate. | https://medium.com/coinmonks/how-to-get-published-on-coinmonks-publication-bdf172add414 | ['Gaurav Agrawal'] | 2020-12-03 11:49:30.667000+00:00 | ['Cryptocurrency', 'Writing', 'Medium', 'Publishing', 'Bitcoin'] |
5 design methods I’ve successfully applied as a UX manager | TL;DR: Depending on the environment you have to navigate, it’s not always easy to try and apply all of the awesome design methods that are out there, be it due to daily business or a lack of management buy-in. In this article, I describe 5 methods that I have successfully and effectively applied in C&A’s eCommerce department: Design Jams, Storyboards, Crazy 8s, 4×4×4, and Buy a Feature.
Originally published on Tales of Design & User Experience.
Currently, I work as a User Experience Manager in the eCommerce department of C&A, one of Europe’s biggest and oldest fashion retailers. Now, what does a UX manager do? For one thing, I organize and coordinate quantitative and qualitative UX research in the department. For another thing, together with the rest of my team, I identify problems and create concepts for new features. If you’re also involved in design, I think it’s a safe bet to assume you’re aware of the plethora of design (thinking) methods and techniques that are out there. From my experience, some of them are more and some of them are less feasible, depending on the environment you have to navigate.
For instance, while a Design Sprint is highly effective, you and/or your colleagues might simply not be able to take the five days due to other obligations or daily business. Or the features you work on and deliver are not large enough to justify a design sprint. Or you simply don’t have the management buy-in to organize one. Here, one solution could be to instead opt for several standalone design jams.
In this article, I want to share with you a selection of design methods that, at least for me, have proven very valuable and effective in a fast-paced fashion eCommerce setting.
Design Jam
This one I brought from the University of Michigan Information Interaction Lab, where Prof. Michael Nebeling regularly and frequently organized design jams to involve students in our research projects.
A design jam is a standalone session that can last anything from 2 hours to a day. It always has a clearly defined challenge and a clearly defined deliverable, such as ideas, wireframes, prototypes, or user study results. Participants work in teams to first tackle the challenge and then present their results to the other teams to get feedback. A design jam can also be broken up into multiple phases with multiple presentations. Teams can be remixed between phases, and results from one phase can be given to a different team in another phase, who then iterate on it.
At Michigan, we have used design jams to evaluate research prototypes, create wireframes for AR applications, or investigate and test novel UX design methods for AR/VR. At C&A, I’ve primarily made use of the method to brainstorm ideas for A/B tests and our personalization roadmap. For instance, in one design jam, I had a team from our department first brainstorm potential audiences that might be of interest for personalization. Then, in a second phase, the audiences were randomly assigned to other teams who then had to come up with specific personalized content for these audiences.
The outcomes of one design jam can also be used as the basis for another design jam with completely different participants. In this way, it is, e.g., possible to split up a Design Sprint into several independent entities, if necessary.
Storyboards
Storyboards are a perfect means to have designers and stakeholders think more clearly about a user’s context. It’s always easier to empathize with your customer if you’re very specifically visualizing their journey using a product or feature.
I like to employ this method in early design stages, when I already know which problem to solve, usually in a cross-functional workshop setting with participants from all over the department. This prevents silo mentality and makes explaining and “selling” the generated ideas for a new feature way easier.
Crazy 8s
When it comes to really rapidly generating a number of initial ideas for a concept, nothing beats Crazy 8s. A part of Google’s Design Sprint methodology, this design technique forces people to think beyond their initial idea — “frequently the least innovative”.
Crazy 8s works as follows: Fold a piece of paper three times, so that it’s divided into 8 parts. All participants now have exactly 8 minutes to fill these 8 parts with one individual idea each. The facilitator announces each elapsed minute.
While this exercise, from my experience, often seems stressful to participants at first, and not everyone might fill in all 8 blanks, so far, it has always generated a good number of ideas that you can then iterate on. For instance, I’ve successfully used Crazy 8s to brainstorm potential variations for A/B test ideas that had been previously generated in a 4×4×4 session …
4×4×4
Like Crazy 8s, 4×4×4 is a technique for generating a range of, e.g., ideas or scribbles. According to InVision’s Inside Design blog, it’s “a group guided brainstorming activity designed to focus on a single issue”. That is, individual participants (or teams) first come up with 4 ideas each (within 4 minutes, or however much time you have available, just not too long). Subsequently, participants/teams are paired and from their pool of 8 ideas together select the best 4. After that, repeat this step once more.
I’ve used this technique as part of an introductory workshop on A/B testing, in order to have participants generate their own potential test ideas. However, I adapted it to a 2×2×2, since I didn’t want to completely overwhelm them. That is, teams of participants had to come up with 2 ideas each, in 8 minutes. After that, we did two rounds of “play-offs”, which left us with a total of 4 A/B test ideas, all of which were implemented and tested in our online shop.
Buy a Feature
While I’m a fan of dot voting (due to its plain and simple nature), I also very much like Buy a Feature. That is, instead of sticky dots, workshop participants are provided with a certain amount of fake money (the more real it looks the better). Additionally, the features (or ideas) you want to sell have a price in your fake currency (say, design dollars — D$) that corresponds with their overall complexity (design, implementation, deployment, maintenance, …). Now, each participant can buy the features they prefer. But of course not all of them; the facilitator has provided them with just enough money to buy roughly 50%.
In comparison to dot voting, the advantage here is that Buy a Feature feels less abstract. Due to the nature of money, costs, and potential savings, it seems more serious and is — frankly — also more fun.
All of the above are just very rough descriptions of the design methods. If you’re looking for more complete instructions (and I hope you do), please feel free to follow the links I inserted. All of them can, of course, also be applied and are specifically useful in co-creation settings. Have fun trying out these really cool techniques — and happy designing! | https://uxdesign.cc/5-design-methods-ive-successfully-applied-as-a-ux-manager-at-c-a-ca3e1da11b8c | ['Max. Speicher'] | 2020-05-10 15:33:51.643000+00:00 | ['Design Thinking', 'Design', 'Ecommerce', 'Design Sprint', 'UX'] |
The power of storytelling in a video-first world | According to Mark Zuckerberg, we’re entering a golden age of video, and in five years most of what people consume online will be exactly that — video.
By 2019, 80% of the world’s internet traffic will be video. Cisco Study, 2015
Over the last few years we’ve seen brands start to experiment with new media platforms, including Facebook Live and Snapchat stories, to connect people and inspire brand engagement. But in order to keep up with the rising demand for more content, brands have to scale up their video production capabilities, not just in speed and volume, but in quality.
So how do brands create content that stands out from the crowd whilst adding value?
Here are seven things we need to do to create quality video content:
Identify the purpose of the video
Before spending any time, energy or money, it’s essential to identify the purpose of the video. Is it to educate? Inspire? Instruct? Define the overall objective you need to achieve.
Understand your audience
Do the research and use consumer insights, media behaviours and trends to understand your audience on a deeper level. Learn how and when they watch online content and figure out the best time, place and way to engage with them.
Tell a story
It’s important to take people on a journey. To do that you must know your brand’s character, determine the plot and identify a theme. It’s the story that consumers engage with and the emotion you evoke connects them with your brand.
“Video is the format, not the message. It’s all about the story that’s being told and what that means for brands.” Yannick Connan, Creative Director at Hugo & Cat
Create multi-platform content
When creating a video, cover a wide range of platforms with the same piece of content. Re-purpose your assets into different formats and use them across a variety of media to reach a bigger audience.
Note: Be sure to use different creative techniques for different platforms. E.g., if you’re creating a video for Facebook, make sure it works with both sound on and off. If you’re creating a TrueView ad for YouTube, make sure you get your message across in the first five seconds, before viewers can opt to skip out.
Show, don’t tell
Connect with your audience by showing, not telling. Paint a picture and allow the audience to draw their own conclusions. Royal Mail don’t say, “you can trust us” — they show how trust is built with every parcel that’s delivered to the right address, safely and on time.
Have fun with it
If you enjoy creating it, the audience are more likely to enjoy watching it.
Include a strong call to action
Tell people what you want them to do next and show them why it’s worth it. Take your audience on a deeper journey and guide them with clickable links through to other relevant content.
As we enter this golden age of video, the challenge will be to produce fast, effective content at scale, whilst embracing new platforms and formats. And if Cisco’s predictions come true, customer centred design will be vital in making sure brands fulfil their potential with online content. To harness the power of storytelling for future brand engagement, it’s time to realise the power of quality over quantity in a video-first world. | https://medium.com/nowtrending/the-power-of-storytelling-in-a-video-first-world-c939f15bb27b | ['Nicola Thompson'] | 2017-06-07 10:58:34.226000+00:00 | ['Storytelling', 'Videos', 'Brands', 'Digital Marketing', 'Social Media'] |
The Comeback | The Comeback
A fictional short story.
Photo by Ryan Wilson on Unsplash
‘Shoaff! Come out and face me like a real man!’
Lori Linden, AKA the Duchess of Death, waits for me to enter the scene, spitting insult after insult, twirling her dagger with expert precision, slicing the air with more gusto than necessary. I wonder how much she practiced, leading up to this day.
By this point I’m sweating bullets. It’s the running and the ducking, the weaving between ferry bench seats, dodging her throwing knives that has me worked up. That’ll do it.
Not only are my foe and I fighting one another, we’re fighting the ferry’s violent dipping and rising on uneven waters.
‘Thought you were Jimmy The Wolf? Or is your bark bigger than your bite?’ she asks.
I call out. ‘It’s James Shoaff to you! Only my friends get to call me The Wolf.’
The lights switch off on cue. Enough darkness to risk a quick glance below deck.
Lori spins around, anticipating an attack. ‘You think you can even the score after our date in Shanghai?’ she mocks with a sneer. ‘Tell me James, how is your wife?’
Three… two… one.
I hurdle the second floor ferry railing, landing on all fours, keeping my head down as a spray of water splashes across my front; the fan blows my golden-dyed locks away from my face. I raise my head slowly, hoping to sprinkle about a dash of that typical Shoaff bravado. And to be honest, I’m hoping nobody can see me for real — the middle-aged man in a bit of pain.
The old knees don’t bend like they used to.
It’s your time for a comeback they said, early retirement doesn’t suit you they argued, the President insisted you give the USA the fairytale ending we all deserved.
Lori bursts forward, dagger ready for a jab. I jump up, blocking her advance, stamping my left foot down onto the wooden deck. Except…
‘Holy mother of hell! You got my actual foot you idiot!’
‘Sorry.’ I apologise profusely, but I don’t think this up-and-coming starlet will accept it.
‘CUT!’ The Director, Billy, announces from across the deck with his megaphone in one hand, and what looks like a ham sandwich in the other. ‘Walk it off Sadie! Let’s take five everyone. Brian!’ he points at me with the sandwich. ‘Get over here, we need to talk about a stunt double.’ | https://medium.com/short-b-read/the-comeback-de5a568a5d65 | ['Dayle Fogarty'] | 2020-09-25 02:30:50.403000+00:00 | ['Writing', 'Creative', 'Fiction', 'Creative Writing', 'Short Story'] |
5 Signs That You Are a Good Researcher | 5 Signs That You Are a Good Researcher
Do you have the guts to speak the insignificant truth?
Are you a counterfeit? Photo by Ystallonne Alves on Unsplash
There are researchers who love their work, and there are those who only love the glory that comes with it.
Ironically, you can only do one of these two things truthfully. Most of the time, the scientific process brings mildly satisfying results, if any at all, and only once in a century, there comes a groundbreaking revelation that is capable of turning heads and breaking hearts.
If you are a researcher, the odds are that you are helping the world massively, but it probably feels like your research is just a tiny drop in the ocean. This is absolutely normal.
The “Ugg”ly truth
To say something as simple as “Hence we conclude that your Uggs wear out half a millimeter over 150 km,” the poor researcher needs to wear those Uggs, walk for thousands of kilometers, and measure the thickness, possibly every 10 km. Not to mention the years he spends walking around with a notebook on all possible terrains to be able to generalize every word in that sentence.
Even then, he is only sure of this particular result for a man of his weight and walking style. So he calls in 50 college students, pays them 12 bucks each, and walks them around with Uggs and kilometer-trackers after taking their height and weight and getting them to walk on a treadmill to classify walking style.
Sigh. Such is science.
Diamonds in the Rough
In the scientific community, it is not uncommon to find blasphemous deviants from the truth and the process, willing to conclude that “Uggs might cause cancer” just because that paper has a better chance of being read. (No, Uggs don't give you cancer. Actually, I don't know if they do.)
Financial expert William J. Bernstein rightly identifies a genuine scientific process as one of the central pillars for civilization to exist (The Birth of Plenty). In order to keep civilization on its road to progress, it is necessary to weed out the deviants. Here is a beginner's checklist to see if you are a good researcher:
1. The Disclaimer
Do you see that line in a research paper? It says :
“The current results may vary when performed under different circumstances. Further research may be needed to generalize the results and establish the conclusions.”
That disclaimer is NOT a legal obligation. It is a heart-felt truth that makes the researcher think even in his sleep, “I hope people don't jump to conclusions based on my paper.” Be this guy. Disclaim loud and clear.
2. You swept nothing under the carpet
Did you perform an experiment 20 times, but you got the expected results only thrice? So you titled your paper saying “Verified thrice” and silently swept 17 attempts under the carpet? Don’t be this guy. All 20 of your attempts need to see daylight. The 17 falses bring science forward as much as the 3 trues.
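To put a number on why those 17 misses matter, here is a rough illustration (the 5% false-positive rate is an assumption for the sake of the example, not a figure from anywhere): if each of 20 runs independently has a 5% chance of producing a false positive, landing 3 or more "significant" results by pure luck is not that rare.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 20 runs with a 5% false-positive rate each: the chance of seeing
# 3 or more "significant" results by luck alone is about 7.5%.
print(round(prob_at_least(3, 20, 0.05), 3))  # → 0.075
```

In other words, "verified thrice" out of 20 attempts is weak evidence on its own, which is exactly why all 20 attempts need to see daylight.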
3. You err in the direction of less glory
Do you have an ambiguous discovery, that in one sentence, looks like you discovered the proof for the existence of god and in another means that you wasted 8 years of your life?
Don’t choose to say, “The results are very close to being almost statistically significant that there exists a God that cares about you.”
Say instead, “It could not be shown with this experiment with a significant probability that there exists a God. More research and replication is needed to establish the result.” (Note that there is a difference between it could not be shown that there is God, and there is no God).
4. Your results can be replicated
That brings us to replication. If a research result is claiming something to be true, it should be possible to replicate it. It is painful to spend time doing what has been done and to say, “What was true last year is still true.” Even more so when you have to replicate it multiple times.
The future civilizational credibility of science rests on being able to replicate. Dust off those replicated experiments, publish them and affirm replicability.
5. You are terrified of ambiguous language
Significance of a result should mean to you that “The experiment would rarely fail to deliver the expected results.” Not that it “Has at least once delivered the expected results.” If you see language in a paper that does not establish case 1 above, you should have your doubts. If you are writing a paper yourself, you should make black and white conclusions only when you establish case 1. If your result belongs to case 2, you should make sure that you publish them with clear disambiguation as to what you really intend. Say, don't imply. | https://medium.com/towards-artificial-intelligence/5-signs-that-you-are-a-good-researcher-2bac5256000f | ['Sruthi Korlakunta'] | 2020-12-14 08:47:27.708000+00:00 | ['Publishing', 'Research', 'Research And Development', 'Science', 'Scientific Method'] |
A Guide to using Prometheus and Grafana for logging API metrics in Django | Configuring the Prometheus server
This is done via a .yml file that you’ll need to create. Our Docker-compose file will load it into the Prometheus server when it starts.
The scrape targets and the job's alias label are references to our Django app, which will be running on the Docker network. The metrics path setting tells Prometheus what to poll to get the metrics.
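As a rough sketch of what such a file might look like (the service name `web`, port `8000`, and the `alias` label are placeholders for your own setup, not values taken from this project):

```yaml
global:
  scrape_interval: 15s            # how often Prometheus polls its targets

scrape_configs:
  - job_name: "django"
    metrics_path: /metrics        # what Prometheus polls to get the metrics
    static_configs:
      - targets: ["web:8000"]     # the Django service on the Docker network
        labels:
          alias: django-app       # friendly alias for this target
```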
Configuring Docker-Compose
Once your app has been configured and the .yml file created, we can spin up our Django app, Prometheus and Grafana containers from the Docker-compose file.
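A minimal sketch of such a compose file (the service names, image tags and the Django build setup are placeholders; adapt them to your project):

```yaml
version: "3.8"

services:
  web:                            # the Django app
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"

  prometheus:
    image: prom/prometheus
    volumes:
      # mount the config file we created into the Prometheus container
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```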
Start the services by running:
docker-compose up
If everything started ok then you should be able to navigate to:
http://localhost:9090 for Prometheus
and http://localhost:3000 for Grafana
Note: Both Prometheus and Grafana will only display custom metrics once they’ve been hit. So searching for my_counter in the Prometheus server won’t return anything unless something has happened in the app to generate it.
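For context on what "being hit" produces: the /metrics endpoint Prometheus polls returns plain text in the exposition format. A tiny pure-Python sketch of how a counter like my_counter appears once it has been generated (in practice your metrics library renders this for you; this is only an illustration):

```python
def render_counter(name: str, help_text: str, value: float) -> str:
    """Render one counter in the Prometheus text exposition format."""
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name} {value}\n"
    )

# Roughly what Prometheus sees when it scrapes /metrics:
print(render_counter("my_counter", "An example custom counter.", 3.0))
```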
A quick note on Error handling:
Make sure you get the correct indentation and be careful of tabs in the .yml files, otherwise you may see:
err="error loading config from \"/etc/prometheus/prometheus.yml\": couldn't load configuration (--config.file=\"/etc/prometheus/prometheus.yml\"): parsing YAML file /etc/prometheus/prometheus.yml: yaml: line 19: did not find expected key"
If you see the following error message, it means Prometheus can’t find the .yml file and the path is wrong in the docker-compose file:
unknown: Are you trying to mount a directory onto a file (or vice-versa)?
Configuring Grafana
At this point we’ve got a running app with the Prometheus server configured to poll our metrics endpoint. The next step is to get Grafana to poll the Prometheus server.
There should be an option to add a Prometheus data source.
Add the following configuration to your Grafana instance:
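If you'd rather set up the data source through Grafana's provisioning files than click through the UI, a minimal equivalent might look like this (assuming your Prometheus container is reachable as `prometheus` on the Docker network; adjust the URL to match your setup):

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                  # Grafana proxies requests server-side
    url: http://prometheus:9090    # compose service name, not localhost
    isDefault: true
```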
Hitting save should test the connection and return successful.
And that's it… all that's left is to play around in Grafana to get the metrics looking how you want them.
Conclusion
So there, hopefully now you’ll be able to create a set of bedazzling charts fit to adorn any project management slide deck. Let me know how you get on. | https://medium.com/swlh/a-guide-to-using-prometheus-and-grafana-for-logging-api-metrics-in-django-43863eebe5b7 | [] | 2020-07-01 00:20:40.182000+00:00 | ['Prometheus', 'Docker Compose', 'Python', 'Django', 'Grafana'] |
How to Practice Self-Discipline with Self-Love | They should always go hand in hand
Photo by Tim Mossholder on Unsplash
Growing up I’ve always been regarded as one of the most disciplined people in my cohort. I got lots of cards from friends saying how inspiring I was. And I just assumed self-discipline was probably the easiest thing for me in the world.
Until, one day I started to feel resistant. Reluctant. Unsure. And I started to question myself: Where did the willpower go? Just went down the drain with a blink of an eye?
So I talked myself into being more disciplined. I made a set of rules: 30 minutes of yoga, 10 minutes of meditation, then 30 minutes of writing every morning before breakfast.
And you know what happened: After yoga and meditation I started to get hungry, so I had to eat. That “30 minutes of writing before breakfast” thing rarely happened. | https://medium.com/an-idea/how-to-practice-self-discipline-with-self-love-5e3cc98d9c30 | ['Yiqing Zhao'] | 2020-12-22 16:27:33.547000+00:00 | ['Self-awareness', 'Self Improvement', 'Self Love', 'Self Discipline'] |
How to Soothe the Savage Toddler | As usual, when my daughter and grandson occasionally sleep over in the upstairs bedroom, my phone pings around 7:30 am. The text usually reads Theo’s awake…are you ready? At which point he is brought down and gifted to me while Gemma heads back up for a little more sleep. As I snuggle him under the covers for storytime, I count myself blessed.
This particular morning the text comes with the above photo. Hey, Mom are you up? I look at what the picture depicts and groggily text back Umm just let me grab a coffee, kay?
Five minutes later and two sips into my coffee, I text Bring it! Moments later Theo is handed over, warm, flushed and somewhat mollified to be released from his playpen prison. My daughter beats a hasty retreat upstairs as I tuck him in beside me with his little head in the crook of my arm.
This love of morning reading came early in my life.
As the oldest of four children all born in the span of five years, my mother (a former librarian) found a way to buy herself an extra half hour of sleep every morning. After we were asleep, she’d creep in and place two or three fairy tale books at the bottom of each bed and top them with five or six animal crackers.
As we woke up one by one, we’d crawl over to grab our respective books/treats and curl up to read (if we could read) and if we couldn’t, we enjoyed the pictures. What a soft, cozy way to start the day.
So I carry on the tradition with 18-month-old Theo, reading books to him that I read to his mother. The first book I reach for is this one.
Author
The illustrations are lovely and the rhymes are short — some well known and others not, but each lends itself to be read in a lilting and song-like manner. The uneven edges of the board book make it easy for Theo to turn the pages himself. My daughter overheard me reading this one once and said: “I loved that book!” The children throughout the book are cherubic toddlers — appealing to other cherubic toddlers I should think.
A few minutes later I reach for the next book in the pile.
Author
This sturdy book introduces counting through the activities of a group of playful and mischievous mice. The illustrations are actually quite detailed and every time I read it I see something new. Theo seems entranced with them as well and often reaches out to touch one of the mice. Its perfect simplicity lies in the fact there are only three words on each page — Three mice dancing, Six mice jumping, etc. I also have the author’s ABC book which has charming pictures as well.
At some point, I usually read Piggety Pig from Morning ’til Night or No More! Piggety Pig by Harriet Ziefert and David Prebenna. These stories feature a fat pink porker that goes about his day over the pages of this board book. Some critics have said Morning ’til Night introduces concepts that toddlers have trouble grasping (such as morning, afternoon and evening) but these are two books that my daughter distinctly remembers loving, so I’m hardly going to overthink reading them to her son.
Next up is the most educational story in the small book pile. We picked it up in England whilst visiting Gemma’s grandparents when she was 18 months old.
Author
The pages are busy with the activities of a family with young children — having a party, going shopping, bath time, etc. The illustrator, Stephen Cartwright, is known for “hiding” his trademark tiny duck somewhere on each page. See if you can spot it —
Author
At the bottom of each page are the words reflected in the illustrations and a drawing of each, to be spotted in the bigger picture. This is fun for me too! This is obviously a book from England because some of the vernacular is that of my daughter’s heritage, which is not a bad thing in my mind. Words like jumper (sweater), trousers (pants), vest (undershirt), biscuits (cookies) and sweets (candy).
The author clearly gave a lot of thought to what the first hundred words should be because they encompass everyday things a child would see and do. Family members (Mommy, Daddy, dog, etc), items of clothing, types of food, animals, colors, toys, body parts and more are covered.
This was a book that entertained my daughter from eighteen-months-old right up to kindergarten, so I know I’ll get a lot of mileage out of it with Theo.
Last up of the current reads — because Theo was getting a little restless — is the book made famous by a viral video of a Scottish grandmother. She was choking with laughter as she read it to her grandson.
Author
This is the book that allows me a few more precious moments with Theo before he’s off and running. He seems to enjoy the repetition/reappearance of the same words with each flip of the page. Apparently he now knows when the “Hee Haws” come in too because his head automatically twists on cue to watch his Nana muster up the most raucous “Heee Haaawww” possible!
The only thing I don’t like about this book is that it’s a paperback and too flimsy for Theo to handle at his age. Nothing pleases me more than to allow him to pick up and flip through the board books on his own, but it’ll be a while before he can be let loose with this one.
As he scrambles out of bed, I crawl out after him. Mornings like this bring back fond memories of storytime in bed with my parents and grandparents and the soft start of childhood days under the covers with fairy tales and cookies.
All it takes is a book and a willing reader
I’m an avid reader thanks to the early start with storytime and books. I feel I’ve learned more from reading than I ever learned in school. This, hopefully, will be something we can pass on to Theo. Heee Haaawww!
New Zealand based author Craig Smith (The Wonky Donkey) said of the video that made his book famous: “Remember, this viral sensation came about because a grandmother read a BOOK to her grandson, albeit a very special grandmother.” | https://medium.com/raise-a-lifelong-reader/how-to-soothe-the-savage-toddler-f8fe3e852bf9 | ['Kallie Allen'] | 2019-11-27 18:38:45.507000+00:00 | ['Literacy', 'Books', 'Reading', 'Family', 'Life'] |
The Little Things Turning 30 Taught Me About Happiness | Photo by Mathias Konrath on Unsplash
Not long ago, I had one in what is bound to be a long series of old-person moments.
Over lunch, a friend and colleague in her early twenties was telling me about how she was struggling to choose a career path. She faced pressure from her family, competition from her peers, and feared messing up a decision that could affect her for years to come.
Yet to my astonishment, she hadn’t invited me to lunch to ask for career advice. Which incidentally is probably for the best — my career has been less of a steady climb to the top, and more of a trans-continental roller coaster. Instead, she asked me something much broader: What can I do to be happier?
After giving it some thought, I came to an unfortunate conclusion: As far as I was concerned, not much could have made my twenties happier. I had to go through the experiences I went through to become the person I am today.
That being said, there are definitely some things I know now that I wish I had known in my twenties. Things that could, maybe, have flattened the infamous happiness u-curve.
Overall life satisfaction in the UK, graph by the World Economic Forum: source
There’s no “u” in happiness
Recent studies have shown again and again that our perceived happiness fluctuates over the course of our lives to form a u-curve. It’s moderately high in our early 20s, then steadily descends to the dreaded “mid-life crisis,” before ultimately going back up and peaking as we head into retirement.
This curve has always intrigued me, especially since it puts me smack in the middle of that long descent into misery. The usual explanations we see are linked to stress, family obligations, comparing ourselves to others, and not meeting our own unrealistic expectations for where we should be in life.
But I have a somewhat different theory, just based on how things have been going for me so far. This is purely subjective and not based in science, but perhaps it will resonate with some of you. My theory is that the u-curve is the result of instability; a shift in balance between two incompatible ways of looking at life.
Holding on to a burning fuse
In my early 20s, I had a lot on my mind. I was afraid of failing to get into a good university, failing to start a solid career, failing to enjoy my early years as much as I was supposed to, and so on. A lot of the stress I was carrying had to do with what was happening in the moment, and whether that could bring me to how I wanted my life to look like in the future.
At the time, life was like an endless hallway with infinite doors. The doors I opened led to other endless hallways, and those I didn’t became closed to me forever. Every door I didn’t open carried an opportunity cost, but every door I did revealed new and exciting paths that I had never before seen or even imagined. The anxiety I felt from missed opportunities was crippling at times, but it was at least somewhat counteracted by excitement about the future. I was in the stable plane of the u-curve.
Photo by runnyrem on Unsplash
Things started to change once I was already cruising in my career — when I hit the quarter-century mark. Opening new doors was no longer giving me that same rush. I was obsessing over the paths I hadn’t taken. To feel the sense of excitement and purpose that had animated my late teens, I needed to embark on a more daring adventure — which, looking back, was probably what motivated my decision to leave Switzerland (at the age of 26) and move to Japan.
That’s when the door metaphor started falling apart for me altogether. The doors only make sense when looking at the world from the present, with limited options ahead and unlimited missed opportunities behind. But at that point, life began to appear more as a whole. I was holding in my hand a burning fuse, the fire slowly crackling beside my clenched fist, with a big chunk of rope already burned to ash.
My attachment to the present wasn’t yet entirely gone; I still saw my decisions as a funnel of possibilities. However, on top of that, I also began understanding that time was much more than just a part of some equation. It was the fundamental currency of life itself.
In other words, my decisions weren’t just important because they meant missing other opportunities. They were important because everything I did carried a price tag — one that I could only settle by giving away a piece of my existence.
Photo by Carlos Alberto Gómez Iñiguez on Unsplash
A delicate balancing act
This shift from first seeing life extend onward from the present, to then considering it as a whole is, I suspect, the deeper reason behind the massive slump in the happiness curve. By the time we are mature enough to accept that life is a lit fuse — and act accordingly — so much of it is already gone that we succumb to existential dread.
While dealing with anxiety is hard enough, I believe fear and regret are far worse demons. Nothing could be more detrimental to our happiness than to stare at all the ash on the ground and imagine the things we could have made with it. And because the fuse never stops burning, regret too must be paid for in the currency of time.
It seems that the common reaction for many is to either become paralyzed by indecision, or lash out against oblivion by making radical and irrational decisions. This kind of erratic behavior drives people further down the rabbit hole, culminating at the lowest point in the u-curve; the notorious mid-life crisis.
And yet, data shows that most of us find our way out of the slump. Once the mid-life crisis has been overcome, people in their late 50s and 60s report unprecedented levels of happiness. Higher even than when they had the youth, potential and drive of young adulthood. Why so?
I can’t pretend to fully understand the answer just yet, but I have a hunch. My theory is that those beaming grandpas and grandmas have completed the great shift.
Every moment counts
When time is all that’s left, every moment becomes invaluable. Research shows the elderly are happier because they have gained a healthier perspective on life; a new appreciation for their time on Earth. The end is clearer than ever, which means that finally, they can focus their mental energy not on some wild hypotheses of where life may take them, but rather on making the most out of every last step.
When we’re young, we have lived so little that every bit of life has too much meaning. Every decision is connected to our identity, to who we will become, to our dreams and aspirations — all these future fantasies where we think true happiness awaits us.
Then, in the crisis stage, all meaning is lost. Everything is ephemeral, all decisions become subjected to the cold fatalism of time. We feel constrained, claustrophobic, squeezed by the grasp of death, and in our dread we react instinctively by either freezing in terror or lashing out in denial.
But once we pass that stage, meaning returns. Not the weighty burdensome meaning from our youth, but a richer, more delicate meaning. Every action we take becomes precious, valuable, and intimately our own.
This new meaning can be a bottomless source of energy and inspiration. The absurdity of the human condition no longer appears before us every night like an infinite void, ticking away the days before the inescapable moment when we slide off into non-existence. Instead, we get a canvass. Not quite blank, as society and our past naivete have already filled in much of the background. But still, with more than enough space left for us to express our radical freedom.
Flattening the happiness curve
What were the moments in your life when you felt the happiest? For me, it was when I felt lost in the now. Surrounded by friends on a road trip in a new and exciting part of the world. Putting all of my energy and creativity into a project I cared deeply about. Opening my heart to write and watching in awe as words pour out faster than I can put them to paper.
I don’t believe we can skip stages in life. Nobody is born thinking about life as a fuse, just as no centenarian still sees an endless hallway. We need to feel the pressure of lost opportunity to be able to experience existential dread. We need to embrace and accept existential dread to understand in the depth of our being what it means to be truly free. And finally, exercising that freedom to fill in the canvas of our lives is the only way to achieve lasting happiness.
We cannot reshape the happiness curve into an upward slope, but maybe we can flatten it a bit. For that, I wanted my young friend to know two things.
First, we’re all in this together. We may have different boats, we may be going down different rivers, but our journeys have much in common, and they all come to end. By sharing our experiences and thoughts across cultures and generations, we can lighten the psychological burden of facing the unknown.
The second is that by being aware of what is to come, we can move more swiftly through the rough parts. The u-curve is an average; some people reach peak happiness much faster, whereas others struggle in the slump for longer. By being conscious of the journey ahead, being aware of what we feel and that better times are yet to come, we can reach understanding faster, and have more time left to be happy.
Unfortunately, I couldn’t tell my friend that she was just a skip and a hop away from true happiness. But I could share my experience, and encourage her to seek the wisdom of those who have lived much longer than we have. And while true happiness may not be around the corner, I truly believe we’ll reach that peak someday. There, we’ll find the kind of happiness that neither she nor I have ever felt before.
With the right mindset, who knows. Maybe we’ll get there sooner than we think. | https://alexstwrites.medium.com/the-little-things-turning-30-taught-me-about-happiness-f72508398f18 | ['Alex Steullet'] | 2020-04-18 16:45:22.617000+00:00 | ['Life Lessons', 'Mental Health', 'Happiness', 'Life', 'Philosophy'] |
PSA From Santa Claus Regarding COVID-19 | PSA From Santa Claus Regarding COVID-19
Santa will not be accepting your gifts of milk and cookies until further notice.
Photo by Tim Mossholder on Unsplash
Ho! Ho! Ho! Merry Christmas, everybody! We here at Santa’s Workshop are working very hard to ensure everyone has a safe and happy holiday season. Now, due to COVID-19, Christmas is going to be a teensy bit different this year. But don’t worry, boys and girls, it’s still going to be a magical time! Ho, ho, ho!
First of all, out of concern for everyone’s safety, Santa will not be coming down your chimneys this year. Instead, I will be leaving your presents on your rooftops. I kindly ask that your mommies and daddies climb up your chimneys and retrieve them for you. Everybody’s got to do their part! Ho! Ho! Ho!
On that note, Santa will not be accepting your gifts of milk and cookies until further notice. I just don’t feel comfortable eating something that’s been picked out of a box with your superspreader fingers. This Christmas, I kindly ask that your mommies and daddies climb up your chimneys and leave me a bottle of bourbon.
Unfortunately, due to an 85% drop in our mall Santa revenue, I had to furlough a quarter of my elves. On a brighter note, I hired back senior elf Hermey to help out with the backlog. He was an awful dentist.
A special message to all the dedicated mall Santas still working the shops this holiday season: Stop. Just stop. No one wants to sit on your lap right now. Stop.
Coca Cola has dropped our sponsorship contract due to internal budget cuts. From now on, Santa will be sporting a Tom Selleck mustache, and my suits will all be in gamboge.
We at Santa’s Workshop are doing everything we can to prevent the spread of COVID-19. All elves must wear masks and gloves while assembling your toys. We have also adopted a strict work from home policy. Every elf is to complete their part of the toy, walk over to the house of the next elf on the assembly line, deliver said part, walk back to their house and build the part next on their queue, then walk back to the other house, and repeat the process until all the toys have been completed. Staff meetings are conducted by Zoom.
Despite our best efforts, Rudolph the Red-Nose Reindeer has contracted COVID-19. We previously thought his nose was red due to nasal microcirculation, but it turned out to be a COVID-19 symptom. If you or any of your loved ones develop a shiny red nose, please get tested.
Due to delays in postal delivery from the 2020 US Presidential Election, our staff has been overwhelmed with a backlog of letters to Santa. The boys and girls living in Pennsylvania and Georgia can expect their toys to arrive on January 18th, 2021.
Mommies and daddies, rest assured we’re still actively monitoring which of your children have been naughty and which of them have been nice. To find out how, please visit www.google.com, www.facebook.com, or purchase an Alexa from Amazon.com.
And we’re all out of PS5s. Stop asking, Ed.
May all the boys and girls of the world have the safest and merriest of Christmases!
Ho, ho, ho,
Santa | https://medium.com/slackjaw/psa-from-santa-claus-regarding-covid-19-c15848fdcfcb | ['Andrew Cheng'] | 2020-12-22 15:52:17.449000+00:00 | ['Santa', 'Humor', 'Christmas', 'Coronavirus', 'Satire'] |
The Weekly Authority #43 | What to Post on Twitter to Be an Authority:
5 Insider Tips to Improve Your Twitter Posts (& Engagement)
Consider this: The average lifespan of a Tweet is about 18 minutes (whereas the lifespan of a Facebook post is about 5 hours and the average lifespan of an Instagram post is about 48 hours, according to HubSpot). What’s more, nearly 50% of Twitter users only log on once a day, and the average user is only on Twitter for about 13 minutes.
That’s a pretty small window of time. And that window is clouded by an astounding number of Tweets (with some of the latest research showing that about 6,000 new Tweets are posted every second).
So, how do you make sure your Tweets cut through the noise and capture your target audience’s attention in the short time they’re on Twitter?
The answer: post interesting Tweets and provide value to your audience.
Easier said than done, right?
Well, this week, I want to help you overcome the challenge of finding great content to post on Twitter by sharing some insider tips on what to post in order to build your authority, following and engagement on Twitter.
Top 5 Types of Tweets to Include in Your Twitter Posts
1. Educational content — Does your target audience have a common problem that you can help solve? Does your business offer one (or more) unique solution(s)? Share your knowledge and insights, as well as industry news or the latest research, via your Tweets. People like learning new things, especially when those things help them solve problems they are facing.
Pro Tip: How-to’s, tip lists, and cheatsheets are all great, engaging formats for educational content.
2. Funny content — Make people laugh with your Tweets! People like and tend to remember funny content, so incorporate some funny posts in your Tweeting when possible (and appropriate).
Pro Tip: Great funny content can come in the form of jokes, comic strips, funny comments or even others’ funny stories or experiences. You can even share funny mistakes or accidents related to developing or growing your brand.
3. Retweet content — Not all of your Twitter posts have to include original content that you craft. In fact, retweeting others’ posts can get you noticed by the original Tweeter, and it can add a fresh, new dynamic to your Tweets.
Pro Tip: Tweets posted by industry leaders, social media influencers, and even your current clients (or prospects) can be great to retweet.
4. Promotional content — Don’t forget to talk about your brand in your Tweets. Sharing news about what’s going on with your business via Twitter can be a powerful way to keep people connected and up-to-date with your offerings.
Pro Tip: Offer deals or discounts for your products and/or services via your Tweets. You can also share info about upcoming events, new product releases, new locations or other facets of your business.
5. Visual content — Images are worth a lot more than 1,000 words on Twitter, and they can capture people’s attention even before they read your Tweet (or even if your Tweet isn’t at the top of their feed). So include some interesting images in your posts.
Pro Tip: Post pictures (and/or videos) of your staff, your business’ daily operations, new products or events your business hosts. This can humanize your brand and make your business more memorable.
Remember the Rules of 3’s When Tweeting
As you start to pull together content ideas to inspire your Twitter posts, here’s another thing to keep in mind: you don’t want to overdo or underdo the promotional Tweets.
Posting too many promotional Tweets can cause users to stop checking out your posts (because users will probably perceive that all you have to offer them is a sales pitch — and pretty much no one wants to hear a bunch of sales pitches day in and day out). On the flip side of that coin, posting too few promotional Tweets can mean that you’re never getting your sales message across or seeing any conversions from your Twitter activities.
Strike the right balance by following the social media rule of thirds:
Make 1/3 of your Tweets promotional.
Another 1/3 of your Tweets should share industry news, interesting stories, helpful tips or other industry/brand-related content.
The last 1/3 should share some personal aspect of you, your business or your brand (to establish and maintain a human element).
What to Post on Twitter: The Bottom Line
When it comes to Twitter content and posting strategies, the bottom line is this:
You have to provide value to your followers in order to spark valuable interactions and connections. Or more simply, you have to provide value to receive value from your target audience.
How’s your Twitter marketing going? Are your posts getting the engagement you want? Are they falling short? Or have you uncovered some great new tactics or strategies that have taken your Twitter posts to the next level?
Tell me more on Facebook and LinkedIn. And don’t hesitate to get a hold of me on social media to ask any digital marketing question or just to say ‘hi.’ I look forward to hearing from you! | https://medium.com/digitalauthority/the-weekly-authority-43-62a7e553b929 | ['Digital Authority Co'] | 2017-01-30 12:52:00.903000+00:00 | ['Marketing', 'Content Marketing', 'Digital Marketing', 'Social Media', 'Conversion'] |
I Mistakenly Published 3 Articles Within 24 hours By Accident And This Is What It Taught Me | Panicking will not help
Getting an article accepted into a publication is good. Getting multiple articles accepted into multiple publications is even better. Yet, getting them all accepted at the same time is the worst that can ever happen (well, maybe not the worst, but it's bad).
That is what exactly happened to me:
I woke up in the morning to 3 emails notifying me that 3 of my articles had been accepted into 3 different publications. I panicked when I saw a later email and realized that all 3 of them were scheduled to be published the next day.
Well… Ok, I’m fucked. What should I do now???
FYI, I have a degree of compulsion when it comes to the regularity, frequency, or constancy of publishing articles.
It wasn’t exactly the best start to the day, with a problem like this foisted upon me early in the morning. What seemed like a small matter now felt like the world collapsing at that instant. My mood shifted to the extreme; the next thing I knew, I was sitting there stressing and panicking.
For a while, I couldn’t come up with a single solution because I was in such a panic. My boyfriend had to remind me of the simplest thing I could at least do: contact the publications and ask if the scheduled times could be changed.
I was only reassured after I did something about it. Still, I wasn’t feeling particularly good because of all the chaos and panic. The rest of my day wasn’t productive either because of my mood swings.
Stop, take a deep breath, clear the mind
It’s hard to come up with rational solutions for a mistake when you’re in a panic state. It’s even harder when this happens at 7 am and it is the first thing you see on your phone when you wake up.
Panic is the last thing I need when it comes to problem-solving; if only I had realized this earlier on.
If I had only stopped at that instant to take a breath, distance myself from the problem, and clear my mind, my morning wouldn’t have been so chaotic. The rest of the day would certainly have been less moody as well.
JS Module Swapping for Better Development and Testing | Normal Module Replacement
NormalModuleReplacementPlugin does precisely what its name says — it will make even more sense when you look at the code below. Regardless, let's take a look at its online description.
The NormalModuleReplacementPlugin allows you to replace resources that match resourceRegExp with newResource. If newResource is relative, it is resolved relative to the previous resource. If newResource is a function, it is expected to overwrite the request attribute of the supplied resource.
We tell it a module to keep an eye out for, say Child.js, and if it notices something importing that module, a module to swap in instead, say Toddler.js.
Let’s take a closer look at this and talk about our eventual goal. First, the code looked something like this:
loadLanguage.js
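Roughly, a loader like this (the ./languages folder name and per-language file layout are assumptions for illustration):

```javascript
// A hypothetical version of the loader: whatever language name is passed in,
// the matching file is fetched on demand with a dynamic import.
function loadLanguage(language) {
  // Webpack can only statically analyze the fixed prefix of the template
  // string, so it prepares a lazy chunk for every file under ./languages/.
  return import(`./languages/${language}.js`);
}
```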
As you can see above, any language could be passed in. So when Webpack statically analyzes this code later, any of the 200 languages could be imported. Webpack will go to that folder, collect all the files that could potentially be imported, and automatically chunk every file it finds.
Our goal is to limit the number of languages that the dynamic import could load while we are in development mode; that way, Webpack will know it can only load a limited number of files.
Simple. That brings us back to our module replacement plugin. The plugin allows us to select a module (like the one above) and tell Webpack to load a different module in its place. Let’s create a module that only loads JavaScript.
loadLanguage.dev.js
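A sketch of the dev-only replacement, under the same assumed folder layout: it ignores the requested language and always loads JavaScript, so Webpack only has one file to chunk.

```javascript
// Hypothetical dev stand-in for the loader above: the signature matches, but
// only the JavaScript definition can ever be imported.
function loadLanguage(_language) {
  // A literal path, so static analysis finds exactly one candidate file.
  return import('./languages/javascript.js');
}
```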
Now in our webpack.config.js, we can add the NormalModuleReplacementPlugin to the plugins section. I've added an isDev variable to indicate that this plugin should only execute in dev mode. Note that this will need to be handled accordingly in your codebase (perhaps using NODE_ENV).
webpack.config.js
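Something along these lines, with hypothetical file names; treat the exact regex and replacement path as placeholders for your own files:

```javascript
// webpack.config.js (sketch) -- only the plugin wiring is shown.
const webpack = require('webpack');

// How you derive this is up to your build setup; NODE_ENV is one option.
const isDev = process.env.NODE_ENV !== 'production';

module.exports = {
  // ...entry, output, loaders, etc...
  plugins: [
    // Swap the module only in development builds.
    ...(isDev
      ? [
          new webpack.NormalModuleReplacementPlugin(
            /loadLanguage\.js$/,     // any request whose filename matches...
            './loadLanguage.dev.js'  // ...is resolved to this module instead
          ),
        ]
      : []),
  ],
};
```

Because the replacement path is relative, Webpack resolves it relative to the module it replaces, which is why a plain `./loadLanguage.dev.js` works here.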
Note that we don’t have to specify where the file exists, just a regex that matches the filename. The second argument is the replacement module. Now when we build our development environment, only the JavaScript file is part of our build!
Jest Module Swapping
Another excellent use case for module replacement is Jest. Jest still has several issues with ESM and therefore often needs to fall back to CJS. It’s usually recommended that you use the ESM files in your development/production builds, and then module-swap to CJS during your tests.
At the time of writing, one library that uses this swap for testing is React-dnd. It's a great example because you may need to swap any or all of the modules you use from it. We’re going to do all of them, but it might not be necessary in your case. To swap files in your Jest testing environment, open up your jest.config.js. In module.exports add:
jest.config.js | https://medium.com/better-programming/js-module-swapping-for-better-development-and-testing-5c79612e09f0 | ['Denny Scott'] | 2019-08-06 19:24:19.894000+00:00 | ['Webpack', 'JavaScript', 'Software Development', 'Programming', 'React'] |
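For example, pointing the react-dnd family of packages at their CommonJS builds. The dist/cjs paths below reflect what the library's own testing docs suggested around this time, but they vary by version, so verify them against your node_modules before relying on this:

```javascript
// jest.config.js (sketch) -- remap the ESM package entry points to their CJS
// builds so Jest can require them. Paths are assumptions; check your version.
module.exports = {
  moduleNameMapper: {
    '^dnd-core$': 'dnd-core/dist/cjs',
    '^react-dnd$': 'react-dnd/dist/cjs',
    '^react-dnd-html5-backend$': 'react-dnd-html5-backend/dist/cjs',
    '^react-dnd-touch-backend$': 'react-dnd-touch-backend/dist/cjs',
    '^react-dnd-test-backend$': 'react-dnd-test-backend/dist/cjs',
    '^react-dnd-test-utils$': 'react-dnd-test-utils/dist/cjs',
  },
};
```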
Afghanistan: saving lives in one of the world’s deadliest conflict zones | Amanullah, a 32-year-old welder, worked in his shop near the Afghan capital Kabul when the neighbourhood suddenly rattled with automatic gunfire. People ran for cover as government troops engaged in heavy clashes with Taliban fighters in the streets outside.
“Bullets were flying everywhere,” Amanullah, who like most Afghans uses only one name, recalls. “Then someone fired a rocket that landed just outside my shop.”
Sakhi, Emergency Head Nurse, examines Amanullah’s X-rays showing shrapnel damage. ©European Union, 2020 (photographer: Peter Biro)
Amanullah was thrown backwards by the blast, hitting his head on a concrete wall. He woke up in the hospital four days later. The explosion had ripped off his right leg above the knee and sent jagged pieces of hot metal into various part of his body.
“When I woke up, I was wondering where I was,” he says, straining to speak due to damage by shrapnel to his lungs. “Then I looked down and my right leg was gone.”
Amputations are carried out only when it is lifesaving and absolutely necessary, but in a country dependent on manual labour and agriculture, the loss of a limb often means the loss of a livelihood as well.
“I worry a bit about the future,” Amanullah, a father of four, says.
A long-lasting conflict
Ordinary Afghans like Amanullah have long borne the brunt of fighting in a country that after more than 40 years remains one of the world’s deadliest conflict zones. Almost 6,000 Afghan civilians were killed or injured in the armed conflict from 1 January to 30 September 2020, according to the latest report of the UN Assistance Mission in Afghanistan (UNAMA). September 2020 marked a 39% increase in the number of civilian casualties compared with the same period last year.
The toll has been consistent over the past six years, bringing the total number of casualties in the past decade to over 100,000 people. The Afghan conflict remains the deadliest in the world for children. Children accounted for 31% of all civilian casualties in the first nine months of 2020, and women for 13%.
A patient is cared for by a nurse at Emergency’s facility in Kabul. The 35-year-old was carrying food along a secluded path near Kabul when he stepped in a land mine. He will require weeks of rehabilitation to be able to walk on the prosthetic leg. ©European Union, 2020 (photographer: Peter Biro)
Amanullah is now being treated in the surgical facility of the EU-supported aid group Emergency in Kabul, housed in a former kindergarten built by the Soviets. Emergency, based in Italy, opened its first surgical centre in Afghanistan in 1999, during the war between the Taliban and the Northern Alliance. The centre, with 120 beds, treats war victims from across Afghanistan and has seen an increasing flow of civilian patients suffering from serious injuries as a result of war.
The US and the Taliban recently signed an agreement that could result in American troops leaving the country. However, the Taliban has continued its offensive operations against the Afghan government. Meanwhile, the staff at Emergency remain on a 24-hour standby to deal with the next suicide attack or roadside bombing, which often results in a high number of killed and injured.
“We see an average of over 400 patients per month,” says Sakhi, Emergency’s head nurse. “But when a suicide bombing or other large attack occur, we are of course flooded with patients requiring immediate life-saving treatment.”
Dealing with mass tragedies
In 2018, when an ambulance loaded with explosives detonated near a crowd of police in the centre of Kabul, Emergency’s facility received over 130 wounded patients in one instance, brought to the front gates in cars, taxis and the back of pickup trucks.
“In mass events, the injured are taken to a triage station by the main gate, where we prioritise the wounded,” says Dejan Panic, Emergency’s Programme Coordinator in Afghanistan. “They are then taken to our three operation theatres, where they are treated by highly experienced staff specialised in trauma surgery.”
Outside the wards, patients in wheelchairs relish the first rays of the spring sun and in the hospital’s rehabilitation room, a young man practices walking with a new artificial leg, his hands gripping rails running along a ramp for support.
One of the facilities where many patients relax with a relative in the hospital garden. ©European Union, 2020 (photographer: Peter Biro)
Violations of International Humanitarian Law abound amongst all parties to the conflict. In the first nine months of 2020, UNAMA verified a total of 52 attacks on medical missions and 45 attacks against education facilities.
In an adjacent room lies Parwiz, 13. Just two days earlier, he was collecting firewood in a small village in the eastern province of Logar when he found what he recognised as a landmine in the undergrowth. As he lifted it to throw it away, the device exploded, sending shrapnel into his abdomen.
“We’ve removed all of it and the accident could have been much worse,” says Sakhi. “We expect him to fully recover.”
In 2019, over 40% of casualties were caused by improvised explosive devices (IEDs), planted by anti-government forces, including the Taliban and Islamic State. According to the UN, 885 people were killed by IEDs, with nearly 3,500 injured. Last year also saw record-high levels of civilian casualties from airstrikes, mostly carried out by international military forces, with more than 1,000 killed and injured.
The side effects of conflict
As health facilities continue to report record-high admission levels of conflict-related trauma cases, the EU has stepped up its humanitarian support for emergency treatment and related psychological assistance, reaching nearly 5,000 people each month.
“Treating war trauma has become one of the EU’s largest priorities in Afghanistan,” says Luigi Pandolfi, who oversees the EU’s humanitarian programs in the country. “We spend almost 30% of our overall budget to equip healthcare facilities with trauma care capacity.”
Pandolfi says that the EU and its humanitarian partners have been able to provide these critical services not only in large urban centres but also in remote and conflict-affected areas.
Ali Sajad, 17, is learning how to walk again after a bullet damaged his bones and muscles in the upper leg. ©European Union, 2020 (photographer: Peter Biro)
Healthcare in all forms is badly needed across the country. A staggering 10 million Afghans — over a quarter of the population — lack regular and sustained access to basic health services. Specialised care, such as treating trauma patients and rehabilitating amputees, is even rarer in government-run hospitals and clinics. To ease the burden on the strained Afghan healthcare system, the EU is also supporting the training of nurses and other healthcare staff in first aid, mass casualty management and basic life support.
“We have increasingly seen patients suffering from serious trauma arriving in the hospital properly bandaged, on spinal boards and stabilized,” says Dejan Panic. “Training is literally saving lives.” | https://europeancommission.medium.com/afghanistan-saving-lives-in-one-of-the-worlds-deadliest-conflict-zones-8e9bf736f4f2 | ['European Commission'] | 2020-11-23 08:33:41.802000+00:00 | ['Humanitarian', 'European Union', 'Afghanistan', 'Conflict', 'European Commission'] |
The Title Is ‘Story’ | Stories are everywhere
a personal journey
a significant memory lane reminiscence
a variance to the daily chores
a part of livelihood, perhaps
flashlights turn on when depicting the pieces
connecting dots
words merge
emotions converge into structured pillage
stories are striking
words put a context to it
a personal one, the writer wants us to convey.
Suntonu Bhadra ▪ December, 2020
Alternative words to explore: | https://medium.com/scrittura/the-title-is-story-7f4b07d2662b | ['Suntonu Bhadra'] | 2020-12-27 15:35:29.925000+00:00 | ['Poetry', 'This Happened To Me', 'Storytelling', 'Poem', 'Writer'] |
Feeling Political Fatigue and Pandemic Burnout? | I don’t know about you but I have PTSD from the last election. What I thought was going to happen didn’t. The political world as I knew it turned upside down with a shocking news cycle that has not let up. The comorbidity from the anxiety and stress due to the Coronavirus adds a whole other complicated layer. Record wildfires and hurricanes are in the news cycle. Reports of Election Day fears of outbreaks of violence are resulting in people stocking up on canned goods, toilet paper, and other basic supplies. Fears of a Civil War breaking out in our streets has resulted in an increase in gun sales and many to first-time gun owners. It’s no wonder we are all on edge.
Families are infighting, and friends and coworkers have severed relationships over political views, divisions that Facebook and Twitter only deepen. Loyal viewers of cable outlets like Fox News on one side and CNN or MSNBC on the other feel as though they’re living in alternate universes. People of color and their white allies are bewildered that not everyone is condemning white supremacy or neo-Nazism. America is a country founded by immigrants, and yet not all of its diverse citizens receive equal justice. The pandemic continues to make things worse and highlights the vast inequities.
According to an ongoing weekly poll of 900,000 Americans conducted by the Census Bureau, 35% of Americans currently meet the diagnosis of generalized anxiety disorder or major depressive disorder. The cases have quadrupled since COVID19 became a household name, forcing us to social distance and many to lose their jobs and even worse, their loved ones. We’re fatigued with restrictions put on us, worried and restless that our kids aren’t back in school, and saddened we can’t see our older relatives. Businesses are closing and more and more people need help to support their families' basic needs. We don’t want to cancel Thanksgiving and our winter holiday plans with our loved ones and yet it’s the responsible thing to do not just for our health but for the good of mankind as we’re warned it’s about to be a very dark winter.
The good news is there are antidotes to counteract the actual and the perceived doom and gloom because so much of moving forward is still in our control.
Three Antidotes to Fight Political Fatigue Syndrome and Pandemic Burnout:
1. Hope
*hope- /hōp/ (noun)- a feeling of expectation and desire for a certain thing to happen.
My best friend, who I consider an intuitive because her predictions are almost always spot on, tells me the last Mercury Retrograde of 2020 ends on Election Day. She says we’re moving into Scorpio season, an intense sign that will bring transformation and regeneration. All I know is I’m hoping the time we are entering is a rebirth for positive change on a global scale.
Hope is what gets us up in the morning and it’s what keeps us going. In any form, hope is powerful. My faith in hope is solidified when I see small acts like people waiting in lines for eleven hours and not giving up because they see the importance of voting. I see signs of hope when complete strangers offer to drive seniors or those without transportation to the polls, bring pizza to those waiting in long lines, and offer daycare to watch their kids. I see signs of hope from all the early voting. Whereas in the past, many people may have sat this one out, they are finally realizing what a privilege their right to vote is and are either voting for the first time in their life or at least the first time in a very long time.
One of the most hopeful quotes I can think of comes from Martin Luther King, who famously said, “The arc of the moral universe is long, but it bends toward justice.” It encourages us to never give up hope and that our collective voice is exponentially louder. With more people shining a light on the darkness, our kids and many who never really paid attention to politics or the news cycle before are able to see the importance of getting involved. This increased awareness of local and federal government means a broader representation to help the masses rather than a few constituents.
It’s so heartening to see so many young women leaders getting involved and handling complex situations expertly. Our kids now have wonderful female role models, like Angela Merkel, Germany’s Chancellor, who is very popular thanks to her handling of the coronavirus and her country’s strong economy. Many of us have considered moving to New Zealand after watching Jacinda Ardern, the Prime Minister who, as a brand new mother, crushed the Coronavirus. The way she dealt with the Christchurch mosque shootings, calling them terrorism while swiftly implementing gun control, was a masterful example of strength and leadership.
Stories about humanity at its best are beacons of hope illuminating that there is more good than bad in our human race. To increase your hopefulness and all-around positive attitude, instead of doom scrolling, actively unfollow people who are negative and make you feel terrible. That’s not to say get rid of people who have differing views, because we need to continue to be open-minded. Consume more inspirational and uplifting stories from Instagram accounts such as Good News Movement, Positively Present, and Humans of New York. I can’t forget my all-time favorite Instagram account, The Dodo, where there are so many wonderful animal rescue stories to make you smile.
We’re all aware the Coronavirus case numbers are going up, but don’t lose hope. We’ve learned so much since the pandemic hit. We have therapeutics like Remdesivir. The death rate is not rising as rapidly as it once did. A vaccine is on the horizon. Wearing masks, washing hands, and properly socially distancing are proven to prevent the spread, and it’s within our control. It’s up to us to choose precautions over short-sighted behaviors. It’s also up to us to choose decency over unfairness and hope over fear.
2. Action
*action- /ˈakSH(ə)n/ (noun)- the fact or process of doing something, typically to achieve an aim.
Taking action creates a positive cycle that helps get you closer to your goals and the life you desire. If you take action politically, it helps you create the community and, on a broader scale, the country you want to live in. So many selfless men and women are committing to help not just themselves but their fellow citizens achieve a happy and healthy life. They are stepping forward to advocate and change policies to root out unfairness and replace it with what is best for the majority to prosper.
More and more people are realizing we are in fact not just stronger together but so much better off when we are united. Once you are able to get over the fear of “the other” and see a commonality in the human race, it’s undeniable how diversity strengthens and adds value. Think of it like a food court with American fast food that only has hamburgers and fried chicken. Wouldn’t you prefer to have access to additional offerings, like Mexican, Indian, Thai, Chinese, French, Italian, Japanese, Korean, Ethiopian, and Greek food? Variety is not only the spice of life and expands our palates, it also expands our views on the world and makes life that much richer.
When we take action to support the poor and our hard-working middle class, our collective economy and the overall health of our country are better off and stronger for all. “A rising tide lifts all boats,” the slogan John F. Kennedy made famous, is the perfect metaphor for the need for our government to lift everyone up. A divided country being fed scare tactics about scarcity, instead of assurance that there is and can be abundance for all, will undoubtedly be lopsided and sink like a boat with one end filling with water.
It’s fabulous to see how the last four years have turned our youth into informed advocates. My then 14- and 11-year-old daughters, who are now 18 and 15, have accompanied me, their mom, to Women’s Marches. My oldest daughter has attended Black Lives Matter marches and Breonna Taylor protests on her own. When I was their age, I had no desire to watch a Vice Presidential debate, and yet my girls stopped everything they were doing to come and join me on the couch to watch. They’ve phone banked and postcarded, aware that teens like them, such as the phenomenal Swedish activist Greta Thunberg and the gun violence activist Emma Gonzalez, use their powerful voices to truly make a difference.
My daughters and their cousin at the Women’s March 2017, Los Angeles
My daughters are part of the youngest generation to vote, Gen Zers, people born after 1996. According to Pew Research, one in ten eligible voters in the 2020 electorate will be part of Generation Z. They were to inherit a very strong economy until COVID came and changed the foreseeable future, making the job market feel very uncertain. Their political clout will continue to grow steadily in the coming years, as more and more of them reach voting age.
Gen Z welcomes change and is more racially and ethnically diverse than any previous generation. They are on track to be the most well-educated generation; digital natives who have little or no memory of the world as it existed before smartphones, growing up with access to information at their fingertips. They are more open-minded and accepting when it comes to gay marriage and gender fluidity. As a mental health advocate, I must point out the worrisome fact that their anxiety and suicide rates are higher, likely from the firehose of information and social media coming at them at all times. Add the fact that our teens and young adults are currently being asked to not go to school, have parties, and let off steam, and fuel is added to the fire.
Our youth are resourceful and resilient. I have to believe this pandemic will make them stronger and more appreciative of their relationships when all is said and done. Their disciplined, organized, and informed activist mindset on political issues, combined with their use of technology, will continue to strengthen their voice in movements they believe in, like March for Our Lives on gun safety or the Sunrise Movement mobilizing on climate change.
It may feel counterintuitive, but rest is productive when it comes to action. Take some time to step back, breathe, relax, and shut off the news, social media apps, and the firehose. When you’re ready to get informed and do your due diligence, consider limiting the time to a minimum and try reading news sources from outside America, like the BBC, which has less of a stake and is unbiased.
When it comes to the Coronavirus, you don’t have to be a passive bystander. Take action by believing in science and listening to the top infectious disease doctors like Dr. Fauci. If we all follow public health measures, we can flatten the curve and get more schools and businesses to open until a vaccine is widely distributed.
3. Gratitude
*gratitude- /ˈɡradəˌt(y)o͞od/ (noun)- the quality of being thankful; readiness to show appreciation for and to return kindness.
Freedom is like our health because both can be taken for granted until it is taken away. The pandemic and political strife threaten what we treasure and are most grateful for. Maybe we needed this jarring shake to wake up to a new reality with big open eyes in order to view what’s important. Finding the silver linings and discovering the lessons to be learned after difficult periods is the key to gratitude.
When we were asked to social distance back in March, I published an article called 15 Silver Linings to Manage Anxiety During the Pandemic. Eight months later, almost everything holds true like the gratitude we continue to feel towards the incredible essential workers, doctors, nurses, and scientists heroically dedicating themselves to our public health.
When we count our blessings we feel better. I’m grateful for the added time this pandemic has given me to spend with my teenage daughters. I don’t take for granted the extra meals and conversations I’m having with my social high school freshman who would normally be at her friends' houses and with my high school senior before she goes off to college. Of course, my complaint list is long for all our kids who are missing out on so much or for my aging parents who can’t hug their grandchildren. It’s healthy to acknowledge our disappointment but afterward let it go and replace it with the powerful antidotes- hope, action, and gratitude.
While some may want to make America Great Again and others want to Build Back Better, we can and need to find common ground. I’m grateful we can all agree on the love for our country and the reason we will continue to fight for the stars and stripes. Patriotism can and should unite us. America is flawed but we still have a model democratic political system with built-in checks and balances that have undoubtedly been tested over the last four years. The US Constitution is hardly perfect but I’m grateful it’s a living document whose cracks and flaws are meant to be debated and adapted to new circumstances. We can all be grateful for progress even if sometimes it feels like two steps forward and one step back. Don’t forget, even baby steps move us forward.
The American Way, the American Dream, and our American Exceptionalism are ideals we will never give up on. We stand on the shoulders of people who came before us and we will continue to fight for a better future for those who come after us. We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness. | https://medium.com/curious/feeling-political-fatigue-and-pandemic-burnout-7b3740336f91 | ['Rachel Steinman'] | 2020-11-05 21:07:46.475000+00:00 | ['Pandemic', 'Mental Health', 'Election 2020', 'Hope', 'Activism'] |
My Mother-in-Law’s Anti-Advice Could Have Prevented My Depression | My Mother-in-Law’s Anti-Advice Could Have Prevented My Depression
In hindsight, it was the only “advice” worth taking seriously
Photo by BBH Singapore on Unsplash (adapted)
In the months leading up to the birth of my son Nate, I was in “consumption” mode. I read every book I could find on parenting and fatherhood. I read articles about brain development for babies and watched Netflix’s TV show, Babies. I asked for advice from every parent I knew.
“I only want one child, so I don’t want to screw anything up,” was my thinking.
Well, guess what? I screwed up anyway! I’d even argue that I screwed up more because I was more “informed”.
But I’m jumping ahead…
Three months before Nate’s birth, I took note of every piece of advice I could get my hands on, especially from parents in my entourage. Sadly, none of it applied in our case. That’s the thing with babies — not a single one is the same. There’s no such thing as an average baby.
The same is true in adulthood, which is why I’m not reading much life advice anymore. Your context is different from mine, so everything I say you “should” do is irrelevant. I don’t know you. Only you truly know yourself.
The best piece of advice I received
As I gathered advice all over, one piece of advice truly stood out, and it was from my mother-in-law. While others were quick to spit out every idea that came to mind, she said a single sentence. It was the shortest discussion I’ve had about the topic by a wide margin.
Here’s what she said:
“My advice: don’t listen to anyone’s advice, including this advice.” — V.
When she said that, I dismissed it. I disagreed with her. I wanted to know everything so that I could make informed decisions and avoid mistakes. If you’ve watched The Good Place on Netflix, I was Chidi through and through. Better to learn from someone’s mistakes than your own, right?
Not quite!
Everybody is different
In hindsight, nothing could have prepared me for what was to come with my son. He was healthy but had all the little issues in the book: colic, reflux, tongue-ties, food intolerance, you name it.
For the first two months of his life, screaming was routine. The ever-optimistic Danny fell into postpartum depression, something many think only applies to women.
People told me I’d love parenthood. For me it was hell. Not because I didn’t like my baby, but because my baby was in pain and no one could figure out why.
All the advice I got was immediately overruled by the fact that my baby was different… But that’s the thing, all babies are different. Every single human being is different.
When you take advice from others, you only take the result, not the context. Every piece of advice out of context is bad advice. And context, my friend, is no simple matter.
My mother-in-law
“Context” is why my mother-in-law’s advice was the best. As a professional social worker with over 30 years of experience, she has seen her fair share of bad advice. She works on a wide variety of cases, and you guessed it, all different.
The only pattern she knows for certain exists is that, when it comes to people, no two problems are ever the same. My problem isn’t the same as your problem, even if it looks the same on the surface.
She knew that full well, based on her extensive experience in the field of social work. She solves issues with tough family situations daily. That’s her life’s work.
She was right from the start and I wish I had listened to her advice before.
Why all advice is bad advice
I mentioned my son above. You may have a baby who had the same problem, but what are the chances that you also live in Montreal, that one parent runs two businesses while the other is an overachiever and perfectionist, that you are minimalists, have travelled all over the world, have family living an hour away, and so on?
You get the point. No two contexts are ever the same!
I could ask a million questions here, and the chances of stumbling upon a single other person who is in the same or very similar situation are close to zero.
When you gather advice from all over the place, you give yourself too many options that likely don’t apply to you. You create a bank of advice that technically works, and then you feel bad when it doesn’t work for you.
You feel like you’re doing everything right, but beat yourself up when it doesn’t go according to what others said. How couldn’t you? Others said it worked, so it has to work, right?
As an example, we were fully aware of the benefits of breastfeeding before Nate was born. We wanted to do it exclusively for six months, as recommended by Public Health Canada. Every mom group promotes it, so it has to work, right?
Wrong!
It turns out, the “most natural” thing in the world is damn complicated. For us, it “worked” for two months, but it came with a lot of adversity. People were quick to give advice, none of which worked. Nate simply couldn’t do it. His intolerances, reflux, and tongue-ties made it near-impossible.
Listening to all the advice led both my wife and me to depression.
But surely you’ve experienced something similar, right? You took advice, implemented it, failed, and felt like you just weren’t good enough? That’s the negative power of taking advice from others. | https://dannyforest.medium.com/my-mother-in-laws-anti-advice-could-have-prevented-my-depression-7a6554b83e17 | ['Danny Forest'] | 2020-11-05 17:39:06.875000+00:00 | ['Life Lessons', 'Mental Health', 'Advice', 'Parenting', 'Inspiration'] |
The Impact of Customer Connection Programs: A Study in Focus | Research has shown it takes 23 minutes to return to peak focus after we experience a distraction. That means every interruption to a person’s focused time costs them nearly half an hour. It’s no wonder, then, that helping people find the right balance between maintaining focus and keeping up with important information has become an emphasis across Microsoft and the tech industry.
Here at Microsoft, we see this emphasis emerging in many ways, including recent innovations to Microsoft 365 capabilities:
Windows notifications can be managed directly from the pop-up.
Focus plan in My Analytics helps people carve out regular time for uninterrupted work.
Scheduled focus time in Outlook cues Teams to display your status as “focused,” pausing notifications there.
As a principal program manager leading development of the Focus Assist feature, I’m familiar with the 23-minute statistic, along with research about focus and distraction that Microsoft has been working on for many years.
But through our guided customer connection program, I discovered that conducting studies has a far more profound impact on a person’s sense of investment and empathy with customers than reading reports.
Our program helped me understand customer needs more fully, informing improvements in our do-not-disturb feature and the larger Microsoft initiative to help people focus.
Getting started with training and research questions
As part of Microsoft’s drive toward customer obsession, the other PMs in the organization and I attended a research training immersion. The researchers introduced us to the program’s hypothesis-driven framework and offered to guide us through our own studies.
The team was working on research-informed solutions to help people manage the flow of information in their lives. We wanted to understand customer pain points from an experiential perspective and assess whether we were on track, solving the right problems, so we set out to answer a few human-centered questions:
What frustrates people when they’re trying to focus? Which circumstances most require focus? What new information do people find it frustrating to miss?
In the first phase, centered on problem discovery, our research program helped us establish four hypotheses that we would test by talking to customers. Some of these focused on managing information and interruptions:
Many Windows customers are most frustrated about effectively managing the increasing influx of information and interruptions they get from people, services, and apps because of the lack of effective ways to control and triage them.
Others centered around completing a task:
Many busy people who are trying to complete a task that they care about are frustrated by irrelevant interruptions that distract them.
All of our hypotheses focused on problems and pain points, allowing us to surface our expectations and test them, rather than carry them into the solution phase as latent assumptions.
Customer insights and empathy in the problem phase
Next, we framed interview questions to test our hypotheses. It was essential to have some formal structure and guidance at this point, to help us articulate questions in a way that was neutral and not leading.
Heading into the interviews, our guide also coached us on how to listen, pausing and asking open-ended follow-up questions to draw out more insight from customers.
Our first round of interviews supported our hypothesis that people wanted an easier way to handle information and interruptions. We also learned that the customers we talked to would like to be able to interact and engage with relevant notifications more.
Some of our findings led us to a broader understanding of customers’ needs and pain points. For example, while work-related tasks emerged as the most crucial for focus, participants routinely brought up gaming and movie watching as tasks for which they wanted to quiet the noise. Customers also cited people and their surrounding environment as some of their biggest sources of distraction.
As we talked with customers, listening to their concerns, I felt the power of their stories in a way I hadn’t expected. One mother I spoke with was trying desperately to support her kids while making as much quality time with them as possible. At work, she would end up bringing her laptop in the bathroom because it was the only place where she could get focused time away from coworkers. As she spoke about her fight to manage her time, she started to cry.
Now whenever I think about the value of those 23 minutes a person loses when they’re distracted, it’s not just a statistic. It’s about this human being, and others like her, whose time is priceless.
Concept-phase interviews and solutions
Actively listening to customers is a great springboard for new ideas. In the next phase of our study, the team formulated four new hypotheses around solutions that might meet the needs we’d surfaced.
To test our solution hypotheses, we created new interview questions and sketched four low-fi concepts representing potential solutions. Of these, customers strongly preferred the solution which would allow them “to easily set up both your digital and physical environments in a way that meets your focus needs, so you can get things done.” When in focus mode, devices and software such as Skype and Teams would show as busy.
After we prioritized our concepts, other studies were conducted to refine features and prototypes. In 2018, Windows overhauled its previous do-not-disturb feature, which had simply blocked all the noise, rebranding the improved version as Focus Assist. This solution uses smart filtering governed by automatic rules that allow for Windows notification suppression in the following contexts:
When you’re in a meeting, sharing your screen
At recurring times of day that you can define
When you’re gaming or watching a movie (in full-screen mode)
In the default setting of focus mode, only messages sent with a priority will break through, but customers can designate who breaks through at any time. | https://medium.com/microsoft-design/the-impact-of-customer-connection-programs-a-study-in-focus-d039aa128554 | ['Lee Dicks Clark'] | 2020-03-20 21:27:16.160000+00:00 | ['Research And Insight', 'User Experience', 'Technology', 'Design', 'Microsoft'] |
Deep Learning #4: Why You Need to Start Using Embedding Layers | Welcome to part 4 of this series on deep learning. As you might have noticed there has been a slight delay between the first three entries and this post. The initial goal of this series was to write along with the fast.ai course on deep learning. However, the concepts of the later lectures are often overlapping so I decided to finish the course first. This way I get to provide a more detailed overview of these topics. In this blog I want to cover a concept that spans multiple lectures of the course (4–6) and that has proven very useful to me in practice: Embedding Layers.
Upon introduction, the concept of the embedding layer can be quite foreign. For example, the Keras documentation provides no explanation other than “Turns positive integers (indexes) into dense vectors of fixed size”. A quick Google search might not get you much further either, since these types of documentation are the first things to pop up. However, in a sense Keras’ documentation describes all that happens. So why should you use an embedding layer? Here are the two main reasons:
1. One-hot encoded vectors are high-dimensional and sparse. Let’s assume that we are doing Natural Language Processing (NLP) and have a dictionary of 2000 words. This means that, when using one-hot encoding, each word will be represented by a vector containing 2000 integers. And 1999 of these integers are zeros. In a big dataset this approach is not computationally efficient.
2. The vectors of each embedding get updated while training the neural network. If you have seen the image at the top of this post, you can see how similarities between words can be found in a multi-dimensional space. This allows us to visualize relationships between words, but also between everything that can be turned into a vector through an embedding layer.
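To make the sparsity point concrete, here is a minimal sketch comparing the two representations. The 2000-word dictionary comes from the example above; the 32-dimension embedding length is one of the common sizes mentioned later, and the word index 42 is made up for illustration:

```python
import numpy as np

vocab_size = 2000  # dictionary size from the example above
embed_dim = 32     # a commonly used embedding length

# One-hot: a 2000-dim vector holding a single 1 and 1999 zeros
one_hot = np.zeros(vocab_size)
one_hot[42] = 1.0  # 42 is a hypothetical word index

# Embedding: the same word becomes a dense 32-dim vector,
# looked up as one row of a (vocab_size x embed_dim) matrix
embedding_matrix = np.random.rand(vocab_size, embed_dim)
dense = embedding_matrix[42]

print(one_hot.size, dense.size)  # 2000 numbers vs. 32 numbers per word
```

The embedding rows here are random; in a real network they would be trained along with the other weights.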
This concept might still be a bit vague. Let’s have a look at what an embedding layer does with an example of words, since word embeddings are where embeddings originated. You can look up word2vec if you are interested in reading more. Let’s take this sentence as an example (do not take it too seriously):
“deep learning is very deep”
The first step in using an embedding layer is to encode this sentence by indices. In this case we assign an index to each unique word. The sentence than looks like this:
1 2 3 4 1
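This encoding step can be sketched in a few lines of Python (the 1-based indices match the example above):

```python
sentence = "deep learning is very deep".split()

# Assign a 1-based index to each unique word, in order of first appearance
word_to_index = {}
for word in sentence:
    if word not in word_to_index:
        word_to_index[word] = len(word_to_index) + 1

encoded = [word_to_index[w] for w in sentence]
print(encoded)  # [1, 2, 3, 4, 1]
```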
The embedding matrix gets created next. We decide how many ‘latent factors’ are assigned to each index. Basically this means how long we want the vector to be. General use cases are lengths like 32 and 50. Let’s assign 6 latent factors per index in this post to keep it readable. The embedding matrix then looks like this:
Embedding Matrix
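A sketch of how such a matrix is used: the row for “deep” is taken from the example’s vector, the remaining rows are made-up values, and row 0 is a padding row so that index 1 really maps to “deep”.

```python
import numpy as np

embedding_matrix = np.array([
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00],  # index 0: unused padding
    [0.32, 0.02, 0.48, 0.21, 0.56, 0.15],  # index 1: "deep" (from the text)
    [0.65, 0.23, 0.41, 0.57, 0.18, 0.04],  # index 2: "learning" (made up)
    [0.12, 0.45, 0.66, 0.33, 0.29, 0.08],  # index 3: "is" (made up)
    [0.77, 0.12, 0.45, 0.92, 0.11, 0.52],  # index 4: "very" (made up)
])

encoded_sentence = [1, 2, 3, 4, 1]  # "deep learning is very deep"

# Fancy indexing replaces each index by its embedding row
vectors = embedding_matrix[encoded_sentence]
print(vectors.shape)  # (5, 6): five words, six latent factors each
```

In Keras, `Embedding(input_dim=5, output_dim=6)` performs exactly this lookup, except the matrix values are learned during training instead of being fixed.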
So, instead of ending up with huge one-hot encoded vectors, we can use an embedding matrix to keep the size of each vector much smaller. In short, all that happens is that the word “deep” gets represented by a vector [.32, .02, .48, .21, .56, .15]. However, not every word gets replaced by a vector. Instead, it gets replaced by an index that is used to look up the vector in the embedding matrix. Once again, this is computationally efficient when using very big datasets. Because the embedded vectors also get updated during the training process of the deep neural network, we can explore which words are similar to each other in a multi-dimensional space. By using dimensionality reduction techniques like t-SNE, these similarities can be visualized. | https://towardsdatascience.com/deep-learning-4-embedding-layers-f9a02d55ac12 | ['Rutger Ruizendaal'] | 2020-06-02 14:24:43.630000+00:00 | ['Machine Learning', 'Tech', 'Artificial Intelligence', 'Data Science', 'Deep Learning'] |
The Social Media Reformation | Facebook isn’t just a company that needs to be broken up, it’s a religion in need of reform.
Photo by William Iven on Unsplash
On an early summer day in June 2017, Mark Zuckerberg gave a speech at what was billed as the Facebook Community Summit in Chicago. The intention of the summit was to encourage the use of groups on Facebook as well as to unveil the social media platform’s new mission statement: “give people the power to build community and bring the world closer together.” This new mission was a pivot from its previous goal of making the world more open and connected. The difference was subtle yet profound.
To articulate the importance of community and provide the rationale behind Facebook’s desire to pursue this new emphasis, Zuckerberg highlighted various forms of community and how they function.
“We all get meaning from our communities. Whether they’re churches, sports teams, or neighborhood groups, they give us the strength to expand our horizons and care about broader issues,” stated Zuckerberg.
In reference to trends suggesting that people are increasingly isolated and not a part of literal communities, he even went so far as to claim, “A lot of people now need to find a sense of purpose and support somewhere else.” Of course “meaningful (Facebook) communities” were the solution Mr. Zuckerberg proposed.
It was the comparison of Facebook to church that generated the most buzz in the days following the summit. Commentators took issue with Zuckerberg’s willingness to equate Facebook with a religious institution and suggest that people could find their purpose in groups on the social media platform. An article in the Telegraph cited a Church of England Bishop who called the idea of Facebook replacing the church, “a delusion.” Another commentator reflected, “Unlike Facebook, a church tells us that we are not at the centre of the world.”
Beyond these comments and a few others like them, talk of Facebook being a church or a religion, has for the most part ceased. But as Facebook’s active monthly users climb towards 2.5 billion — more than Christianity and more than Islam, the world’s two largest religions — it might be time to revisit the idea that Facebook is a religion.
A quick search for the word religion seems to relieve our fears, defining religion as “the belief in and worship of a superhuman controlling power, especially a personal God or gods.” Facebook lacks a superhuman, deity-like figure that is the object of users’ worship.
While it is true that Facebook, and social media in general, aren’t belief systems per se, one could argue that they still function like a religion.
From a sociological perspective, we could consider the definition of religion put forward by sociologist Thomas Luckmann, who defined religion as, “a socially constructed, more or less solidified, more or less obligatory system of symbols.” This system of symbols includes, “a stance toward the world, the legitimization of natural and social orders, and meanings…that transcend the individual with practical instructions on how to live and with personal obligations.”
The above definition is used by historian Kaspar von Greyerz in his book Religion and Culture in Early Modern Europe, 1500–1800, with one addition: religion is a “system of symbols and rituals.”
According to this definition of religion we see something that much more closely resembles how social media users interact on various platforms. Facebook, Twitter, Snapchat, and others certainly are socially constructed. And in today’s social economy it is absurd to even consider the idea of forgoing the relational and economic opportunities social media provide, thus making it “more or less obligatory.”
Symbols are ubiquitous when it comes to social media and the rituals of wishing “friends” happy birthday and posting selfies or pictures of one’s latest culinary triumphs are the liturgical expressions of our day. Users order their lives around social media. How they decide to spend their time and money, how they approach the world, and the thought processes they utilize while approaching it, are all shaped and formed by the vast influence of social media. It is safe to say that social media has more bearing on many user’s daily life than any religion does.
What’s more, according to research, internet users now spend more than 2 hours and 20 minutes on social media platforms every day. If Christians, Muslims, or Hindus spent that much time praying or reading their holy books per day, they would be considered among the most devout and holy.
The amount of time users spend on social media reflects the impact it is having on their lives and is another argument for claiming it to be a religion. It reveals that social media is the functional religion of the day as it impacts users as religion impacted European daily life before the Enlightenment. According to von Greyerz, that “was an era in which religion still played a central role in the daily life of Europeans, it was experienced primarily in everyday settings.” Just like social media is today.
If social media is the functional religion of our day, and impacts users in similar ways to how religion impacted Europeans in the 1500’s, there are numerous implications that need to be considered. For example, if social media functions like a religion, Mark Zuckerberg is not only the founder and CEO of a hugely successful and profitable company, he is the pope of the largest religious order the world has ever seen. Pope Zuckerberg wields more worldly power and influence than any other religious leader past or present. | https://medium.com/soli-deo-gloria/the-social-media-reformation-894807a366ca | ['John Thomas'] | 2019-11-05 19:18:36.340000+00:00 | ['Politics', 'Culture', 'Books', 'Social Media', 'Religion'] |
Top 20 Reasons To Include SEO In Your Digital Marketing Strategy | Visualmodo, Jul 31
Businesses should be proactive and innovative in marketing to be relevant and competitive. They do so by incorporating the developments and impact of the technologies of the digital age into their business processes. Specific to marketing, digital advertising is now the most convenient and accurate way to reach clients. In this article, we’ll share the top 20 reasons to include SEO in your digital marketing strategy.
Many business owners today realize that Search Engine Optimization (SEO) is vital to a successful marketing plan. What they might don’t know is just how to use it. According to Oregon Web Solutions, implementing strategically selected keywords into valuable content will significantly increase your visibility in the online world. Even more critical is the invaluable information it provides to understand how your clients find you.
However, comparing SEO to PPC, affiliate marketing, and other marketing methods, your question might be: how does SEO fit in, and is it essential?
To help you with that, below are some points that address the SEO role in your marketing strategy. Also, explained below are reasons why SEO should be prioritized in your business’s digital marketing campaign.
What Is SEO?
SEO involves the development of an easy-to-categorize and searchable website. This is an integral component of digital marketing efforts — an integrative model that funnels clients to a company’s website.
With SEO, the goal is for your website to rank high in search engine’s result pages. When that happens, you lead part of billions of consumers to your market. This means a large percentage of people are searching for what you are selling.
Having the highest rank and maintaining that rank on search engine result pages (SERPs) is essential. Only then can you have reliable online traffic.
What Are The Different Types Of SEO?
There are three types of SEO to introduce, each giving your website the best chance to improve its ranking on the SERPs. These types are:
On-page SEO
This involves researching keywords and using keywords in high-quality content from various web pages on your site.
Off-page SEO
Aims to improve your website’s partnership with other relevant sites. Off-page SEO is mainly focused on growing backlinks. These links can bring interconnected traffics to your website from a vast number of related websites.
Technical SEO
This includes loading speed, indexing, crawlability, mobile-friendliness, structured data, site architecture, and security.
Reasons To Include SEO In Your Marketing Strategy
After gaining that general knowledge about SEO, the next thing to know is why you should include SEO in your digital marketing campaigns.
Read on below to find out why.
1 Efficient SEO Adds Legitimacy To Your Business
Consider the times you’ve searched Google or some other search engine for a good or service. You view the links and information provided on the very first page as the most reliable sources available.
Clients rarely get beyond the first few listings as they assume that the first pages are relevant and trustworthy.
By employing Portland SEO services and engaging in your business’ SEO strategy, you are bringing value to your brand. SEO places your web pages, products, or services on those first search pages.
2 Easy Access Prevails, And Valuable Content Is Important
User-friendly sites with engaging, easy-to-find information are the ones that rank. Build web pages with original content around keyword concepts so search engines can index them and rate you higher.
Positive visitor experiences are the surest path to a higher rating. Thus, keep your material honest and focused. Avoid stuffing content with buzzwords and keywords to prevent people from leaving the site. Dissatisfied prospective clients will hurt your rankings.
3 Inbound Marketing Is Better Than Other Forms Of Marketing
Inbound marketing strategies, such as SEO, social media posting, and blog posts, usually get many more leads than outbound and other paid practices.
Thus, as business owners, instead of relying on outbound or other paying ads, engaging in quality content creation would be a more profitable option.
Also, improving and maximizing your platform-based social media pages and integrating SEO into all facets of your business’ digital marketing techniques would be a much smarter marketing plan.
4 Search Engines Are Getting Bigger: Reasons Include SEO In Your Strategy
You naturally think that when someone mentions search engines, they are talking about Google, right?
The tech giant does have a large market share, and it is so significant that people have made ‘Googling’ a verb.
As a business owner, you need to be aware that on alternative services, like Microsoft’s Bing, a substantial percentage of the searches also take place. Make it a point to browse blogs from alternative options to see how far you’re on the list. You might be surprised that boosting user engagement and inserting meta tags could be all it takes to raise a few ranks on other search engines.
5 Increasing Your Traffic
The essential benefit of ranking on search engine pages is the prominent exposure it provides. Higher ranks on search engine pages attract more views, which contributes to more visits and, eventually, sales.
6 Most People Use Mobile Devices To Search
You may not need statistics to prove that the web-based mobile phone market has risen in the last couple of years, surpassing desktops.
SEO helps make your website mobile-friendly by optimizing your web pages for mobile browsers. Thus, if you’d like to rank well in the search engine results pages, having a mobile-friendly site is crucial.
7 Return On Investment: Reasons Include SEO In Your Strategy
SEO gives businesses the ability to monitor and measure results. It allows you to see where your marketing strategies are going and whether any changes are necessary.
An SEO organization can trace the paths users take, from the keywords they searched to the transaction they finally made.
This data helps you weigh the ROI of your SEO strategy against your expenditure.
8 Creating A Better Experience For Visitors
Another benefit of SEO is that all the time you spend producing quality content and improving your platform also increases your site’s usability. As a result, it ensures a smooth and positive consumer experience.
For example, if you make improvements to ensure the site’s responsiveness, it becomes accessible from all devices.
Equally, you can reduce your bounce rate, improve your loading times, and encourage people to spend even more time on your web page. Bear in mind that most users expect a website to load in less than two seconds. The longer it takes to load, the higher the bounce rate spikes, resulting in lower conversions.
9 Cost-efficiency: Reasons Include SEO In Your Strategy
SEO is far more cost-effective compared to other advertising methods. That is because you can directly attract consumers when they search for your product or service.
The more established your website is, the more likely you are to hit on a hot lead using a good SEO marketing strategy.
10 Efficient SEO Will Boost PPC: Reasons Include SEO In Your Strategy
When you’ve used PPC as another marketing strategy, you’ll realize that the advertisement affects quality ratings. A superior quality score for PPCs will cut the price per click and make your ads perform much better.
Boosting your site with SEO will improve the overall scoring rate of your PPC ads.
Many marketing strategies often work hand in hand with SEO. Matching SEO with search engine advertisements improves the effectiveness of the ad and increases traffic. SEO can also boost attempts to retarget and raise awareness of the brand.
11 Improve The Accessibility And Usability Of The Site
SEO makes it easier for people and search engines to explore your page. It reshapes the site’s links and layout, making content easier to locate. This streamlines the task of finding information on the website and allows search engines to easily scan your site for relevant pages.
12 No Manipulations Of Contents At All
Site content and links can no longer be manipulated. Traditional black-hat referencing — a process of linking to a website that is entirely irrelevant to the source site — has been replaced by mentions or web quotes.
With mentions and quotes, links lead to sites that are meaningful to the source site, with the brand seamlessly integrated into the content.
13 Improve Awareness Of The Brand: Reasons Include SEO In Your Strategy
Having your web page at the top of search results will give you a lot of views and impressions. That means your platform is far more visible, and the more visible it is, the more brand awareness your company can build.
Moreover, ranking at the top for your targeted keywords lets users associate your brand with those keywords, which in turn increases the brand’s trustworthiness.
14 Search Engines Are Unreliable: Reasons Include SEO In Your Strategy
A point worth mentioning is that SEO is essential because search engines are not perfect. If you don’t take measures to overcome their shortcomings, your website will pay the price.
For instance, if your site does not provide a clear link structure, search engines may not adequately crawl and index your site, leading to a lower ranking.
In fact, coding errors can effectively shut out search engines, making it impossible for your site to rank higher regardless of the time and resources you invested in your SEO efforts.
Common areas in which issues may occur with search engine results include:
Photos, audio, video, files, and other graphics included in the content
Duplicate articles
Formats
Semantics
Language
15 Links To Your Site Are Of Utmost Value
Another way to boost your ranking in search pages is when other websites create links leading to your site. When this happens, search engines would rank your page high for producing useful materials.
Way back a couple of years ago, all it took to rank on search engines was to get hundreds of references from poor-quality pages to improve the score.
Today, however, the importance of links on your blog or website relies on the website’s standard that links to you.
Just some connections from high-traffic sites to your company can do miracles for your rating!
16 Increases Loading Speeds Of The Website: Reasons Include SEO In Your Strategy
Your website's loading speed lets users experience your material instantly. When your site loads slowly, consumers are more likely to leave without seeing the content behind that link. SEO gives priority to page load speed, making sure your site is faster and easier to use.
17 More Links, Higher Ranking Is Not The Strategy Of Today
A key factor to scale on the search engine is your website’s credibility. The old method of creating a link has certain improvisations, and the popular perception of ranking high throughout the search engine results is evolving. Content tactics are not always the principal means. But creating inbound quality connections remains a significant SEO factor to influence the rankings.
In this sense, the meaning of the word “link” is evolving. The core concept behind conventional link building was to count all the do-follow links that a website could produce and disregard all the no-follow links entirely. The aim was to create as many links as practicable. The method has drawbacks on which certain black-hat advertisers banked, and eventually search engines revised their formulas, which ultimately ended the practice of conventional link building. This was termed spamming.
Also, guest-blogging done with the sole intention of creating links has burned out. Guest-blogging performs very well when it comes to attracting new followers, positioning yourself as an authority, and interacting with the target crowd.
18 The Secret To SEO Is Analytics
For better performance, it is essential to track your rankings on different search engines. Begin by monitoring your website’s most critical indicators to create a benchmark for your results. Make minor improvements to your content and monitor if it affects an increase on your rank or perhaps funneled in some traffic.
Avoid implementing many improvements concurrently, so you can still trace and report what was responsible for the better results.
19 Targets Markets: Reasons Include SEO In Your Strategy
Today's SEO doesn't only focus on attracting prospects' attention but on getting leads involved with what you're offering. Try to evaluate your SEO marketing campaign strategies by answering the questions: Who is the customer you are looking for, in terms of demographics? How do they perform web searches? Where are they from?
The more precise your responses are, the more valuable your SEO efforts become.
20 Social Media Performs A Pivotal Function
Finally, social media is now a constantly changing platform which has shifted from an underlying messaging platform to a massively lucrative sales channel. Many users start their searches on social media and work their way to the site of a business. Sharing up-to-date, entertaining, and tailored content will draw more users to your page, and ultimately to your site.
How To Include SEO In Your Digital Marketing Strategies?
If you're thrilled with all the benefits SEO offers for your business, then the next big thing to do is adopting it in your marketing strategies. Start by appointing SEO professionals to integrate search engine optimization into your digital marketing campaigns. The critical SEO-specialized digital marketing and technical responsibilities involve positions such as Front-end Developer, SEO Strategist, Web Manager, and some public relations roles.
If you don’t want to be bothered by hiring these specialists and adding up to your team, you can also resort to SEO marketing firms.
Final Thoughts About Reasons Include SEO In Your Strategy
Rome, with all its glory and grandeur, wasn't built in a day. Neither is a perfect and robust marketing plan for your business.
If you are in the market of building your own “Rome,” take the following advice. Do your data collection, start a site that will raise your search engine rankings, test what you have put in place, and then use the collected data to help sustain your marketing campaign. Also, use methodologies, and collectively work across all areas of your business to maximize your success.
When incorporating SEO into your digital marketing strategies, never forget the reasons explained above. These points will help you a lot in the long run.
You (Probably) Don’t Need For-Loops | But first, let’s take a step back and see what’s the intuition behind writing a for-loop:
1. To go through a sequence to extract out some information
2. To generate another sequence out of the current sequence
3. Writing for-loops is second nature to me because I'm a programmer
Fortunately, there are already great tools that are built into Python to help you accomplish the goals! All you need is to shift your mind and look at the things in a different angle.
What you gain by not writing for-loops everywhere
1. Fewer lines of code
2. Better code readability
3. Leave indentation for managing context only
Let’s see the code skeleton below:
with ...:
    for ...:
        if ...:
            try:
            except:
            else:
In this example, we are dealing with multiple layers of code. THIS IS HARD TO READ. The problem I found in this code is that it is mixing the administrative logic (the with, try-except) with the business logic (the for, if) by giving them the indentation ubiquitously. If you are disciplined about using indentation only for administrative logic, your core business logic would stand out immediately.
“Flat is better than nested” — The Zen of Python
“I wish the code is flatter,” I hear you.
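As a sketch of what "flatter" can look like (the function names here are hypothetical, not from the skeleton above), the business logic moves into small named functions so that only the administrative logic keeps its nesting:

```python
# business logic: the for/if now live in small, named functions
def is_valid(x):
    return x >= 0

def transform(x):
    return x * 2

def process(items):
    return [transform(x) for x in items if is_valid(x)]

# administrative logic: error handling stays at the top level
try:
    results = process([1, -2, 3])
except TypeError:
    results = []

print(results)  # [2, 6]
```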
Tools you can use to avoid using for-loops
1. List Comprehension / Generator Expression
Let’s see a simple example. Basically you want to compile a sequence based on another existing sequence:
result = []
for item in item_list:
    new_item = do_something_with(item)
    result.append(new_item)
You can use map if you love MapReduce, or, Python has List Comprehension:
result = [do_something_with(item) for item in item_list]
Similarly, if you wish to get an iterator only, you can use a Generator Expression with almost the same syntax. (How can you not love the consistency in Python?)
result = (do_something_with(item) for item in item_list)
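One practical difference to keep in mind, not spelled out above: the generator expression is lazy, so do_something_with is not called until the result is iterated. A small sketch to demonstrate:

```python
calls = []

def do_something_with(item):
    calls.append(item)  # record the call so we can see when evaluation happens
    return item * 2

result = (do_something_with(item) for item in [1, 2, 3])
assert calls == []             # nothing has run yet
assert list(result) == [2, 4, 6]
assert calls == [1, 2, 3]      # the work happened only on iteration
```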
2. Functions
Thinking in a higher-order, more functional programming way, if you want to map a sequence to another, simply call the map function. (Be my guest to use list comprehension here instead.)
doubled_list = map(lambda x: x * 2, old_list)
If you want to reduce a sequence into a single value, use reduce
from functools import reduce
summation = reduce(lambda x, y: x + y, numbers)
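As a side note (not in the original text): for this particular reduction, the operator module spares you the hand-written lambda, and the builtin sum is simpler still:

```python
from functools import reduce
import operator

numbers = [1, 2, 3, 4]

# operator.add is equivalent to lambda x, y: x + y
total = reduce(operator.add, numbers)

assert total == sum(numbers) == 10
```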
Also, lots of Python's builtin functions consume iterables (sequences are all iterable by definition):
>>> a = list(range(10))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> all(a)
False
>>> any(a)
True
>>> max(a)
9
>>> min(a)
0
>>> list(filter(bool, a))
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> set(a)
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
>>> dict(zip(a,a))
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}
>>> sorted(a, reverse=True)
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>>> str(a)
'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]'
>>> sum(a)
45
3. Extract Functions or Generators
The above two methods are great to deal with simpler logic. How about more complex logic? As a programmer, we write functions to abstract out the difficult things. Same idea applies here. If you are writing this:
results = []
for item in item_list:
    # setups
    # condition
    # processing
    # calculation
    results.append(result)
Apparently you are giving too much responsibility to a single code block. Instead, I propose you do:
def process_item(item):
    # setups
    # condition
    # processing
    # calculation
    return result

results = [process_item(item) for item in item_list]
How about nested for-loops?
results = []
for i in range(10):
    for j in range(i):
        results.append((i, j))
List Comprehension got your back:
results = [(i, j)
           for i in range(10)
           for j in range(i)]
How about if you have some internal state in the code block to keep?
# finding the max prior to the current item
a = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
results = []
current_max = 0
for i in a:
    current_max = max(i, current_max)
    results.append(current_max)

# results = [3, 4, 6, 6, 6, 9, 9, 9, 9, 9]
Let’s extract a generator to achieve this:
def max_generator(numbers):
    current_max = 0
    for i in numbers:
        current_max = max(i, current_max)
        yield current_max

a = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
results = list(max_generator(a))
“Oh wait, you just used a for-loop in the generator function. That’s cheating!”
Fine, let’s try the following.
4. Don’t write it yourself. itertools got you covered
This module is simply brilliant. I believe this module covers 80% of the cases that you makes you want to write for-loops. For example, the last example can be rewritten to:
from itertools import accumulate
a = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
results = list(accumulate(a, max))
I know, I know. This was a terrible example. I was just trying to prove a point — “for-loops could be eliminated in your code.” However, this doesn't make the elimination any better. This example is very convoluted, hard to digest, and will make your colleagues hate you for showing off.
Also, if you are iterating on combinatoric sequences, there are product(), permutations(), and combinations() to use.
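A quick sketch of those three helpers (the inputs are illustrative):

```python
from itertools import combinations, permutations, product

# product() replaces two nested for-loops over independent ranges
pairs = list(product(range(2), range(2)))
print(pairs)  # [(0, 0), (0, 1), (1, 0), (1, 1)]

# permutations() when order matters, combinations() when it does not
print(list(permutations('ab')))          # [('a', 'b'), ('b', 'a')]
print(list(combinations([1, 2, 3], 2)))  # [(1, 2), (1, 3), (2, 3)]
```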
Conclusion
You don’t need to write for-loops in most scenarios You should avoid writing for-loops, so you have better code readability
Action | https://medium.com/python-pandemonium/never-write-for-loops-again-91a5a4c84baf | ['Daw-Ran Liou'] | 2017-03-28 17:53:38.772000+00:00 | ['Coding', 'Programming', 'Software Development', 'Python', 'Best Practices'] |
The Declining Middle Tier of ICO Fundraising | The median fundraise in initial coin offerings (ICOs) has been on a steady march downward since its peak in the heady days of the second quarter. In the fourth quarter, even the high-profile Science Blockchain (12.2 million USD) failed to come near its hard cap. Chatter at cryptocurrency conferences points to a more cautious investor base. But the truth is, crypto is starting to resemble the world wage economy: tokens that can command large fundraises are rolling in on bigger and bigger waves of money, while the bottom tier bounces along and the middle tier gets squeezed.
We took the list of ICOs that opened in each quarter, established fundraise amounts for the ones that had closed, and ranked them into three tiers by the amount raised. In Q4 so far, the gap has only gotten wider between top projects by fundraise, like Polkadot (USD 142.4 million) and Liquid (USD 105 million, and projects in the middle, like Dragonchain (USD 13.7 million) and AirSwap (USD 12.6 million).
Meanwhile, the number of projects calling it quits is ballooning. We’re now seeing new projects list at a rate of about five to seven per day. Most of those make it to ICO open, but a growing number of them are shutting down websites and social media, without a successful close. At the end of Q3, we counted 15 canceled out of 241 ICOs opened in the quarter. Since then, 35 more have canceled and 15 of those opened so far in Q4 are already canceled.
Of course, it’s not wise to measure projects’ success by the amount they raise. And in Q4, we’ve already seen a few high-profile projects, like Simple Token (21.6 million USD), come to market with modest fundraising goals. That hasn’t slowed the overall pace: So far this quarter, 133 projects have raised 1.38 billion USD, by our preliminary mid-quarter count, on pace to easily surpass the 1.74 billion USD raised by 133 projects in all of Q3. | https://medium.com/tokenreport/the-declining-middle-tier-of-ico-fundraising-725767586f6b | ['Galen Moore'] | 2017-12-05 16:48:43.484000+00:00 | ['Startup', 'ICO', 'Blockchain', 'Ethereum', 'Bitcoin'] |
I’m Tired Of Being Afraid | I’m Tired Of Being Afraid
Fear is my arch enemy. I can’t lose this time.
Image by ambermb from Pixabay
For as long as I can remember my life choices have been primarily governed by fear. Fear of abandonment, fear of physical pain, fear of humiliation, fear of pressure and expectation, fear of my parents’ wrath, fear of not pleasing those who I rely on for emotional validation, fear of failing and falling, fear of the unknown, fear of predators, fear that I didn’t really exist, fear that life is only a dream, fear that I was trapped in a time loop, fear that I was stuck in a dimension that was ruled by entities that were inimical to human beings, fear of uncertainty, fear of death, fear of eternal torment, fear that I would be the only one who couldn’t make the leap to the other side of forever, fear that I was not up to the challenge of living life. And fear that every day I was drowning.
With all these fears it is a miracle that I am still functional. There but for the grace of God go I. They say my fears are formed partly by traumatic past life experience, partly from trauma in this life, remembered and suppressed, and partly by social conditioning and the malign influence of forces beyond my control. I am afraid because I lack faith. I am afraid because I am living my worst-case scenario. I am afraid because when I look in the mirror I see a haunted, hungry man-child who looks nothing like the way I feel inside. And I’m so gosh darn tired of it.
But all the fears I listed above seem like symptoms instead of an actual illness. What is my diagnosis? What lies beneath all this worry? I think it's two things. One, it's an inherent distaste for the human experience. Two, it's a fundamental mistrust in the intentions of the universe. What can be done about this?
The first is an issue I’ve struggled with since I first emerged from the womb. Nothing fit right. I didn’t like the feeling of skin on bones. I didn’t like that my caregivers couldn’t read my mind and that I had to sob to get my needs met. I didn’t like that there were so many crude boundaries drawn reinforcing our separation from one another, boundaries that grew more flimsy and transparent as life went on.
The second is a slightly more recent concern. Once 4D made its presence known in my life I discovered to just what degree our world is manipulated by those in the higher realms. My space was invaded and it was made clear to me that I exist at the mercy of those who control this plane of being. One night my guides showed me their shadows while playing with my environment and it was one of the most terrifying experiences I’ve ever had. I thought my worst fear had come to life. The universe was actually evil, all this was actually a joke, and I was trapped here forever.
I hovered over my body as a dark cloud of smoke. Then suddenly I poured back into myself, up to my eyeballs, and was vacuum sealed back into my body and then, and only then, did I truly know the horror of being human. I knew your density. I knew your vulnerability. I knew your anguish. So I am wary of those who try to persuade me that this is a beautiful life if you want it to be so. I know for a fact that it isn’t, but we have to pretend that it is because we don’t want to further distress the rest of the patients in this madhouse.
Trust and acceptance are my avenues towards overcoming fear. I worry about accepting. I worry that it might give way to complacency. It might mean I stop asking questions and who wants to stop asking questions, when the answers you are getting are so unsatisfying? Trust is something else entirely. What am I meant to trust? The positive intentions of those who have given me my life back only to snatch it right away again? I think I am meant to trust myself and have faith that if I am doing the next right thing then I will be supported.
And then there is the small matter of God. He is never far from my mind. I’ve been looking for Him for close to thirty years now and I have felt His presence exactly twice in my life, both times making a connection with not a singular intelligence but with an awareness of intelligent infinity that made such an impression on me that my soul soared while it lasted. I want more of that in my life. If I could touch that then there would be no need for fear, because fear would have no place in the sheer cosmic immensity of the transcendent phenomenon to which we attach the label, ‘God’.
I have been mocked for my belief. Once when I was being tortured I said ‘It’s in God’s hands’ and my torturers chuckled and said, ‘God works in mysterious ways’ and then, ‘You’re in our hands for now’. And these were entities that have perspective that makes mine look like a child building castles in the sand. They’re in the position to know and they’re atheists. Am I wrong? Is what I call God merely an energy field? Is God just music? I can’t say. I hope not, because honestly God, I have suffered a lot in search of you. A lot.
One of my stranger obsessions is my fear of my blood sugar dropping. It’s happened only twice in my life, but that was enough to permanently imprint the fear of a recurrence on my brain. I am not diabetic. My blood sugar has not dropped in ten years. Yet I am always worrying when my next meal is coming, when I can sneak in my next snack, whether I can make this drive without having to pull over and buy something sugary. It’s part of the reason I’m so overweight. I can’t cut down on my food intake for fear that if I do I will pass out. It’s a silly fear, but it has drained more of my mental and emotional resources than practically any other over the past decade. I did conquer it briefly. I did it by just being angry enough to stand up to it. I used to carry snacks in my pockets wherever I went. I decided to stop. I said if my body wanted to kill me, it could kill me. Worst-case scenario though, I pass out and wake up in the hospital. That would be worth being free from this obsession.
Fear can motivate you to take action. To be proactive and constructive. But fear only motivates me to escape and avoid. Fear causes me to fall back into the same bad habits that created the fear in the first place. It’s similar to those who struggle with agoraphobia. You’re afraid of the leaving the house so you don’t leave the house, and the longer you go without leaving the house the greater your fear of leaving the house becomes. I never learned how to grow up. I dwell in the perpetual purgatory of late adolescence. And the longer I stay here the more afraid I am of moving on, so the more I cling to the things that make me dependent on others. Fear is a clever adversary. It knows all your weak points and can exploit them with the skill of a master tactician. It evolves to keep pace with you. It adapts to fit any circumstances. That’s why facing any one fear can be a fool’s quest because another one will pop up and you’re stuck playing emotional whack-a-mole.
Conquering fear requires digging down into your deepest self and summoning your values and virtues and seeing if they can contend with the monster. Fear is corrosive and can chew through almost anything, leaving it lifeless and inert, but it cannot corrupt who you truly are at your center point. We were built to react to fear instinctively in the moment, to clear and present dangers, not to be ruled by anticipation and superstition. Fear, like pain, is data. If we can distance ourselves from it using mindfulness techniques, if we can learn not to judge it, then we can free ourselves from its vise-like grip on our minds.
Another aspect of my fear is that not only do I not have control of my external environment, but that I’ve lost control over my internal environment as well. No more popping pills or knocking back a pint of vodka to feel better. No more using something tangible to produce specific, measurable results. I know now that this house of cards could collapse at any moment. That’s why I keep my readers on their toes by publishing three stories a day. That’s why I published 2000 pieces last year. That’s simply what a dying man does.
I’m tired of being too afraid to live my life. It’s odd. I’m not afraid of those who seek to kill me, but I am afraid of putting the recycling into the trash and receiving a tongue-lashing from my mother. Where are my priorities? I’m not afraid of having my soul snatched away by Satan, but I am worried that I’m inundating my best friend with too many stories. I’m not afraid of being struck down by a vengeful God, but I am afraid to call up and make a new appointment to see my psychiatrist. I’m not afraid of the angels and the demons. I’m afraid of people. I’m afraid of living. And I’m tired of it.
What is really bothering me? Trauma, certainly, but there has to be more to it than that. I think I’m really bothered by the dissonance between what I do and who I am, between what I say and what I think, between where I came from and where I am, between being a person and being something else. None of this sits right with me. It never has. As a child I would be constipated for two weeks at a time, then when I finally would go I would be literally bathing in my own filth. Now I’m only constipated when I’m stressed or using opioids. What was I so stressed about all the way back then? I think I was stressed about having a shadow that I could never shake. I could feel its presence oppressing me from my earliest memories. Eventually it got what it wanted, and then some. Now the two of us are stranded on a lonely beach in the distant past as high tide comes rolling in and the ocean of consciousness threatens to sweep us away. This shadow is the ultimate source of my fear. It is the voice that tells me I should be afraid, because everything and everyone is a threat.
The shadow makes a compelling case, but I no longer take what it says at face value. I know it’s trying to destroy me. I will not be run off the field of battle. I will not be disgraced. One day my inner and outer worlds will synchronize and I will act like the brave man I know myself to be. | https://medium.com/grab-a-slice/im-tired-of-being-afraid-a81f1eeb1f84 | ["Timothy O'Neill"] | 2020-12-27 14:40:08.557000+00:00 | ['Life Lessons', 'Self', 'Mental Health', 'Spirituality', 'Fear'] |
Securing Kubernetes secrets : How to efficiently secure access to etcd and protect your secrets | Etcd is a distributed, consistent and highly-available key value store used as the Kubernetes backing store for all cluster data, making it a core component of every K8s deployment.
Due to its central role, etcd may contain sensitive information related to access to the deployed services and their associated components, such as database credentials, CA keys, and LDAP login credentials, which makes it a premium target for malicious attacks.
Historically, in traditional, non-containerised environments, this data was NOT stored in such a centralised manner, as credentials were usually under the ownership of a specific team that was responsible for maintaining a certain component of the stack: the DB access credentials, for example, were known only to the DBA team, CA keys were in the hands of a few select System Administrators, etc.
With K8s, the required approach is notably different as credentials are now kept within a single central place (etcd), which, if not properly hardened, can lead to serious security breaches as the attacker may now create fake certificates, access databases and applications.
Managing and hardening your secrets becomes even more critical with tools such as Helm and Tiller; these tools allow you to install (or redeploy) an entire K8s based datacenter within minutes and they constantly interact with etcd.
The Center for Internet Security (CIS) came up with this publicly available document providing guidance on how to properly harden and secure your Kubernetes cluster.
The only single recommendation CIS provides regarding hardening etcd is using TLS:
ETCD is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. Its access should be restricted to specifically designated clients and peers only. Authentication to ETCD is based on whether the certificate presented was issued by a trusted certificate authority. There is no checking of certificate attributes such as common name or subject alternative name. As such, if any attackers were able to gain access to any certificate issued by the trusted certificate authority, they would be able to gain full access to the ETCD database. Use a different certificate authority for ETCD from the one used for Kubernetes.
However, using TLS on its own is not sufficient as a solution. Every certificate created and signed with the same CA has the potential to access every service inside the cluster. The problem is further exacerbated if a single CA is used for all k8s clusters. Even when each kubernetes cluster has a dedicated CA, new client keys can be easily created but not as easily revoked. Once again, any new keys created automatically have access to every service in the targeted k8s cluster.
Because of the severity of the security risks associated with etcd, we will look into 2 additional methods that can be implemented to further secure your etcd data:
Encrypting secrets (and/or other resources) in etcd
Using certificates to stop clients from accessing the etcd server
To follow the steps illustrated in the following sections, it is necessary to start up a Kubernetes cluster. This can be done using any of the methods immediately below.
Vanilla K8s:
install script for latest kubernetes 1.10. This is the first version that installs etcd with tls
install script for older kubernetes versions, when etcd was not installed with tls by default
This also works on openshift platform. You can see the install script here. All relative commands for openshift are in this script.
The install scripts have been tested on AWS Centos ami. It should work for you too if you use the same image.
Starting with K8s 1.7 (and etcd v3) you can encrypt resources inside etcd using several different algorithms. At the very least, you should encrypt all your secrets. It is especially true if you are using Helm as a lot of Helm charts require LDAP or DB credentials to be directly made available in the ConfigMaps.
The encryption follows a very simple rule:
encrypt using the first provider defined
decrypt by checking each provider, in the order the providers are defined, and using the first one that works
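As a toy model of that rule (the provider names and the rot13 "cipher" below are made up purely for illustration; real providers prefix stored values with an identifier such as k8s:enc:aescbc:v1:key1):

```python
import codecs

# each provider: (stored-value prefix, encrypt, decrypt);
# the identity provider has no prefix and stores plaintext as-is
aescbc_like = ("k8s:enc:toy:v1:",
               lambda s: codecs.encode(s, "rot13"),
               lambda s: codecs.decode(s, "rot13"))
identity = ("", lambda s: s, lambda s: s)

def write(providers, plaintext):
    prefix, enc, _ = providers[0]      # always encrypt with the first provider
    return prefix + enc(plaintext)

def read(providers, stored):
    for prefix, _, dec in providers:   # first provider that matches decrypts
        if prefix and stored.startswith(prefix):
            return dec(stored[len(prefix):])
        if not prefix:                 # identity accepts anything
            return stored
    raise ValueError("no provider could read the value")

stored = write([aescbc_like, identity], "ZZ_mydata_ZZ")
assert stored.startswith("k8s:enc:toy:v1:")
assert read([aescbc_like, identity], stored) == "ZZ_mydata_ZZ"
# a value written before encryption was enabled is still readable via identity
assert read([aescbc_like, identity], "old-plaintext") == "old-plaintext"
```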
To implement the full workflow, it is necessary to add the experimental-encryption-provider-config flag to the apiserver
Define the EncryptionConfig config file (place the content in /etc/kubernetes/pki/encryption-config.yaml)
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - identity: {}
Within the file, the resources.resources field is an array of Kubernetes resource names that should be encrypted. The providers array is an ordered list of the possible encryption providers.
Enable experimental-encryption-provider-config in the kube-apiserver. Edit /etc/kubernetes/manifests/kube-apiserver.yaml and add:
spec:
  containers:
  - command:
    - kube-apiserver
    - --experimental-encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml
Restart the apiserver. Because the API server is being run as a static pod, kubelet will restart it when the configuration change is detected. Otherwise, you will need to restart the service yourself.
We also install the etcd package in order to print the data from inside the etcd server:
# yum install etcd -y
Resolving Dependencies
--> Running transaction check
---> Package etcd.x86_64 0:3.2.18-1.el7 will be installed
--> Finished Dependency Resolution Dependencies Resolved ==================================================================================================================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================================================================================================================
Installing:
etcd x86_64 3.2.18-1.el7 optymyze_external_rpms 9.3 M Transaction Summary
==================================================================================================================================================================================================================================
Install 1 Package Total download size: 9.3 M
Installed size: 42 M
Downloading packages:
etcd-3.2.18-1.el7.x86_64.rpm | 9.3 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : etcd-3.2.18-1.el7.x86_64 1/1
Uploading Package Profile
Verifying : etcd-3.2.18-1.el7.x86_64 1/1 Installed:
etcd.x86_64 0:3.2.18-1.el7 Complete!
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, priorities, product-id
Also, to make the commands shorter, define shell variables for the etcdctl TLS parameters. Here we will use the certificate paths created by kubeadm-1.10. You should update them for your specific cluster if needed (check by running grep -- '--etcd' /etc/kubernetes/manifests/kube-apiserver.yaml ).
Meaning of variables:
DIR — path where the k8s certificates are created
SSL_OPTS — etcdctl parameters to enable TLS connectivity
SECRETS_PATH — path in etcd where kubernetes keeps secrets
DIR=/etc/kubernetes/pki/

SSL_OPTS="--cacert=${DIR}/etcd/ca.crt --cert=${DIR}/apiserver-etcd-client.crt --key=${DIR}/apiserver-etcd-client.key --endpoints=localhost:2379"
SECRETS_PATH=/registry/secrets
Test that we can list stuff in etcd:
# ETCDCTL_API=3 etcdctl $SSL_OPTS get --keys-only=true --prefix $SECRETS_PATH
/registry/secrets/default/default-token-rhwwn
/registry/secrets/kube-public/default-token-9qfc8
/registry/secrets/kube-system/attachdetach-controller-token-clvsn
.............
No Encryption
To demonstrate the difference of our solution, we begin with no encryption. This provider doesn’t do any encryption. It can be used in case you want to decrypt everything or just to test.
Let’s create a secret and read it directly from etcd. You should be able to clearly see the key name and key value:
# kubectl create secret generic secret1 --from-literal=XX_mykey_XX=ZZ_mydata_ZZ
secret "secret1" created
# kubectl get secret secret1 -o yaml
apiVersion: v1
data:
  XX_mykey_XX: WlpfbXlkYXRhX1pa
kind: Secret
metadata:
  creationTimestamp: 2018-06-18T13:11:54Z
  name: secret1
  namespace: default
  resourceVersion: "20410585"
  selfLink: /api/v1/namespaces/default/secrets/secret1
  uid: 2bb3b7df-72f9-11e8-ad5f-005056b1028d
type: Opaque
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret1 -w fields | grep Value
"Value" : "k8s\x00
\f
\x02v1\x12\x06Secret\x12s
L
\asecret1\x12\x00\x1a\adefault\"\x00*$2bb3b7df-72f9-11e8-ad5f-005056b1028d2\x008\x00B\b\b\x9aߞ\xd9\x05\x10\x00z\x00\x12\x1b
\vXX_mykey_XX\x12\fZZ_mydata_ZZ\x1a\x06Opaque\x1a\x00\"\x00"
Apply an Encryption Algorithm
Let's add an encryption algorithm to see what happens. We choose aescbc because this is the recommended choice for encryption at rest.
Update encryption-config.yaml:
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key2
secret: dGhpcyBpcyBwYXNzd29yZA==
- identity: {}
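Note that the demo secrets above are just base64 of short phrases (16 bytes each, a valid AES-128 key size). For real use you would generate a random key, e.g. 32 bytes for AES-256. A quick sanity check, assuming GNU coreutils:

```shell
# The sample key decodes to a readable 16-byte phrase:
printf 'c2VjcmV0IGlzIHNlY3VyZQ==' | base64 -d | wc -c
# -> 16

# Generate a proper random 32-byte key for the config:
KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
# Verify it round-trips to exactly 32 bytes:
printf '%s' "$KEY" | base64 -d | wc -c
# -> 32
```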
Since kubelet only watches the pod manifests in /etc/kubernetes/manifests, a change to encryption-config.yaml will not be picked up automatically, so we need to restart the apiserver manually:
docker stop $(docker ps | grep k8s_kube-apiserver | gawk '{print $1}')
Test:
# kubectl create secret generic secret2 --from-literal=XX_mykey_XX=ZZ_mydata_ZZ
secret "secret2" created
# kubectl get secret secret2 -o yaml
apiVersion: v1
data:
XX_mykey_XX: WlpfbXlkYXRhX1pa
kind: Secret
metadata:
creationTimestamp: 2018-06-18T14:23:06Z
name: secret2
namespace: default
resourceVersion: "20418382"
selfLink: /api/v1/namespaces/default/secrets/secret2
uid: 1e4f5d2f-7303-11e8-8c2c-005056b1028d
type: Opaque
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret2 -w fields | grep Value
"Value" : "k8s:enc:aescbc:v1:key1:7^İ\xe9\xc8\x1e\xa7̔=D+\x9e%\x1a\xf4\x10o@\xec\xc14&<Z\xd1\xde\xfa\xca-'#\xa2K\x1c\xff\x101a\x86\xb0\xd7.\xa9\x19\x04\x93m\xa1\xee\xacDe\x95/\xd8\xe7\xaehp~\xc9\x0e\xe9\x8f}\x9a\x8a\xb0f\xf9\xeb\xb7\u007f@\x87\xa0\xa6\x98\xe78\xd0+\xd45\"S\x17\x8c\x84\xa6ㅽb\xda\xe6\xfc\xa1\xd9[[~\x82\xfbKS\x82\xf0>o\xc1 \x8b&{\xa1\r\x14Un\x03\xf7\x1f=\xe5\x1b \xa7t\xed[\x8a\xec\xb8\xf1\xe4\xe2\xc1\x81\xb00=cbl·ɬ\x12`\xf2|\x1b\t\xe4#\xcd"
The new secret is now encrypted, with the value prefixed by “k8s:enc:aescbc:v1:key1”.
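The prefix follows a fixed layout, k8s:enc:&lt;provider&gt;:v1:&lt;key name&gt;:, so the provider and key used for any stored value can be read off with cut (ciphertext shortened here for illustration):

```shell
VALUE="k8s:enc:aescbc:v1:key1:CIPHERTEXT"
PROVIDER=$(printf '%s' "$VALUE" | cut -d: -f3)
KEYNAME=$(printf '%s' "$VALUE" | cut -d: -f5)
echo "$PROVIDER/$KEYNAME"
# -> aescbc/key1
```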
Let’s encrypt all the other secrets as well:
# kubectl get secrets --all-namespaces -o json | kubectl replace -f -
secret "default-token-rhwwn" replaced
secret "secret1" replaced
secret "secret2" replaced
secret "default-token-9qfc8" replaced
secret "attachdetach-controller-token-clvsn" replaced
secret "bootstrap-signer-token-xgnfg" replaced
....
Check that the old secret is now encrypted:
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret1 -w fields | grep Value
"Value" : "k8s:enc:aescbc:v1:key1:\xda
W0~\x83\xe4\x80Ճ$J\x1e\xa2\x02z\xc9\v\xd1\xd0$)\xb2K\x9f\xc2\xff\xcdJ5\xfa\"\x13\xc4\f\x86\xc0{P\xceW\x9e\xd1z;b$\x97\xe8\xb4l\xd0\xfa\xd8 \xe2Vc\x8c\xa2\xcd\xe5\xb0\x04(l\x18\x13\xbf\xe2\xb7|\xf1m\xef)\xfd\x97\xcbk-\"\xba\x819\xcf,_\xf6\fxP\xf2\x13\x94\x9b\xca\xf4\xde{d\xcb\xceq\x84q\xae\xaa\x06\x14\xb7q\x1d|L\x8eS\x8c\xc9$\x8e\x80D\xf0\xda\xe2si\xb6,@\xa2\xf9\xae\xf2~\xe3w\x8e4fr{e\x0f'\xcc\xf6\xe7\xadd\x83^\xdb\x03\xf1jT\x13>"
Each resource is encrypted with a specific key. If you change the value of that key, Kubernetes will no longer be able to decode the resource. You can test this by swapping the names of the two keys and trying to retrieve a secret:
# cat /etc/kubernetes/pki/encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key2
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key1
secret: dGhpcyBpcyBwYXNzd29yZA==
- identity: {}
# docker stop $(docker ps | grep k8s_kube-apiserver | gawk '{print $1}')
000e03b50c0f
# kubectl get secret secret2 -o yaml
Error from server (InternalError): Internal error occurred: invalid PKCS7 data (empty or not padded)
Using multiple algorithms
In this example we encrypt a secret with a new algorithm and check that different secrets are encrypted with different providers. After that, we re-encrypt everything with the new provider, which migrates all the previously stored secrets to the new algorithm.
Change encryption-config.yaml so that the first provider is the secretbox provider:
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- secretbox:
keys:
- name: key1
secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
- aescbc:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key2
secret: dGhpcyBpcyBwYXNzd29yZA==
- identity: {}
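secretbox (NaCl) requires its key to be exactly 32 bytes. The demo key above is just the alphabet plus digits, base64-encoded, which you can confirm locally:

```shell
printf 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=' | base64 -d
# -> abcdefghijklmnopqrstuvwxyz123456
printf 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=' | base64 -d | wc -c
# -> 32
```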
Restart the apiserver:
# docker stop $(docker ps | grep k8s_kube-apiserver | gawk '{print $1}')
Verify that secrets are encrypted correctly: the old secret still uses aescbc, while the new one uses secretbox.
# kubectl create secret generic secret3 --from-literal=XX_mykey_XX=ZZ_mydata_ZZ
secret "secret3" created
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret1 -w fields | grep Value
"Value" : "k8s:enc:aescbc:v1:key1:\xda
W0~\x83\xe4\x80Ճ$J\x1e\xa2\x02z\xc9\v\xd1\xd0$)\xb2K\x9f\xc2\xff\xcdJ5\xfa\"\x13\xc4\f\x86\xc0{P\xceW\x9e\xd1z;b$\x97\xe8\xb4l\xd0\xfa\xd8 \xe2Vc\x8c\xa2\xcd\xe5\xb0\x04(l\x18\x13\xbf\xe2\xb7|\xf1m\xef)\xfd\x97\xcbk-\"\xba\x819\xcf,_\xf6\fxP\xf2\x13\x94\x9b\xca\xf4\xde{d\xcb\xceq\x84q\xae\xaa\x06\x14\xb7q\x1d|L\x8eS\x8c\xc9$\x8e\x80D\xf0\xda\xe2si\xb6,@\xa2\xf9\xae\xf2~\xe3w\x8e4fr{e\x0f'\xcc\xf6\xe7\xadd\x83^\xdb\x03\xf1jT\x13>"
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret3 -w fields | grep Value
"Value" : "k8s:enc:secretbox:v1:key1:\xba\xf8,Q@\xb9\xb6q3$k\x04\xeeV\x99|Z'\xdeE<\xa5\xa9n\x91u\xb9]RY\xccc\xe3\x13\x8b\u07b4Q\x91\x9cR2\xcc\xc5\xd9\x0e\x19?\xca\x1ch\xde\x1d%\xa3N\x85H\xb0\xf6֢\xe6\xab\x06\xf6\x960{\xdb\xd8^eQ\xb3\x05\x03\x06)\x05JH\x16\x18\fp\x9eu<t\xea\x06\x12\xf1۹y\u007f\x15\xe5\x1d\xef\x8a2G\x85'\x94
\x1d\x99\x85ku3\xa2~\x12\x04\xe5\x84~\xaaG\xd3n\x98\x95\xa0\xc8_1B\xcb\x0f\xb7;\x80\xe1xR\x86ij\f\xef\xd7SA\x950MQfz~)\x13\xc5\xf1\xf8\x91\x14\x9d_\xba\x82[=M\x81O\x1dFNj\xc1\x98\xe4"
Migrate all secrets to the new provider:
# kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Key rotation
Here we keep the same provider for encryption, but add a new key. Everything from this moment on will be encrypted with the new key, while old values remain encrypted with the previous key. At the end, we migrate everything to be encrypted with the new key.
Add a new key as the first entry of the secretbox provider (which is still the first provider):
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- secretbox:
keys:
- name: key2
secret: sAkccgM28JdPNCX9FfTcloYet1zp4OEAtHyViT038zM=
- name: key1
secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
- aescbc:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key2
secret: dGhpcyBpcyBwYXNzd29yZA==
- identity: {}
Restart and verify that the new values are encrypted with the new key:
# docker stop $(docker ps | grep k8s_kube-apiserver | gawk '{print $1}')
4bdac1937570
# kubectl create secret generic secret4 --from-literal=XX_mykey_XX=ZZ_mydata_ZZ
secret "secret4" created
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret4 -w fields | grep Value
"Value" : "k8s:enc:secretbox:v1:key2:\x92\xeeyj\x96\xfc쵪-8\x0e\xa7\x9a\xb0\x16\xe2\xb8J\f_\x81\xec\xf65\xa9\x1a\xe5\\xۛ%Ҝ\xbb\ax\xbf\x00Kz\xabaD\x1c\x94\x87\xaervsP\xf3q\xf3\xaeH\xb8\x95-\xef\r*[yl\xf3/\xc4\x0f\x00\a\x132\f\xe1\x17\xbf\xff\xb4;<\xec\xc2\x01\xa8\xc8f\xff\xcd\xf3ʦ\x83P\x01\xcdu\x16\x16\xfa\xba\x8f\xe6\xe5\x05\x96\xf7k,\xaa\xea\x0f\x99\x8f\xb3\xc7\xe6\xa4=\x93\x8a\xf3S\x17\xc6S\r\xee\xea㟷\x00\x945o\xe8\x8e:W\xacot\xeaj,P\x14\xbe\xd0\x13\xf91Y\xf0\xf0\x93fW\xcczD3\xb9\xa0\xb4\x9e\xef\x1aE\x16\xc8j_TX\xae"
ETCD authorization
Etcd can use two methods to authenticate users:
With username and password
With certificates, if started with --client-cert-auth=true; it will use the CN from the certificate as the username.
Unfortunately there are a couple of issues with this:
https://github.com/coreos/etcd/issues/9816: auth doesn’t work at all with the default etcd version. You need to update your etcd to version 3.2.18
https://github.com/coreos/etcd/issues/9691: this affects you if you try to create the openshift user inside etcd, but there is a workaround for it.
We use cfssl to create the certificates we need to connect to etcd. Let’s download the binaries and put them in our path:
# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O && /bin/mv ./cfssl_linux-amd64 /bin/cfssl && chmod +x /bin/cfssl
# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O && /bin/mv ./cfssljson_linux-amd64 /bin/cfssljson && chmod +x /bin/cfssljson
In order to enable etcd ACL, first create the root user:
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add root:secretpass
User root created
Create some roles to test our certificates.
Users with this role should have read access in the entire cluster:
# ETCDCTL_API=3 etcdctl $SSL_OPTS role add readonly_all
Role readonly_all created
# ETCDCTL_API=3 etcdctl $SSL_OPTS role grant-permission readonly_all --prefix=true read /
Role readonly_all updated
Allows users to read and write everywhere:
# ETCDCTL_API=3 etcdctl $SSL_OPTS role add readwrite_all
Role readwrite_all created
# ETCDCTL_API=3 etcdctl $SSL_OPTS role grant-permission readwrite_all --prefix=true readwrite /
Role readwrite_all updated
User with this role should only be allowed to access part of etcd tree:
# ETCDCTL_API=3 etcdctl $SSL_OPTS role add readonly_secrets
Role readonly_secrets created
# ETCDCTL_API=3 etcdctl $SSL_OPTS role grant-permission readonly_secrets --prefix=true read $SECRETS_PATH
Role readonly_secrets updated
Role that allows a user to read/write a specific key only:
# ETCDCTL_API=3 etcdctl $SSL_OPTS role add readwrite_secret4
Role readwrite_secret4 created
# ETCDCTL_API=3 etcdctl $SSL_OPTS role grant-permission readwrite_secret4 readwrite $SECRETS_PATH/default/secret4
Role readwrite_secret4 updated
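The roles granted with --prefix=true (like readonly_secrets above) match any key under the given path. The check is essentially a string-prefix test, which plain shell can illustrate:

```shell
KEY="/registry/secrets/default/secret4"
case "$KEY" in
  /registry/secrets/*) echo "covered by readonly_secrets" ;;
  *)                   echo "not covered" ;;
esac
# -> covered by readonly_secrets
```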
Create users and assign specific roles to them. Generate random passwords, because we don’t expect to use them:
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add reader:$(head -c 32 /dev/urandom | base64)
User reader created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add viewsecrets:$(head -c 32 /dev/urandom | base64)
User viewsecrets created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add admin:$(head -c 32 /dev/urandom | base64)
User admin created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add usersecret4:$(head -c 32 /dev/urandom | base64)
User usersecret4 created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role reader readonly_all
Role readonly_all is granted to user reader
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role viewsecrets readonly_secrets
Role readonly_secrets is granted to user viewsecrets
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role admin readwrite_all
Role readwrite_all is granted to user admin
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role usersecret4 readwrite_secret4
Role readwrite_secret4 is granted to user usersecret4
Since we are enabling user authorization, the user that the apiserver uses to connect to the etcd cluster needs special permissions: we will give it the root role. A Kubernetes installation has two connections to the etcd server: the apiserver and the livenessProbe.
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver-etcd-client.crt | grep "Subject:"
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/healthcheck-client.crt | grep "Subject:"
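The CN is the last path component of the printed subject line and can be pulled out with plain parameter expansion. The subject string below is an assumption of the typical kubeadm certificate layout, shown only for illustration:

```shell
# Typical kubeadm client-cert subject (assumed for illustration):
SUBJECT="subject= /O=system:masters/CN=kube-apiserver-etcd-client"
# Strip everything up to and including the last "/CN=":
CN=${SUBJECT##*/CN=}
echo "$CN"
# -> kube-apiserver-etcd-client
```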
Create a user for each name found in the CN field of these certificates:
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add kube-apiserver-etcd-client:$(head -c 32 /dev/urandom | base64)
User kube-apiserver-etcd-client created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user add kube-etcd-healthcheck-client:$(head -c 32 /dev/urandom | base64)
User kube-etcd-healthcheck-client created
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role kube-apiserver-etcd-client root
Role root is granted to user kube-apiserver-etcd-client
# ETCDCTL_API=3 etcdctl $SSL_OPTS user grant-role kube-etcd-healthcheck-client root
Role root is granted to user kube-etcd-healthcheck-client
Enable authentication:
# ETCDCTL_API=3 etcdctl $SSL_OPTS auth enable
Authentication Enabled
From this moment, nothing can connect to the etcd cluster without proper certificates. Let’s create a certificate for a user that is not defined in etcd and check that it has no access at all:
# function create_certificates {
NAME=$1
cat <<EOF | cfssl gencert -config=ca-config.json -profile=client -ca $CA_PATH/ca.crt -ca-key $CA_PATH/ca.key - | cfssljson -bare $NAME
{"CN": "$NAME","key": {"algo": "rsa","size": 2048}}
EOF
SSL_OPTS="--cacert=$CA_PATH/ca.crt --cert=$PWD/$NAME.pem --key=$PWD/$NAME-key.pem --endpoints=$HOSTNAME:2379"
}
# CA_PATH=/etc/kubernetes/pki/etcd
# cfssl print-defaults config > ca-config.json
# create_certificates tester
2018/06/18 18:52:54 [INFO] generate received request
2018/06/18 18:52:54 [INFO] received CSR
2018/06/18 18:52:54 [INFO] generating key: rsa-2048
2018/06/18 18:52:54 [INFO] encoded CSR
2018/06/18 18:52:54 [INFO] signed certificate with serial number 722235405009026418318053946143102861163105227800
2018/06/18 18:52:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# ETCDCTL_API=3 etcdctl $SSL_OPTS get / --keys-only --prefix=true
Error: etcdserver: permission denied
Allow an admin user to access the cluster for 2 hours only:
# cfssl print-defaults config | sed s/8760/2/ > ca-config.json
# create_certificates admin
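The sed above shortens the certificate lifetime in the generated ca-config.json: cfssl’s default client profile uses an expiry of "8760h" (one year), and replacing 8760 with 2 yields two-hour certificates. A stand-in illustration (the real file contains more fields than this fragment):

```shell
printf '{"default": {"expiry": "8760h"}}' | sed s/8760/2/
# -> {"default": {"expiry": "2h"}}
```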
The admin user can list keys:
# ETCDCTL_API=3 etcdctl $SSL_OPTS get / --keys-only --prefix=true
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
Get a key:
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret1 -w fields | grep Value
"Value" : "k8s:enc:aescbc:v1:key1:\xda
W0~\x83\xe4\x80Ճ$J\x1e\xa2\x02z\xc9\v\xd1\xd0$)\xb2K\x9f\xc2\xff\xcdJ5\xfa\"\x13\xc4\f\x86\xc0{P\xceW\x9e\xd1z;b$\x97\xe8\xb4l\xd0\xfa\xd8 \xe2Vc\x8c\xa2\xcd\xe5\xb0\x04(l\x18\x13\xbf\xe2\xb7|\xf1m\xef)\xfd\x97\xcbk-\"\xba\x819\xcf,_\xf6\fxP\xf2\x13\x94\x9b\xca\xf4\xde{d\xcb\xceq\x84q\xae\xaa\x06\x14\xb7q\x1d|L\x8eS\x8c\xc9$\x8e\x80D\xf0\xda\xe2si\xb6,@\xa2\xf9\xae\xf2~\xe3w\x8e4fr{e\x0f'\xcc\xf6\xe7\xadd\x83^\xdb\x03\xf1jT\x13>"
Delete the key:
# ETCDCTL_API=3 etcdctl $SSL_OPTS del $SECRETS_PATH/default/secret1
1
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret1 -w fields | grep Value
(no output: the key no longer exists)
Advance the system clock one hour and try to list:
# date $(date +%m%d%H%M%Y.%S -d '+1 hour')
# ETCDCTL_API=3 etcdctl $SSL_OPTS get --keys-only --prefix=true /
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch
# date $(date +%m%d%H%M%Y.%S -d '+1 hour')
# ETCDCTL_API=3 etcdctl $SSL_OPTS get --keys-only --prefix=true /
Error: context deadline exceeded
# date $(date +%m%d%H%M%Y.%S -d '-2 hour')
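What the clock jumps demonstrate is just a validity-window comparison: a certificate issued now with a two-hour expiry is rejected once the current time passes its not-after timestamp. A sketch of the same check in epoch-second arithmetic:

```shell
ISSUED=$(date +%s)
NOT_AFTER=$((ISSUED + 2*3600))          # two-hour certificate
ONE_HOUR_LATER=$((ISSUED + 1*3600))
THREE_HOURS_LATER=$((ISSUED + 3*3600))
[ "$ONE_HOUR_LATER" -le "$NOT_AFTER" ] && echo "still valid"      # -> still valid
[ "$THREE_HOURS_LATER" -gt "$NOT_AFTER" ] && echo "expired"       # -> expired
```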
Check that user ‘reader’ has read access everywhere but can’t delete anything:
# cfssl print-defaults config > ca-config.json
# create_certificates reader
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret2 -w fields | grep Value
"Value" : "k8s:enc:aescbc:v1:key1:7^İ\xe9\xc8\x1e\xa7̔=D+\x9e%\x1a\xf4\x10o@\xec\xc14&<Z\xd1\xde\xfa\xca-'#\xa2K\x1c\xff\x101a\x86\xb0\xd7.\xa9\x19\x04\x93m\xa1\xee\xacDe\x95/\xd8\xe7\xaehp~\xc9\x0e\xe9\x8f}\x9a\x8a\xb0f\xf9\xeb\xb7\u007f@\x87\xa0\xa6\x98\xe78\xd0+\xd45\"S\x17\x8c\x84\xa6ㅽb\xda\xe6\xfc\xa1\xd9[[~\x82\xfbKS\x82\xf0>o\xc1 \x8b&{\xa1\r\x14Un\x03\xf7\x1f=\xe5\x1b \xa7t\xed[\x8a\xec\xb8\xf1\xe4\xe2\xc1\x81\xb00=cbl·ɬ\x12`\xf2|\x1b\t\xe4#\xcd"
# ETCDCTL_API=3 etcdctl $SSL_OPTS del $SECRETS_PATH/default/secret2
Error: etcdserver: permission denied
Check that user ‘viewsecrets’ has access only to read secrets:
# create_certificates viewsecrets
# ETCDCTL_API=3 etcdctl $SSL_OPTS get --keys-only --prefix=true /
Error: etcdserver: permission denied
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret3 -w fields | grep Value
"Value" : "k8s:enc:secretbox:v1:key1:\xba\xf8,Q@\xb9\xb6q3$k\x04\xeeV\x99|Z'\xdeE<\xa5\xa9n\x91u\xb9]RY\xccc\xe3\x13\x8b\u07b4Q\x91\x9cR2\xcc\xc5\xd9\x0e\x19?\xca\x1ch\xde\x1d%\xa3N\x85H\xb0\xf6֢\xe6\xab\x06\xf6\x960{\xdb\xd8^eQ\xb3\x05\x03\x06)\x05JH\x16\x18\fp\x9eu<t\xea\x06\x12\xf1۹y\u007f\x15\xe5\x1d\xef\x8a2G\x85'\x94
\x1d\x99\x85ku3\xa2~\x12\x04\xe5\x84~\xaaG\xd3n\x98\x95\xa0\xc8_1B\xcb\x0f\xb7;\x80\xe1xR\x86ij\f\xef\xd7SA\x950MQfz~)\x13\xc5\xf1\xf8\x91\x14\x9d_\xba\x82[=M\x81O\x1dFNj\xc1\x98\xe4"
# ETCDCTL_API=3 etcdctl $SSL_OPTS del $SECRETS_PATH/default/secret3
Error: etcdserver: permission denied
Check that ‘usersecret4’ can access only a specific key:
# create_certificates usersecret4
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret3 -w fields | grep Value
Error: etcdserver: permission denied
# ETCDCTL_API=3 etcdctl $SSL_OPTS get $SECRETS_PATH/default/secret4 -w fields | grep Value
"Value" : "k8s:enc:secretbox:v1:key2:\x92\xeeyj\x96\xfc쵪-8\x0e\xa7\x9a\xb0\x16\xe2\xb8J\f_\x81\xec\xf65\xa9\x1a\xe5\\xۛ%Ҝ\xbb\ax\xbf\x00Kz\xabaD\x1c\x94\x87\xaervsP\xf3q\xf3\xaeH\xb8\x95-\xef\r*[yl\xf3/\xc4\x0f\x00\a\x132\f\xe1\x17\xbf\xff\xb4;<\xec\xc2\x01\xa8\xc8f\xff\xcd\xf3ʦ\x83P\x01\xcdu\x16\x16\xfa\xba\x8f\xe6\xe5\x05\x96\xf7k,\xaa\xea\x0f\x99\x8f\xb3\xc7\xe6\xa4=\x93\x8a\xf3S\x17\xc6S\r\xee\xea㟷\x00\x945o\xe8\x8e:W\xacot\xeaj,P\x14\xbe\xd0\x13\xf91Y\xf0\xf0\x93fW\xcczD3\xb9\xa0\xb4\x9e\xef\x1aE\x16\xc8j_TX\xae"
# ETCDCTL_API=3 etcdctl $SSL_OPTS del $SECRETS_PATH/default/secret4
1
Conclusion
In a cluster with multiple masters, where etcd servers listen on all interfaces and not on localhost, limiting the access to etcd is vital.
Kubernetes manages this with RBAC, but by default etcd is only protected by requiring the client to have a valid certificate.
While employing EncryptionConfig takes care of most of the issues, it is still possible to have data in etcd that is not fully encrypted. Since confidential data can end up in configmaps, or even in statefulsets and deployments as environment variables, EncryptionConfig alone is not sufficient.
If you cut off access to etcd entirely by using authentication, and only allow the apiserver to connect, you protect yourself from leaking sensitive data to others. For any other use you can create short-lived certificates, and no longer need to worry about their longevity.
The solution is also applicable to the users added previously to etcd: if you don’t feel confident about what happened to the certificates created for them before, you can revoke the users’ access, or delete them.
| https://medium.com/opsguru/securing-kubernetes-secrets-how-to-efficiently-secure-access-to-etcd-and-protect-your-secrets-b147791da768 | ['Anton Mishel'] | 2018-06-20 18:38:25.916000+00:00 | ['Etcd', 'Security', 'Kubernetes', 'DevOps'] |
Unpacking The Mess At Arsenal And What Comes Next Under Mikel Arteta | After letting Manchester City come to the Emirates on Sunday and comfortably walk away with the three points, Arsenal have dropped down to 10th place in the Premier League. Of course, losing to Man City in itself is not so bad, but to put up such a weak performance will be the real disappointment for Arsenal fans.
It only took two minutes for De Bruyne to give City the lead, and the game was wrapped up when that lead reached 3–0 before half time. The reality is that this was just another in a long line of poor showings from Arsenal, which has actually seen them fall to a goal difference of minus 3 in the league.
For further evidence of the dreadful season Arsenal are having, you only need to look at their recent form. They have managed just one win from their last twelve games in all competitions. That’s one win since the 24th October. A run of results that embarrassing seems almost impossible for a club of Arsenal’s size and for a squad that expensive.
To make matters worse, this form spans two different coaches. Yes, the situation at Arsenal is such a mess that they couldn’t even get a new manager bounce when Freddie Ljungberg was installed as interim head coach, replacing Unai Emery. Failing to see an upward trend in results or even performances makes it clearer, to any degree that it wasn’t already, that there are major flaws within the squad as well as with the managers.
So what are the deficiencies in the playing staff assembled by Arsenal? We might as well start with the most blatant area of weakness and that is in the heart of their defence. Time and time again they are punished for their lack of quality at the centre back position, something made all the more ridiculous by the fact that they knew that was the case coming into the season but failed to rectify it in the transfer market.
Truthfully, Arsenal’s decisions in the summer window were confusing at the time, but after seeing how the first few months of the season have played out, they now seem closer to thoughtless. Despite already being a top heavy team, they opted to spend big on a winger in Nicolas Pepe and go budget at centre back with David Luiz.
Now this isn’t a criticism of Pepe because in the end I think he will go on to be an excellent player for Arsenal. Where the problem does lie however, is they ended up in a situation where Pepe couldn’t even get into the starting eleven while David Luiz has been a regular in their backline alongside Sokratis, who combine to form one of the most error prone centre back combos in the Premier League.
Of course they also bought William Saliba, who has remained on loan at Saint-Étienne for the 2019/20 season, but that doesn’t make matters any better. It’s likely that Saliba will be a big addition for their defence in the long run but it doesn’t change the fact that coming into this season they knew centre back was perhaps their biggest weakness and their plan to fix it short term was to buy David Luiz.
Another problem that Arsenal have to contend with is the fact that while their best player is operating at his peak, they are in the early stages of rebuilding their team. By the time the 2020/21 season starts Pierre-Emerick Aubameyang will be 31, which means ideally Arsenal would already have a quality team built around him to compete in the top competitions.
Instead, you’re looking at a situation where the biggest strength of this Arsenal squad is maybe the amount of young talent they have at their disposal. By the time they have found their way out of the current mess and developed this crop of players coming through, Aubameyang’s prime Arsenal years may have been wasted. Still looking at their striker situation, to have both Aubameyang and Lacazette on the books, while seriously lacking in quality in both defence and central midfield, is simply poor squad planning.
In order to get them both into the starting lineup a two striker formation is required, at least that is if you’re going to get the best out of them. Already that is limiting whoever the manager is quite significantly. More importantly, to have two of your best players in the same position, who were both bought for large transfer fees within six months, should be an indicator there are problems with the transfer decisions at Arsenal.
Then there’s also Mesut Ozil to consider. He’s on a massive deal that doesn’t run out until 2021, but right now his performances are far from justifying the amount he’s earning. Under Emery, Arsenal actually found themselves in a position where their highest earner was regularly being left out of the matchday squad.
With the benefit of hindsight I think many fans would accept that letting Ozil leave, as they did with Sanchez, would have been the better option instead of breaking their wage structure to keep him at the club. Maybe going forward Ozil rediscovers some of his best form, but in the more likely scenario that he doesn’t, it won’t exactly be easy to move him on either.
What’s next for Arsenal then? It appears Mikel Arteta is the chosen one to be their next permanent manager, and honestly I quite like that choice. Before going any further, it is worth saying that, as Arteta has no previous experience as a manager, it is hard to properly judge this appointment. What I will say, though, is that a lot of the talk of this being too risky an option is overplayed.
What coach out there, that Arsenal could realistically attract, wouldn’t be a gamble? Who out there would have come in and been a nailed on success? There are none. When they appointed Emery they went for experience and a trophy record but look where that got them. This time maybe a different route is best.
Moreover, as mentioned previously, there is a lot of quality young talent on that Arsenal roster so opting for a younger coach who can oversee a longer term project is ideal. The circumstances he is arriving into might help him as well. After what we have seen from Arsenal so far this season, the expectations for this squad are almost nonexistent and the current campaign is bordering on a write-off.
Such a low bar for the remainder of 2019/20 would allow Arteta to find his feet and assess his squad properly. That’s not to mention that with his arrival, you would expect, would come the style of football that Arsenal fans want to see back at the Emirates. A more attacking brand of football would be a change from the more pragmatic approach of Unai Emery that in the end was near lifeless.
As for the players, one idea that has been circulating on social media is the sale of Lacazette to free up more funds that can be redistributed to other positions more in need of reinforcements. For me it is a far from ideal scenario, but Arsenal cannot afford to enter another season without seriously addressing the centre back and central midfield positions.
If we were to be looking for the positives that Arsenal can build upon, then obviously there are the two mentioned already in Aubameyang and the promising group of youngsters. Aside from that they look to have a quality fullback pairing in Tierney and Bellerin, once they can get them both fit of course. Finally there is Bernd Leno who is quietly going under the radar as one of the best keepers in the league.
So, there is plenty for Arteta to work with when he returns to Arsenal as head coach and should Arsenal implement a better strategy in the transfer market, then I think their fans have plenty to be optimistic about. At the time of his departure it was said you don’t want to be the man who replaces Wenger, the expectations would be too high. Now however, Arteta can give Arsenal a fresh start and watching to see if he can succeed will be one of the Premier League’s most interesting plots going forward.
| https://jackmcc98.medium.com/unpacking-the-mess-at-arsenal-and-what-comes-next-under-mikel-arteta-79c9c4b6a936 | ['Jack Mccutcheon'] | 2019-12-20 14:32:54.757000+00:00 | ['Arsenal', 'Premier League', 'Soccer', 'Football'] |
Teaching Poetry Through Imitation | Photo by Toa Heftiba on Unsplash
Mimicry Poetry can be a lot of fun and can teach your students rhythm, rhyme scheme, and theme.
In a poetry lesson, I had my class try to rewrite Lewis Carroll’s Jabberwocky, about something they might fear and conquer. They had to try to keep the rhyme scheme and meter the same. So I tried mine:
’Twas early in the days gone by
did float and wander in the trees
All imagined if my eyes did lie
And thoughts myself did tease.
“Beware the clock, my daughter!
The hours that bite, the times that catch!
Beware the older ones and shun
The memory will unhatch!”
I took my pad and pen in hand:
Long time the ticking foe I sought
So rested me by the willow tree
And slept awhile in thought.
And as in awakened dreams I stood
The years passed without a flame
Came racing only as time could
And whispered out my name!
One, two, three, four! And more and more
I’d written my life on the page!
I went ahead, no fear or dread
embraced my growing age.
And, did I beat the growling clock?
It is after all a small toy!
Cherish this day! We have today
For gladness and spread joy.
’Twas early in the days gone by
did float and wander in the trees
All imagined if my eyes did lie
And thoughts myself did tease.
| https://medium.com/sky-collection/teaching-poetry-through-imitation-6011ea7ba4c0 | ['Samantha Lazar'] | 2019-12-27 13:57:11.077000+00:00 | ['Poetry', 'Nonfiction', 'Teaching And Learning', 'Education', 'Teaching'] |
For Those Who Ruminate | For Those Who Ruminate
Part 1 of a series on Rumination
Rumination might not be the most widely used word that we hear every single day, but there’s a good chance most of us have a general idea of what it means. I never gave it much thought, but when I was recently reading about rumination, I swiftly realized that it’s a topic that I can greatly connect to.
Many experts have spent the last several years researching rumination, and are beginning to come to some conclusions that the act of it (ruminating) is a human behaviour that links to mental health conditions like Major Depressive Disorder. The term, ruminating, is a bit of a broad word, and it has come to represent many different branches on the mental health tree of behaviours. It covers a large spectrum of levels too, some minor, while others have the chance to be debilitating.
Ruminating comes in many forms. I won’t say that it’s always totally bad; some people do believe certain types of ruminating are necessary in their lives. It’s not always black and white. Just look at the ways it manifests, and you’ll see that some of the things it includes carry some complexity with them.
Rumination can sometimes carry an obsessive nature with it. Because of that, we see many references to obsessive compulsive disorder, and the theory that ruminating can have a strong hold on those suffering from OCD. It also encompasses similar behaviours, and it shows itself in forms like over-worrying, especially when the worrying is over something small or exaggerated.
We worry, we think about the same things for long periods of time, we “stew” about things, rack our brains, we overanalyze, and of course, we can become obsessive over repetitive thoughts.
Timing can be terrible. What doesn’t help, is that a majority of this stuff happens right as we’re lying in bed, all is quiet, and we are attempting to start our night’s sleep. So right there, we can have anything from occasional sleep issues to full-blown insomnia. Just another piece on the rumination list.
So, like mentioned at the beginning, I came to a realization that I connect to rumination a lot. I think I knew that subconsciously for decades, but after all that time, I’m only coming to discover it only now. It was my new venture of learning about Mindfulness that seemed to guide me right to this topic.
I can relate to the ruminating (at maximum peak) right in that first hour of the night that we’re trying to go to sleep. I’ve suffered those brutal effects many times.
I’ve always done it, and it’s had many peaks and valleys throughout my life. It connected to my mental health issues, and past addiction issues. The addiction itself was a way of self-medicating or emotion-dodging, done in order to just shut the damn ruminating off in my head. Was I ever really shutting it off? Or was it more like simply muting it?
I also suffered greatly in some past relationships. With all my erratic behaviour came rumination out of guilt. When I was losing all kinds of people and trust, I would spend days stuck in my own head, trying to convince myself of who allegedly hated me, who was about to discover something bad I had done, who was soon going to call me and berate me. Or worse, when would police be knocking at my door. And for what reason, as there were usually multiple things they could choose from. There is nothing worse than the ruminating that develops as a result of doing things immoral, dishonest, illegal, or just plain ethically disgusting. It brings a strong paranoia that is so difficult to shake off.
Financial issues were a big part of this for me as well. Living a life of addiction, or just being erratic in general, created difficult financial situations. Times were sure tough when ruminating over how to get the rent paid, when it’s seven days late and the bank account balance is $0. Sometimes, it was those kinds of situations where rumination would lead to even more erratic decisions.
That was the thing with ruminating. For as much thinking as the process of ruminating seems to involve, it’s actually very illogical thinking. Endless, constant, illogical thinking, taking the form of rumination.
Not to mention that ruminating thoughts usually never took the form of true problem solving, with a start, a plan, and a solution. The problem with rumination is that it’s thinking and repeating the same thing over, over, and over again. We seem to have this one subject in our minds, spinning fast. However, it’s in a tornado motion. So it’s fast and churning, but stuck at the same time.
This is why it is connected to mental health issues. Especially the fact that ruminating already existing obsessive thoughts is known to usually strengthen or prolong mental health issues.
It’s something that seems quite familiar to me. Ruminating is something I’ve put myself through at almost all periods of my life at some point. When I look back at its presence as it connects to my mental health, I know it certainly prolonged it at times. It also most certainly made it worse at times. I feel strongly that it acts as a fuel behind my habits of procrastination and unproductiveness — especially when it came to projects that got slightly started yet never completed. That can be another unwanted trait of all of this. Plenty of ruminating, but not a lot of getting things done.
Can we completely rid ourselves of rumination? The human factor makes my opinion say probably not. Control has to be gained. That is the goal to face and address. Next time, we take another look at rumination, and the techniques we can learn on how to control it. | https://medium.com/illumination/for-those-who-ruminate-9ecf2eb5e375 | ['Michael Patanella'] | 2020-12-09 01:47:35.894000+00:00 | ['Self Improvement', 'Life Lessons', 'Self', 'Mental Health', 'Life']
Editor’s Faves — Top 10: Why Coding Knowledge Is the Future of Literacy | Here is a list of our top 10 writers, covering everything from learning to code to object-oriented programming — though it includes less technical pieces as well:
10. How To Code With SoloLearn
Agnes Laurens is a radio journalist, violinist, a mother, and a painter. She is a fabulous writer from the Netherlands. She is an editor for Illumination as well.
In this interesting story, she is sharing how she learned to code and found that it was not difficult. Don’t miss this one.
“A girl can’t code”, people say often to me, but I think this is very presumptuous. People assume all sorts of things that might not be true. For example that men are not good for the fashion industry. Look at black people now, they have been put in corners they don’t want to be, and they don’t need to be. There are always groups that are willing to belittle other groups of people. Since a few years ago, I have been trying to learn code. Every attempt failed.
9. The Best Programming Languages to Learn First — A Roadmap for the Indecisive Beginner
Mark Basson loves to write about programming. You’ll love his writing style.
Anyone can learn to code, but what programming languages should absolute beginners learn first? There are so many choices! If you are suffering decision paralysis, then I encourage you to stop researching and instead use the time to learn something that will get you off to a flying start. The following is a roadmap that will get you into the world of programming in the most frictionless, time-efficient, and cost-effective way possible. I won’t give you a multitude of choices. I will show you the exact courses to take so you don’t have to do any more searching.
8. OOP Concepts explained with real world scenario + (java)
Nisal Sudila is a programmer and a writer. He explains the basic concepts of object-oriented programming to our readers. If you are new to OOP, don’t miss this one.
OOP concepts cannot be grasped without understanding these 6 core concepts in programming. They are:
Class
Object
Encapsulation
Inheritance
Abstraction
Polymorphism
7. Data Literacy — A New Skill to Fuel Career Growth, Especially in a Non-Tech Company
Andy Teoh is an advanced analytics professional. He is an excellent writer and he knows what he is talking about.
Understanding data is becoming a major skill in an age where data is abundant. Do check his other work as well.
Data literacy is the ability to read, analyze, and communicate to others with data in context, including an understanding of data origins and constructs. In this golden era of digital and data, we do not need to be Facebook or Walmart to build a data lake in our company. With the advancement in both hardware and cloud-based systems, this opportunity is now readily available to every company for a modest cost. As organizations amass much more data at an unprecedented rate, it becomes increasingly important for all employees — and not just data scientists — to be data literate so that we can contribute better in our roles and help our companies sharpen their competitive edge in today’s aggressive global economy. According to a survey sponsored by Qlik, 94% of respondents indicated that data literacy is key to developing professional credibility to boost career growth.
6. Eight Ways to Identify a Fake New Facebook Friend Request
Tom Handy is an investor and an excellent writer. His style is simple, direct, and engaging. If you like this story, do check his other work.
You just received a friend request on Facebook from someone you don’t know. The most likely thing you do is accept the friend request. The friend request was from a pretty woman who you think is harmless. Wrong! This could be an online scammer using a fake profile trying to connect with you so they could ruin your life. These days it is very easy for someone to create a fake profile and then scam you. I have figured out a few clues that I will share with you that you need to know.
5. How Is Technology Changing the Way We Read?
Jason Ward is a freelance journalist, author, and writer. His writing style is simple, direct, and engaging. You’ll fall in love with his work. Don’t miss this one.
For several millennia reading was limited to a very select few. Books, scrolls and parchments were written by hand and were both prized and rare. Only a small percentage of people were able to read. Then, in the 15th century, that began to change when Johannes Gutenberg invented the printing press. Literacy rates began to rise but progress was slow. However, the Industrial Revolution and the ability to mass-produce paper soon changed that. Education, news, and the rise of popular novels and literature soon became mainstream, leading to a correlating growth in things like libraries and bookshops. People discovered the joy of the written word.
4. Finding Order In The New World
Stuart Englander loves to write inspiring stories. He is an excellent writer. You’ll love his short story.
They arrived as a group of two dozen adventurers to find a barren and desolate plain, and there was no turning back now. Within a few years of sweat and toil, they turned the soil into a burgeoning landscape, and ultimately, it became a fully functioning ecosystem. This group of like-minded pioneers had much to be proud of, not the least of which was the creation of a new community, a garden of prosperity. Marvin Stafford perched on his favourite boulder, a pinkish-red block just outside his door. He stared across the still, rough landscape outside the compound, reminiscing over the past fifty years. He’d been here from the beginning, an unlikely leader who became the driving force behind the village’s success.
3. Worlds in The Magic of a Gamer’s Brain
Tree Langdon is a writer and a poet. She is very intelligent and her writing style is simple but thought-provoking. She is a superb writer.
In this story, she shares an experience when she attended a writing course with some young people who played a lot of games. Read to learn what happened next.
Recently, I was surprised to discover how computers and gaming can enhance your creative writing skills. I enrolled in a writing course at our local college and our first assignment was to write a short story using a scene to create emotion. As the students read their stories in class, I realized I approached writing from a different angle than the other students. In one way, that made sense. I have a lot more years to draw from, so I pulled from those experiences. But that wasn’t quite what was going on.
2. Review on Li-Fi: An Advancement of Wireless Network
Arslan Mirza is a freelance writer. He is interested in new technologies and he wants to share his information.
The concept of visible light wireless communication (Li-Fi) technology was proposed by Professor Hass in the United Kingdom at the TED (Global Technology and Entertainment Design) conference in 2011, and the preliminary application of optical transmission data was successfully realized in 2012. In October 2013, the Fudan University experiment also successfully realized the wireless transmission of visible light indoors. Through this technology, one LED light can be used for the normal Internet access of 4 computers, with a maximum rate of 3.25 Gbps and an average Internet rate of 150 Mbps
1. A Hackathon Gave Birth To One of The Most Influential Company
Shubham Pathania is a coder and a writer. He lives between fiction and reality — his words, because he is a great writer. Do encourage him by reading his story. | https://medium.com/technology-hits/editors-faves-top-10-why-coding-knowledge-is-the-future-of-literacy-a0d6864e4ad2 | ['Dew Langrial'] | 2020-12-16 21:48:24.417000+00:00 | ['Writing Tips', 'Readinglist', 'Self Improvement', 'Reading', 'Writing'] |
Three misconceptions about Serverless, and why Serverless is often misunderstood? | Serverless technologies come in many guises.
Cloud platforms such as AWS offer many different kinds of ‘Serverless’ tech, such as Fargate, Lambda, Aurora, API Gateway, etc.
Misconception One — You will be vendor locked in.
Part of the power of serverless is that it can easily access additional cloud services from within the same platform.
Because of this, there is a tendency to use multiple services from within the same cloud provider.
It is undoubtedly possible for, say, a function in AWS Lambda to call out to a service in Azure or GCP, but you then have to start looking at things like security boundaries, authentication, etc.
The reason people misconceive that you have to go with one cloud provider is that it is just so much simpler to do so.
Using multiple cloud vendors also reduces the cost-effective nature of serverless.
The cost of developing a small piece of the puzzle, and the cost of running it, is so tiny in comparison to a full-blown application that it simply does not make sense cost-wise to then call another cloud service provider.
Cloud providers tend to use plug-and-play style hooks and make integration of services very straightforward, so if you want to avoid vendor lock-in, it is very much so possible, it just might cost you a bit more in time to do so.
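To make the portability point concrete, here is a minimal, purely illustrative sketch of keeping business logic provider-neutral behind thin per-platform adapters. It is written in Python; the event shapes and handler names are my own assumptions, not any provider’s exact contract.

```python
import json

def greet(name: str) -> dict:
    """Provider-neutral business logic: no cloud SDKs, trivially portable."""
    return {"message": f"hello, {name}"}

def aws_lambda_handler(event, context=None):
    # AWS-style adapter (the event shape here is illustrative).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps(greet(name))}

def azure_style_handler(req: dict) -> str:
    # Azure-style adapter (the request shape here is illustrative).
    name = req.get("params", {}).get("name", "world")
    return json.dumps(greet(name))
```

Isolating the core function like this is one pattern for softening lock-in: only the thin adapters need rewriting when you move providers.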
Misconception two: Serverless == Functions as a Service
There is a common misconception that serverless is just a Function as a Service (FaaS), such as AWS Lambda or Azure Functions.
This is entirely false.
Although FaaS can make up the compute part of serverless, there are different ways to do this, and many other services out there provide functionality beyond compute, such as databases or gateways. | https://medium.com/swlh/three-misconceptions-about-serverless-and-why-serverless-is-often-misunderstood-a74b7bba4102 | ['Craig Godden-Payne'] | 2020-08-04 10:23:26.334000+00:00 | ['Event Driven Architecture', 'Serverless', 'Software Development', 'AWS', 'DevOps']
How to migrate your company to a new product overnight | Photo by Kevin Ku on Unsplash
If the platform you develop is used by millions of people every month, it’s hard to imagine how challenging an undertaking it would be to move it to a completely new version, or to merge it with another service during a fusion.
Across the web, you can find plenty of articles claiming that most such projects fail. For more than two years, I took part in one of these projects.
Back then, we had two platforms serving our users in different countries. Our goal was to migrate them to a single one, without causing a disaster or significant downtime. In the article below, I’ll share with you how we did it and what we learned along the way.
Revolution starts from the business side
In a nutshell, a startup’s acceleration process is grabbing more and more money from investors in a series of funding rounds. If the startup is good enough and grows well, it can count on even millions of dollars of investors’ money.
I’ll never forget one of our all-hands meetings back in 2016, during which, our CEO Mariusz, told us that we just closed another financial round and we will use acquired money to perform a fusion with our then biggest competitor — Spanish Doctoralia.
We had done fusions with other, smaller companies in the past, but we had never stood in front of such a great challenge. Soon after sharing the news, our minds were full of not only happiness but also anxiety.
Doctoralia had a great product and a team of outstanding people, including a well-scaled IT department. With ease, they managed to win their home market, a couple of other European ones, LATAM countries, and Australia. At the same time, Docplanner was present on a handful of other European markets as well.
Why joining our products had a lot of sense
Both services allowed users to book a doctor’s appointment, write opinions about doctors, or ask them questions. Of course, they both had some small differences in business logic, but the key features were basically the same.
Doctoralia was developed in .Net, while Docplanner in PHP. It was hard to imagine further product development in this scenario — it would end up writing the same functionalities twice, solving a doubled amount of bugs. It wouldn’t work at all.
Another problem is data architecture, which, in large projects, usually reflects not only best practices but also a long chain of product development decisions. Even if both platforms have entities like user, doctor, and appointment, they are represented by totally different data structures. If one wants to migrate them between the platforms, data transformation is needed for every entity.
For the above reasons, a lot of companies refrain from performing a fusion and prefer to just do a takeover of both software and the people who support it, expanding the markets as one corporation, but with a handful of independent business entities full of people who do not share the same work culture or may not even know each other at all.
This is not the way we run our business. Because we believe that, in the long term, people and good relations are way more important than accountancy, we didn’t want to just take over another company, earn more money, and move on. We chose to make a fusion of our products, to share our past experiences, and to develop a single, uniform product together.
This change took us almost two years. But from today’s perspective, I’m 100% sure it was worth it. Maybe most of the company mergers fail, but hey, we made it and it was the best decision we could make.
Assumptions and naive estimations
Our product is divided into two parts — one for doctors, which we call SaaS, and the other for patients, called the marketplace. At that time, SaaS was at the beginning of its shiny road, thus didn’t require much effort to make it work in all of the markets we had.
Marketplace, on the other hand, was a huge challenge. Both Doctoralia’s and Docplanner’s implementations were very large, taking literally millions of lines of code and handling huge amounts of data.
To handle the migration with ease, we created a new dedicated team — Merge. We took the following assumptions:
We have 11 markets to migrate.
Migrate one country at a time, without splitting it into phases.
Start with smaller markets first — it’s safer in case of failure.
If the upcoming market has some functionalities on the Doctoralia side that we lack in the Docplanner marketplace, implement these before migrating.
Aim for as little downtime as possible (a couple of hours max).
The Merge team offers full support for the migrated market for the first two weeks after the process is finished.
Looks naive? Maybe, but this is exactly what we did. The only thing naive, in the end, was our estimations, because initially we thought it was going to take a couple of months to finish the job.
Market migration — MVP approach
One of the golden rules for developers is to not reinvent the wheel. While it works for most cases, it was the main reason for the two big failures we “achieved” during the migration of the first couple of countries.
Our first approach was what startups really love — an MVP. We used to shovel data around in the CSV format in the past, so why should we act differently this time?
From practical experience, it looked quite simple — we extracted data from Doctoralia’s database, transformed it into the format we support in Excel, then pulled it in using importers we created. I must say that for smaller data batches it worked pretty well, giving our PO the ability to import data back and forth in the testing environment whenever he wanted.
The CSV rollercoaster allowed us to migrate the data of the first two markets. But because they were also the markets with the lowest amount of data, we hit our first wall with the third country.
It turned out that, even on a relatively powerful machine, there’s a limit to the amount of data that Excel is able to process in a reasonable amount of time. When we started to experience more and more freezes, we decided to drop the CSV approach and look for something better.
We still didn’t want to reinvent the wheel though and to write everything from scratch. This is why we started to check what current ETL solutions had to offer.
After a brief search and a couple of meetings, we decided to give Xplenty a shot. This cloud-based tool, apart from its extensive data-manipulation feature set, offers an awesome UI to manage the whole process. As backend developers, we’re not “clickers” by nature, but the ability to set up different data flow scenarios this way totally won our hearts. To top that, the tool wasn’t entirely new to our organization, as our Business Intelligence department used it and highly recommended it.
As I just wrote, Xplenty is great. It is so great that we abandoned it much later than we should have — around 2 months into the data-modeling nightmare we had led ourselves into. Of course, we gave it a test drive before we opted in, but we were simply too optimistic and refrained from performing more complex testing.
A tool that was so simple to use turned against us when it came down to migrating data with more complex one-to-many relationships. One day, looking at the screen below, we finally opened our eyes and decided to stop using it.
Importing doctor’s basic data in Xplenty
I’m still too afraid to calculate how much money we lost. I don’t even mean the payrolls of a group of really talented developers, but the time impact on the whole company awaiting the completion of our fusion.
Morale dropped a lot, but the ball was still rolling and we didn’t want to let it go.
Third time’s a charm — do reinvent the wheel
The lesson from the above chapters is simple — some special cases require tailored solutions, so if you want to get the job done right, do it yourself from the ground up.
The three main pillars of a technology stack able to complete such a broad task are easy scalability, easy development, and easy monitoring.
Easy scalability
CSV lesson taught us that we have to be prepared for larger and more complex sets of data. With every next market, it’s gonna grow, and we have to keep in mind our promise — to migrate everything overnight.
To give you a sneak peek of what we achieved — with the CSV approach it took us around 1 hour to migrate 16.000 users and 34.000 doctors. With the custom solution I’m about to describe, migration of the Brasil market that held 2 million users and 600.000 doctors, took roughly 16 minutes.
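For a rough sense of scale, the quoted record counts and durations work out to roughly a 195× throughput improvement. The back-of-envelope arithmetic (using only the numbers stated above):

```python
# Throughput comparison of the two approaches, using the figures quoted above.
csv_records = 16_000 + 34_000          # users + doctors, CSV approach
csv_seconds = 60 * 60                  # ~1 hour
custom_records = 2_000_000 + 600_000   # Brasil market, custom tool
custom_seconds = 16 * 60               # ~16 minutes

csv_rate = csv_records / csv_seconds           # ~14 records/s
custom_rate = custom_records / custom_seconds  # ~2,700 records/s
speedup = custom_rate / csv_rate               # ~195x
```

The exact per-record work differs between markets, so treat this as an order-of-magnitude comparison, not a benchmark.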
Easy development and maintenance
The only way to build tools like these is to separate them from business code as much as you can. The whole time, we were working on a living organism — a marketplace codebase maintained by dozens of other developers every day. We had to keep in mind that we could not stop other teams from developing our product while we worked on the next market migration.
Easy monitoring
Without proper monitoring of the migration process, you are blind. Of course, testing the app thoroughly after data migration is super important, but having proper tools to monitor the migration process in real time is crucial.
Let’s talk about our solution in detail
Below I’m describing key points in our tool’s implementation and overall development strategy, that allowed us to develop it fast and with low-risk in mind.
Stack and environment setup
A proper environment setup requires a hybrid approach. All of the code related to missing features and data importers should be merged to the main repository branch as often as possible. You should use as much production code as possible for importing new entities, because it gives you perks related to data persistence — like triggers, URL generators, etc. — for free. Also, if model definitions change in the production branch, you can quickly detect it, react, and apply the required refactors to the importers.
On the other hand, introducing configuration changes — like a temporary database configuration, or services needed solely for the migration process — might easily break production or be hard to roll back. For these changes, the best strategy is to apply them on a separate branch based on the main branch of the repository. Working testing and migration environments should be based on that branch. After the migration is done, there is no simpler way to switch back to the production configuration than just deleting this branch.
To build our migration software, we used technologies that we already know — PHP7, Symfony, RabbitMQ, MongoDB. Can you find technologies that would perform better for such a kind of a task? Surely you can. But picking fresh technologies that you barely know might get you into more trouble than you can imagine in the long term.
Another great advantage of this choice is that you can write the whole thing inside your target app, which is really convenient. You don’t have to create another communication layer to, e.g., an external service. You can use all of the stuff that you already have — not only the mentioned triggers and URL generators but also, for example, data validators. Rewriting them would consume a lot of time and bring close to no added value to the whole project.
Migration flow and breaking the rules
There are two sides of the migration — source (Doctoralia marketplace) and target (Docplanner marketplace).
To migrate data, we connected directly to the source database. Keep in mind that during the migration process the Doctoralia marketplace was still running in production mode. To find a compromise between keeping the app alive and preserving data integrity, we decided to switch its database to read-only mode and aimed for late-night hours, which lowered the possibility of new data coming in (new bookings, users).
From the performance side, the best approach is to pull all of the data of a given type at once. Executing a select-all query for 1.5 million users might take some time, but on the other hand, you cut down a lot of latency and network time for future data processing, as communication with the source database is no longer needed.
The next step is data validation. If validation for some record didn’t pass, we remove it from the stack and put it aside for future analysis. Then, all of the validated records are converted to RabbitMQ tasks with the rule: one record, one task. If for some reason one task fails, it will not stop other imports. Of course, it is good practice for RabbitMQ tasks to carry as small a payload as possible (preferably an ID), but putting the whole data payload for a single record inside the task saves you some time while importing.
Another rule to break here is Don’t Repeat Yourself. For regular software development it’s a must, but when you want to cut down business and architectural dependencies between different entities, code duplication is indeed what you sometimes need. For example, our diseases, services, and drugs dictionaries share a very similar structure, so it is tempting to write one importer to handle them all. In a perfect world, I would agree, but in reality, when you should expect business changes around every corner, I strongly advise you not to follow that path.
The above steps can be developed as a pretty straightforward structure: for each type of data, you need a single generator (for example, a Symfony Command), which handles querying for data, validating it, and publishing the tasks. On the opposite side, you need a single handler for each data type, which does the actual import from the RabbitMQ task. This way the structure is kept simple and it’s dead easy to scale.
These huge return buttons are pretty comfy
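As a rough illustration of the generator/handler split described above — sketched in Python with an in-memory deque standing in for RabbitMQ, since the real implementation was PHP/Symfony; every name here is hypothetical:

```python
from collections import deque

queue = deque()   # stands in for RabbitMQ
rejected = []     # records that failed validation, put aside for analysis
imported = {}     # target-side storage, keyed by freshly assigned IDs

def validate_doctor(record):
    # Real validation reused the target app's validators; this is a stub.
    return bool(record.get("name"))

def generate_doctor_tasks(source_rows):
    """Generator side: validate every row and publish one task per record."""
    for row in source_rows:
        if validate_doctor(row):
            # The whole payload travels in the task, avoiding a re-query
            # against the source database at handling time.
            queue.append({"type": "doctor", "payload": row})
        else:
            rejected.append(row)

def handle_task(task):
    """Handler side: import a single record; one failure can't stop the rest."""
    new_id = len(imported) + 1   # the target system assigns a fresh ID
    imported[new_id] = task["payload"]
    return new_id

generate_doctor_tasks([{"id": 7, "name": "Dr. A"}, {"id": 8, "name": ""}])
while queue:
    handle_task(queue.popleft())
```

With a real broker, the `while` loop becomes independent consumer processes, which is where the easy horizontal scaling comes from.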
Grouping imports into stages
A great strategy to optimize and automate imports a bit is to group them into import stages, ordered in a way that reflects data relations. If you have to import 100 different types of entities, out of which 20 are independent dictionaries, there’s nothing against importing those simultaneously.
In our case, we used the Symfony feature that makes it possible to programmatically run predefined commands. We created configs that described stages, their relationships, and the import data types they were responsible for. It lowered the number of steps needed to migrate the full dataset from over 100 to about 20. Less clicking, less space for human errors.
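A minimal sketch of such a stage config might look like the following (Python for illustration only; the stage and job names are invented, and the real thing drove predefined Symfony commands):

```python
# Hypothetical stage config: jobs inside a stage are independent of each
# other and may run in parallel; stages run in order, reflecting data
# dependencies (dictionaries before core entities, core before relations).
STAGES = [
    {"name": "dictionaries", "jobs": ["import:diseases", "import:services", "import:drugs"]},
    {"name": "core",         "jobs": ["import:doctors", "import:users"]},
    {"name": "relations",    "jobs": ["import:offices", "import:bookings"]},
]

def run_stages(stages, run_job):
    """Run every job stage by stage; run_job dispatches one import command."""
    executed = []
    for stage in stages:
        for job in stage["jobs"]:
            run_job(job)
            executed.append(job)
    return executed

order = run_stages(STAGES, run_job=lambda job: None)
```

Encoding the ordering in data rather than in people’s heads is exactly what shrinks the number of manual steps — and the room for human error — during the migration night.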
Ensuring data integrity
One of the problems with migrations chunked into stages is maintaining data integrity and its relationships in a new application. Assuming that the ID of every entity might change, we have to come up with a mechanism that will make it happen.
Let’s take a look at the following example — a doctor and his offices. These are two different entities in a one-to-many relationship. To migrate this dataset in a proper way, we first need to determine the correct order. Of course, we will pick the Doctor entity first, as in this data set it remains independent. The doctor will be imported and will be given a new ID that is compatible with the target application.
The next step would be to migrate the doctors’ offices data. But how do we maintain the relation when previously imported doctor entities have new IDs? There are two ways out of this problem. The first one is creating a new column in the doctor entity definition that will hold the ID from the source system. I do not recommend this one, as this approach will force you to modify the schema of every entity you import, which will result in more cleaning up afterward; plus, it is not transparent to other developers in the organization.
A better solution is to go a bit more “hacky”. Create a single table that will hold the source entity identifier, the target entity identifier, and the entity type. If you migrate data from multiple systems into one, you can add another column determining the source.
I called this approach “hacky” because this way you lose what most RDBMSs offer you — relation checking. In other words, you cannot have foreign keys. On the other hand, what you gain is having a single place where all of the mappings belong, fully transparent to the production code of your target system. Therefore, while importing doctors’ offices, just exchange the old doctor ID for the new one using this table, and you are good to go.
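The mapping-table idea can be shown in a few lines. In this sketch a Python dict stands in for the real database table, and all identifiers are made up:

```python
# Single id-mapping store: (entity_type, source_id) -> target_id.
# In production this was one DB table without foreign keys; a dict here.
id_map = {}

def remember(entity_type, source_id, target_id):
    id_map[(entity_type, source_id)] = target_id

def translate(entity_type, source_id):
    return id_map[(entity_type, source_id)]

# A doctor is imported first; the target system assigns a new ID.
remember("doctor", source_id=123, target_id=1)

# Later, an office record arriving with the old doctor ID gets rewritten
# before it is persisted in the target system.
office = {"doctor_id": 123, "street": "Main St 1"}
office["doctor_id"] = translate("doctor", office["doctor_id"])
```

A missing mapping here (a `KeyError` in the sketch) is exactly the kind of record you want to put aside and inspect rather than import with a dangling reference.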
Maximizing amount of properly imported data
Chances that you will manage to migrate all of the data between systems that have a lot of differences in business logic are, sadly, low. It’s definitely achievable, but pragmatically it’s better to find a compromise between the amount of imported data and time spent to normalize it.
Let’s take a look at another example — the phone number field. In Docplanner’s marketplace, we were always very strict when it came to phone number handling. We always checked if they were real, then formatted them to a uniform format before persisting. Doctoralia, on the other hand, was a bit more tolerant in this matter — the system allowed any input that looked like a phone number. It could be a single phone number, a couple of them separated by spaces or commas, combined with internal PBX numbers, and so on.
To some extent, you can define patterns that capture how most people tend to write their phone numbers, but if you have a dataset of a couple of million records, it’s unlikely that you will cover every case. Sometimes it’s even impossible — not for technical reasons, but for business ones: you received three phone numbers in your input, while the target system can handle only two. In these situations, you have to maximize the number of cases you are able to handle, but never target 100% coverage unless it’s crucial for the business.
Just a reminder here to use what you already have in your target system — if you already have pretty decent data validators, use them instead of building new ones.
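A best-effort normalizer for such a free-form phone field might look like this. The patterns below are deliberately simple and purely illustrative (the real validators lived in the PHP target app) — real input will always contain cases no pattern set covers 100%:

```python
import re

def normalize_phones(raw, max_phones=2):
    """Best-effort split and normalization of a free-form phone field."""
    phones = []
    for part in re.split(r"[,;/]|\s{2,}", raw or ""):
        # Drop PBX extensions like "ext. 12" before extracting digits.
        part = re.split(r"ext", part, flags=re.IGNORECASE)[0]
        digits = re.sub(r"[^\d+]", "", part)
        # Keep only fragments of a plausible phone-number length.
        if 7 <= len(digits.lstrip("+")) <= 15:
            phones.append(digits)
    return phones[:max_phones]  # the target system holds at most two numbers
```

Anything the function drops would, following the approach above, be logged and put aside for a human to look at rather than silently lost.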
Migration monitoring in practice
When your timeframe to import millions of records is limited to a couple of night hours and you want to have the best control possible over the situation, traditional data logging is not the thing that you should look for.
This is why we created our own monitoring system, which is both pretty easy and efficient.
When a new task is created by the generator, we create a new record corresponding to it — it contains the ID and type of the imported item. When the task is later processed by the handler, it updates this record according to the import result — success or failure. If the import failed, we also persist information about why it failed, so we have a quick overview of the situation.
Having in mind the heavy database loads during the import procedure, we use a separate MongoDB instance to store these logs.
To calculate the overall estimations and performance, we also log separately some metadata about the whole import batch itself — like type, date started.
To view the logs as they come in, we created a command-line tool that displays multiple import stats and estimates in real time. If something is wrong, a red flag starts to blink and we can react instantly.
Our homemade migration process monitor
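The math behind such a monitor can be sketched as follows (illustrative Python; field names like `status` are my assumptions — the real tool was PHP reading the MongoDB log records described above):

```python
import time

def import_stats(logs, total, started_at, now=None):
    """Compute the numbers a live monitor would display for one import batch."""
    now = time.time() if now is None else now
    done = sum(1 for r in logs if r["status"] in ("success", "failure"))
    failed = sum(1 for r in logs if r["status"] == "failure")
    elapsed = max(now - started_at, 1e-9)        # guard against division by zero
    rate = done / elapsed                        # records per second
    remaining = total - done
    eta = remaining / rate if rate else float("inf")
    return {"done": done, "failed": failed, "rate": rate, "eta_seconds": eta}
```

Recomputing these figures every few seconds against the log collection is enough to drive a terminal dashboard with progress bars and the blinking red flag.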
Important steps after successful data migration
After the migration is done, there’s a bunch of actions to perform before switching production to the new app.
The first move is to run a set of crucial tests to determine whether all of the main functionalities work fine. If everything is fine, we roll back the temporary changes that I mentioned in the earlier stages. At the same time, we start the migration of the freshly populated database to the production servers.
The last step is to update the DNS records, warm up the cache, and populate ElasticSearch.
Congratulations, your migration process is now complete :)
One of the mornings after another country successfully migrated
Tips & tricks
Test period before the migration
The extensive testing phase before the migration is your (and your organization's) best friend. That's why we always aimed for 2–3 month testing phases instead of 2–3 week ones. Plan this process in advance so that you have time not only to test the importers themselves but also to:
cooperate with the product folks to prepare a comprehensive set of test scenarios, and run them across the whole service from different perspectives — user, doctor, employee,
make sure that everyone who will be working on the new app gets proper training on test data sets before the production migration,
show the new version of your service to a narrow group of your clients to gather more feedback and detect more potential bugs.
A little more about employee training: what turned out really well during the migration of the Brazilian market was that we used gamification methods to run it. In our office in Curitiba, a couple of computers running the test environment were always accessible to everyone, so each employee could check it out and ask questions of the product expert we had sent on site. The most active employees received small gifts as extra encouragement.
While planning test migrations, plan to run at least one of them with full power available. This way you can estimate whether you are going to fit into the planned migration timeframe and test whether scaling works properly. In our case, this practice quickly revealed performance problems with our RabbitMQ instance, which sometimes refused to handle particularly heavy batches of incoming tasks.
What’s the best moment to start the whole process?
It's best to conduct the migration during your end-users' night. For us that sometimes meant our own night hours, while at other times, when we were migrating Latin American countries, it happened during our day.
If your migration estimate is not as optimistic as it could be, find a reasonable compromise: limit functionality for as small a group of users as possible while maintaining a safety margin, so that if something goes wrong you still have time to react and fix it.
Cost optimization
Watch out for potential money traps. In general, there are two places to optimize: things that are expensive to compute or gather but don't change much between migrations, and the cost of unused resources in the test environment.
For instance — we have around 600,000 doctors in Brazil, and most of them have profile pictures attached. There is no point in converting these pictures on every import — it burns both time and money in the computing power you have to pay for. Do the conversion once instead, correlate the new assets with the corresponding doctor IDs from the source system, then store them for further iterations. On the next iterations, detect and recompute only the things that have changed.
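The detect-and-recompute idea can be sketched with a content fingerprint per source picture. This is illustrative Python, not the actual implementation: the function names and in-memory storage are assumptions.

```python
# Illustrative sketch of "convert once, recompute only what changed".
# A content fingerprint of each source picture is stored alongside the
# converted asset; later iterations reprocess only changed fingerprints.
import hashlib

converted = {}  # doctor_id -> {"fingerprint": ..., "asset": ...}

def fingerprint(raw_bytes):
    return hashlib.sha256(raw_bytes).hexdigest()

def convert_picture(raw_bytes):
    # Stand-in for the expensive, paid-for conversion step.
    return b"converted:" + raw_bytes

def import_picture(doctor_id, raw_bytes):
    """Return True if a (re)conversion was actually performed."""
    fp = fingerprint(raw_bytes)
    cached = converted.get(doctor_id)
    if cached and cached["fingerprint"] == fp:
        return False  # unchanged since the last iteration, skip it
    converted[doctor_id] = {"fingerprint": fp,
                            "asset": convert_picture(raw_bytes)}
    return True
```

Across hundreds of thousands of pictures, skipping the unchanged ones is where the time and money savings come from.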
Another example is optimizing the cost of the test environment. While the final migration needs as much power as possible, during the test period the app is used by such a small group of people that it makes no sense to run it on the best-performing servers.
Maintaining a good SEO
It's almost certain that the two systems use different URL schemas for the same resource, for example a doctor profile. Don't forget to let Google know that the URL schemas changed, so your SEO remains good and healthy. How to do that? Just create a bunch of HTTP 301 redirect routings from the old scheme to the new one, using what you already have (e.g. mappings of IDs from the source system). It's pretty straightforward in Symfony and should be just as easy to achieve in any other framework.
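A framework-agnostic sketch of the idea follows; in Symfony you would express the same thing as redirect routes. The URL schemes, mapping, and function name below are invented for illustration.

```python
# Framework-agnostic sketch of the 301 idea; in Symfony this would be a
# redirect route. The URL schemes and the mapping are invented examples.
old_to_new_id = {"123": "987"}  # source-system id -> new-system id

def redirect_for(old_path):
    """Return (status, new_location) for an old-scheme URL, or None."""
    prefix = "/doctor.php?id="
    if not old_path.startswith(prefix):
        return None
    new_id = old_to_new_id.get(old_path[len(prefix):])
    if new_id is None:
        return None
    return 301, "/doctors/" + new_id
```

A permanent (301) redirect tells crawlers to transfer the old URL's ranking to the new one, which is exactly what you want here.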
Be prepared for errors
The chances that a migration will go totally smoothly are close to none. Be prepared to face unexpected problems both during and after the migration process. Make sure that you have the right set of people, who can handle stressful situations easily. Build your team out of people who know almost everything about both of the apps you're dealing with. Apart from the knowledge itself, they should have the proper access rights, a lot of experience, and a steady hand when it comes to doing something that might lead to a terrible crash.
And if, by any chance, you use asynchronous communication or processing in your service, keep in mind that errors in this area are the hardest to detect and the most painful to fix.
Mathias Verraes at Twitter.com
Team setup
To add a little more about choosing the right people for the job: what kind of people should you look for while building your team? Not only those who have 10+ years of experience in coding or product management, but above all people who are patient, in it for the long run, and able to handle stressful situations with ease.
20 hours run in the office without sleep? Hold my tea
Our fusion was definitely a long-term undertaking, filled with interesting challenges but also with the chance to work with enormous computing power, as for some markets we used to have hundreds of worker machines running. But that's just a grain of sand compared to the number of hard decisions we had to make, the repetitive and boring tasks, the work under pressure, the stress, and the responsibility.
What really matters are the people, not the code
The last two years taught me a lot. For example, you have to approach many problems with proper distance, make cold-blooded decisions when needed, and not let emotions control you.
Jumping into the car and heading to the office in the middle of the night, I wasn't thinking about being stressed or sleepy. What I always felt was that I was going to do something great with all these amazing people I had the chance to work with. All of the procedures, fuckups, and issues, despite huge amounts of stress, I tended to hide behind a thick layer of abstraction.
Our team consisted of around a dozen folks, but what we were doing impacted the whole ~1.5k-person organization. We cooperated with a lot of people from other teams, departments, and countries — product owners, country managers, experts, customer care specialists, moderators — basically everyone.
The abstraction that I mentioned gave me a little shock at the end of our journey. After migrating the last market, our Product Owner arranged a gathering for everyone involved to spend some time together and share an activity outside of the office. When I stepped out of the building, I saw more than 50 people. Some of them I was meeting for the first time in my life, even though I had worked at Docplanner for many years.
Our team having some good time after the last migration is done. It was totally worth it!
What all these folks had in common was happiness, openness, and joy. It wasn't about a feeling of relief that "it's finally over". On a daily basis, all of these people are experts in what they do and they love cooperating together, generating a great atmosphere every day.
What makes me even happier is that thanks to this project, we are able to grow faster and serve a better product to our patients and doctors. I personally believe that what we do changes our lives in a really good way.
That’s all folks
That day had to come eventually. Around a year after the last migration, we removed all of the migration code from our codebase.
If you consider conducting a similar process in your company, feel free to contact me — we will try to share more of our experiences and help. | https://medium.com/docplanner-tech/how-to-migrate-your-company-to-a-new-product-overnight-b6d29509c31c | ['Maciej Szkamruk'] | 2020-11-17 07:37:09.988000+00:00 | ['Startup', 'Mergers And Acquisitions', 'Programming', 'Product Management', 'Merger'] |
Dear Trolls: Write Your Own GD Post | I write about racism and sexism quite a bit, and the touchiness of the subjects only seem to underscore why these are still such pervasive problems for us in this country. It’s always amazing to me that anyone living and breathing today can deny the existence of racism or sexism, but plenty of people do (why, hello, privilege, you oblivious devil, you), which is most of the reason I choose to feature these topics so consistently in my writing. Also, spoiler alert, I’m a black woman, and the intersection of gender and race happens to be my particular jam. Write what you know, as the old cliche advises.
As you might imagine, I get some pretty fun responses to my articles. In this case, fun is a convenient euphemism for disgusting, rude, racist, sexist. Etcetera. These less than witty replies are normally short and sweet, an attempt to devastate my argument in a way that normally just ends up proving my original point. Reading these kinds of responses always makes me cackle with self satisfied glee, because the commenter really doesn’t get it, and I find that level of absolute obtuseness amusing beyond reason.
But there exists another class of responses entirely. To be honest, I don't actually read these responses in full, mostly because of how long they are. A short, grammatically incorrect insult that lands well wide of the mark is hilarious and fun to read, mostly because it doesn't waste that much of my time and provides much needed laughter. But a response that goes on for paragraphs — some seeming to closely follow the five paragraph model of writing persuasive essays that I learned as a freshman in high school — astounds me. Why? To what end? Did you honestly expect me to read this novella and respond? Because most of my thoughts on the matter are in the original post, which you can reference to your heart's content if you didn't properly track my argument during your first reading.
Seriously, y’all, if your nasty response to my article or blog post is longer than the 700 words I originally wrote, how about you write your own goddamned post?
In light of this odd tendency, I’m just going to go ahead and put everyone on notice: I write because I have something to say and I want to share it. I actually do enjoy vigorous dialogue — in person — but the beauty part about writing is that I get to launch my opinions out in the digital ether and you can either read them or not read them. What you can’t really do is argue with what I’ve written down. You can let it simmer and change the way you think about the subject, or you can disagree with what I’ve said and move the fuck on, taking absolutely nothing with you when you go. But if you reply to something I write with an article of your own, you’ve just wasted your time. That’s a big fat TL;DR from me.
Ain’t. Nobody. Got. Time. For. That.
If you find that upsetting, don’t despair too quickly. There’s still a wonderful upside to the magical medium that is the internet: you can write what you want, whenever you want, and maybe someone will actually read it. How fabulous is that?!
If your impulse upon reading my 1,000 words is to reply with 1,000 snarky, densely packed words of your own, I invite you to kindly follow these steps:
Fully assess if this is the best place to leave such lengthy commentary.
Unless and until you perform step number one, don’t begin to reply to my original post.
Calculate the probability of your response actually being read (Spoiler: it’s 0%).
Kindly compile a list of pros and cons before you place itchy fingers on keyboard.
Only continue writing when you are sure you can keep any response well south of 100 words.
Fully edit your response to eliminate all spelling and grammatical errors.
Finally, highlight all and delete.
By carefully following my trademarked FUCKOFF method, you can save yourself so much unnecessarily wasted time and energy. Think of the free minutes suddenly opened up in your schedule that you would have spent throwing poorly chosen words into the wind.
You might be asking yourself what you should do if, after following my FUCKOFF method you still feel compelled to let loose a stream of noxious online commentary in hopes of putting an uppity black feminist in her place? Well, as aforementioned: WRITE YOUR OWN GODDAMNED BLOG POST.
It really is that simple. If I can do it, you can do it — maybe not as elegantly, but, you know, we can’t all be wordsmiths.
And if something I’ve written about racism or sexism has really hit you so hard that you find yourself enraged to a level that makes it impossible for you to let it go, maybe take a nice long look in the mirror. Sounds as though it was written with someone like you in mind. As always, reflection is your friend, as is personal growth… | https://tessmartin.medium.com/dear-trolls-write-your-own-gd-post-ecc8d852595a | ['Tess Martin'] | 2019-02-02 16:44:38.153000+00:00 | ['Hate', 'Blogging Tips', 'Writing', 'Racism', 'Feminism'] |
Operationalizing AI with Databricks, Microsoft and Slalom | Photo by Photos Hobby on Unsplash
In Through the Looking Glass — Lewis Carroll’s sequel to Alice’s Adventures in Wonderland — the Red Queen tells Alice,
“…here we must run as fast as we can, just to stay in the same place”.
For someone who works in technology, the "Red Queen Effect" is all too real, with some technologies barely lasting a few years before being replaced by something newer and shinier. This challenge is particularly acute for companies that are attempting to take data-driven intelligence and machine learning (ML) into production and establish artificial intelligence (AI) as a core differentiator against their competitors.
The race towards developing AI capabilities isn’t unfounded since it is arguably the most disruptive capability to drive not just business transformation, but impact people’s lives. At Slalom, we have worked together with hundreds of organizations to address challenges such as process automation, generation of rapid insights, augmenting human decision-making and making sense of complex patterns. With ever-changing patterns of human behavior and consumption, businesses will need to be more and more intentional about their investments in AI as more and more companies enter the space.
The truth, however, is that AI is hard and the challenges numerous. With increasing digitization, more and more of what we as consumers do is available as raw data to companies such as yours which provide us with goods and services. The rise of IoT (Internet of Things) devices not only hints at a deluge of data but also at the interconnectedness of everything we do. However, the seemingly simple act of collecting and organizing all this data requires the expertise of data engineers, data architects and cloud engineers. Garnering insights from the organized data subsequently leverages the skillsets of data scientists who may serve up their intelligence to internal teams, like marketing and FP&A, or use it to improve customer experience. In the latter case, collaboration with product managers and front-end engineers who design the interfaces that customers interact with is critical.
Given the breadth of personas required to successfully leverage data, the importance of people in this equation is clear — data, tools, and technology are nothing without people.
Ensuring that team members are empowered and encouraged to experiment and innovate, with the knowledge not all initiatives will be successful, requires a cultural shift. To this end, at Slalom we have developed a Modern Culture of Data (MCoD) framework to help you unlock the potential of your organization and realize investments in AI. The first step is strategic alignment of your business goals. Defining an overarching vision for your organization, which is sponsored by executive leadership, is key for business units to unify with a common sense of purpose. To achieve this vision, people in your organization need access to trustworthy data via flexible and user-friendly systems that appeal to both technical and non-technical audiences. The end goal is one where data and analytics are embedded into every decision your organization makes.
The five pillars that enable a Modern Culture of Data
To realize such a culture of experimentation and data-driven decision making, Slalom has partnered with Microsoft and Databricks to build a delivery framework using the best that Azure has to offer. By combining our collective expertise across industries, data engineering, data science and DevOps, we’ve put together an acceleration path capable of sprinting you through the machine learning enablement process.
Databricks, a company founded by the creators of Apache Spark, is a first party service on the Azure cloud which allows data engineers, data scientists and data analysts, to use cutting edge tools tightly integrated with the effectively infinite scalability, reliability and security of the cloud. Teams that use Databricks can access data and use open-source coding tools on a common platform, bringing IT and business silos together, resulting in reduced time to deployment. The flexibility of the platform means that technically savvy users are able to maximize time spent in their area of expertise while simultaneously being able to collaborate with and learn from users in other teams. With the backing of the Azure cloud, your team has the choice to leverage a multitude of existing Azure AI services without having to recreate every component from scratch. Azure also provides a plethora of options for quickly and reliably deploying to production the AI solutions your teams develop.
If your company is relatively new to AI, the challenge may seem overwhelming. The sheer number of problems you could focus on and the number of ways you could approach solving these problems is immensely difficult to navigate.
At Slalom, we recommend you think big but start small — lay a solid foundation driven by a data strategy and focus on one problem that demonstrates impact and builds momentum.
It’s easy to get caught up in buzzwords but more often than not it is the simple solutions that are most impactful for a company getting started with AI. Whatever your situation, as long as you have a use case to deliver value via AI and ML, our solution can accelerate the process without reinventing the wheel. | https://medium.com/slalom-technology/operationalizing-ai-with-databricks-microsoft-and-slalom-c423e6a20350 | ['Krishanu Nandy'] | 2020-10-14 16:14:43.318000+00:00 | ['Microsoft', 'Azure Databricks', 'AI', 'Culture', 'Data'] |
Convert Web Pages Into PDFs With Puppeteer and Node.js | Convert Web Pages to PDF
Now we get to the exciting part of the tutorial. With Puppeteer, we only need a few lines of code to convert web pages into PDF.
First, create a browser instance using Puppeteer’s launch function:
const browser = await puppeteer.launch();
Then, we create a new page instance and visit the given page URL using Puppeteer:
const webPage = await browser.newPage();
const url = "https://livecodestream.dev/post";
await webPage.goto(url, {
  waitUntil: "networkidle0",
});
We have set the waitUntil option to networkidle0 . When we use the networkidle0 option, Puppeteer waits until there are no new network connections within the last 500 ms. It is a way to determine whether the site has finished loading. It’s not exact and Puppeteer offers other options, but it is reliable for most cases.
Finally, we create the PDF from the crawled page content and save it to our device:
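Here is a sketch of that call, consistent with the options discussed next; the output path and margin values are illustrative choices rather than anything prescribed.

```javascript
// Sketch of the final step, assuming the `webPage` and `browser` objects
// created in the previous snippets. The path and margin values below are
// illustrative choices.
const pdfOptions = {
  printBackground: true,            // include background colors and images
  path: "webpage.pdf",              // where the generated PDF is saved
  format: "A4",
  margin: { top: "20px", bottom: "40px", left: "20px", right: "20px" },
};

async function saveAsPdf(webPage, browser) {
  await webPage.pdf(pdfOptions);    // render the loaded page to PDF
  await browser.close();            // release the headless browser
}
```

Calling `await saveAsPdf(webPage, browser)` writes the PDF and then closes the browser, which matters so the Node.js process can exit cleanly.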
The print to PDF function is quite complicated and allows for a lot of customization, which is fantastic. Here are some of the options we used:
printBackground : When this option is set to true , Puppeteer prints any background colors or images you have used on the web page to the PDF.
: When this option is set to , Puppeteer prints any background colors or images you have used on the web page to the PDF. path : This specifies where to save the generated PDF file. You can also store it into a memory stream to avoid writing to disk.
: This specifies where to save the generated PDF file. You can also store it into a memory stream to avoid writing to disk. format : You can set the PDF format to one of the given options: Letter , A4 , A3 , A2 , etc.
: You can set the PDF format to one of the given options: , , , , etc. margin : You can specify a margin for the generated PDF with this option. | https://medium.com/better-programming/convert-web-pages-into-pdfs-with-puppeteer-and-node-js-8e72fb3d0bd2 | ['Juan Cruz Martinez'] | 2020-12-28 16:57:54.055000+00:00 | ['Programming', 'Nodejs', 'Web Development', 'JavaScript', 'Typescript'] |
Deep Learning — Artificial Neural Network(ANN) | 3. Building your first neural network with keras in less than 30 lines of code
3.1 What is Keras?
There are a lot of deep learning frameworks. Keras is a high-level API written in Python which runs on top of popular frameworks such as TensorFlow and Theano, providing the machine learning practitioner with a layer of abstraction that reduces the inherent complexity of writing NNs.
3.2 Time to work on GPU:
In this tutorial we will be using Keras with the TensorFlow backend. We will use pip commands to install both in an Anaconda environment.
· pip3 install Keras
· pip3 install Tensorflow
Make sure that you set up the GPU if you are using Google Colab.
google colab GPU activation
We are using the MNIST data set in this tutorial. The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
We are importing necessary modules
Loading the data set as training & test
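A minimal version of these two steps, using the Keras API bundled with TensorFlow (a sketch; the exact import style can vary):

```python
# Import the MNIST helper and load the data as training and test splits.
from tensorflow.keras.datasets import mnist

# 60,000 training images and 10,000 test images, each 28x28 pixels
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

print(train_images.shape)  # (60000, 28, 28)
print(test_images.shape)   # (10000, 28, 28)
```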
Now, with our training & test data loaded, we are ready to build our neural network.
In this example we will be using dense layers; a dense layer is nothing but a fully connected layer, which means each neuron receives input from all the neurons in the previous layer. The shape of our input is [60000, 28, 28], which is 60,000 images with a pixel height and width of 28 × 28.
784 and 10 refer to the dimension of the output space, which becomes the number of inputs to the subsequent layer. We are solving a classification problem with 10 possible categories (the digits 0 to 9), hence the final layer has 10 potential output units.
Activation functions come in different types; relu is the most widely used. In the output layer we are using softmax.
With our neural network defined, we compile it with adam as the optimizer, categorical cross-entropy as the loss function, and accuracy as the metric. These can be changed based on your needs.
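Putting the description above into code, the network and its compilation step might look like this (a sketch using `tensorflow.keras`; the layer sizes follow the text):

```python
# Sketch of the network described above: a 784-unit relu hidden layer and
# a 10-unit softmax output, compiled with adam + categorical cross-entropy.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28 * 28,)),          # each image flattened to 784 values
    layers.Dense(784, activation="relu"),    # fully connected hidden layer
    layers.Dense(10, activation="softmax"),  # one probability per digit 0-9
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```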
AIWA!!! You have just built your first neural network.
There are probably questions in your mind about the terms we used while building the model, like relu, softmax, and adam. These require in-depth explanations, so I would suggest you read the book Deep Learning with Python by François Chollet, which inspired this tutorial.
We reshape our data set, keeping the split between 60,000 training images and 10,000 test images.
We use categorical (one-hot) encoding so that the labels can be used in numerical operations.
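What the reshape and one-hot encoding do, shown on a tiny dummy batch. The post uses `keras.utils.to_categorical` for the one-hot step; indexing into `np.eye(10)` is an equivalent NumPy trick used here to keep the example self-contained.

```python
# Reshape + categorical encoding demonstrated with NumPy on dummy data.
import numpy as np

images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)
labels = np.array([3, 0, 9, 3])

# Flatten each 28x28 image into a 784-vector and scale pixels to [0, 1]
flat = images.reshape((len(images), 28 * 28)).astype("float32") / 255

# One-hot encode the labels: digit 3 -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
one_hot = np.eye(10, dtype="float32")[labels]

print(flat.shape)  # (4, 784)
```

On the real data you would apply the same two operations to the full 60,000-image training set and 10,000-image test set.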
Our data set is split into train and test, our model is compiled, and our data is reshaped and encoded. The next step is to train our neural network (NN).
Here we are passing in the training images and training labels, as well as the number of epochs. One epoch is one complete forward and backward pass of the entire data set through the neural network. Batch size is the number of samples that propagate through the network in a single step.
We measure the performance of our model to identify how well it performed. You will get a test accuracy of around 98%, which means our model predicted the correct digit 98 percent of the time on the test set.
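An end-to-end sketch of the fit and evaluate steps. Random arrays stand in for MNIST so the example is self-contained and fast; with the real data you would pass the reshaped images and one-hot labels, and the epoch and batch-size values here are illustrative.

```python
# Fit + evaluate sketch with random stand-in data instead of MNIST.
import numpy as np
from tensorflow.keras import layers, models

x_train = np.random.rand(256, 28 * 28).astype("float32")
y_train = np.eye(10, dtype="float32")[np.random.randint(0, 10, 256)]
x_test = np.random.rand(64, 28 * 28).astype("float32")
y_test = np.eye(10, dtype="float32")[np.random.randint(0, 10, 64)]

model = models.Sequential([
    layers.Input(shape=(28 * 28,)),
    layers.Dense(784, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# epochs: full passes over the data; batch_size: samples per gradient step
model.fit(x_train, y_train, epochs=2, batch_size=128, verbose=0)

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("test accuracy:", test_acc)  # around 0.98 on real MNIST; random here
```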
This is what a first neural network looks like. That's not the end, just the beginning, before we take a deep dive into the different aspects of neural networks. You have just taken the first step on your long and exciting journey.
Stay focused, keep learning, stay curious.
“Don’t take rest after your first victory because if you fail in second, more lips are waiting to say that your first victory was just luck.” — Dr APJ Abdul Kalam
Reference: Deep Learning with Python, François Chollet, ISBN 9781617294433
Stay connected — https://www.linkedin.com/in/arun-purakkatt-mba-m-tech-31429367/ | https://medium.com/analytics-vidhya/deep-learning-artificial-neural-network-ann-13b54c3f370f | ['Arun Purakkatt'] | 2020-08-01 16:07:37.763000+00:00 | ['Neural Networks', 'Deep Learning', 'Data Science', 'Artificial Intelligence'] |
How to Be a Successful Background Performer in the Film Industry | How to Be a Successful Background Performer in the Film Industry
Or, as the crew calls us, a prop that eats.
When a shoulder and wrist injury, followed by two failed surgeries, forced to me give up my thirty-seven-year long career as a physiotherapist, I began writing full time.
But, as many of you are probably aware, writing is a somewhat less than lucrative career. Especially at the beginning. I had to put in my time honing my craft and sending my work out to small publications for little or no pay. I remained hopeful that eventually higher paying ones would accept it, and one day, I’d land an agent and a book deal.
Until that happened, though, I needed to find a way make some money. As a self-employed professional, I had no pension to fall back on. I desperately wanted to bring at least a little money into the family coffers.
I live in Greater Vancouver, affectionately known as Hollywood North. Every day there are countless productions being filmed here, most of which require a significant number of background performers.
Photo by Martin Jernberg on Unsplash
A writing compatriot of mine had mentioned that she worked as a background performer and how it was the perfect gig for a writer. Typically, there are hours and hours of down time. She told me this provides the perfect opportunity to work on her manuscripts, and better yet, she gets paid by the production company to do so.
Seemed like a win-win situation to me, so I signed up. I now work for a casting agency and for a background performer agency.
In the past year and a half, between the two of them, I’ve worked on over thirty different productions, including: The Art of Racing in the Rain, To All the Boys I’ve Loved Before, Supergirl, Batwoman, Nancy Drew, Snowpiercer, The Good Doctor, Travellers, The Twilight Zone, The 100, and A Million Little Things.
I’ve worked with James Marsden, Kristin Chenoweth, Amanda Seyfried, Milo Ventimiglia (the dad from This is Us,) John Larroquette, Keegan-Michael Key, John Corbett, John Sena, Aidan Gillen, Jennifer Connelly, Wayne Brady, Sarah Wayne Callies ( from The Walking Dead,) to name but a few.
I've been a dog walker, patient, doctor, Tai chi practitioner, teacher, professor, wedding guest, funeral guest, concerned citizen, senator, refugee from an alien invasion, victim of a hostile take over, FBI agent, woman from the 1950's, a presidential campaign donor, gambler from the 1980's, millionaire, and a person at a Christmas carnival. That was by far the most challenging. The day we filmed it was 99 degrees, and I was wearing a down parka, hat, gloves, and scarf. Yikes.
I've run from aliens, zombies, and terrorists, and danced at weddings, cried at funerals, watched pretend car races, played with dogs, and chanted at a trial.
Photo by Daniel Jensen on Unsplash
And I’ve had fun doing it all.
Days can be long, though: 16 and a half hours on one show this week. Occasionally, I'm on set for pretty much that entire time with only a few short breaks. Other times, I sit in background for hours, waiting to be called to set. And during that time, I write.
I had one job where I worked, and I use that term rather loosely, fourteen hours a day for three days straight. But I only made it to set for 15 minutes on the last day. The rest of the time I waited to see if they would need me. And during that time I wrote a short story, edited seven chapters of my novel, read two books, napped, ate (and the food is usually fantastic), and chatted with my fellow background performers.
Not a bad deal.
The pay isn’t great, but when you work overtime it adds up. Those three days of reading, writing and editing, eating, and napping netted me over $600 Canadian. Considering I wouldn’t normally be paid to do any of those things, I’d say that was a pretty sweet deal.
Might be something to check out if you’re looking for a way to supplement your writing income.
If you’re considering signing up, here are a few things you should know:
Ten Rules for background performers — aka, props that eat.
1. Keep your calendar updated daily. You can block off any days that you don't want to work, but if your calendar is green, you'd better be prepared to work. Nothing pisses casting directors off more than trying to book you and hearing no. That's a perfect way to be sure you won't be called again.
2. Always respond immediately to your agent's texts, or as soon as humanly possible. If they can't get hold of you to confirm, they will move on to the next person.
3. Always bring three complete changes of clothing. You will get an email with suggestions — colours to avoid, styles of clothing, era (ie: 1980s, 1990s, etc.) — but wardrobe will expect you to show up with three complete outfits, clean and tidy, for them to choose from.
4. Do not talk to the cast. Unless, of course, they speak to you first. Sure, you can fan-girl all you want, but for goodness sakes, do so discreetly. I'm quite certain I made John Corbett more than a little uncomfortable when I stared at him, mouth agape, thinking, "OMG, he's SO TALL and SO GORGEOUS!"
5. Listen to your wranglers, assistant directors, and directors. Do what they say. Exactly what they say. Not following directions can slow the production and lead to very testy assistant directors and wranglers. Never a good thing, trust me.
6. Be quiet. You may be on set for hours at a time, and it can be boring. But if you insist on talking, do so very quietly. 150 BG all talking at once can be overwhelming, and again, make for very testy AD's and wranglers.
7. You will be asked to mime conversations and emotional reactions. But remember… that means no sound! Get used to faking it, no need to use real words. I know someone who repeats, silently, of course, watermelon, hamburger, tornado, over and over.
8. Bring something to do while you wait. One poor man showed up to a production, his first, with nothing. No books, no games, and a phone that died hours before our fourteen-hour day was over. That's a lot of time to contemplate life.
9. Bring a handheld fan. Winter or summer. You may be in a studio, and studios can be notoriously hot.
10. Bring Hot Shots. These are little packages that heat up via an exothermic reaction when you shake them. Buy a case and always bring a handful with you to set. If you don't know what they are, google them. A BG performer's best friend in the winter. They're life savers on cold, wet, and windy days.
Next week I might be doing something a little different. My picture has been put forward for a life cast of my head and shoulders, followed by make up and prosthetic tests. I don’t actually know what it’s for, they haven’t told me that yet, but it sounds fascinating. and I can’t help but think it’s great research for a future story.
Although, after watching a video on the hour-long process, where your face is covered, first with silicone, and then plaster, leaving only two small holes to breathe through, one for each nostril, I might need to reconsider my willingness to try this. | https://medium.com/the-partnered-pen/how-to-be-a-successful-background-performer-in-the-film-industry-20b34e0f65b7 | ['Leslie Wibberley'] | 2019-10-21 00:46:29.645000+00:00 | ['Movies', 'Acting', 'Jobs', 'Film', 'Writing'] |
Tekton Pipeline — Kubernetes-native pipelines | Tekton Pipeline is a new project which allows you to run your CI/CD pipelines in a Kubernetes-native way. Tekton Pipelines emerged out of the Knative build project. If you'd like to know more about Knative, I highly recommend visiting their project website.
Before we start talking about what Kubernetes-native means and how Tekton Pipelines works, I would like to first take a step back and briefly clarify why containerized pipelines are so important and helpful:
Some time ago we started putting our workloads into containers. We did so because of advantages like isolation, dependency management, scalability, and immutability. Wouldn’t these also be helpful in your CI/CD pipeline?
Think of “build hosts” that only provide the tools and dependencies needed for one particular pipeline task: an environment that looks the same on every run and has no dependencies on other projects that might cause issues. On top of that, think of easily scalable pipelines.
That’s why we need and should use containerized pipelines!
Now that we have briefly talked about containerized pipelines, let’s talk about where Tekton Pipeline with its Kubernetes-native approach can help:
Tekton Pipeline allows us to run our containerized pipelines in our existing Kubernetes Clusters. This means we do not need additional machines to run our pipelines and therefore can better utilize our existing ones.
This is great but, to be honest, that alone does not make Tekton Pipeline unique. Tekton Pipeline goes one step further and also stores everything related to our pipeline within Kubernetes — as Kubernetes resources. This allows us to work with our pipelines as we do with any other resource. Think of a Deployment or Service which you can create and manage using kubectl and YAML files.
How to get started
As mentioned above, Tekton Pipeline lives within a Kubernetes cluster. It is based on 5 Custom Resource Definitions (CRDs), Deployments, ConfigMaps, and Services. You can run the following command to get started:
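The install command itself did not survive the copy. At the time of writing, the commonly documented way to install the latest release looks like the snippet below; the manifest URL is an assumption on my part, so verify it against the current Tekton release notes before applying it:

```sh
# Install the latest Tekton Pipelines release into the cluster
# (URL taken from the commonly documented release location; verify it first)
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```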
Besides the above-mentioned resources, it will also create a Namespace, a Pod Security Policy, a Service Account and ClusterRoles. Tekton Pipeline is ready as soon as all Pods in the newly created Namespace (the default name is tekton-pipelines) are ready.
You can, of course, review the above YAML and customize it based on your needs.
If you need to share artifacts or other pipelines resources between your tasks you will need to configure a storage option. You can either use PVCs which will be requested every time needed (Dynamic Volume Provisioning is key!) or Blob storage. You will find more details on this task here.
A first pipeline
So, how does Tekton Pipelines work? I will explain the different resources (Custom Resource Definitions) of Tekton Pipeline using the small example below. The pipeline will build a small Go application, build the corresponding container image, and then push it into a registry. You will find all the related files here.
First of all, we will create two PipelineResource definitions which we will use to provide the source Git repository and the target registry. The PipelineResources are optional but are very helpful for reusing the same pipeline with different sources and targets.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-repo
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: https://gitlab.com/nmeisenzahl/tekton-demo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: image-registry
spec:
  type: image
  params:
    - name: url
      value: registry.gitlab.com/nmeisenzahl/tekton-demo/demo:latest
Now we need to create a Task resource to define the steps of our pipeline. You can, of course, define multiple tasks if you need them. In our example, we will use Kaniko to build our Image. The Dockerfile, as well as the app resources, are stored in the Git repository.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-docker-image
spec:
  inputs:
    resources:
      - name: git-repo
        type: git
    params:
      - name: pathToDockerFile
        description: Path to Dockerfile
        default: /workspace/git-repo/Dockerfile
      - name: pathToContext
        description: The build context used by Kaniko
        default: /workspace/git-repo
  outputs:
    resources:
      - name: image-registry
        type: image
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v0.10.0
      env:
        - name: "DOCKER_CONFIG"
          value: "/builder/home/.docker/"
      command:
        - /kaniko/executor
      args:
        - --dockerfile=${inputs.params.pathToDockerFile}
        - --destination=${outputs.resources.image-registry.url}
        - --context=${inputs.params.pathToContext}
We could now create a TaskRun resource to run an instance of the above task. However, in this example, we use a Pipeline resource instead, which lets us combine multiple tasks into a pipeline:
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: demo-pipeline
spec:
  resources:
    - name: git-repo
      type: git
    - name: image-registry
      type: image
  tasks:
    - name: build-docker-image
      taskRef:
        name: build-docker-image
      params:
        - name: pathToDockerFile
          value: /workspace/git-repo/Dockerfile
        - name: pathToContext
          value: /workspace/git-repo
      resources:
        inputs:
          - name: git-repo
            resource: git-repo
        outputs:
          - name: image-registry
            resource: image-registry
Because we push the image to a registry, you must ensure that the pipeline can authenticate itself by configuring imagePullSecrets for the service account used (in our case, the default service account).
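The article leaves the secret setup to the reader. A rough sketch of one way to do it with kubectl follows; the secret name, registry, and credential values are placeholders, and depending on your Tekton version you may instead need an annotated basic-auth secret attached to the service account, so check the Tekton authentication docs:

```sh
# Create a registry credential secret (all values below are placeholders)
kubectl create secret docker-registry registry-credentials \
  --docker-server=registry.gitlab.com \
  --docker-username=<username> \
  --docker-password=<access-token>

# Attach it to the default service account used by the pipeline
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "registry-credentials"}]}'
```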
We now have everything in place to finally run the pipeline. For that, we need a final definition. A PipelineRun resource:
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: demo-pipeline-run-1
spec:
  pipelineRef:
    name: demo-pipeline
  resources:
    - name: git-repo
      resourceRef:
        name: git-repo
    - name: image-registry
      resourceRef:
        name: image-registry
To verify the status of your pipeline run, you can execute kubectl get pipelineruns -o yaml.
There is more
Besides the Tekton Pipeline project itself, there is also a CLI project which makes working with your pipelines easier. You can also install a web-based Dashboard to view and manage your pipelines in a browser.
Furthermore, the team is working on another project called Tekton Triggers. This project is pretty new (the first commit was 4 weeks ago) and still work in progress. Tekton Triggers will allow calling Tekton pipelines based on “triggers”. Those could be git commits, git issues or any kind of webhooks. More details are available here. | https://medium.com/01001101/tekton-pipeline-kubernetes-native-pipelines-296478f5c835 | ['Nico Meisenzahl'] | 2019-07-28 17:18:11.460000+00:00 | ['Continuous Integration', 'Tekton', 'Continuous Delivery', 'DevOps', 'Kubernetes'] |
Eidoo wallet to become hypersmart thanks to ORS A.I. algorithms | Eidoo, the app that helps cryptocurrency go mainstream with multi-currency wallet, and ORS SA, the Swiss company of leading A.I. software group ORS GROUP, are announcing today the closing of a sales agreement for integrating the ORS Crypto Robo Advisor “Hypersmart Contract” with the Eidoo wallet.
Eidoo also plans to launch a hybrid exchange, a decentralized marketplace, and a debit card in the near future.
Eidoo users will be able to automatically use Artificial Intelligence algorithms in trading and investing crypto assets on the basis of their own preferences and risk profiles.
With over 200,000 downloads worldwide, Eidoo is set to become a leader in offering advanced blockchain-related functionalities.
With a portfolio of over 1,000 algorithms and hundreds of software solutions, the ORS intelligent connector of A.I. and blockchain (the Hypersmart Contracts) is set to dramatically enhance blockchain and cryptocurrency projects.
About Eidoo
Eidoo is a fast, easy multicurrency wallet and hybrid exchange for blockchain assets, and the first human-to-blockchain interface for all cryptocurrency needs. Eidoo is available to download for free from the iOS app store or the Google Play Store for Android. Learn about Eidoo from the official business paper and stay up to date on Twitter @eidoo_io.
About ORS
The ORS Group is a software company of more than 100 IT developers and scientists. It boasts over 20 years of experience in delivering sophisticated A.I.-based optimization software solutions to a large international client base (www.ors.ai).
Their new product, the Hypersmart Contracts (“HSC”), aims to provide access to more than 1,000 proprietary algorithms and hundreds of software solutions to the Crypto Community and to established businesses (www.orsgroup.io). At ORS, we envision a global network of entrepreneurs and independent companies empowered by our ABC technology building blocks: Algorithms, Blockchain, cryptocurrencies. | https://medium.com/eidoo/eidoo-wallet-to-become-hypersmart-thanks-to-ors-a-i-algorithms-2e56876b3079 | ['Amelia Tomasicchio'] | 2018-02-16 15:44:49.945000+00:00 | ['Token', 'Artificial Intelligence', 'Blockchain', 'Ethereum', 'Bitcoin'] |
A Heartbroken God | (Apple Trees in Fall — Photo by Noe)
A Heartbroken God
If I am a reflection of God, then, God is heartbroken.
I went into the Chiropractor today to address some chronic back pain. I am just getting to know this doctor. I am still adjusting to life in a somewhat rural community. My Chiropractor is a devout Christian. He loves baseball and there is a large American flag decorating the wall in the waiting room. In the treatment room, a plaque reads “Our God is a great God”. He is a good-hearted man.
For spring break, my doctor visited California with his family. I overheard him talking to another patient in the hallway about his trip to San Francisco and about her trip to Disneyland. When the nurse came in to check my blood pressure, she asked if I had taken a vacation with my family for spring break (it is very common for people to try to get to sunnier weather this time of year). I joked that my wife’s new “adulting job” doesn’t seem to include much time off and how ironic it is that we did more traveling when we were living on a shoestring budget. The nurse responded that she enjoyed meeting my wife last week and we both agreed that she is pretty great: smart, articulate and always even-tempered.
I tried to not think about all of the qualities that I lack. I tried to keep my mind from wandering back to California and my friends back home who “know me”. I tried to rub the feeling of being an outsider out of my eyes and to be grateful to have a good blood pressure reading.
The nurse noticed my wet rain jacket and I explained that I had walked to my appointment. We laughed about how quickly a sprinkle can turn into a downpour. When the Chiropractor came in, he mentioned my jacket as well. I had carefully hung it onto a chair, damp side up, away from the wall to avoid getting anything wet. It has been raining a lot this week and some of the streets are flooded in nearby areas. I used to love springtime in California but here, it feels more like winter with flowers.
My new Chiropractor is friendly and somehow we started talking about his training. He shared that he had never met an openly gay person until he attended Chiropractic school. He told me that his mentor had been a lesbian and that she had been very understanding. When he expressed his curiosity about her sexual orientation, she welcomed him to ask respectful questions. He explained how much she helped him to see that despite their differences, people can share much in common. They are still close friends to this day.
He asked me if I had experienced much in the way of homophobia since I had moved to this area. I told him that the homophobia was not much different than elsewhere but that I wasn’t used to the racism that I had been encountering. I mentioned the two conversations with neighbors that made me feel uneasy. Both conversations seemed to arise randomly.
One neighbor was discussing a college club in which his granddaughter was involved. He announced that he disapproved of the club and added abruptly that no one can make him stop hating “some people.” He punctuated the statement with a long steely look into my eyes. This neighbor is elderly and his wife is friendly and so, I still see him and we haven’t had any more unpleasant interactions.
The other neighbor (different occasion) was wearing an Army Vietnam Veteran cap and I mentioned that both of my Uncles were Army Veterans and fought in Vietnam and that it had been a difficult time for them. He responded tersely that he enjoyed the killing in Vietnam and that he would do it again if he could. The conversation descended swiftly from there, as if there was much further to fall, into something about the Mexicans who took away his son’s chance at a decent education. We still wave at each other when he is out riding his motorcycle but we haven’t actually spoken since.
These conversations came up when I was delivering fresh blueberries from my garden. I wish I was better at small talk, or at minimum, that I wasn’t the type of person people feel so at ease with being candid. It can be awkward when you look “white” but are the child of an immigrant from Central America. People speak their minds in this town and I usually appreciate that quality in a person. However, when people say disparaging things to me about Latinos or other minorities, at first, without knowing about my heritage and later, without caring, it can be difficult to manage.
I told my Chiropractor that I missed my friends in California and that despite the rat race, congestion and smog, I missed the cultural diversity, museums, and great food. When he adjusted my back, it made an unusually loud crunch. He replied that the rain gets him down too and I got the feeling with everything that he didn’t say, that he might be considering relocating to California.
I sat up and felt a little dizzy and my doctor, who is a very inquisitive man, asked me if I attended church. I explained that I was once a very devout Christian but that my church rejected me when I came to terms with my sexual orientation. I clarified that I am basically agnostic now — that, I don’t know — and that it seems to take as much faith to firmly not believe in God as it does to be a true believer; both are strong convictions and I am not sold on either argument, at this point in my life. I added, awkwardly, that I enjoy the teachings of Buddhism. I paused, still feeling a little dizzy and continued like a distracted driver barrelling through a red light: It seems to be the nature of man to create a God that reflects his own image and that although all religions teach love, it troubles me that people kill each other in the name of their Gods.
A long silence hovered above us and I wished that I could delete my spoken words. In retrospect, I think he was probably thinking that church might be a place where I could make friends or something. I missed the cue.
My doctor completed one last adjustment to my neck and I heard a loud snap; It didn’t hurt but I felt a little dizzier and this time, he looked a little dizzy too. I worried that my last comment made me sound like I was opposed to religion. So, I added one last log to the fire and chirped, despite my reservations about God’s existence, I try to live a spiritual life.
At that point, I could tell that we were both unsure of what that actually meant. We smiled at each other, said a quick see ya’ next time! He opened the door and left the room. I exhaled and wished that I could be more like my wife. She never seems to get into these types of conversations. She just delivers the blueberries and everyone comments on the weather.
I put my damp jacket and shoes back on and proceeded to walk down the hallway as I continued turning my thoughts over like smooth stones in a river. I half-consciously offered a friendly nod and thank you to the nurse and receptionist and headed back out back into the rain.
I walked in a daze, pushing down the memory of my old youth pastor. I was trying to hold back the conversation that I had with him over thirty years ago, the last time I attended church, when I confided that I thought I might be gay. They say that timing is everything, but really, no time would have been the right time for that little chat. He responded with a fast and stern warning to not come back until I was sure I was straight. I saw a flash of fire and brimstone in his eyes and I left, at a fast clip, once and for all.
I reminded myself of how long ago that was but pondered how it could feel like only yesterday.
I inhaled and exhaled deeply as I redirected my mind back to my new Chiropractor, reassuring myself that he was different; Things have changed and I am far from being a teenage church girl. I don’t need to listen to that pointless and dusty old echo in my head.
When I was a Christian, I believed that this literally meant to be Christ-like. I took that seriously. For me, it meant that I would give my life to save my fellow man. It meant that I loved my neighbor — no matter who they happened to be. I also believed that we, as humans, were directed by God to take care of each other and to protect the precious earth that God had created.
If that is what it truly is to be a Christian, then, I suppose that I would still be/still am. Unfortunately, not all Christians interpret their Bibles in the same way. Many do not welcome me, or people like me, into their businesses, places of worship or communities. Some would reject Jesus himself if he came to their front door looking like a dark-skinned and bearded immigrant. There is no point in mentioning that the Last Supper seemed a lot like Passover, when you are uninvited.
I no longer hold enough faith to fully believe or disbelieve in any one story of God. Still, despite my agnosticism, I see traces of the face of God within others, nature, all living things and within myself. I realize now that if I am a reflection of God, I must mirror a heartbroken and somewhat confused God.
I still love with a heart that loves unconditionally, despite the great sadness that I feel. I have met Godly people in every walk of life, both religious and outside of religion. If I ever find the faith to believe again, it will be to believe in a nameless God that does not prefer one child over another.
Thank you for reading — Noe
Copyright © 2019 Noe. All rights reserved | https://medium.com/a-cornered-gurl/a-heartbroken-god-f1caa208b2e5 | ['Noe', 'Lisa Arana'] | 2019-04-19 18:12:47.901000+00:00 | ['Christianity', 'Nonfiction', 'LGBTQ', 'Religion', 'Racism'] |
Why You Should Work a 9–5 Before Doing Anything Else | Let me start by saying that I am all in favor of doing your own thing. I completely get it. You have a dream and you want to pursue it. Chances are, you want to get going ASAP.
Having worked on my projects ever since I was 21 or so, I know exactly how it feels. But looking back, I believe that any post-grad should start their career by working a normal 9–5 instead of getting straight into freelancing or starting a business.
Here's why:
Right now, you’re just not good enough.
If you start freelancing now or try to start your own business, the competition will crush you. That’s just a simple fact. No matter what business you are getting into, you’re just not good enough. | https://medium.com/the-post-grad-survival-guide/why-you-should-work-a-9-5-before-doing-anything-else-f68be02a1b30 | ['Tim Rettig'] | 2019-12-09 12:31:01.145000+00:00 | ['Self Improvement', 'Work', 'Entrepreneurship', 'Careers', 'Career Advice'] |
5 of the Most Common, Easy-to-Fix Problems We See in Curation | 5 of the Most Common, Easy-to-Fix Problems We See in Curation
Simple tips from Medium Curation
Photo: Maica/Getty Images
Every day the curation team reviews stories for distribution through Medium’s topics. Curators are looking for great stories to distribute, but in addition to quality, stories must meet some minimum guidelines. (Read the full curation guidelines here.) Some of the issues that disqualify a story from curation can easily be fixed. So before you hit publish, check to see if you’ve met these commonly missed guidelines.
Note that you can still publish on Medium without meeting these guidelines (except for rules violations), but you will not be eligible for curation.
1. Write quality headlines
We could fill a whole post with headline best practices, but here are a few tips:
Be specific. Don’t leave the reader to guess what your piece is about.
Don’t leave the reader to guess what your piece is about. Spark interest. Engage the reader’s interest by highlighting what’s unique about your piece.
Engage the reader’s interest by highlighting what’s unique about your piece. Be clear. Avoid confusing terms, general statements, or “insider” jargon.
Avoid confusing terms, general statements, or “insider” jargon. Be clean. Watch out for typos in your headlines and please don’t use profanity.
Watch out for typos in your headlines and please don’t use profanity. Go for reads, not clicks. Steer clear of clickbait headlines — tropes like “one weird thing” or using “this/that” to get the reader to click. Make sure your story backs up the claim in the headline.
Steer clear of clickbait headlines — tropes like “one weird thing” or using “this/that” to get the reader to click. Make sure your story backs up the claim in the headline. Bigger is not better. Make sure caps lock is off. No all-caps titles.
Here are a few great headlines:
“Crushes Are Wonderful — But They’re Not Everything”
“It’s Bikini Body Season! So What Should I Do With My Regular Body?”
“Elon Musk Wants You To Merge With Your Technology”
Here are a few not-so-great headlines:
“On Headlines”
“Mom”
“To Be a Perfect Person, Do This One Simple Thing”
2. Refrain from asking for claps
At the end of your story, readers will already see the prompt for claps. If they like the story, they will clap. You don’t have to ask again. It degrades the quality of the piece when you ask for claps at the footer of the story. A great story stands on its own and doesn’t have to ask for appreciation. Asking for claps disqualifies a story from curation.
3. Make sure you have the rights to use the images in your story
Images can help improve the readability of a story — especially feature images. They can make the story more inviting. When you use an image, make sure you have the rights to use it. If you are using an image you don’t have the rights to, that’s a copyright violation and disqualifies your story from curation. The most common image copyright violations we see are writers taking copyrighted images from photo services like Getty, the Associated Press, and Shutterstock without a license or permission.
From the guidelines: “It should be an image that you have the rights to use. Free-use resources like Pexels, Pixabay, Unsplash, and the Gender Spectrum Collection are great for sourcing Creative Commons-licensed images.”
4. Avoid rules violations
In order for a story to be curated, it must comply with Medium’s rules. Make sure there are no platform violations within the piece.
No ads or sponsored content, as defined in “Ad-Free Medium.”
Avoid embeds that directly collect emails or data from users. See “Embedded Content” under the rules.
Remove or disclose all affiliate links in the story. This is a rule from the FTC.
5. Do not include requests for donations
Please don’t link to your Patreon, GoFundMe, etc. in your story. | https://blog.medium.com/5-of-the-most-common-easy-to-fix-problems-we-see-in-curation-48f9a0395fb7 | ['Medium Creators'] | 2019-07-31 18:40:43.925000+00:00 | ['Curation', 'Writing', 'Writers'] |
New Features in Python 3.9 Beta. Learn what’s coming to Python | New Modules
Two new modules have been added in the latest version:
zoneinfo
>>> from zoneinfo import ZoneInfo
>>> from datetime import datetime, timedelta

# IANA time zone support
>>> dt = datetime(2020, 12, 31, 12, tzinfo=ZoneInfo("Asia/Kolkata"))
>>> dt.tzname()
'IST'
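The snippet above imports timedelta without using it, presumably to show datetime arithmetic. As a small extra illustration (the time zone and dates below are my own choice, not from the original), zoneinfo keeps wall-clock arithmetic intuitive even across a daylight-saving transition:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# 2020-10-31 is the last full day of DST in the US Pacific zone
dt = datetime(2020, 10, 31, 12, tzinfo=ZoneInfo("America/Los_Angeles"))
later = dt + timedelta(days=1)  # same wall-clock time, one day later

print(dt.tzname(), later.tzname())  # PDT PST
```

Adding a day keeps the wall-clock time at noon, and the offset name changes from PDT to PST because the fall-back transition happened in between.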
graphlib
graphlib provides functionality to topologically sort a graph of hashable nodes.
Topological sorting refers to the following problem: given a digraph G = (V, E), find a linear ordering of the vertices such that for every directed edge (v, w) in E (i.e., an edge from vertex v to vertex w), vertex v comes before vertex w in the ordering.
I’ve tried to explain how topological sorting can be done in the illustration below:
Graph
Topological Sort
>>> import graphlib
>>> from graphlib import TopologicalSorter
>>> graph = {'E': {'C', 'F'}, 'D': {'B', 'C'}, 'B': {'A'}, 'A': {'F'}}
>>> ts = TopologicalSorter(graph)
>>> tuple(ts.static_order())
('C', 'F', 'E', 'A', 'B', 'D')
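One detail worth knowing: if the graph contains a cycle, no topological order exists, and static_order() raises a CycleError. A short sketch (the two-node graph here is my own toy example, not from the original article):

```python
from graphlib import TopologicalSorter, CycleError

# "A" depends on "B" and "B" depends on "A": an unsortable cycle
ts = TopologicalSorter({"A": {"B"}, "B": {"A"}})
try:
    order = tuple(ts.static_order())
except CycleError:
    order = None
    print("cycle detected, no valid ordering")
```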
You can read more about this function here. | https://medium.com/better-programming/features-in-python-3-9-beta-cd33790cea6b | ['Megha Modi'] | 2020-07-24 10:36:32.214000+00:00 | ['Python Programming', 'Python3', 'Python', 'Data Science', 'Programming'] |
M1 Macs: as Responsive as the iPad Pro but with macOS | M1 Macs: as Responsive as the iPad Pro but with macOS
There is no more need to compromise. The perfect Macs are finally here.
Photo by Joey Banks on Unsplash
Aren’t you tired of reading and hearing about the new M1 Macs? Well, maybe there’s a good reason people can’t stop talking about them.
For the past two years, I started using my 2018 iPad Pro more because I found it more reactive and, overall, more pleasant to use than my 2016 MacBook Pro.
Doing complex tasks on an iPad is extremely annoying and frustrating, for sure. But as long as you don’t multitask, do anything too web intensive, or have to manage files all day, the iPad is overall a way snappier and responsive device than any MacBook.
Until now.
Lots of people have been like me: using their iPad Pro almost as their primary computing device. Some of them chose mobility. Others were happy to use a responsive, powerful, and fun device.
But these people have been spoiled recently with the release of these new Macs.
The M1 Macs are the best of both worlds. They are as responsive and snappy to use as the iPad Pro, but they are more capable and feature a more complex OS, macOS Big Sur.
These Macs are so capable, have such good battery life, and are so reasonably priced that it gets confusing to recommend anything else. Even the $4,000 16-inch MacBook Pro doesn’t seem to outperform the new M1 MacBook Air by much. And on the other end, I don’t see how relevant it can be to recommend the purchase of an iPad Pro while the sub-thousand-dollar MacBook Air can do much more (unless you are an artist or a designer, of course, in which case you may enjoy using the Apple Pencil.)
I decided to purchase the M1 MacMini, and I have been using it every day for two weeks. I decided to keep my iPad Pro that I will use as my mobile device. I keep being impressed by how powerful the Mac is. I go through 4K footage in Final Cut as if the clips were 480p. I import, generate previews, and edit in Lightroom faster than I have ever experienced or seen. Managing files, web browsing, and other miscellaneous tasks are instant or very quick to launch.
The global experience so far has been pleasant. Going back to my Intel MacBook Pro, the difference is massive. Having to wait these few seconds every time I open a document or launch an app feels unnecessary and illogical.
When it comes to compatibility, I haven’t come to the point where an app couldn’t run. I’m sure they exist, and I would recommend you check out every app you are using in your current workflow for compatibility before you decide to upgrade.
But if you have been a fan of using an iOS device, specifically an iPad Pro, you will love upgrading to the M1. If the idea of switching your workflow to an iPad Pro terrified you, now is the best time for you: you can benefit from the responsiveness of the iOS device while not compromising on the OS.
Web Scraping News with 4 lines using Python | In this article, I will show you how to collect and scrap news from various sources. Therefore, instead of spend a long time to write scraping code for each website, we will use newspaper3k to automatically extract structured information. If you prefer not to read this article, you can see my full code on github.
Let’s get started, the first step is to install the package that will be used, namely newspaper3k using pip. Open your terminal (Linux / macOS) or command prompt (windows) and type:
$ pip install newspaper3k
After the installation completes, open your code editor and import the package with the following code
>>> from newspaper import Article
In this post, I will scrape a news article from The New York Times entitled ‘A Lifetime of Running with Ahmaud Arbery Ends With a Text: ‘We Lost Maud’.
Next, enter the link to be scraped
>>> article = Article('https://www.nytimes.com/2020/05/10/us/ahmaud-arbery-georgia.html')
You have the choice to determine the language used. Even so, the newspaper can detect and extract language quite well. If no specific language is given, the newspaper will detect the language automatically.
But if you want to use a specific language, change the code accordingly.
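The code sample was lost here. Based on the newspaper3k documentation, a language is specified through the language keyword argument; the 'en' value below is just an example:

```python
>>> article = Article('https://www.nytimes.com/2020/05/10/us/ahmaud-arbery-georgia.html', language='en')
```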
Then parse the article with the following code
>>> article.download()
>>> article.parse()
Now everything is set. We can start using several methods to extract information, starting with the article authors.
>>> article.authors
['Richard Fausset']
Next we will get the date that the article was published.
>>> article.publish_date
datetime.datetime(2020, 5, 10, 0, 0)
Get full text from the article.
>>> article.text
"Mr. Baker was with Mr. Arbery's family at his grave site . . ."
You can also get image links from articles.
>>> article.top_image
'https://static01.nyt.com/images/2020/05/10/us/10georgia-arbery-1/10georgia-arbery-1-facebookJumbo-v2.jpg'
In addition, newspaper3k also provides methods for simple text processing, such as keyword extraction and summarization. First, initialize using the .nlp() method.
>>> article.nlp()
Get the keywords.
>>> article.keywords
['running',
'world',
'needed',
'arbery',
'lifetime',
'site',
'baker',
'text',
'wished',
'maud',
'arberys',
'lost',
'pandemic',
'upended',
'ends',
'ahmaud',
'unsure',
'mr']
And next is to summarize the news, but this feature is limited to only a few languages as explained in the documentation. | https://towardsdatascience.com/scraping-a-website-with-4-lines-using-python-200d5c858bb1 | ['Fahmi Nurfikri'] | 2020-10-08 14:37:47.228000+00:00 | ['Scraping', 'Python', 'News'] |
Visiting: Categorical Features and Encoding in Decision Trees | When you have categorical features and you are using decision trees, you often have a major issue: how to deal with categorical features?
Often you see the following…:
Postponing the problem: use a machine learning model which handle categorical features, the greatest of solutions!
Deal with the problem now: design matrix, one-hot encoding, binary encoding…
Usually, you WILL want to deal with the problem now, because if you postpone the problem, it means you already found the solution:
You do not postpone problems because in Data Science, they accumulate quickly like hell (good luck remembering every problem encountered, then come back 1 month later without thinking about them and recite them each).
You do not postpone problems without knowing the potential remedy afterwards (otherwise, you might have a working pipeline but no solution to solve it!).
So what is the matter? Let’s go back to the basics of decision trees and encoding, then we can test some good stuff… in three parts:
Machine Learning Implementations: specifications differ
Example ways of Encoding categorical features
Benchmarking Encodings versus vanilla Categorical features
Decision Trees and Encoding
Machine Learning Specification
When using decision tree models and categorical features, you mostly have three types of models:
1. Models handling categorical features CORRECTLY. You just throw the categorical features at the model in the appropriate format (ex: as factors in R), AND the machine learning model processes categorical features correctly as categoricals. BEST CASE because it fits your needs.
2. Models handling categorical features INCORRECTLY. You just throw the categorical features at the model in the appropriate format (ex: as factors in R), BUT the machine learning model processes categorical features incorrectly by doing wizardry processing to transform them into something usable (like one-hot encoding), unless you are aware of it. WORST CASE EVER because it does not do what you expected it to do.
3. Models NOT handling categorical features at all. You have to manually preprocess the categorical features into an appropriate format for the machine learning model (usually: numeric features). But how do you transform (aka ENCODE) them?
We will specifically target the third type of model, because it is what we want to assess. There are many methods to encode categorical features. We are going to check three of them: numeric encoding, one-hot encoding, and binary encoding.
Categorical Encoding Specification
Categorical Encoding refers to transforming a categorical feature into one or multiple numeric features. You can use any mathematical or logical method you wish to transform the categorical feature; the sky is the limit for this task.
Numeric Encoding
Are you transforming your categorical features by hand or are you doing the work with a computer?
Numerical Encoding is very simple: assign an arbitrary number to each category.
There is no rocket science for the transformation, except perhaps… how do you assign the arbitrary number? Is there a simple way?
The typical case is to let your favorite programming language do the work.
For instance, you might do it like this in R…
my_data$cat_feature <- as.numeric(as.factor(my_data$cat_feature))
Such as this:
as.numeric(as.factor(c("Louise",
"Gabriel",
"Emma",
"Adam",
"Alice",
"Raphael",
"Chloe",
"Louis",
"Jeanne",
"Arthur")))
This works, it is a no-brainer, and it encodes the way it wants deterministically (check the ordering and you will see: as.factor sorts the levels alphabetically).
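For readers outside R, the same idea can be sketched in JavaScript (illustrative only; the numericEncode name is mine, not any library's): sort the unique values, then assign 1-based codes, which is essentially what as.numeric(as.factor(...)) does.

```javascript
// Sketch of numeric encoding: mimics R's as.numeric(as.factor(x)),
// which assigns 1-based codes following the sorted order of the levels.
function numericEncode(values) {
  const levels = [...new Set(values)].sort(); // alphabetical, like as.factor
  const codes = new Map(levels.map((v, i) => [v, i + 1])); // 1-based, like R
  return values.map((v) => codes.get(v));
}

const names = ["Louise", "Gabriel", "Emma", "Adam", "Alice",
               "Raphael", "Chloe", "Louis", "Jeanne", "Arthur"];
console.log(numericEncode(names)); // -> [9, 6, 5, 1, 2, 10, 4, 8, 7, 3]
```

Note that the codes follow the alphabetical order of the levels, not the order of appearance.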
One-Hot Encoding
One-Hot Encoding is just a design matrix with the first factor level kept. A design matrix removes the first level to avoid the singular (non-invertible) matrix problem in linear regressions.
Ever heard about One-Hot Encoding and its magic? Here you have it: this is a design matrix where you keep the first level instead of removing it (how simple!).
To make it clear, just check the picture, as it speaks for itself better than 1,000 words.
When thinking about what One-Hot Encoding does, you will notice something very quickly:
You have as many columns as you have cardinalities (values) in the categorical variable.
You have a bunch of zeroes and only a few 1s! (one 1 per new feature)
Therefore, you have to choose between two representations of One-Hot Encoding:
Dense Representation: 0s are stored in memory, which balloons the RAM usage a LOT if you have many cardinalities. But at least, the support for such a representation is typically… worldwide.
Sparse Representation: 0s are not stored in memory, which makes RAM efficiency a LOT better even if you have millions of cardinalities. However, good luck finding support for sparse matrices for machine learning, because it is not widespread (think: xgboost, LightGBM, etc.).
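To make the dense-versus-sparse trade-off concrete, here is a rough JavaScript sketch (illustrative only; real sparse supports such as dgCMatrix use compressed column storage, not the naive triplet list shown here):

```javascript
// Dense one-hot stores n * k cells; a sparse triplet list stores only the n ones.
function oneHotDense(values) {
  const levels = [...new Set(values)].sort();
  return values.map((v) => levels.map((lvl) => (lvl === v ? 1 : 0)));
}

function oneHotSparse(values) {
  const levels = [...new Set(values)].sort();
  const col = new Map(levels.map((lvl, j) => [lvl, j]));
  // COO-style triplets (row, column, value); the zeroes stay implicit
  return values.map((v, row) => [row, col.get(v), 1]);
}

const people = ["Louise", "Gabriel", "Emma", "Adam", "Alice",
                "Raphael", "Chloe", "Louis", "Jeanne", "Arthur"];
console.log(oneHotDense(people).flat().length); // 100 stored cells
console.log(oneHotSparse(people).length);       // 10 stored entries
```

Same information either way, but the dense form stores rows × levels cells while the sparse form stores one entry per row; with many cardinalities, that difference is what makes or breaks your RAM.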
Again, you usually let your favorite programming language do the work. Do not loop through each categorical value and assign a column, because this is NOT efficient at all. It is not difficult, right?
Example in R, “one line”!:
model.matrix(~ cat + 0,
data = data.frame(
cat = as.factor(c("Louise",
"Gabriel",
"Emma",
"Adam",
"Alice",
"Raphael",
"Chloe",
"Louis",
"Jeanne",
"Arthur"))))
Dense One-Hot Encoding in R example. As usual, the specific order is identical to the numeric version due to as.factor choosing the order the same way (alphabetically)!
If you are running out of available memory, what about working with sparse matrices? Doing it in R is a no-brainer in “one line”!
library(Matrix)
sparse.model.matrix(~ cat + 0,
data = data.frame(
cat = as.factor(c("Louise",
"Gabriel",
"Emma",
"Adam",
"Alice",
"Raphael",
"Chloe",
"Louis",
"Jeanne",
"Arthur"))))
Sparse One-Hot Encoding in R. There is no difference from the Dense version, except we end up with a sparse matrix (dgCMatrix: sparse column-compressed matrix).
Binary Encoding
Power of binaries!
The objective of Binary Encoding… is to use binary encoding to hash the cardinalities into binary values.
By using the powers of 2 of the binary representation, we are able to store N cardinalities using ceil(log(N+1)/log(2)) features.
It means we can store 4294967295 cardinalities using only 32 features with Binary Encoding! Isn’t it awesome to not have those 4294967295 features from One-Hot Encoding? (how are you going to learn 4 billion features in a decision tree…? you need a depth of 32 and it is not readable…)
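As a quick sanity check on that formula (a JavaScript sketch, for illustration), the number of bit features needed for N cardinalities, ceil(log(N+1)/log(2)), is simply the bit length of N:

```javascript
// Bit features needed to binary-encode codes 1..N: ceil(log2(N + 1)),
// which is exactly the binary bit length of N itself.
function bitsNeeded(n) {
  return n.toString(2).length; // avoids floating-point log issues
}

console.log(bitsNeeded(10));         // 4 features for 10 names
console.log(bitsNeeded(8192));       // 14 features for 8,192 cardinalities
console.log(bitsNeeded(4294967295)); // 32 features instead of ~4.3 billion columns
```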
Still as easy in (base) R, you just need to keep in mind that you are limited to a specified number of bits (will you ever reach 4294967296 cardinalities? If yes, get rid of some categories because you got too many of them…):
my_data <- c("Louise",
"Gabriel",
"Emma",
"Adam",
"Alice",
"Raphael",
"Chloe",
"Louis",
"Jeanne",
"Arthur")
matrix(
as.integer(intToBits(as.integer(as.factor(my_data)))),
ncol = 32,
nrow = length(my_data),
byrow = TRUE
)[, 1:ceiling(log(length(unique(my_data)) + 1)/log(2))]
Binary Encoding in base R.
Ugh, the formula is a bit larger than expected. But you get the idea:
Three key operations to perform for binary encoding.
Operation 1: convert my_data to factor, then to integer (“numeric”), then to numeric binary representation (as a vector of length 32 for each observation), then to integer (“numeric”).
Operation 2: convert the “numeric” to a matrix with 32 columns and the same number of rows as the number of original observations.
Operation 3: using the inverse of the binary power property (ceil(log(N+1)/log(2))), remove all the unused columns (the columns with only zeroes).
There are, obviously, easier ways to do this. But I am doing this example to show you can do this in base R. No need for fancy package stuff.
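For comparison outside base R, here is a hedged JavaScript sketch of the same three operations (the function name is mine; the little-endian bit order of intToBits is kept):

```javascript
// Binary encoding sketch mirroring the three operations described above.
function binaryEncode(values) {
  // Operation 1: "factor" the values, then turn each into a 1-based integer code
  const levels = [...new Set(values)].sort();
  const codes = values.map((v) => levels.indexOf(v) + 1);
  // Operation 3's formula: number of bits actually used, ceil(log2(N + 1))
  const nBits = levels.length.toString(2).length;
  // Operation 2: one row per observation, little-endian bits like intToBits
  return codes.map((c) => Array.from({ length: nBits }, (_, b) => (c >> b) & 1));
}

const labels = ["Louise", "Gabriel", "Emma", "Adam", "Alice",
                "Raphael", "Chloe", "Louis", "Jeanne", "Arthur"];
console.log(binaryEncode(labels)[3]); // "Adam" has code 1 -> [1, 0, 0, 0]
console.log(binaryEncode(labels)[0]); // "Louise" has code 9 -> [1, 0, 0, 1]
```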
Benchmarking Performance of Encoding
We are going to benchmark the performance of four types of encoding:
Categorical Encoding (raw, as is)
Numeric Encoding
One-Hot Encoding
Binary Encoding
We will use rpart as the decision tree learning model, as it is also independent of random seeds.
The experimental design is the following:
We create datasets of one categorical feature with 8 to 8,192 cardinalities (steps of power of 2).
We use 25% or 50% of cardinalities as positive labels to assess the performance of the decision tree. This means a ratio of 1:3 or 1:1.
We run each combination of cardinalities and percentage of positive labels 250 times to get a better expected value (mean) of performance.
To speed up the computations, we are using 6 parallel threads, as One-Hot Encoding is computationally intensive.
The rpart function is limited to a maximum depth of 30 for practical usage, and used with the following parameters:
rpart(label ~ .,
data = my_data,
method = "class",
parms = list(split = "information"),
control = rpart.control(minsplit = 1,
minbucket = 1,
cp = 1e-15,
maxcompete = 1,
maxsurrogate = 1,
usesurrogate = 0,
xval = 1,
surrogatestyle = 1,
maxdepth = 30))
Warning: remember we are doing this on a synthetic dataset with perfect rules. You may get contradictory performance on real-world datasets.
Sneak Peek: how are the decision trees looking?
For the sake of example, with 1024 categories and 25% positive labels.
Categorical Encoding
Numeric Encoding
One-Hot Encoding
Binary Encoding
One can understand it as follows:
Categorical encoding uses, well… equality rules, so it’s easy to split.
Numeric encoding requires splitting by itself: if it splits 30 times in a row on the same branch, then it’s over. If a split is frozen afterwards, then it is also over for that branch. Therefore, a lot of RNG shaping!
One-hot encoding requires… as many splits as there are categories, which means a crazy lot, and it stops very quickly because if you do one split, one part of the split will be frozen (because it is a perfect-rule dataset) => it gives this escalator-like shape, thus performing very poorly.
Binary encoding has fewer than 30 features in all my cases, therefore each tree should be able to depict all the rules (the theory is true, the practice is wrong because you need splits to not close on themselves, which is not possible in theory, but possible in practice) => it gives this right-tailed tree shape. When the unbalancing increases, the performance increases because a perfect split requires a lower expected value (mean) of splits than a perfectly balanced case.
General Accuracy of Encodings
Before looking in depth into the accuracy, we are going to take a quick look at the general accuracy of the four encodings we have.
In the picture, we can clearly notice a trend:
Categorical Encoding is the clear winner, with an exact 100% accuracy at all times.
Numeric Encoding does an excellent job as long as the number of cardinalities is not too large: from 1,024 cardinalities, its accuracy falls off drastically. Its accuracy over all the tests is around 92%.
One-Hot Encoding does an excellent job like Numeric Encoding, except it falls down very quickly: from 128 cardinalities, it performs consistently worse than Binary Encoding. Its accuracy over all the runs is around 80%.
Binary Encoding is very consistent in performance but not perfect, even with 8,192 cardinalities. Its exact accuracy is approximately 91%.
Balancing Accuracy of Encodings
To look further at the details, we are splitting the balancing ratio (25% and 50%) of the positive label to check for discrepancies.
We are clearly noticing one major trend:
The more the dataset is unbalanced, the more accurate the encodings become.
Inversely: the more the dataset is balanced, the less accurate the encodings become.
We can also notice that our Numeric Encoding degrades faster than Binary Encoding when the dataset becomes more balanced: it becomes worse from 1024 cardinalities on a perfectly balanced dataset, while it becomes worse only from 2048 cardinalities on a 1:3 unbalanced dataset.
For One-Hot Encoding, it is even more pronounced: worse from 256 cardinalities on a perfectly balanced dataset, to 128 cardinalities on a 1:3 unbalanced dataset.
In addition, there seems to be no reason to use One-Hot Encoding over Numeric Encoding according to the picture.
Furthermore, there are three specific trends we can capture:
Numeric Encoding does not give consistent results, as the results fly around a bit (compare the box plot sizes and you will notice it). It seems to require a lot more cardinalities to converge to a stable performance. It also depicts more consistency as the balancing becomes more unbalanced.
One-Hot Encoding is extremely consistent. So consistent there is not much to say about it if you want to approximate the predictive power of a categorical feature alone.
Binary Encoding gets consistent when the cardinality increases, while maintaining a stable performance. It also depicts more consistency as the balancing becomes more unbalanced. Perhaps someone has a mathematical proof of convergence towards a specific limit, given the cardinality and the balancing ratio?
Training Time
The training time is provided here as an example on dense data using rpart. Each model was run 250 times, with the median taken over:
25 times per run for Categorical Encoding.
25 times per run for Numeric Encoding.
1 time per run for One-Hot Encoding (too long…).
10 times per run for Binary Encoding.
It shows why you should avoid One-Hot Encoding on rpart, as the training time of the decision tree literally explodes:
Data is dominated by One-Hot Encoding slowness.
Without the One-Hot Encoding scaling issue, we have the following:
A fairer comparison.
As we can clearly notice, if you are ready to spend a bit more time doing computations, then Numeric and Binary Encodings are fine.
Conclusion (tl;dr)
A simple summary with a picture and two lines of text:
Categorical features with large cardinalities (over 1000): Binary
Categorical features with small cardinalities (less than 1000): Numeric | https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931 | [] | 2017-04-23 20:50:03.775000+00:00 | ['Machine Learning', 'Design', 'Data Science'] |
How Fitness Destroyed My Mental Health | Six months after Eliot was born still, when the physical complications cleared me to return to fitness, I couldn’t. I would pick up a barbell, maybe push out a few reps, and then burst into uncontrollable sobs. I thought I had lost interest in all of it. Telling myself that it happens because it does. You grow and change through life. I missed it but found myself being overwhelmed at the idea of writing out a meal plan, calorie tracking, and having a training plan for myself. I — a trainer, and nutritionist — overwhelmed with it because I am a human experiencing real life, too. I paid money to other people on the internet who promised to help me deal with my eating disorder. It failed. Why? Because none of the ones I could find had their own personal experience with it.
Instead, I created my own “protocol” (by accident) that worked for me.
I unfollowed every fitness account.
If you have ever lost any amount of weight or gotten into a gym program, you likely follow some, if not many, accounts on social media of the gurus in that sphere. Whatever worked for achieving a desirable “fitness” goal is now an IG account full of promoting whatever it is. MLM weight loss products, diets, surgeons, workout programs. Now, here we are with massive amounts of “fitness” accounts from self-made experts.
Despite knowing how to discern fact from Instagram, and perfect body fiction, it was still difficult to navigate. What they served to do instead was make me feel like I was failing nearly every second of every day. How could I ever get back to any of that? I was convinced I never would and never could. (And I know that if it happened to me it happens to all of you, too).
I unfollowed them all. Including my long time friends.
I ate whatever I wanted.
When I felt like it.
When you spend literal years in a caloric deficit via point or container systems without having a professional or the knowledge to calculate proper macronutrients and intake needs for your body, you will damage your metabolism. I did not know this in the beginning years but even after I learned, it was still extremely difficult. My body always reverted to undereating or giving off few (if any) hunger cues during periods of stress. The idea of meal prepping and tracking felt too overwhelming to me, so I didn’t.
It took many months before I did not have extreme food anxiety over this. The panic about what I was putting into my body, how it would show up on the scale, and how far back it would set me ate me alive (pun intended). So, in moderation, I ate whatever sounded good to me. There were truly no good or bad foods for the first time.
I moved my body in whatever way felt good to me.
This meant taking long walks or hikes or doing some heavy squats or deadlifts until it no longer felt good anymore. This also meant I was much less active than I had been previously and sometimes, I did not move it at all.
I spent years dedicating between four and six days a week to a training plan of some kind. My rest days were what we call “active rest” days. Meaning that, even on days without a set plan, I was still gently active (e.g., yoga, walking, hiking, biking, rowing). It took a massive amount of self-restraint to not fall into a trap of feeling like my “training” was pointless when operating at minimal levels like this but eventually, absolving myself of the pressure to perform was a much-needed mental break.
I ditched my Fitbit.
The thing is, there is little to no science behind 10K steps per day. They are inherently estimates and, therefore, often incorrect. Stirring a pot of soup on the stove can be registered as steps regardless of how expensive your fitness device is.
Either way, it is not any measure of self-worth, and I have had many clients also fall victim to this mentality too.
I shifted my definition of fitness.
Most people, including too many in the industry, think of fitness only as physical health. This ideology is how women like me fall into this trap of achieving weight loss, exercise, and then “falling from fitness grace” after a pregnancy, loss, or because life just happens. But fitness is all things with a small mindset shift. It sort of went like this: instead of training sessions, I started viewing my therapy appointments as my workout for that day.
Because…
Fitness is living now. No fluff. Fitness is mental health. Fitness is self-care. Fitness is balance. Fitness is connecting with friends. Fitness is enjoying life in the body you have at this moment.
Not when you lose 10 more pounds. Not when your 1100 calorie diet allows you to purchase those smaller shorts or gifts you a “bikini” body. Not punishing or berating yourself for working out or for not working out.
If you have experienced this burnout or fall from the good graces of all the things that fitness as an industry is, please know it is not a reflection of you or your self-worth.
You can heal. You can love yourself again. You can wear whatever you want and eat whatever you want, too. | https://medium.com/the-virago/how-fitness-destroyed-my-mental-health-78cca41041d0 | ["Leah O'Daniel"] | 2020-12-14 04:14:50.273000+00:00 | ['Women', 'Healing', 'Fertility', 'Mental Health', 'Fitness'] |
The Importance of Seeing our Parents as People | Remove the metaphorical cape, and let them be human.
Photo by Kamila Maciejewska on Unsplash
I don’t know what it’s like trying to balance parenting with work, stress and other relationships. I’ve never experienced being a parent in pain, who has to smile for the kids. I have been the kid, however, who wonders why mommy is sad or takes it personally when Daddy is in a bad mood. We likely all have been, because it’s something kids don’t always understand.
I’ve been that teenager who holds resentment over perceived parental shortcomings — and is mouthy, standoffish and dismissive as a result. I know what it’s like to become an adult, feel no empathy and show no mercy as it relates to the personal failings of my mother and father. Because, I thought of it from the perspective of how I was affected. This made it inexcusable.
On some level, being damaged by those who are supposed to protect us is unjustifiable. Many of us feel like we didn’t stand a chance in some areas, and that’s not fair. Then, when we consider what we’ve endured, who we could have become and what we could have done had it not been for this under-nurtured or abused aspect of our development, it’s easy to hold our parents accountable and be upset — possibly even warranted.
What we don’t consider is how small we’ve made the margin for error and the unattainable standards we may have set. We don’t grow up seeing our parents as people. They’re not Bob and Joann, they’re mommy and daddy. We think they breathe to serve us as though we are the only thing of significance in their lives. This is why we can’t comprehend why they aren’t showing us any attention or don’t feel like playing. Children operate off of the id, which is based on the pleasure principle — the idea that every wishful impulse should be satisfied immediately. Through our lenses, parents have one job. Us.
We never really learn our parents as people who hurt, have fears, desires, and areas of interest completely unrelated to the role that they play in our lives. Without learning there is no understanding. Sometimes our parents are struggling just to survive. They have their own vices and demons. We don’t know this because we don’t know them.
Our parents have personalities that we don’t often see because they brand themselves as mommy or daddy in our eyes. They behave in front of us the way that they wish to be perceived, as someone good, our provider and comforter. They shield us from moments of weakness and usually want to be our real-life superhero. It would kill them if the mask went away and we realized that they are deeply flawed. So, the sides of them to which we are exposed can be limited.
Parents rarely discuss with us their poor decisions or the unflattering behavior they’ve demonstrated. They aim to display a model example for us to follow, and of which to be proud. Sometimes for our protection, sometimes for theirs. Nonetheless, in a sense, we make them superhuman. This is why we’re shocked and in disbelief when we get older and learn of questionable deeds a parent has done. We don’t know the person who would do such things.
Perhaps, it would be more effective to set an example that is simply true. That’s tough to do, though. Parents know that no matter what else goes on in their lives, regardless of who out in the world believes they are useless, they can come home and look into the eyes of a little boy or girl who thinks they’re worth something. We all need that feeling. I really can’t blame anyone for not wanting to disturb such a sacred space. The irony is, the kid with the noticeably imperfect mother doesn’t love her any less than the one who loves a seemingly flawless mother.
When a parent inevitably falls from that pedestal or fails altogether to reach the apex that we’ve established, it can be difficult to recover. We don’t have anything else to hold on to. Being our mom or dad is the only point of reference we have for them. This brings me back to my mention of viewing a parent’s shortcomings from the perspective of how it affects us. When we only know them inside of this box, that is what happens. We don’t consider what they may have been going through that contributed to their falling.
A great deal of responsibility comes with being a parent and is not to be taken lightly. Children are defenseless. We depend on our parents to meet our needs and prepare us for future phases of life. We then become adults, look back and critique the job that they did. Some get better grades than others. Some offered lackluster effort, while others tried harder and did more. Every circumstance is different and some of us have been subjected to such horrific upbringings that we’ll likely never be able to separate the human from the parent. For the rest of us, it is possible and I can attest to the positive impact that it will have on that relationship.
The thing about the id is that it does not progress with time or experience. It remains selfish in nature and isn’t impacted by reality or logic because it operates within our unconscious mind. So, it’s still there. We just evolve to a point where we can override and not be controlled by it. It is the id that makes determinations about another person, parents included, solely from a self-serving vantage point. If we were to put it in check we’d recognize, although we may not have explicitly witnessed, that parents are people, who happen to have children. | https://acamea.medium.com/the-importance-of-seeing-our-parents-as-people-8a505e90a138 | ['Acamea Deadwiler'] | 2019-03-13 07:52:45.458000+00:00 | ['Life Lessons', 'Family', 'Forgiveness', 'Parenting', 'Psychology'] |
How and Why to Add a Storybook.js Design System to Your Existing React Application | Start Adding Non-Trivial Components As You Work
Whenever you are updating the design of a component, it is good practice to add a story file to Storybook for it. In 80% of cases, it takes just a few minutes to add a *.stories.js file.
“A story captures the rendered state of a UI component. Developers write multiple stories per component that describe all the ‘interesting’ states a component can support.” — Storybook docs
You will typically have one story for each state that your component has:
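(The embedded code example from the original post is not reproduced here; the following is a hedged reconstruction of what such a minimal story file might look like. The Button component and the story names are illustrative, not the author's actual code.)

```jsx
// Button.stories.js — hypothetical reconstruction, not the original gist.
import React from 'react';
import { Button } from './Button';

export default {
  title: 'Button',
  component: Button,
};

// One exported story per interesting state of the component.
export const Default = () => <Button label="Click me" />;
export const Disabled = () => <Button label="Click me" disabled />;
```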
This is as basic as a story can get and doesn’t take advantage of Storybook’s additional tooling, which I will describe a bit later on.
If a story has any complications, like a dependency on a context you haven’t had to mock yet or some other mysterious issue (they do occasionally crop up), use your best judgment on whether making the story work is worth delaying the work you were actually doing. If it’s just some small content change on some page, then it’s probably not worth the distraction.
You have two options.
1. Refactor your components and export a stateless version
This is often the best approach when building new components. These sorts of outside dependencies are often done unintentionally. They were not being built with re-use or modularity in mind. The time spent refactoring could make it easier to demonstrate hard-to-access states in components and will effectively serve as a set of visual unit tests for your components.
Once your component exports a functional stateless UI component, you can add it to Storybook:
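(Again, the original gist is missing; below is a hedged reconstruction based on the surrounding prose. The story names header and authenticatedHeader and the userProfile prop come from the text, while the userProfile shape shown is an assumption.)

```jsx
// Header.stories.js — hypothetical reconstruction based on the prose.
import React from 'react';
import { Header } from './Header';

export default {
  title: 'Header',
  component: Header,
};

const Template = (args) => <Header {...args} />;

export const header = Template.bind({});
header.args = {}; // logged-out state: no userProfile prop

export const authenticatedHeader = Template.bind({});
authenticatedHeader.args = {
  userProfile: { name: 'Jane Doe' }, // assumed shape, for illustration
};
```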
In this example, you will notice that we are using Storybook’s args. These can get complicated, but they are essentially a more portable means of setting the props for a particular story.
In this example, the Header component has been made into a stateless component and the two states ( header and authenticatedHeader ) each have their own story, with the userProfile object being passed in as a prop.
2. Work with what you have and make a provider for it
If the component you’re working with is a higher-order organism component like a page with nested connected components, you may need to wrap it in a provider.
This could either be a mock or even an actual instance of the dependency. This means you will be displaying the components similarly to how they are used in the application, and it will likely be the easiest way to get it working.
For example, let’s say you are working on a React/Redux application and want to add your page component that contains nested connected components and perhaps some React Router links. This would require you to include a Redux context and React Router context:
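(The original gist is missing here too; this is a hedged sketch of what wrapping a page story in Redux and React Router providers via decorators might look like. The page name and the store import path are assumptions.)

```jsx
// ProfilePage.stories.js — hypothetical sketch; page name and store path assumed.
import React from 'react';
import { Provider } from 'react-redux';
import { MemoryRouter } from 'react-router-dom';
import { store } from '../store'; // the app's real store, as the text describes
import { ProfilePage } from './ProfilePage';

export default {
  title: 'Pages/ProfilePage',
  component: ProfilePage,
  decorators: [
    (Story) => (
      <Provider store={store}>
        <MemoryRouter>
          <Story />
        </MemoryRouter>
      </Provider>
    ),
  ],
};

export const LoggedIn = () => <ProfilePage />;
```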
You’ll notice that I’ve done this with decorators. These are simple functions that can modify each story with extra markup and more. In this case, it makes for a pretty clean way to add multiple providers.
To display different states, you can dispatch simple events yourself:
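(Another missing gist; here is a hedged sketch of dispatching against the shared store before rendering. The logout action creator name and import paths are assumptions based on the text.)

```jsx
// A story that resets the shared global store to show the logged-out state.
import React from 'react';
import { store } from '../store';
import { logout } from '../actions/auth'; // assumed action creator
import { ProfilePage } from './ProfilePage';

export const LoggedOut = () => {
  store.dispatch(logout()); // shared store, so reset state before rendering
  return <ProfilePage />;
};
```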
You’ll notice that I had to dispatch a logout action here. This is because, in this case, my stories each share the same global instance of the store.
However, if we were to take this a step further and mock our store, then we could create a reset method. In this case, I didn’t have the time or need for a mock — and I want to keep things simple — so I simply used my existing store.
Remember, we’re trying to be pragmatic here. We just want a single pristine example of how our component is used. | https://medium.com/better-programming/how-and-why-to-add-a-storybook-js-design-system-to-your-existing-react-application-fece8afb0d00 | ['Matthew Weeks'] | 2020-10-20 15:19:22.567000+00:00 | ['JavaScript', 'React', 'Developer Tools', 'Nodejs', 'Programming'] |
Blue Macaws Are The Gardeners Of The Forest | Additionally, dispersal distances varied between the five main palm species, too, and curiously, these distances were unrelated to fruit size.
Anodorhynchus macaws and palms evolved together
The dependence of the Anodorhynchus macaws on the seeds of palms as well as their role as dispersers of their seeds suggests this led to an evolutionary arms race between the parrots and the palms. Specifically, the palms evolved an extremely hard coat around their fruits to avoid seed predation by the macaws. The macaws evolved ever-larger and more powerful beaks to crack the palms’ strong nut coats so they could consume the seeds contained within. The result of this arms race may have been exploited by the palms to exclude all seed predators except the Anodorhynchus macaws, thereby ensuring that some of their seeds were dispersed over long distances by these strong and highly mobile flyers.
A pair of Lear’s macaws (Anodorhynchus leari) in flight. (Credit: Joao Quental / CC BY 2.0)
The evolutionary relationship between the Anodorhynchus macaws and their palms is apparently an ancient one. Multiple molecular studies have identified when parrots first originated and have found many of their lineages began speciating rapidly around 66 million years ago (i.e., ref). This coincides with an important time in palm evolution, too, where there was extensive speciation and changes in the traits of their fruits, both of which could have resulted from coevolution with parrots. In contrast, extinct mammalian megafauna arose much later than both the parrots and the palms whose seeds they presumably dispersed.
This study argues that the Anodorhynchus macaws perform an important long-distance air-delivery service that large-fruited palms rely upon to maintain gene flow between isolated groups and to regenerate palm stands. In view of the severe range contractions and extinctions that the Anodorhynchus macaw species suffered, their mutually beneficial interactions with a variety of palm species have surely disappeared from large portions of their former ranges. This recent history could be the cause of currently observed spatial and genetic arrangements of palm seedlings versus adult plants (ref).
Local and global extinctions of the Anodorhynchus macaws could provide important test cases for gaining a better understanding of the effects of disruption of seed dispersal on palms. Currently, there is one reintroduction project for Lear’s macaw (ref), and this could be used for assessing the effects of recovery of ecological functions.
Anodorhynchus macaws are globally threatened species
Both of the Anodorhynchus macaws have suffered dramatic population decreases in recent decades as well as reductions in their geographic ranges and thus, are classified as threatened by the IUCN Red List. The hyacinth macaw has a total estimated population of just 6500 individuals, distributed between three regions of the Pantanal, Cerrado, and Amazonia (Figure 1), probably with no genetic flow between them (ref). | https://medium.com/swlh/blue-macaws-are-the-gardeners-of-the-forest-bf6c10e0c861 | ['𝐆𝐫𝐫𝐥𝐒𝐜𝐢𝐞𝐧𝐭𝐢𝐬𝐭', 'Scientist'] | 2020-04-20 17:13:41.010000+00:00 | ['Palm Trees', 'Evolution', 'Ecology', 'Ornithology', 'Science'] |
Pollinating drones: Cool patent. Wise investment? | By now you’ve heard the news: Walmart has applied for a patent for drones to pollinate crops. In fact, Walmart Stores Inc. has applied for several patents for unmanned vehicles that would be used for a variety of farming purposes. But the drone that would collect pollen from one flower and dispense it to another is what gained the most attention.
Despite all the headlines and several references to Black Mirror, the systems described in the application are pretty mundane; a long, broad list of the possible “embodiments” for a pollinator drone. And it shouldn’t be surprising that Walmart would want to hold such a patent: The retailer is now in the business of food, big time. If there’s one thing Walmart aspires to do exceedingly well, it’s minimize the risks in its supply chain. With the continued impact of colony collapse disorder among honey bees and the mounting evidence of population declines among wild bees — and a significant portion of the fruits and vegetables we eat dependent on the transfer of pollen — it seems pretty obvious why Walmart might be interested in building and maintaining a fleet of pollinator drones.
So what might it cost Walmart to do this? Because the patent application is filled with general possibilities and the company has decided not to share any further details, we can only make an educated guess for now. However, current real-world efforts in this area are focused on tiny drones acting like individual bees. So let’s presume that the retailer would build on this work instead of starting from scratch.
Last year David Goulson, professor of biology at the University of Sussex and founder of the Bumblebee Conservation Trust in the United Kingdom, wrote a blog post about robotic bees. In it, he addressed this question of expense:
“While I can see the intellectual interest in trying to create robotic bees, I would argue that it is exceedingly unlikely that we could ever produce something as cheap or as effective as bees themselves… Consider just the numbers; there are roughly 80 million honeybee hives in the world, each containing perhaps 40,000 bees through the spring and summer. That adds up to 3.2 trillion bees. They feed themselves for free, breed for free, and even give us honey as a bonus. What would the cost be of replacing them with robots?”
Goulson goes on to do the math, saying that if it cost just a single penny a piece to build these drones (“which seems absurdly optimistic”), it would cost £32 billion ($45 billion) to replace every honey bee. To say nothing of what it would cost to research, develop and ultimately maintain (or regularly replace) this fleet of drones.
Applying this same basic calculation (with a few adjustments) to our American context, the cost for Walmart might look something like this:
There were 2.62 million honey bee colonies in the United States at the beginning of 2017, according to the USDA. An average colony has around 60,000 bees in it.
Only about a third of a colony are foraging bees, the ones who facilitate pollination in their daily hunt. Say 20,000 for an average colony.
That means there are approximately 52.4 billion honey bees that might need to be replaced in the U.S.
At one cent a piece, it would cost $524 million just to manufacture this many pollinator drones.
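For the spreadsheet-averse, the back-of-the-envelope arithmetic above is easy to check in a few lines of Python (the inputs are the estimates quoted above, so treat the figures as rough):

```python
colonies = 2_620_000           # U.S. honey bee colonies at the start of 2017 (USDA)
foragers_per_colony = 20_000   # roughly a third of a 60,000-bee colony

total_foragers = colonies * foragers_per_colony   # 52,400,000,000 bees
cost_dollars = total_foragers // 100              # one cent apiece, in dollars

print(f"{total_foragers:,} foragers")   # 52,400,000,000 foragers
print(f"${cost_dollars:,}")             # $524,000,000
```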
Of course, replacing all of the foraging honey bees might not be what Walmart intends to do. After all, it’s not as if every single honey bee is going to disappear instantly at the same moment. Walmart may only need to make up for the loss of some of those bees. Let’s say the retailer plans for its drones to cover the work of just 10 percent of the total number of colonies in the United States. In that case, it would cost $52.4 million to manufacture enough drones.
These calculations do not include the R&D, maintenance and disposal costs of the drones. (Yes, disposal costs. Who’s going to clean up all these things when they break and fall to the ground?) Nor do these calculations include what it might cost to also replace the wild bees and other pollinators that do a significant amount of pollination work for farms. Even for a company that had annual net income of $9 billion in 2017, the costs associated with a fleet of pollinator drones could add up quickly. Especially if they cost more than a penny to manufacture: even a dollar-a-piece production cost (which also seems absurdly optimistic) would suddenly inflate the initial price tag into the billions of dollars.
Yes, Walmart reported over $485 billion in total revenue for 2017, and food sales account for more than half of that amount. So in that light, even a $5 billion price tag for UAVs might seem to be a reasonable investment. However, the company is planning to spend only around $11 billion for capital expenses in 2018; how quickly would a pollinator drone program (even an absurdly inexpensive one) consume the company’s capex budget?
Now consider this other point that Goulson made in his post: Real-live bees feed themselves, groom themselves, and reproduce essentially for free.
So why not take care of the pollinators we already have and give the simple sum of $52.4 million to conservation organizations instead? With that amount of money, what could be accomplished in just a single year?
“It is actually quite hard to work out what conservationists might do with such a vast sum of money,” says Goulson via email. For example, the Xerces Society for Invertebrate Conservation spent $1.9 million on its pollinator conservation programs in 2016. “Fifty-two million dollars, let alone $520 million, seem like unimaginable sums,” says Goulson. “If they did have this sort of money, there is no doubt that they could run a vast program of outreach and habitat creation that would benefit all terrestrial life, not just bees.”
“If we consider the fact that an aesthetically-beautiful, wildlife-supporting, carbon-sequestering and oxygen-producing wildflower meadow can be established in most parts of the U.S. for around 10 cents per square foot, then the best return on investments is clear,” says Eric Lee-Mäder, Pollinator Program co-director for the Xerces Society. “We should all plant wildflower-rich habitats rather than build drones that don’t do any of those things.”
At a rate of 10 cents per square foot, $52.4 million could plant about 12,000 acres — over 9,000 football fields — in pollinator habitat.
In the United Kingdom, says Goulson, farmers can get subsidies of around $600 per hectare (2.47 acres) to create high-quality habitat for bees on their land. If a program with the same incentive were offered in the U.S., $52.4 million would pay for over 215,000 acres to be planted (over 160,000 football fields).
“There is no doubt that this would be an enormous boost to pollinator populations, and to wildlife generally,” says Goulson. With a caveat. “But neither I nor anybody else could tell you whether it would be ENOUGH to permanently maintain healthy pollinator populations across the U.S.A.”
So what if Walmart gave $52.4 million every year for several years to conservation organizations?
After reaching out to the company several times, Walmart ultimately provided the following response to the specific calculations and questions in this story: “As mentioned, we’re always thinking about new concepts and ways that will help us further enhance how we service customers, but we don’t have any further details to share on these patents at this time.”
If Walmart hasn’t crunched the numbers on pollinator drones, it will be interesting to see exactly what concepts the company starts thinking about once it does. But if the retailer has already determined that a fleet of drones is good for the bottom line, the details of those plans are going to be equally fascinating to see. | https://medium.com/the-bee-report/pollinating-drones-cool-patent-wise-investment-1ed2e077851 | ['Matt Kelly'] | 2018-09-30 12:12:21.662000+00:00 | ['Science', 'Agriculture', 'Bees', 'Economics'] |
Error Handling in my Flutter App | Again, looking closely at the runApp() function, we see, if nothing is passed as parameters, the Firebase Crashlytics routines are assigned to handle and report exceptions. The runApp() function displayed above then calls the runApp() functions displayed below. Yet, it too is not the function you’re familiar with. However, this second runApp() function does eventually call the original runApp() function supplied by Flutter, but not before wrapping your app around an error handler. How this error handler works is what we’re going to examine, today.
In The Zone
We’re going to walk-through the code and examine the series of events that occur when an error is triggered in the app. This error will occur when the runApp() function is running and the app is first starting up. In the screenshot above, you can see the error routine, runZonedError, is highlighted. It’s called when an error occurs at startup.
In the screenshot below, you’ll notice I’ve commented-out the Firebase Crashlytics routines for now. Hence, they won't be used in this instance. Let’s instead see what the ‘default behaviour’ is when an error occurs at startup.
Below, is a screenshot of the error routine, runZonedError. When an exception does occur, it merely passes the generated Exception and StackTrace objects to the routine, _debugReportException(). It is this routine that then produces a ‘FlutterErrorDetails’ object and finally passes it to the Flutter framework’s error handler. Of course, since this app uses the MVC framework, the developer is able to define their own error handler.
To Make An Error
To cause the error, I’ve decided to mess with the very framework I use with all my apps, and change some code to intentionally cause an error — I’ll be sure to correct it after we’re done with this exercise.
Listen To The Connection
In this framework, you can assign 'listeners' to be triggered if and when the device's connectivity status changes for one reason or another, for example, when you turn off the wifi on your phone. The class, App, has such a routine to add a listener, and it's found right in its constructor. As you recall, the App class is extended and called in the main.dart file. A screenshot of this class and its constructor is displayed below.
I made a quick change to the code to cause an error at startup. The function, addConnectivityListener(), is called every time the Flutter app starts up. In most instances, there won’t be a ‘listener’ specified and so there’s an if statement there to prevent any problems. You see, one can’t assign null as a listener, but I’ve commented out that if statement. The app is not going to like that.
Report The Error
And so, in the screenshot below, we’re back where the app is just about to call Flutter’s own error handler. It had started up but quickly encountered that error while running Flutter’s runApp() function. In the screenshot below, we see the FlutterErrorDetails object was created and, again, is just about to be passed to Flutter’s reportError() function. As you know from your own experiences with Flutter, this usually results in the ‘Red Screen of Death’ or at the very least, the error message and stack trace displayed in your IDE’s console screen.
Let’s take a quick peek inside Flutter’s reportError() function. You can see that it, in turn, calls the routine assigned to the static variable, onError. Note that the reportError() function is called almost every time an exception occurs in your Flutter app.
Dump The Error
By design, the default behavior for the static function, FlutterError.onError, is to call yet another static function, dumpErrorToConsole. A very descriptive name — you’re familiar with this as it lists the error right out on the IDE’s console screen. In fact, below is a screenshot of this original routine.
However, in this framework, it’s the function below that’s called instead. Below is a screenshot with a breakpoint stopping the running code. In this MVC framework, error handling is meant to be as versatile as possible. For example, the onError() function below could have been overridden by the developer to customize the error handling. Alternatively, you can see the ‘App Controller’, con, has its own error handler; a developer could have customized that error handler instead. Options.
Let’s now look at that AppController’s error handler. We can see that if you didn’t override it with your own error routine, it turns to its associated State object and its error handler. At last count, that’s three places in the framework where error handling can be adapted to the specific requirements of a Flutter app. Three places the developer or developers could have overridden. Good to have options.
By design, in this framework, each State object can have its own error handler. Each State object can have any number of Controllers associated with it, and it’s standard for each Controller to then rely on the State object’s error handler — and that’s what you’re seeing in this code. At this point, the State object can either have its own error handler defined or will simply use the ‘old’ or ‘default’ error handler. As it happens, in this case, it’s the old static function, dumpErrorToConsole, that’s the default error handler. Let’s see how this all works.
And so, when the instance variable, handler, is executed in the screenshot above, the old static function, dumpErrorToConsole, is indeed called and the error listed in the IDE’s console window as you see below. | https://medium.com/follow-flutter/error-handling-in-my-flutter-app-1279ec681e90 | ['Greg Perry'] | 2020-11-10 21:59:32.464000+00:00 | ['Mobile App Development', 'iOS App Development', 'Flutter', 'Programming', 'Android App Development'] |
Five PyTorch functions recommended for machine learning beginners | PyTorch is a free and opensource machine learning library based on the Torch library. It has numerous in-built functions that allows data scientists to build machine learning algorithms with the shortest lines of code. In this article, I will outline and explain five interesting functions that every beginner in the field of data science will find useful when it comes to building models using PyTorch. These functions include;
torch.cat()
torch.randn()
torch.split()
torch.reshape()
torch.transpose()
Before we begin, let’s install and import PyTorch
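Assuming a standard Python environment, the setup might look something like this (the pip line is a typical install command; adjust it for your platform):

```python
# pip install torch   (run once in your shell/environment)
import torch

print(torch.__version__)
```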
Function 1 — torch.cat()
The torch.cat() function concatenates two or more tensors (which must have matching shapes except in the concatenation dimension) into a single result tensor.
This example shows the use of torch.cat() to join two tensors into a single output tensor.
In this second example, we use torch.cat() function with an optional dim=dimension argument. Expected dimension values are 0 and 1. The 0 dimension makes the concatenation to occur along the vertical axis where as the 1 dimension makes the concatenation to occur along the horizontal axis. If not specified, the default dimension is 0 and all specified tensor are concatenated along the vertical axis.
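Since the original notebook cells aren't reproduced here, a minimal sketch of these first two examples might look like this (tensor values are illustrative):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])

# Default: concatenate along dim=0, the vertical axis
vertical = torch.cat((a, b))           # shape: (4, 2)

# dim=1 concatenates along the horizontal axis
horizontal = torch.cat((a, b), dim=1)  # shape: (2, 4)
```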
In example 3, the torch.cat() function fails to execute simply because of invalid argument combinations provided. The function expect a tuple of tensors and not individual tensors as arguments. It is easy to forget to pass the tensors as a tuple hence failure to execute.
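That failure mode is easy to reproduce: passing the tensors individually instead of wrapped in a tuple (or list) raises a TypeError:

```python
import torch

a = torch.zeros(2, 2)
b = torch.ones(2, 2)

try:
    torch.cat(a, b)   # wrong: tensors must be wrapped in a tuple or list
except TypeError as err:
    print("TypeError:", err)

result = torch.cat((a, b))   # correct: shape (4, 2)
```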
For beginners, joining tensors is a key requirement when manipulating datasets, so knowing how and when to do it is an important step in the right direction.
Function 2 — torch.randn()
The torch.randn() function generates a tensor filled with random numbers drawn from a standard normal distribution. The function syntax is shown below:
Syntax: torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
where the size argument is required argument which specifies the sequence of integers defining the size of the output tensor.
The rest of the function arguments are optional.
The examples below demonstrate the use of torch.randn().
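A sketch of the first two examples (shapes match the descriptions that follow; the third example simply omits the required size argument, which fails):

```python
import torch

# Example 1: random values from a standard normal distribution, shape (2, 3, 2)
x = torch.randn(2, 3, 2)

# Example 2: a 4 x 3 x 2 tensor that also records gradients for later use
y = torch.randn(4, 3, 2, requires_grad=True)

print(x.shape)           # torch.Size([2, 3, 2])
print(y.requires_grad)   # True
```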
Example 1 above shows the basic use of torch.randn() to generate a tensor of shape (2, 3, 2). The values are randomly generated and include both positive and negative numbers.
In example 2, torch.randn() returns a 4 x 3 x 2 tensor for which the gradient is also recorded for later use.
Example 3 demonstrates that when using torch.randn(), at least one argument must be provided for the function to correctly execute and return results. This is important because the returned tensor will need to have an expected shape for it to be useful and that shape must be specified in advance.
The torch.randn() function is a very useful function that every beginner should learn. It is commonly used to generate the initial weight matrices and bias vectors needed to train models across different machine learning algorithms.
Function 3 — torch.split()
The torch.split() function splits a tensor into chunks based on the specified split size. The function accepts the arguments specified in the syntax below.
Syntax: torch.split(tensor, split_size_or_sections, dim=0)
where tensor is the input tensor to be split, split_size_or_sections is either an integer or a list specifying the chunk sizes, and the dim argument specifies the dimension along which to split, which can be either 0 or 1.
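For instance, splitting an 8 x 2 tensor into chunks of four rows each, as in the first example, might look like:

```python
import torch

x = torch.randn(8, 2)

chunks = torch.split(x, 4)   # split_size=4 → chunks of 4 rows each
print(len(chunks))           # 2
print(chunks[0].shape)       # torch.Size([4, 2])
```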
In this example, we have generated an 8 x 2 tensor and managed to split the tensor into two chunks using the size argument value 4.
It is important to note that when specifying the size argument, the value specified refers to the size of the output tensor chunk and not the number of chunks returned.
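Splitting with a list of section sizes, as in the second example, could look like this:

```python
import torch

y = torch.randn(4, 2)

first, second = torch.split(y, [1, 3])  # sections must sum to 4 (size of dim 0)
print(first.shape)    # torch.Size([1, 2])
print(second.shape)   # torch.Size([3, 2])
```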
In example 2 above, we are splitting a tensor by specifying a list of sections to split at. It is important to note that the sum of the specified values in the list must equal the length of the input tensor along the split dimension.
For example, the shape of the input tensor is 4 x 2 and therefore the contents of the split list [1,3] must add up to 4. The first output will have one row and two columns where as the last output will have three rows and two columns.
In example 3, we have an input tensor with a 4 x 2 shape and we have specified [2,1] as the split sections list. We get an error because the sum of the values in our splitting list (3) does not equal the length of the tensor along the split dimension, which is 2 when splitting along dim=1.
If we instead specified [1,1] as the splitting list, the function would execute, since 1+1=2 equals the length of the input tensor along dim=1. Alternatively, [3,1] would work along dim=0, since 3+1=4 matches the size of that dimension.
The torch.split() function is an important one to understand, as it is quite useful in the generation, testing, and training of machine learning models. In most cases, input data will need to be split into training and testing datasets, and torch.split() is commonly the function to use.
Function 4 — torch.reshape()
The torch.reshape() function returns a tensor with the same data and number of elements as input, but with the specified shape. The function simply changes the shape of a tensor from the original shape to the specified shape.
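The three examples discussed below can be sketched as follows (note that torch.arange(10) actually yields a 1-D tensor of ten elements):

```python
import torch

# Example 1: ten elements reshaped into a 2 x 5 tensor
x = torch.arange(10)
a = torch.reshape(x, (2, 5))        # shape: (2, 5)

# Example 2: a (3, 1, 4) tensor reshaped to (3, 2, 2) — both hold 12 elements
b = torch.reshape(torch.randn(3, 1, 4), (3, 2, 2))

# Example 3: incompatible shapes raise an error (10 elements cannot fill 4 x 2)
try:
    torch.reshape(torch.randn(1, 10), (4, 2))
except RuntimeError as err:
    print("RuntimeError:", err)
```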
In this example, we generate a 1 x 10 tensor using the torch.arange() function and then we reshape the tensor into a 2 x 5 tensor. It is important to note that the specified shape must be compatible with the original shape for the function to return the desired result without generating an error.
In example 2 above, we generate a 3-dimensional tensor with the shape (3, 1, 4) and reshape it into a (3, 2, 2) tensor. The most important factor here is to ensure that the shape of the resulting tensor is compatible with the input tensor.
In example 3, we try to reshape a (1,10) tensor into a (4,2) tensor and we get an error message because the two provided shapes are not compatible.
Reshaping a tensor from one shape to another is a useful operation for machine learning tasks. The point to note is the fact that the input and output tensor shapes must be compatible in other words, the product of the elements in the provided shapes for both input and output tensors must always be the same.
Function 5 — torch.transpose()
The torch.transpose() function swaps two specified dimensions of a tensor. This operation is useful in the multiplication of matrices and vectors. The syntax for the transpose function is shown below:
syntax: torch.transpose(input, dim0, dim1)
where input is the original tensor to be transposed, dim0 is the dimension to transpose from, and dim1 is the dimension to transpose to.
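Both examples below can be sketched like this:

```python
import torch

# Example 1: swap the two axes of a 2 x 3 tensor
m = torch.randn(2, 3)
mt = torch.transpose(m, 0, 1)   # shape: (3, 2)

# Example 2: a (2, 3, 2) tensor becomes (3, 2, 2) when dims 0 and 1 swap
t = torch.randn(2, 3, 2)
tt = torch.transpose(t, 0, 1)

print(mt.shape)   # torch.Size([3, 2])
print(tt.shape)   # torch.Size([3, 2, 2])
```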
In this simple example, we transpose a 2-dimensional 2 x 3 tensor generated by the randn() function from the horizontal axis to the vertical axis, i.e. from dim=0 to dim=1. The result is that the contents of the first row become the contents of the first column in the resulting tensor.
In example 2 above, we are transposing a 3-dimensional tensor with the shape (2, 3, 2) between dimensions 0 and 1. An interesting comparison of the input and output tensors shows that when dimensions 0 and 1 of a 3-dimensional tensor are swapped, an input tensor of shape (x, y, z) produces a result of shape (y, x, z).
The difference between transpose and reshape in this case is the arrangement of values in the resulting tensor.
Example three demonstrates that all the arguments of the torch.transpose() function are required and must be specified for PyTorch code containing a torch.transpose() call to execute successfully.
As mentioned already, torch.transpose() is one of the key pytorch functions that every data science beginner ought to understand due to its wide application in machine learning model development.
Conclusion
In this notebook, the focus was on the five most important PyTorch functions that every data science beginner ought to learn. Starting with torch.cat(), we demonstrated the importance of joining tensors along either the vertical or horizontal axis.
We have also used the torch.randn() function extensively throughout this notebook for generating sample tensors in the various examples. This shows that a beginner does not have to manually type sample tensors for learning purposes, but can quickly generate samples to speed up the learning process.
We have also seen how to split tensors, which is the opposite of concatenating them, as well as how to reshape and transpose tensors. I hope that by reading through this article, you will be able to quickly grasp the concepts illustrated by the numerous examples herein.
Why you should use REACT Native to Develop your app? | REACT native
At first, like most people, I was skeptical of using Facebook's React. Demos of JSX, React's extension of the JavaScript language, made some developers uneasy. For years, people had worked with HTML and JavaScript separately, until React seemingly combined them. Some also questioned whether another client-side library was really necessary among all the existing libraries.
As it turned out, React proved to be the best solution for our own projects as well as for our clients. Even companies like Netflix or Skype use React. With React Native, the framework is also perfect for app programming. React Native is a great alternative for developing performance-strong apps that feel at home on both Android and iOS.
Let me now give you an overview of the framework and report on my favorite features. React is described by its developers as "a JavaScript library for building user interfaces". React puts its focus on the view layer of the application. That is, when writing React code, you write React components: small pieces that describe how the app will look.
Consider a small example that can be used to display a simple button.
const Button = React.createClass({
  propTypes: {
    onPress: React.PropTypes.func.isRequired,
    text: React.PropTypes.string.isRequired,
  },

  render() {
    return (
      <TouchableOpacity onPress={this.props.onPress} style={…}>
        <Text style={…}>{this.props.text}</Text>
      </TouchableOpacity>
    );
  },
});
This Button component takes two pieces of input data: onPress, a callback function for when the button is pressed; and text, which determines what the button displays.
The XML-like structure is called JSX, which is syntactic sugar for React function calls. And TouchableOpacity and Text are existing components that React Native provides.
After the creation of the Button component, it can be used again and again in the application, with recurring style and behavior.
This simple example demonstrates how a React app is built. “Piece by piece.” Even though the app grows in function, the individual components remain understandable and manageable at every level.
Truly Native
Most apps written with JavaScript use Cordova, or a framework that builds on it, such as the popular Ionic or Sencha Touch frameworks.
But no Cordova app will ever get the feel of a real native app. Scrolling, keyboard behavior, and navigation can bring frustrating experiences if they do not work as expected. Even though you still write JavaScript in React Native, the components are rendered as native platform widgets. And if you've written apps in Java or Objective-C, you'll spot some React Native components right away.
Ease of learning
One of React's biggest strengths is readability, even for programmers who have not worked with it yet. Many other frameworks first require you to learn a long list of concepts that have nothing to do with the language itself. As an example, let's compare how a list of friends is rendered in React Native and in Ionic (AngularJS).
Ionic uses the ngRepeat directive.
Let's assume we have an array of friends. Each entry contains the following fields: first_name, last_name, and is_online. But we only want to show those who are currently online. Here is our controller:
function FriendCtrl($scope) {
  $scope.friends = [
    {
      first_name: 'John',
      last_name: 'Doe',
      is_online: true,
    },
    {
      first_name: 'Jane',
      last_name: 'Doe',
      is_online: true,
    },
    {
      first_name: 'Foo',
      last_name: 'Bar',
      is_online: false,
    },
  ];
}
And here is our view:
<div ng-controller="FriendCtrl">
  <ul>
    <li ng-repeat="friend in friends | filter: {is_online: true}">
      {{friend.last_name}}, {{friend.first_name}}
    </li>
  </ul>
</div>
But if you are unfamiliar with Ionic/AngularJS, you will immediately ask yourself a few questions:
What is $scope? What is the syntax for a filter? And how can I add more behavior, such as sorting the friends list?
With React Native, you can fall back on existing knowledge of the language, using the filter and map functions.
const DemoComponent = React.createClass({
  render() {
    const friends = [
      { first_name: 'John', last_name: 'Doe', is_online: true },
      { first_name: 'Jane', last_name: 'Doe', is_online: true },
      { first_name: 'Foo', last_name: 'Bar', is_online: false },
    ];
    return (
      <View>
        {friends
          .filter(f => f.is_online)
          .map(f => <Text>{f.last_name}, {f.first_name}</Text>)}
      </View>
    );
  },
});
Since the main part is plain JavaScript and only a few small parts differ, React Native is easier to understand for all programmers and is approachable even for beginners. React is also a very good learning tool: if you do not yet know how to use map or filter, React brings you closer to these functions.
Developer experiences
"Happy developers are productive developers," and React Native offers a good developer experience. Instead of constantly waiting for the code to compile and restarting the app every time you make a small change, changes to your React Native codebase appear in the running app.
And if you've been working with JavaScript, you probably also know the Chrome Developer Tools. While running React Native in developer mode, you can attach it to your desktop Chrome browser, so you can use the debugger and profiling tools at the same time. React Native uses native flexbox for layout. While every layout engine is different, React Native's support for flexbox means you can use the same layout code for Android, iOS, and the web.
CODE SHARING
We've already looked at how code can be shared between iOS and Android via React, and the same is possible on the web. Everything in React that is not bound to a native component is already shareable. Imagine an app that can be rendered on servers, in web browsers, or on Android and iOS, all powered by a shared codebase. We're not there yet, but the community is working toward it.
Our conclusion
Due to the ease of development, the quality of the apps, and the breadth of the platform and its ecosystem, our team at Elitech Systems has always enjoyed learning and developing with React Native. If you want to learn more about React Native, click on the links below. If you would like to develop an app in React Native but do not have the time, just contact our team and we'll be happy to help you anytime.
Why do people fall in love with Python? | Most of the people think the word “python” is a snake but it is the most popular programming language. Everyone will fall in love with life and some people may face an END in love, but those who fall in love with python there is no END (i.e “;”) with it because python programming has no Semicolon. So, people fall in love with Python programming.
Less coding:
Compared to other languages, Python requires less code. Its syntax is simple and readable, which makes it more user-friendly. Logic that requires seven lines of code in another language can often be written in three lines of Python, which makes it more efficient.
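A classic illustration: swapping two variables takes a temporary variable and several statements in many languages, but only one line in Python, and building a list takes a single expression:

```python
# Swap two variables in one line — no temporary variable needed
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1

# Build a list of squares in a single expression
squares = [n * n for n in range(5)]
print(squares)  # [0, 1, 4, 9, 16]
```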
It is Free:
Python is an open-source language: using it requires no custom-built platform or subscription, so any desktop or laptop is compatible with Python. Its modules and libraries are also absolutely free to use.
It Is One Of The Most Trending Languages:
There are a number of programming languages, such as C, C++, Java, and PHP, but Python is among the most trending languages in the software market. Businesses use Python for development, and it has a growing impact on programming.
Improved Programmer’s Productivity:
The Python programming language has extensive support libraries and an object-oriented design that increase a programmer's productivity many-fold compared to languages like Java, Perl, C, and C++.
Integration Feature:
Python supports enterprise application integration, which makes it easy to develop web services. It has powerful control capabilities, as it can call into C, C++, and Java directly. Python also processes XML and other markup languages, and the same code runs on all modern operating systems.
Jenkins on Kubernetes: From Zero to Hero | Jenkins on Kubernetes: From Zero to Hero
Using Helm and Terraform to get an enterprise-grade Jenkins instance up and running on Kubernetes
TL;DR
This post outlines a path to getting an enterprise-grade Jenkins instance up and running on Kubernetes. Helm and Terraform are used to make the process simple, robust, and easy to repeat.
A GitHub repo containing the code snippets from this post can be found here.
The Objective
With Kubernetes adoption growing like crazy, many organizations are making the shift and want to bring their favorite DevOps tool, Jenkins, along for the ride. There are a lot of tutorials out there that describe how to get Jenkins up and running on Kubernetes, but I’ve always felt that they didn’t explain why certain design decisions were made or take into account the tenets of a well-architected, cloud-native application (i.e. high availability, durable data storage, scalability). It’s pretty easy to get Jenkins up and running, but how do you set up your organization for success in the long run?
In this post, I will share a simplified version of a Kubernetes-based Jenkins deployment process that I have seen used at some of the top brands in the world. While walking you through the process, I will highlight each key design decision and give you recommendations on how to take the solution to the next level.
When evaluating a cloud architecture, I like to use the Well-Architected Framework from AWS to make sure I cover my bases. That framework focuses on operational excellence, reliability, security, performance efficiency, and cost optimization. For the purposes of brevity, we will just focus on a few key elements of reliability and operational excellence in this post, namely high availability, data durability, scalability, and management of configuration and infrastructure as code.
Helpful Prerequisites
To get the most out of this article, you should have a relatively good understanding of:
Jenkins, how to use it, and why you would want to deploy it
Kubernetes and how to deploy applications and services into a Kubernetes cluster
Docker and containerization
Unix shell (e.g. Bash) usage
Infrastructure as code (IaC) principles
Technology Dependencies
To follow along and deploy Jenkins using the code samples in this post, make sure you have the following resources configured and on-hand:
A Kubernetes cluster (tested on v1.16): this can be an AWS EKS cluster, a GKE cluster, a Minikube cluster, or any other functioning Kubernetes cluster. If you don’t have a cluster to work with, you can spin one up easily in AWS using the Terraform module enclosed here.
Permissions to deploy workloads into your Kubernetes cluster, forward ports to those workloads, and execute commands on containers.
A Unix-based system (tested on Ubuntu v18.04.4) to run command-line commands on.
The kubectl command-line tool (tested on v1.18.3), configured to use your cluster permissions.
Access within your cluster to download the Jenkins LTS Docker image (tested on Jenkins v2.235.3).
The Helm command-line tool (tested on v3.2.4). Helm is the leading package manager for Kubernetes.
The Terraform command-line tool (tested on v0.12.28). Terraform is a popular cross-platform infrastructure as code tool.
If you don’t have these dependencies ready to go, you can easily install and configure them using the links above.
Our Approach
This article will walk you through deploying Jenkins on Kubernetes with several different levels of sophistication, ultimately ending in our goal: an enterprise-grade Jenkins instance that is highly available, durable, highly scalable, and managed as code.
We will go through the following steps:
Deploy a basic standalone Jenkins primary instance via a Kubernetes Deployment.
Introduce the Jenkins Kubernetes plugin that gives us a way of scaling our Jenkins builds.
Show what our basic setup is missing and why it is insufficient for our goals.
Deploy Jenkins via its stable Helm chart (code for that chart can be found here), showing what this offers us and how it gets us close to the robustness we are looking for.
Codify our Helm deployment using Terraform to get the many benefits of infrastructure as code.
Go over additional improvements you can make to take Jenkins to the next level and highlight a few remaining key considerations to take into account.
Photo by Christopher Gower on Unsplash
Step One: Creating a Basic Jenkins Deployment
First things first, let’s deploy Jenkins on Kubernetes the way you might if you were deploying Jenkins in a traditional environment. Typically, if you are setting up Jenkins for an enterprise, you will likely:
Spin up a VM to be used as the primary Jenkins instance.
Install Jenkins on that instance, designating it as the Jenkins primary instance.
Spin up executors to be used by the primary instance for executing builds.
Connect Jenkins to your source control solution.
Connect Jenkins to your SSO solution.
Give teams access to Jenkins — most will not need UI access and can make do with webhook access maintained by a set of administrators — and empower them to run builds.
Actively manage the system over time to make sure it is operating effectively.
Let’s walk through the first three steps, but do so in a “Kubernetic” way.
Pro tip: To run the examples in this post efficiently, I recommend opening up two shell instances. This will help when you go to port forward and want to run other commands at the same time.
First, let’s make sure our Kubernetes context is set up. If using AWS and EKS, this would mean running something like the following: aws eks update-kubeconfig --region us-west-2 --name jenkins-on-kubernetes-cluster .
Next, let’s create a namespace to put all of our work in: kubectl create namespace jenkins .
For simplicity, let’s change our Kubernetes context to look at the jenkins namespace we created above: kubectl config set-context $(kubectl config current-context) --namespace jenkins . Because we’ve done this, we won’t have to pass the --namespace argument to all of our future commands.
Now, we need to create several RBAC resources to enable our Jenkins primary instance to launch executors. To do this, save the following configuration to your file system as 1-basic-rbac.yaml :
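The manifest can be sketched as a minimal set of RBAC resources: a ServiceAccount for the primary instance, plus a Role and RoleBinding allowing it to manage executor Pods. The exact names and rules below are assumptions for illustration rather than the repo's original file:

```yaml
# 1-basic-rbac.yaml — minimal sketch; names and rules are assumptions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: jenkins
rules:
  # The Kubernetes plugin needs to create and inspect executor Pods
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```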
Navigate to the folder containing that configuration on your file system and run the following command: kubectl apply -f 1-basic-rbac.yaml .
This will create the RBAC resources we need in the cluster.
Now, save the following Deployment configuration to your file system as 2-basic-deployment.yaml :
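A sketch of such a Deployment follows. The Deployment name matches the kubectl commands used later in this section; the labels and the ServiceAccount name are assumptions:

```yaml
# 2-basic-deployment.yaml — minimal sketch; labels are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-standalone-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080   # web UI
            - containerPort: 50000  # inbound executor (JNLP) traffic
```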
Navigate to the folder containing that configuration and run the following command: kubectl apply -f 2-basic-deployment.yaml .
This will create a Jenkins primary instance Deployment from the jenkins/jenkins:lts Docker image with a replica count of one. This Deployment will make sure that, assuming the cluster has enough resources, a Jenkins primary instance is always running in the cluster. If the Jenkins primary instance pod goes away for any reason other than Deployment deletion (e.g. a crash, manual pod deletion, or node failure), Kubernetes will schedule a new pod to the cluster.
Okay, wait for that Deployment to finish provisioning, checking every once in a while to see if the Deployment is in a Ready state using the kubectl get deployments command.
Once that Deployment is in a Ready state, we can forward a port to the container in our pod to see the Jenkins UI locally in our browser: kubectl port-forward deployment/jenkins-standalone-deployment 3000:8080 .
You should now be able to see the Jenkins Getting Started page in your browser at http://localhost:3000 .
Pro tip: I highly recommend sticking to port 3000 if your setup allows you to. That should save you from some unexpected troubleshooting as you go through this post.
Alright. Because we have simply deployed the Jenkins base image and are not using Jenkins Configuration as Code, we have to configure the Jenkins primary instance manually.
On the first setup page, it asks you for your Jenkins administrator password. You can grab that password quickly by running the following: kubectl exec -it deployment/jenkins-standalone-deployment -- cat /var/jenkins_home/secrets/initialAdminPassword .
Copy that password, enter it into the password input field, and select continue .
On the next page, select Install suggested plugins and then wait for the plugin installation to complete.
Once the plugin installation completes, create a new administrator user with a name and password you will remember.
If this seems like a lot of work, and you are already convinced that this is not the best way to go about things, go ahead and jump to the section on Helm below.
Next, you can configure your Jenkins URL how you see fit. Its value is not important for this tutorial, but it is very important when you are actually using Jenkins in production. Go ahead and do that now, and then continue on until you have completed the initial setup wizard.
Hooray! If you followed the above steps, you now have a Jenkins primary instance running on Kubernetes and accessible in your browser. Of course, this is not how you would access your Jenkins UI in production — you would likely use an Ingress Controller or Load Balancer Service instead — but it is good enough for demonstration purposes.
Okay, let’s create a simple Jenkins pipeline to make sure our Jenkins instance actually works. Go to http://localhost:3000/view/all/newJob or the equivalent URL based on your port forwarding setup and create a Pipeline with the name jenkins-on-kubernetes-pipeline — you may be prompted to enter your new admin user’s password. This will look like the following:
Now, go to http://localhost:3000/job/jenkins-on-kubernetes-pipeline/configure — you should be taken there automatically. In the box at the bottom of that page, paste the following simple pipeline code:
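A minimal pipeline along these lines (hypothetical, for illustration) just prints a message from whatever node it lands on, which at this point is the primary instance itself:

```groovy
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello from Jenkins on Kubernetes!'
            }
        }
    }
}
```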
Now you can save your pipeline job and run a build by clicking Build Now .
Wait a few seconds, and then you can view the console output here: http://localhost:3000/job/jenkins-on-kubernetes-pipeline/1/console . On this page, you will see the output from your pipeline. It should look approximately like this:
Terrific! We just ran a Jenkins build successfully on Kubernetes. Let’s move onto the next step.
Step Two: Introducing the Jenkins Kubernetes Plugin
The build that just ran successfully is awesome and all, but it ran on our Jenkins primary instance. This is not ideal because it limits our ability to scale the number of builds we run effectively and requires us to be very intentional about the way we run our builds (e.g. making sure to clean up the file system by clearing up workspaces).
If we stick with the current configuration, we will almost certainly run into trouble with space issues, unexpected file system manipulation side effects, and a growing backlog of queued builds.
So, how do we fix these issues seamlessly?
Enter the Jenkins Kubernetes plugin.
The Jenkins Kubernetes plugin is designed to automate the scaling of Jenkins executors (sometimes referred to as agents) in a Kubernetes cluster. The plugin creates a Kubernetes Pod for each executor at the start of each build and then terminates that Pod at the end of the build.
What this ultimately provides us with is scalability and a clean workspace for each build (i.e. better reliability and operational excellence). As long as you don’t exceed the resources available to you in your cluster — cluster autoscaling removes some of this concern — as many pods as you allow will be spun up to accommodate the builds in your queue.
Deploying Jenkins in this fashion also offers several additional benefits over a traditional VM deployment, including:
Executors launch very quickly (i.e. better performance efficiency).
Each new executor is identical to other executors of the same type (i.e. increased consistency across builds).
Kubernetes service accounts can be used for authorization of executors in the cluster (i.e. more granular security).
Executor templates can be made for different base images. This means you can run pipelines on different operating systems using the same Jenkins primary instance as the driver (i.e. improved operational excellence).
So, let’s get this set up.
First, we need to add a Kubernetes Service to Jenkins to allow it to be accessed via an internal domain name. To do so, save the following configuration to your file system as 4-basic-service.yaml :
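A sketch of such a Service, exposing the UI port (8080) and the executor callback port (50000); the Service name and selector here are assumptions:

```yaml
# 4-basic-service.yaml — minimal sketch; name and selector are assumptions
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    app: jenkins
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: agent
      port: 50000
      targetPort: 50000
```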
Navigate to the folder containing that configuration and run the following command: kubectl apply -f 4-basic-service.yaml .
Now we can configure the actual Kubernetes plugin.
Go to http://localhost:3000/pluginManager/available and search Kubernetes in the search box.
Select the box next to the plugin named Kubernetes and click Download now and install after restart at the bottom of the page.
Wait a few seconds for the plugin to download.
Now, go to http://localhost:3000/restart and click Yes . Wait a few minutes for Jenkins to restart. When it does, log back in with your admin credentials.
Pro tip: Jenkins restarts can be a little finicky. If your browser window does not reload after a minute or so, close that tab and open up a new one.
Once you are logged back in, go to http://localhost:3000/configureClouds/ . This will look a little different on older versions of Jenkins.
On this page, add a Kubernetes cloud and then click Kubernetes Cloud details .
Configure the Kubernetes Namespace, Jenkins URL, and Pod Label like so:
Once you have that configured, click Save .
Now, let’s go modify our job ( http://localhost:3000/job/jenkins-on-kubernetes-pipeline/configure ) to use the new Kubernetes executors.
Change the contents of the pipeline to the following:
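A sketch of what the Kubernetes-backed pipeline might look like, using the plugin's declarative kubernetes agent; the container image and pod spec here are assumptions:

```groovy
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ['sleep', '9999999']
"""
        }
    }
    stages {
        stage('Hello') {
            steps {
                container('busybox') {
                    echo 'Hello from a Kubernetes executor pod!'
                }
            }
        }
    }
}
```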
Now, try running Build Now again.
Wait a few seconds — it might even take a few minutes to launch the first executor — and then you can view the console output here: http://localhost:3000/job/jenkins-on-kubernetes-pipeline/2/console .
The output should look approximately like this:
Sweet! We now have a highly-available Jenkins primary instance, we can create jobs on that instance, we can run builds, and the number of Jenkins executor Pods will automatically scale as the build queue fills up.
We’re done, right?
Not so fast. Let’s keep going.
Step Three: Bringing the House Down
Though some might argue our setup is highly-available and scalable, what we just deployed is still not anywhere near the enterprise-grade solution we are looking for. To demonstrate that, let’s simulate a failure.
A common way to simulate failures on Kubernetes is to delete a Pod from a Controller (e.g. a Deployment or ReplicaSet). To do that, let’s first determine what Pods exist in our namespace: kubectl get pods . You should get something like this:
Now, you can delete that Pod like so: kubectl delete pod jenkins-standalone-deployment-759b989cf4-6ptvc . This will kill your port forwarding session, so just expect that.
If you move quickly enough, you can then run kubectl get deployments and see a Ready state of 0/1 like the following:
Wait a few more seconds, though, and you will see something different. Your Deployment will eventually return to a Ready state of 1/1 . This means that Kubernetes noticed a missing Pod and created a new one to return the Deployment to the right number of replicas.
With our Deployment in a Ready state, we should be able to access the Jenkins UI again. Let’s confirm that by forwarding a port to the Deployment again: kubectl port-forward deployment/jenkins-standalone-deployment 3000:8080
Now, go back to http://localhost:3000 in your browser and refresh the page.
Uh oh…do you see what I see?
You should be seeing the Getting Started page again.
This is happening because we didn’t configure our Jenkins instance to have persistent storage. What we have deployed is roughly the equivalent of a Jenkins instance deployed on a VM with ephemeral storage. If that instance goes down, we go back to square one.
This is just one way in which our current setup is fragile. Not only is our Jenkins primary instance data not persistent, but this lack of persistence almost entirely nullifies the high availability benefits we get from using a Kubernetes Deployment in the first place. On top of that, our Jenkins configuration is largely manual and, therefore, hard to maintain.
Alright. Let’s fix this using our trusty friend, Helm.
Step Four: Deploying an Enterprise-Grade Jenkins With Helm
Helm is a package manager for Kubernetes. It operates off of configuration manifests called charts . A chart is a bit like a class in object-oriented programming. You instantiate a release of a chart by passing the chart values . These values are then used to populate a set of YAML template files defined as part of the chart. These populated template files, representing Kubernetes manifests, are then either applied to the cluster during a helm apply or printed to stdout during a helm template .
For our Jenkins installation, we will use the handy dandy stable helm chart. If you want to look at all of the values you can set for the version we use, take a look at the documentation on GitHub here.
To save some time, let’s keep most of the default values that come with the chart but pin the versions of the plugins we want to install. If you don’t do this step, you will likely run into issues with incompatible plugin versions as Jenkins and its plugins change over time.
Pro tip: If you are working with a chart and want a values.yaml file to work off of, repos usually come with one that holds all of the default values for the chart.
By default, out of the box, this helm chart provides us with:
A Jenkins primary instance Deployment with a replica count of one and defined resource requests. These resource requests give Jenkins priority over other Deployments without resource requests in the case of cluster resource limitation.
A PersistentVolumeClaim to attach to the primary Deployment and make Jenkins persistent. If the Jenkins primary instance goes down, the volume holding its data will persist and attach to the new instance that Kubernetes schedules in the cluster.
RBAC entities (e.g. ServiceAccounts, Roles, and RoleBindings) to give the primary instance Deployment the permissions it needs to launch executors.
A Configmap containing a Jenkins Configuration as Code (JCasC) definition for setting up the Kubernetes cloud configuration and anything else you want to configure — to change this configuration, you would modify the values file you pass into the Helm chart.
Configmap auto-reloading functionality through a Kubernetes sidecar that makes it so JCasC can be applied automatically by updating the JCasC Configmap.
A Configmap for installing additional plugins and running other miscellaneous scripts.
A Secret to hold the default admin password.
A Service for the Jenkins primary instance Deployment that exposes ports 8080 and 50000. This makes it easy to connect Jenkins to an Ingress Controller, but also allows the Jenkins executors to talk back to the primary instance via an internal domain name.
A Service to access the executors with.
Sounds a lot more robust than what we just built ourselves, doesn’t it? I agree! Let’s get that up and running in our cluster.
Before we do, let’s clean up our previous work by running the following:
kubectl delete -f 1-basic-rbac.yaml
kubectl delete -f 2-basic-deployment.yaml
kubectl delete -f 4-basic-service.yaml
Now, let’s set up a values file to pass into the Helm chart. To do so, save the following to a file named 6-helm-values.yaml :
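A sketch of such a values file, pinning a handful of plugin versions under the chart's master key. The specific plugins and version numbers below are illustrative assumptions; check the chart's default values.yaml for the current list:

```yaml
# 6-helm-values.yaml — sketch; plugin versions are illustrative assumptions
master:
  installPlugins:
    - kubernetes:1.26.4
    - workflow-aggregator:2.6
    - git:4.3.0
    - configuration-as-code:1.41
```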
Now we are good to deploy Jenkins using Helm. To do so, the process is super simple. Run:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install jenkins stable/jenkins --version 2.0.1 -f 6-helm-values.yaml
Pro tip: It is almost always helpful to pin a version to entities like a Helm chart, a library, or a Docker image. Doing so will help you avoid a drifting configuration.
This will create a Helm release named jenkins that contains all of the aforementioned default features. Give that some time, as it does take a while to get Jenkins up and running.
It is important to note that there are a lot of extra features (e.g. built-in backup functionality) that we don’t take advantage of here with the default chart configuration. This is meant to be a starting point for you to build on and explore. I highly recommend evaluating the chart, creating your own version of the chart, and customizing it to meet your needs before deploying it into a production environment.
Once you have given the release time to deploy, run helm list to verify it exists.
To get the admin password for your Jenkins instance, run printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo . This is a little different from the last command we ran to fetch the password because the password is now stored as a Kubernetes Secret.
Then, run the following to forward a port:
export POD_NAME=$(kubectl get pods --namespace jenkins -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace jenkins port-forward $POD_NAME 3000:8080
Now you will be able to see Jenkins up and running at http://localhost:3000 again.
Now, let’s test our release and make sure we can run builds.
Once again, go to http://localhost:3000/view/all/newJob and create a Pipeline with the following code:
Pro tip: You could also use the Helm chart to add jobs on launch or the Job DSL plugin to add jobs programmatically. I would recommend the latter in a production environment.
Click Build Now again, and let’s marvel at what Helm just did for us.
Hopefully, your build ran successfully. If it did, awesome! Let’s make this setup even better.
Step Five: Moving Our Deployment to Terraform
In the current state, we have checked most of the boxes that we were looking for in an enterprise-grade Jenkins deployment, but we can improve our setup even more by managing it with infrastructure as code.
Terraform is the infrastructure as code solution I see used most in industry, so let’s use it here.
Essentially what Terraform does is take a configuration, written in HCL, convert that configuration into API calls based on a provider , and then make those API calls on your behalf with the credentials made available to the command-line interface. Because we are already authenticated into the cluster, and we are the ones calling Terraform commands, Terraform will be able to deploy the Jenkins Helm chart.
First, let’s delete our existing helm release: helm delete jenkins . Verify the release no longer exists with helm list .
Now, save the following Terraform configuration to a file called 7-terraform-config.tf :
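A sketch of the Terraform configuration: it wires the Helm provider to your current kubeconfig and declares the same release we just created by hand. The config_path value is an assumption about your local setup:

```hcl
# 7-terraform-config.tf — sketch; config_path is an assumption
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "jenkins" {
  name       = "jenkins"
  namespace  = "jenkins"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  chart      = "jenkins"
  version    = "2.0.1"

  # Reuse the plugin-pinning values file from the previous step
  values = [
    file("${path.module}/6-helm-values.yaml")
  ]
}
```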
Awesome, now we can use Terraform to deploy our chart.
Run terraform init in the directory where you put 7-terraform-config.tf . This will initialize a state file that holds Terraform state — typically you would store this state remotely using something like Amazon S3.
Now, run terraform plan . This will print out what is going to change in the cluster based on the configuration you have defined. You should see that Terraform wants to create a helm release named jenkins .
If this looks good to you, run terraform apply . This will give you one more chance to make sure you want to apply your changes. If everything seems to check out, which it should, reply yes when Terraform asks for validation input.
After a few minutes, you should have an enterprise-grade Jenkins up and running again! Booyah!
Pro tip: Terraform is great for a lot of things, but one of my favorite things it provides is drift detection. If someone goes and changes this helm chart ad-hoc, you will be able to tell by running a terraform plan and checking to see if any changes show up in the plan.
We’re almost done here, but I would be remiss if I didn’t go over a few additional recommended improvements and considerations with you. Check those out below.
Step Six: Improvements and Considerations
Now that you have an enterprise-grade Jenkins up and running, you probably want to make it more useful to your organization by making the following improvements:
Set up SSO via a plugin like OpenId Connect Authentication or OpenID.
Open up your firewall to allow communication with your source control solution.
Create a custom executor image and custom pod template to be used for builds. This will allow you to install common dependencies across builds and will decrease build time.
Create a set of Jenkins Shared Libraries for teams in your organization to call upon when running deployments. Using these, you can define guardrails for what teams can and cannot do with their Jenkins usage. In fact, with this, you can set it up so very few people actually need access to the UI, and developers can just run builds by pushing to source control.
Convert the stable Helm chart we used above to a custom one stored in source control. This will allow you to manage changes to the chart over time and make sure you have more control over its lifecycle.
Set up monitoring via Prometheus and Grafana to evaluate the health of your Jenkins instance over time.
When deploying applications and services, configure Jenkins jobs to perform zero-downtime and blue-green deployments. This link will help you do that.
Add theming to make Jenkins a little cleaner by using something like the Simple Theme plugin.
I also recommend you take the following considerations into account:
If you want to stick with block storage for storing data from the primary Jenkins instance, you will want to add an automated volume backup system to restore from if the volume is ever corrupted. Using a tool like Velero simplifies the backup process, and you can actually use it to back up and restore your entire cluster in the event of a disaster.
If you want a more dynamic storage option and are using EKS, you might want to set up something like the EFS Provisioner for EKS to back Jenkins with a scalable file system.
If you want to deploy non-Helm workloads to Kubernetes using Terraform, take a look at the alpha provider for Kubernetes. It allows you to deploy any Kubernetes configuration that you can deploy without Terraform, something the original Kubernetes provider could not do. More information on your options for this workflow can be found here.
Regularly check to make sure your Jenkins plugins are up-to-date. In general, I recommend limiting the number of plugins you install as much as possible. From a security perspective, plugins are one of the weakest links for Jenkins, so limit their use and keep them up to date.
Finally, empower your developers to dive in and build great stuff!
Conclusion
If you’ve made it this far, fantastic! I hope that this post was valuable to you and you learned something new. You rock!
A GitHub repo containing the code snippets from this post can be found here. If you find it helpful, go ahead and give it a star. If you run into any issues, please submit a pull request to that repo, and I will do my best to respond.
Take care!
Addendum | https://medium.com/slalom-build/jenkins-on-kubernetes-4d8c3d9f2ece | ['Luke Fernandez'] | 2020-09-30 17:26:14.161000+00:00 | ['Jenkins', 'Helm', 'Terraform', 'DevOps', 'Kubernetes'] |
Using Permutation Tests to Prove Climate Change | Now the interesting question is: Do the maximum and minimum values behave as one would expect if there was no change of the underlying temperature distribution? More specifically we will ask: Assuming no change of the temperature distribution over all the years, how many distinct maximum and minimum values would we expect, and what would be the probability of the number of maximum/minimum values that we are seeing here?
One interesting aspect of this type of statistical question is that the actual temperature values themselves are not used at all. Only the relative ordering of the values is relevant. This property of the statistical test proves to be very useful in many other cases where you cannot really assign meaningful values to increasingly severe events, but you still want to be able to test for randomness. Of course the result then is of a purely qualitative nature (i.e. the temperature is increasing at a statistical significant level) but the test will not tell you by how much the temperature is increasing.
2. A Little Bit of Theory
In order to approach the question whether the number of new maximum/minimum values for the yearly average temperature behaves as one would expect, we need to formalize the problem a little bit.
For the following examination, we generalize the problem to a vector of independent, identically distributed (IID) continuous random variables X(1)…X(n). All three of these properties are important here: we assume that every random variable X(i) in this vector has the same probability distribution, that all variables are independent of each other, and finally that all variables are continuous, which implies that the probability of two random variables having the same value is zero.
2.1 Record Statistics
We are now interested in what I call a running record: We call position i a record whenever the value of the random variable X(i) is larger than all previous variables X(j) with j < i. With the weather data, we are interested in the number of records of the vector of average temperatures.
Given a sequence of values X(1)…X(n) we can now define a simple statistic as the number of observed records within that sequence, and then we can ask whether the value of that statistic looks plausible if we assume that all X(i) are IID. This assumption corresponds to the null hypothesis of “no climate change” in the temperature series.
Translating the question into a mathematical language, we would like to calculate (or at least estimate) the probability of having k records in a sequence of n IID random variables. This question immediately brings us to random permutations.
2.2 Random Permutations
Instead of answering the original question let’s transform it to an equivalent problem. To do so, we first assign the rank of the value to each random variable, i.e. the smallest random variable is assigned a rank of 1, the second smallest variable a rank of 2 and so on until the largest random variable is assigned the rank n. Let’s denote the rank of X(i) with R(i).
The following example gives us the ranks R(i) for the small sequence X(i) with 7 elements.
value X(i): 0.2 2.3 1.1 0.7 2.4 1.8 2.1
rank R(i): 1 6 3 2 7 4 5
Note that the ranks R(i) still have the same relative order as the original values X(i), which in turn implies that the number of records in the ranks vector is the same as the number of records in the original values.
The set of all ranks R(1)…R(n) contains all the numbers from 1 to n, but in a random order. Assuming that the original values X(i) are IID, the vector of ranks R(i) is a random permutation. Therefore the probability distribution of the number of records in a sequence of IID random variables is the same as that of the number of records in a random permutation.
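The rank construction and the record count can be sketched in a few lines of Python (illustrative code, not the article's original). Running it on the example sequence above shows that the raw values and their ranks yield the same record count:

```python
def count_records(xs):
    """Count positions where a value exceeds every earlier value."""
    best = float("-inf")
    records = 0
    for x in xs:
        if x > best:
            best = x
            records += 1
    return records

def ranks(xs):
    """Rank of each value: smallest -> 1, ..., largest -> len(xs)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

values = [0.2, 2.3, 1.1, 0.7, 2.4, 1.8, 2.1]
print(ranks(values))                 # [1, 6, 3, 2, 7, 4, 5]
print(count_records(values))         # 3 (records: 0.2, 2.3, 2.4)
print(count_records(ranks(values)))  # 3, the same as for the raw values
```

Because only the relative ordering matters, any strictly increasing transformation of the values leaves the record count unchanged.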
2.3 Monte Carlo Sampling
Unfortunately, it turns out that the distribution we are looking for (the number of records in a random permutation) is not exactly simple to calculate. There are some formulas (see On the outstanding elements of permutations by Herbert Wilf), but these build on the unsigned Stirling numbers of the first kind which are not easy to calculate.
Instead of trying to evaluate some mathematical formula for the probability distribution we chose a different route: we simply let the computer conduct a reasonably large number of random experiments by sampling random permutations, and in each experiment we calculate the statistic we are interested in. The histogram (i.e. the relative frequency of each possible outcome) then gives us an estimation of the true probability. This approach is called Monte Carlo Sampling and, by the law of large numbers, the results will always converge to the real values.
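A sketch of this Monte Carlo estimate (again illustrative, not the article's original code): sample random permutations, count the records in each, and build the histogram. As a sanity check, the empirical mean can be compared against the known expected number of records in a random permutation of length n, the harmonic number H(n) = 1 + 1/2 + … + 1/n:

```python
import random
from collections import Counter

def count_records(xs):
    """Count positions where a value exceeds every earlier value."""
    best = float("-inf")
    records = 0
    for x in xs:
        if x > best:
            best = x
            records += 1
    return records

def record_count_distribution(n, trials, seed=0):
    """Estimate P(k records in a random permutation of length n) by sampling."""
    rng = random.Random(seed)
    counts = Counter()
    perm = list(range(n))
    for _ in range(trials):
        rng.shuffle(perm)
        counts[count_records(perm)] += 1
    return {k: c / trials for k, c in sorted(counts.items())}

dist = record_count_distribution(n=100, trials=20_000)
mean = sum(k * p for k, p in dist.items())
harmonic = sum(1 / k for k in range(1, 101))  # E[#records] = H_n
print(f"empirical mean: {mean:.2f}, H_100: {harmonic:.2f}")
```

With the estimated distribution in hand, the p-value of an observed record count is just the tail probability of values at least that extreme.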
3. Implementation
Now we have gathered all the required theory for implementing the statistical test of whether the number of running maximums/minimums in the temperature chart below is plausible under the null hypothesis that the climate doesn’t change (i.e. the probability distribution of the average temperature doesn’t change over time). | https://towardsdatascience.com/using-permutation-tests-to-proof-the-climate-change-2bc34d614eb7 | ['Kaya Kupferschmidt'] | 2020-12-23 18:14:27.792000+00:00 | ['Climate Change', 'Data Science', 'Statistics', 'Weather'] |
On Ben Yehuda Street | Photo by Sander Crombach on Unsplash
At 3:45 in the morning, I awake from a bad dream.
“The same dream?”
My husband’s voice is soft in the darkness. He takes my scarred hand in his and brings it to his face. I move closer, the curve of his body molding comfortably against mine. This is how I remind myself that I am not alone. The silence between us grows warmer with each moment and soon turns to whispers.
I drift back to sleep. Later that night, I am again in my dream and again awakened by the explosion.
A few weeks ago I stepped into a world where time was stopped, and joy ceased to exist. Some days these dreams are so vivid, I just don’t know if I’m asleep or awake. Always the same noise echoing like thunder — it smells like smoke and ruin. And blood everywhere on the streets.
I want to get away from it so I pace the length and width of the rooms in our house. I can’t. That which destroys and consumes won’t let me.
***
Survivors and witnesses would recall the perfection of that early summer evening, a time to unwind over a cappuccino and pastry, to take stock of one’s surroundings.
The fact that it was also a Saturday night only served to heighten the restful atmosphere because Saturday nights in Jerusalem are times of reawakening and gentle rousing from the end of the Shabbat. Lights aglow, the city murmurs with activity. Shops reopen; people traverse the sidewalks at a slow pace. Tourists peruse the shops in the city center. Traffic comes to a standstill and the blare of car horns is some kind of reminder of our great revolt in Masada against the Romans.
Seated at an outdoor café, I glanced at my watch. Seven-thirty. My husband was late. I tried his cell, but knowing he was in the habit of listening to loud music while driving, I decided to settle back and enjoy watching the pedestrians.
People streamed up and down the sidewalk. I observed a group of young men in army uniforms congregated in front of the ice cream store, laughing and slapping each other’s shoulders. Someone had started to play a bongo. Strains of a violin joined in, and then the gentle wail of an old woman singing a sad Russian melody mingled with them.
I checked my watch again.
A beggar paused at my table with his hand extended. Gaunt, haggard, and hollow-eyed, his cheeks covered with a scraggly beard, he resembled the Moses from the Bible of my childhood. He wore an old sagging black coat. I smiled at him and dropped a few bills into the tin can he held out in front of him. Squinting into the fading sun, I put my sunglasses on and motioned to the waiter for another iced coffee.
Just then a slight young man walked up the street towards me. He wore a black trench coat that I thought was too warm for a summer evening. The brim of his cap rested low as he glanced up and down the block. His gaze met mine. It lasted just an instant, but it sent a jolt of dread through me. I looked him over for another moment before glancing at my watch, then returned to my coffee. Sometimes I worry about nothing.
But sometime later, as the blond, curly-haired waiter served me yet another coffee, the atmosphere violently changed without warning. A paroxysmal roar forced a brutal intrusion on the serene early evening. The uproar thundered through my ears. The sky erupted into the raging chaos of a red-and-orange fireball that surged above the buildings. Windows shattered under the thunderclap of shock waves pouring through the air.
A gust of warm debris filled the air, racing toward me. Screaming, I dove under the table. I landed on shards of glass, gasping for air. When the din ended, my ears were ringing. A high-pitched noise rasped in and out of my throat. I stared down at myself; dust shrouded me like a black vapor. My hands oozed with blood.
I got to my feet and staggered through the dense black smoke. Only in nightmares were feet so heavy.
I saw destruction everywhere. The balcony over a toy store lay smashed in the street. The force of the blast had destroyed the facades of several stores. Flames and black smoke billowed from cars. Glass shards peppered the street. Papers and other debris still burning fluttered in the air.
Then, for a moment, the street was strangely quiet, as though the explosion had blown away everyone’s senses. The dazed pedestrians stood motionless — eyes round with shock, mouths hung open. In the middle of Jerusalem, while the smoke of the bomb still lingered over what should have been a perfectly peaceful day, the moans and cries charged the air from many directions. Then the screams started, echoing over the shriek of sirens hurrying to our rescue.
Anguished cries and police sirens coalesced. The bomb squad had arrived in a wind of shrilling sirens. First responders raced about, ferrying the wounded to ambulances. Dust-covered civilians staggered down the torn sidewalk with shocked eyes, with shrapnel wounds and ears bleeding from the blast.
Where was my husband?
My head was spinning, my heart thudding in my ear, my eyes picking up images around me in meaningless flashes. An armless boy. A wedding ring on a severed finger. Legs blown off below the knee, mutilated feet dangling limp, blood seeping from shrapnel wounds. A man kneeling by the lifeless body of a little boy. I saw his lips moving as if he might be praying. A wounded soldier was screaming. A woman, her face streaked with blood, kneeling on the ground, hugging a boy tightly to her. “Please,” she cried, collapsing into heart-wrenching sobs. “Don’t die, don’t die, please don’t die!” she begged. A vacant-eyed man cradled his unconscious wife.
Nothing had ever prepared me for this. I covered my face with my bloody hands and wailed and wailed.
Where was my husband?
Just then I heard a loud moan coming from behind me. A man was sitting against a wall, his head in his hands; blood trickling through his fingers. He looked up as I approached, and I thought his face seemed familiar. I squatted next to him. He had a large gash on his forehead. “Are you all right?” He did not reply. His expression was bewildered. Small sweat beads covered his upper lip.
I put my hands behind his shoulders and lowered him to the pavement. Suddenly I remembered where I had seen him! He was the old beggar man I had given a few bills to when Ben Yehuda Street had been a safer place.
After two men had carried the beggar away, I collapsed to the ground next to the mangled body of a young woman. She was hardly recognizable, save for her soft, blond curls, soaked with blood. She could have been my daughter, and still there was nothing I could have done for her.
I dragged myself away and nearly stumbled over someone else. A young man, perhaps in his early twenties, whose legs seemed shredded below his knees. I crouched down beside him, trying to shake off my own need for tears when he moaned. His unfocused eyes glazed over with pain. He wept. Then he screamed. Then cried for his mother. “Ima! Ima!” he kept calling. My heart overflowed with pity and sadness.
Someone had once told me that death is lighter than a feather. I had not understood what that meant. Now I understood it even less as blood trickled from his mouth and down his throat. I struggled to keep the pity out of my eyes.
My own anguish would have to wait. I wanted to shield him from the pain he was experiencing. He was the son I never had. I took his hand with both of mine and squeezed it, brought my lips to his feverish forehead and kissed him gently.
“What is your name?” I asked, stroking his blood-soaked hair from his face.
“David,” he whispered. Blood continued to bubble from his lips.
I once read somewhere that our names contain our fates and wondered if David was a victim of his.
Hot tears stung my eyes, but I forced them back and smiled down at him. He was shivering. I covered his body with mine. Our blood intermingled, feeling warm and sticky.
“David, hold on, a doctor will be here soon.” The terror in his eyes was so complete that I couldn’t bear to look at him. My heart was beating frantically against his fading life.
I felt a hand on my shoulder. “He is dead.”
Something was wailing inside me, a scream that never materialized. I held onto David’s hands until someone pulled them away. Then two uniformed boys carried him off on a stretcher, his body covered with an army blanket.
Where is my husband?
And that’s when all sound disappeared. My entire world was sucked by gravity toward David’s boots, protruding from under the cover. Seeing them splattered with mud and blood made me think of his mother. An iron-gripping hurt clutched my heart and I began to hyperventilate.
Now adults and children were being laid out on the ground nearby. All around me damaged human beings. I turned to the wall and retched.
Was it only twenty minutes ago that the world had seemed a safe place, when I was sitting at a sidewalk cafe on Ben Yehuda Street, savoring the sounds and smells surrounding me? And the sun, still far from setting, had bathed the old city, its walls, its churches and domes. The cerulean sky, clean of clouds, had imbued me with happiness and hovered over me like a good dream that hadn’t yet morphed into reality.
Jerusalem, that small city, compacted together, house touching house and roof-to-roof, built of masonry. It was just another day in Jerusalem, just another bomb. A city under siege.
***
At night now I find it difficult to sleep. Images of the smoldering ruins flood my brain. Twenty people were killed, most of them under the age of twenty. Some had died still clutching their belongings, as blood pooled around their bodies. Dozens more were injured — destined to endure life-long wounds.
These days I’ll get up before dawn, sit in the living room with a blanket wrapped around myself, and stare outside my window. Glittering lights of the city shine through the darkness. Alone with only my thoughts for company, I wonder whether this is how it is with everyone who has experienced death and fear — the kind that can’t be silenced with comforting words, the kind that looks at you straight in the face with a challenge. The kind that consumes and destroys.
“Let’s go for a walk.”
I hear my husband’s voice.
Side by side on the road, our moon shadows follow us; my husband’s nearly twice as long as my own. | https://henyadrescher.medium.com/on-ben-yehuda-street-bf1ae4fa50d6 | ['Henya Drescher'] | 2019-05-03 13:48:27.535000+00:00 | ['Short Story', 'Israel', 'War', 'Reading', 'Writing'] |
These 5 Famous Writers Didn’t Quit and Neither Should You | Writing is tough. Getting rejected is even harder.
You sit in front of a computer, you let the words slice you open and spill your guts and heart onto a blank document. You have no clue whether it is good or not, whether someone else would find it interesting, but you give your best and let the story unfold.
Eventually, a stranger reads it or glances at it. They then send you a short, courteous private note: “Thanks, we will pass.” Most of the time, you won’t even get that.
This is brutal and has the potential to make you give up on your dream. You feel like throwing the computer out of the window and, finally, making your mother proud by becoming a lawyer.
So, a minute before you do that, bear in mind that you’re not alone. Actually, you are in really, really good company. You shouldn’t give up just yet.
Drumpf Praises His Writing Prowess | Drumpf Praises His Writing Prowess
“Wright like a butterfly, STING with a Tweet.”
Drumpf launched a full-scale assault on fantasy author J.K. Rowling after she replied with laughs to one of his Tweets. First, however, he removed the offending Tweet from his timeline. Too late, it seems, because it can still be found in the retweets and replies.
This Tweet (since removed) went viral for the laughter it received.
“Why do you think he started his Orwell campaign?” asked LP Spuckered, White House Aide. “He could care less about Mueller. He’s thrown so much mud at it, everyone knows he’s guilty but he’ll never be forced out of office. Even when his second term ends. No, his Tweet embarrassed him and now he wants it to never have happened.”
In the thousands of retweets, no one defended 45’s writing skills. “Even his ghost writers were awful,” Michael Wolff, author of Fire and Fury, told Emphasis. “But when your audience is filled with suckers who think you’ll make them rich, you’re not writing to show off your talent.”
Not so, says White House Press Secretary Sarah Huckabee Sanders. “We know for a fact that the Nobel Committee for Literature is considering him for a prize. When he wins, he’ll be the first President to do so. And everyone knows the Literature Prize is the big one. Much more important than the nickel-and-dime Peace Prize Obama won.”
Drumpf defended himself with the Tweet “Don’t believe a CHILDREN’S AUTHOR about my wrighting skills. I Wright like a butterfly, STING with a Tweet.”
Emphasis wondered if Drumpf has written anything since he doesn’t even read. So we explored the six-page file he plans to donate to his Presidential library and tried to track down examples of his writing since he took office.
Writing other than his Tweets that is.
After two weeks of intensive searching, we found four examples.
Writing in School
Drumpf’s sister Maryanne Barry supplied this sixth grade poem.
The first two were provided by his sister, Judge Maryanne Trump Barry. “I’ve been holding onto these for years for the moment he should get his comeuppance,” she said. “That will never come so you might as well have them.”
The first was written when he was in sixth grade:
I love to grab pussies
I heard my father say
So I grabbed my pussy
Felix the evening before last
and he scratched my hand.
I hope he’s happy in the ground
and maybe I’ll learn to like
pussy like my dad some time.
The following example was his submission for an English essay on writing about empathy.
Impathy exercize
by The Donald
One day the Donald was sitting at lunch by himself at his table his father paid for in the skool cafeteeria. It was a great table, the greatest table a gifted honor student like the Donald ever had.
The Donald looked at the next table and saw the pour skinny kid whoos name he doesn’t remember because his father werks for my father and is probably Mexican or something an not even legel. The poor skinny Mexican kid was eating a fat, warm taco with ground beef that his mother kept warm with a think foil and cloth thing.
The Donald only had his cold roast peasant sandwich slapped together by the Mexican made and tossed into a paper bag specially printed with the Drumpf monagram. The Donald knew that poor Mexican kid felt bad that he had a nice warm taco and the Donald was stuck with a cold peasant sandwich so the Donald offered to trade, knowing Mexicans aren’t two smart.
But the Mexican told the Donald, “No way. My padre told me to never play with you, never talk to you, and never ever trade with you because I’ll get screwed and you’ll get my kohonis.” The Donald impethized with the Mexican kid that he had to make up words because he was so intimadated by a child of the most rich and powerful famly in all of New York.
But the Donald doesn’t have to right anymore because teacher said 200 words, and the Donald rote more than that, more words than any skool kid in the histery of skool kids so his rich and influntial father said the Donald fullfilled his contract and he will take it up with the principol if she dosen’t give this and A.
Drumpf’s prediction in the last paragraph proved astonishingly correct. His twelfth grade English teacher gave him an A+ with the note: “Effort exceeds execution. As always. One day your lack of attention to detail will catch up with you.”
So far her prediction has been off the mark by miles.
Written while in office
We also received two samples from White House staff members. The only two examples of non-Tweet writing they could find. The first appeared on the men’s bathroom wall after Drumpf couldn’t make it to his private bathroom.
Here I sit all broken hearted
cause my bathroom
isn’t golden and gorgeous
like POTUS’s
to sit in when I farted.
This Post-It was left on the refrigerator in the kitchen.
This is just to say
I have eaten
the Big Macs
that were in
the icebox.
Cold beef sucks.
Next time
leave them in
the oven so
their warm.
Emphasis contacted the Nobel Literature Committee to see if they considered Drumpf a legitimate candidate. To their response, “he doesn’t even read,” we offered to email the samples. “It isn’t necessary,” their spokesman replied. “We’ve seen his Tweets.”
Jonesing for an additional 45 fix? Check out: | https://medium.com/emphasis/drumpf-praises-his-writing-prowess-e33b851b8244 | ['Phillip T Stephens'] | 2018-07-30 12:16:42.525000+00:00 | ['Donald Trump', 'Writing', 'Humor', 'Twitter', 'Satire'] |
Giants | We are giants,
living in a world that's all too small
with no guidance,
walking on thin ice, hoping not to fall.
We are magicians,
left with only a couple of tricks
and they have conditions,
cruel, but hidden in old scripts.
We are ancient gods
forgotten even by the last believer,
we never beat the odds,
still wishing we were freer. | https://medium.com/scribe/giants-1dad322799d9 | ['Veronica Georgieva'] | 2020-10-20 16:42:52.849000+00:00 | ['Poetry', 'Anxiety', 'Poem', 'Mental Health', 'Fear'] |
Google DeepMind might have just solved the “Black Box” problem in medical AI | Segmentation Video of an OCT Scan/ taken from DeepMind’s original publication
The key barrier for AI in healthcare is the “Black Box” problem. For most AI systems, the model is hard to interpret and it is difficult to understand why they make a certain diagnosis or recommendation. This is a huge issue in medicine, for both physicians and patients.
DeepMind’s study, published last week in Nature Medicine, presents their Artificial Intelligence (AI) product capable of diagnosing 50 ophthalmic conditions from 3D retinal OCT scans. Its performance is on par with the best retinal specialists and superior to some human experts.
This AI product’s accuracy and range of diagnoses are certainly impressive. It is also the first AI model to reach expert level performance with 3D diagnostic scans. From a clinical point-of-view, however, what is even more groundbreaking is the ingenious way in which this AI system operates and mimics the real-life clinical decision process. It addresses the “Black Box” issue which has been one of the biggest barriers to the integration of AI technologies in healthcare.
An optical coherence tomography (OCT) scan of the retina
Two Neural Networks
DeepMind’s AI system addressed the “Black Box” by creating a framework with two separate neural networks. Instead of training one single neural network to identify pathologies from medical images, which would require a lot of labelled data per pathology, their framework decouples the process into two: 1) Segmentation (identify structures on the images) 2) Classification (analyze the segmentation and come up with diagnoses and referral suggestions)
DeepMind’s framework tackles the “Black Box” Problem by having 2 neural networks with a readily viewable intermediate representation (tissue map) in between
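The decoupled two-stage design described above can be sketched in a few lines. This is a toy illustration of the data flow, not DeepMind's actual architecture: the class name, the placeholder "networks," and the binary fluid/background labels are all assumptions made for the example. The key property it demonstrates is that the intermediate tissue map is a first-class output a clinician can inspect.

```python
import numpy as np

class TwoStagePipeline:
    """Decoupled diagnosis pipeline: raw scan -> tissue map -> diagnosis.

    `segment` and `classify` stand in for the two trained networks; here
    they are plain callables so the end-to-end flow can be shown.
    """
    def __init__(self, segment, classify):
        self.segment = segment      # raw OCT volume -> per-voxel class map
        self.classify = classify    # tissue map -> diagnosis probabilities

    def predict(self, scan):
        tissue_map = self.segment(scan)        # inspectable intermediate
        diagnosis = self.classify(tissue_map)  # decision uses the map only
        # Both are returned, so the "reasoning" (the map) can be audited.
        return tissue_map, diagnosis

# Placeholder "networks": a threshold segmenter and a fluid-fraction rule.
def toy_segment(scan):
    return (scan > scan.mean()).astype(np.int64)  # 0 = background, 1 = fluid

def toy_classify(tissue_map):
    fluid_fraction = float(tissue_map.mean())
    return {"urgent": fluid_fraction, "routine": 1.0 - fluid_fraction}
```

Because the classifier never sees the raw scan, swapping in a different segmenter (or a different OCT device) only requires retraining the first stage, which is one of the practical benefits of the decoupling.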
1. The Segmentation Network
Using a three-dimensional U-Net architecture, this first neural network translates raw OCT scans into tissue maps. It was trained using 877 clinical OCT scans. For each scan’s 128 slices, only about 3 representative ones were manually segmented. This sparse annotation procedure significantly reduced workload and allowed them to cover a large variety of scans and pathologies. The tissue maps identify the shown anatomy (ten layers of the retina) and label disease features (intra-retinal fluid, hemorrhage) and artifacts.
This process mimics the typical clinical decision process. It allows physicians to inspect the AI’s segmentation and gain insight into the neural network’s “reasoning”. This intermediate representation is key to the future integration of AI into clinical practice. It is particularly useful in difficult and ambiguous cases. Physicians can inspect and visualize the automated segmentation rather than simply being presented with a diagnosis and referral suggestion.
This segmentation technology also has enormous potential in clinical training as it can help professionals to learn to read medical images.
Furthermore, it can be used to quantify and measure retinal pathologies. Currently, retinal experts can only eyeball the differences between a present and past OCT scans to objectify disease progression (eg: more intra-retinal fluids). With the AI’s automated segmentation, however, quantitative information such as the location and volume of seen anomalies can be automatically derived. This data can then be used for disease tracking and research, as an endpoint in clinical trials for example.
left: a raw OCT scan; middle: manual segmentation; right; automated segmentation
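The quantification described above reduces to simple voxel counting once a segmentation map exists. A minimal sketch, with an illustrative label id (the real model segments roughly fifteen tissue and pathology classes, not shown here):

```python
import numpy as np

# Illustrative label id for this sketch; not DeepMind's actual label scheme.
INTRARETINAL_FLUID = 3

def pathology_volume_mm3(seg_map, label, voxel_volume_mm3):
    """Volume of one segmented class: voxel count times per-voxel volume."""
    return float(np.count_nonzero(seg_map == label) * voxel_volume_mm3)
```

Tracking this number across visits turns the clinician's "eyeballing" of fluid change into a quantitative endpoint that can feed disease monitoring or clinical trials.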
2. The Classification Network
This second neural network analyses the tissue-segmentation maps and outputs both a diagnosis and a referral suggestion. It was trained using 14884 OCT scan volumes from 7621 patients. Segmentation maps were automatically generated for all scans. Clinical labels were obtained by examining the patient’s clinical records in order to determine retrospectively 1) the final diagnosis (after all investigations), 2) the optimal referral pathway (in light of that diagnosis).
The classification network, therefore, takes segmentation maps and learns to prioritize patients’ need for treatment into urgent, semi-urgent, routine and observation-only. It then outputs a diagnosis in the form of probabilities for multiple, possibly concomitant retinal pathologies.
Output: predicted diagnosis probabilities and referral suggestions
Image Ambiguity and Ensembling
Image interpretation and segmentation can be difficult for humans and machines alike due to the presence of ambiguous regions, where the true tissue type cannot be deduced from the image, and thus multiple equally plausible interpretations exist. To overcome this challenge, DeepMind’s framework uses an ensemble of 5 segmentation instances instead of 1. Each network instance creates a full segmentation map for the given scan, resulting in 5 different hypotheses. These different maps, just like different clinical experts, agree in areas with clear image structures but may differ in ambiguous low-quality regions. Using this ensemble, the ambiguities arising from the raw OCT scans are presented to the subsequent decision (classification) network. The classification network also has an ensemble of 5 instances which are applied to each of the 5 segmentation maps, resulting in a total of 25 classification outputs for every scan.
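The 5 × 5 ensembling just described amounts to pooling every segmenter/classifier pairing. A minimal sketch under the assumption that each network instance is a callable returning a probability vector (the toy instances in the test are stand-ins, not real networks):

```python
import numpy as np

def ensemble_predict(scan, segmenters, classifiers):
    """Average class probabilities over every segmenter/classifier pair.

    Each segmenter yields one plausible tissue map for the scan; each
    classifier maps a tissue map to a probability vector. Pooling all
    len(segmenters) * len(classifiers) outputs (5 * 5 = 25 in the paper)
    lets disagreement over ambiguous regions show up as softer, less
    overconfident probabilities.
    """
    outputs = [classify(segment(scan))
               for segment in segmenters
               for classify in classifiers]
    return np.mean(outputs, axis=0)
```

When the five segmentation hypotheses agree (clear image regions), all 25 outputs cluster and the averaged prediction is sharp; when they disagree (ambiguous low-quality regions), the average spreads probability mass, mirroring how a panel of experts would hedge.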
Results:
The framework achieved an area under the ROC curve that was over 99% for most of the pathologies, on par with clinical experts. As for the referral suggestion, its performance matched that of the five best specialists and outperformed that of the other three.
Future:
OCT is now one of the most common imaging procedures with 5.35 million OCT scans performed in the US Medicare population in 2014 alone.
The widespread availability of OCT has not been matched by the availability of expert humans to interpret scans and refer patients to the appropriate clinical care.
DeepMind’s AI solution has the potential to lower the cost and increase the availability of screening for retinal pathologies using OCT. Not only can it automatically detect the features of eye diseases, it also prioritizes patients most in need of urgent care by recommending whether they should be referred for treatment. This instant triaging process should drastically cut down the delay between the scan and treatment, allowing patients with serious diseases to obtain sight-saving treatments in time.
“Anytime you talk about machine learning in medicine, the knee-jerk reaction is to worry that doctors are being replaced. But this is not going to replace doctors. In fact it’s going to increase the flow of patients with real disease who need real treatments,” Dr. Ehsan Rahimy, MD, a Google Brain consultant and vitreoretinal subspecialist in practice at the Palo Alto Medical Foundation.
Read more from Health.AI:
AI 2.0 in Ophthalmology — Google’s Second Publication
Deep Learning in Ophthalmology — How Google Did It
Machine Learning and OCT Images — the Future of Ophthalmology
Machine Learning and Plastic Surgery
AI & Neural Network for Pediatric Cataract
The Cutting Edge: The Future of Surgery | https://medium.com/health-ai/google-deepmind-might-have-just-solved-the-black-box-problem-in-medical-ai-3ed8bc21f636 | ['Susan Ruyu Qi'] | 2018-08-25 15:15:23.576000+00:00 | ['Machine Learning', 'Ophthalmology', 'Artificial Intelligence', 'Medicine', 'Health Technology'] |
Leave the Bag at School This Holiday Break | Leave the Bag at School This Holiday Break
Five Reasons Why Teachers Must Recharge over Winter Break
By Melody Johnson, CEO and Founder of Loving Literacy
Reading the title says it all. You must be thinking, “Are you kidding me? Honey, I got hundreds of things to do!”
I get it. You are coming to the end of the semester.
There is the expectation to attend IEP, EIP, and 504 meetings before the winter break. You also have to finish those lesson plans with the team, complete the observation with your principal, and collect data from your class.
Oh, and then you have that holiday craft or assignment to do with the class.
As a new or veteran teacher, you have experienced what is called “Survival Mode.”
You might feel like you are barely holding on by a string, just counting down the days with your lovely students. You hope that you can get by on four hours of sleep so you can accomplish the tasks being asked of you — even though you are super organized and have help.
But let’s be real. Many times, as teachers, the only time we truly have time to catch up is on those breaks.
With the recent events, let’s think: will it benefit you, mentally, to do work over the break?
Below are five reasons why you need to recharge and leave the school bag at school.
1) You Don’t Want to Burnout
This is real. This is exactly what happened to me often as a teacher. This is one of the top reasons why people leave this admirable job. It becomes overwhelming, it becomes saturated with tests, assessments, lesson plans, gathering of materials, gathering of data — that it is too much.
Unfortunately, burnout can also cause serious health issues (panic attacks, other physical and mental health issues) and even the ability to do tasks efficiently — or correctly.
Leave the bag at school. Do what you can and prioritize what you can do in the meantime.
2) You Don’t Want to Lose Your Joy
While you are so focused on all of these things, you might get to the point where you lose the joy in teaching. You might be so focused on what you need to finish, you might lose out on the fun memories that you create daily with your co-workers and students.
Keep your joy by keeping your work at work.
Losing your joy in teaching can look like irritability, frequently becoming upset, rarely smiling, and finding it difficult to express joy in the small moments.
3) You Don’t Want to Drift Through Life
You can become so engrossed in the work itself, that you miss the fleeting memories you want to remember. The memories that you can no longer make. By drifting, you also take the focus off yourself, which we all need self-care.
Create those boundaries for yourself now so you do not get bogged down with the responsibilities of work. Learn to say yes to help and no to taking on additional tasks that you simply cannot do.
4) You Don’t Want to Create Negative Habits
The habits can be neglecting yourself. Either mentally or physically. If you neglect yourself, you cannot care efficiently for your students or even take off the tasks needed to help your students.
5) You deserve to have fun. Yep. I repeat: YOU DESERVE IT.
You deserve it because you sacrifice all the time to give to the kids when they need it, sometimes in situations that no one sees from the outside.
You deserve it because of the times you missed out on family events because you had work to do or data to analyze or a lesson plan to complete.
You deserve it for all the times you would come early or stay late to help with a project at school.
You deserve it because you help motivate the students to be model citizens and to have confidence in themselves. At the same time, you have to seem “put together” even when you feel like falling apart.
The work will always be present. The work will always be there. There will always be assessments, lessons, and collaborations.
But there is only one you.
You deserve to have a good time and to be refreshed so you can be ready to work after the holidays.
Conclusion
Burning out is a real thing. It can happen to the best of teachers. By being so focused, you can lose your joy of teaching kids. This is a trait that can be seen in others who are burned out from a heavy workload. When you become so focused on work, it can be hard to see past the task itself to what really matters in your life.
When you neglect this, you neglect yourself.
With all the sacrifices that you made, you deserve to enjoy the holiday and time with your family and friends.
Go watch that extra movie with your family, friends, and rejuvenate. Use those gifts the parents gave to you, sleep in for a day, hang out with a friend, and enjoy your life. Yes, and have that extra cookie! | https://medium.com/inspired-ideas-prek-12/leave-the-bag-at-school-this-holiday-break-a67253ce0627 | ['Mcgraw Hill'] | 2020-12-15 14:08:06.040000+00:00 | ['Winter Break', 'Mental Health', 'Art Of Teaching', 'Education', 'Teaching'] |
The Daily Struggle of Never Feeling Good Enough | If you’re anything like me, you’ve struggled through phases in your life where you simply felt not good enough.
Perhaps it was in relation to a new job, a new relationship, maybe a family member made a comment that stuck with you, and you still haven’t recovered.
You go through life in a constant state of anguish. You feel unworthy, undeserving, and simply… not enough.
You probably don’t even remember the last time you told yourself:
“I deserve this.”
Everybody has an inner critic. Sometimes, that inner critic could be a positive thing. It pushes you to keep going, and it pushes you to strive for more.
However, sometimes that same inner critic controls you and limits your potential.
Lately, I’ve been struggling with my own inner critic. I used to tell myself that the moment I would see growth in my business endeavors and life overall, I would be exceedingly happy. I would finally feel like I was headed in the right direction — I needed that validation.
I craved it.
However, instead of jumping for joy at the little wins, I’ve had — I’ve been getting more and more anxious, all the while, my inner critic has been chatting up a storm:
“You’re an imposter.”
“This is it; you’re not going to get any more wins.”
“You’re not that good.”
The moment you see anything good resulting from your hard work and dedication — your inner critic attacks and undermines you to protect you from the shame of failure.
Shame, also called the “master emotion,” is the feeling that you’re not worthy, competent, or good enough — in a sense, you’re rotten at the core.
Shame is what stops you from trying because the message it conveys is that you won’t be good enough, so you might as well not even try.
The little wins and achievements that I’ve had have always felt conditional.
Leon Seltzer, a psychologist, says: “The inner critic won’t let you see past your achievements as ‘real’ for fear that, if you do, you’ll slack off and end up a ne’er-do-well (a person who is lazy or irresponsible.)”
As a result, you push yourself even more, with diminishing returns, driven more by fear of failure than inspiration.
The Solution:
I used to think that the solution was that I had to shut down my inner critic.
Somehow.
Magically, I would get rid of it altogether. I wouldn’t ever hear another pip or squeak from it ever again.
I tried meditation first. Then journaling: I would write 5–10 affirmations every single morning. Yet, despite feeling more positive in the morning, the positivity would soon burn out as the day went by.
According to Ethan Kross, of the University of Michigan’s Emotion & Self Control Lab, shutting that voice down is not the solution. It won’t work because “the voice will return no matter how hard you try to suppress it. Nor is it always effective to analyze the emotions it rouses; that opens you to the risk of ruminating or reliving those feelings and getting stuck in a negative cycle.”
The best intervention may actually be to respond from a detached perspective — almost as if you were another person.
This is called self-distancing.
To self-distance, one replaces the first-person pronoun I with a non-first-person pronoun, you or he/she, when talking to themselves.
Example:
“I didn’t want to make an emotional decision. I wanted to do what was best for LeBron James and what would make him happy.” — Basketball player LeBron James describing his decision to leave his old team.
Essentially, when people referred to themselves in the second-person or using their own name, they were able to improve their ability to detach emotionally from the situation.
The theory behind this is that people tend to display high levels of wise reasoning when they give advice to others, but not when they decide how to act themselves.
However, when people use self-distancing techniques, by asking themselves what kind of advice they would give to a friend if they were in the same situation, people are able to reduce this asymmetry in their insights and apply the same reasoning skills to their own dilemmas as they would to those of others.
Another great tool is self-affirmation. It’s a helpful tool to offset self-criticism. When you hear a voice saying you’re inferior or deficient, you need to try to see the evidence that refutes it in our mind’s eye.
Affirmations can revise the negative messages we hear — or think we hear.
Final Thoughts:
Your self-critic will never truly go away. And, in all honesty, you don’t really want it to.
Your inner critic is what can push you to run the extra mile, stay up the extra hour, wake up a little earlier, and strive for a bit more each day.
It’s only when that inner critic begins to control your life that it gets dangerous.
The next time something happens and you feel the voice in the back of your head call you an imposter, or say that you aren’t good enough, utilize self-distancing, say positive self-affirmations, and remember that you’re good enough.
Every negative situation that happens to you is a lesson learned, an obstacle you overcame, and an achievement.
Why? Because you survived it — despite it hitting you so hard.
The level of self-awareness you’ve obtained and the amount of knowledge you gained is significantly higher than those who seem to have it easy.
Learn to be more kind to yourself. You’re more than good enough. | https://medium.com/candour/the-daily-struggle-of-never-feeling-good-enough-1a927e466af1 | ['Dayana Sabatin'] | 2020-07-09 22:29:47.583000+00:00 | ['Self-awareness', 'Self Improvement', 'Life Lessons', 'Self', 'Insecurity'] |
My Glaringly White Chiropractor | My Glaringly White Chiropractor
What will it take to make the office welcoming to Black patients?
Photo by Jesper Aggergaard on Unsplash
Chiropractic care has a race problem. I am by no means the first person to say this, but my recent experience receiving care has pushed me to face this reality. I go to one of the better chiropractors in town. Not once have I seen an African American person in my chiropractor’s office. No Black staff. No Black patients.
I live in a part of town that is at least 50% African American. My chiropractor’s office is in a suburb nearby that is slightly whiter, but still fairly mixed. Despite this, their patients and staff are all White.
In chiropractic care, this is the norm. Studies have found that over 97% of chiropractic patients are White, despite 12.6% of the US population being Black. Chiropractic care tends to serve underserved patients: rural patients, uninsured patients, and female patients whose pain has been ignored by mainstream medicine. But while White women whose pain has been dismissed by general practitioners find relief in chiropractic care, Black women and men do not.
Healthcare disparities are a well-established fact of the US medical system, but the case of chiropractic care is even worse than that of general medicine. While 54% of mainstream Medical Doctors are White, 85% of Chiropractic Doctors are White. Only 0.9% of Chiropractic Doctors are Black. Current enrollment in chiropractic courses shows no signs of this changing.
Black People Embrace Other Holistic Medicine
It is easy for White practitioners to assume that Black people simply must not be interested. I doubt this is true. I know enough Black people to see that they often embrace holistic and natural medical solutions. They are well aware that conventional medical systems are not meeting their needs.
We are seeing an increasing number of Black doulas taking on disparities in maternal outcomes. A growing number of Black-owned yoga studios focus not only on fitness but also on body positivity, countering the skinny white woman narrative of other studios. Take a leisurely stroll through our regional farmers’ market and you will see stalls operated by Black women offering natural health and skincare solutions.
So why would Black people not be coming in to see the chiropractor? In one word, trust.
Black people, for good reason, prefer Black doctors. White doctors too often ignore their pain and diminish their symptoms. White doctors too often fail at listening to Black patients or understanding the Black experience. They rarely study health conditions that disproportionately bother Black patients. And, far too often, White doctors cause and exacerbate Black patient’s pain.
Going to a chiropractor requires a ton of trust. You have to be willing to let someone feel all over your body and then literally re-arrange your bones. There is no way you are going to do that with someone who is talking down to you, talking over you, or in any way demeaning you. If you go for a visit and feel like the chiropractor is uncomfortable with you, you won’t keep coming back for treatments. You want a chiropractor who intuitively understands your body and your life.
Walking through the doors of a White chiropractor’s office is a huge risk if you are a Black person. You might do it if you had a close friend recommend the office to you. Otherwise, you probably will find someone else to address your health needs.
All White Messaging for All White People
The smart thing to do for chiropractors seeking to grow their patient base would be to establish partnerships that encourage Black people to refer their friends. But in many cases, practices have (I hope unwittingly) done the opposite.
I first found out about my chiropractor from a mother-baby support group, one that met at 10 AM on Tuesdays. The group was led by two well-meaning, but obviously privileged, White women. They had set up partnerships with several wellness organizations around town. They gave me a welcome packet with coupons for more than half off at the chiropractor. My pelvis was out of place after birth, so I went seeking relief. It worked out wonderfully for me, a White woman.
Once I was established as a regular patient, the chiropractor gave me coupons for introductory visits to give my friends and family. Wonderful. But this kind of friend-to-friend marketing tends to perpetuate racial divides, simply because our social circles are rarely as diverse as they ought to be.
Friend-to-friend marketing tends to perpetuate racial divides, simply because our social circles are rarely as diverse as they ought to be.
If word of mouth marketing from your existing patient base won’t increase diversity, then you are reliant on your print and web-based marketing. And here, again, my chiropractor’s office fails miserably. All the pictures are of White people. The web pictures, the marketing copy, the pictures posted in the office. Every — single — one. There’s not a sign at the door saying “Whites Only”, but there might as well be because a Black person walking in would quickly see that they are not the intended care-recipients.
The How and Why of Fixing This
Photo by Joyce McCown on Unsplash
If chiropractic care remains limited to White patients, then healthcare disparities will be increasingly exaggerated. Black populations already report higher levels of back pain and related disability — the primary condition that chiropractic care is known to alleviate. Not only is chiropractic care effective for spinal injuries, but it is also often far cheaper than conventional medicine. Given the lower income of Black populations, it could offer much needed financial relief as well as pain relief.
A 2019 report by the International Chiropractors Association and the American Black Chiropractors Association called for increased recruitment of Black students to chiropractic training. Certainly, more Black Chiropractors are needed. But it will be hard to convince someone to study chiropractic medicine if they have never had any experience with it. People tend to go into medical professions because they or their family members benefited from that profession.
To increase diversity among doctors of chiropractic, you need to increase diversity in the patients. I don’t know all the answers on this, but the principles are much the same as increasing diversity in other areas.
Approach the problem with humility. Admit that the fault lies within the White institutions. Check your own hiring, marketing, and outreach strategies to see where you have perpetuated White exclusivity. Then start making tangible changes. Stop hiring receptionists and assistants you already know, and cast a wider net. Hire a marketing consultant who knows how to craft messages for Black communities.
Ask the Black community what they would like to see. Partner with Black leaders — in this case, the doulas, yoga teachers, and pediatricians that the Black community trusts — to learn, build trust, and develop referral networks. Be willing to take your practice to their office to work in a space where their patients are comfortable. Expect to be met with skepticism initially. Stick with the relationships and show that you are someone who can be trusted to listen to Black professionals and Black patients.
Chiropractic care ought to be not only accessible but welcoming to everyone who needs it, regardless of race. Right now, it is not. But with some humility and intentionality, it can be. | https://jtatlow1.medium.com/my-glaringly-white-chiropractor-ebf26f7e4b2e | ['Johanna Tatlow'] | 2020-08-14 13:38:56.965000+00:00 | ['Equality', 'Wellness', 'Racism', 'Race', 'Holistic Health'] |
Anonymous functions in python (Lambda) | Small anonymous functions can be created with the lambda keyword in python
Photo by Kevin Ku on Unsplash
Lambda expressions in Python and other programming languages have their roots in lambda calculus, a model of computation invented by Alonzo Church
Semantically, they are just syntactic sugar for a normal function definition. Like nested function definitions, lambda functions can reference variables from the containing scope.
In Python, we generally use it as an argument to a higher-order function (a function that takes in other functions as arguments). Lambda functions are used along with built-in functions like filter(), map() etc.
(anonymous here just means nameless functions)
syntax:
The keyword: lambda
Arguments:
An expression
lambda arguments: expression
They are syntactically restricted to a single expression. ie, can pass any number of arguments, but can only perform one operation on them
functions are objects in python, Lambda functions can be used wherever function objects are required. ie. we can assign it to variables and use as with objects.ie,
variable = lambda arguments: expression
variable(arguments)
difference between a normal def defined function and lambda function
# Python code to illustrate cube of a number
# showing difference between def() and lambda().
def square(y):
return y*y;
g = lambda x: x*x
print(g(7))
print(cube(5))
Without using Lambda : Here, both of them returns the cube of a given number. But, while using def, we needed to define a function with a name cube and needed to pass a value to it. After execution, we also needed to return the result from where the function was called using the return keyword.
Here, both of them returns the cube of a given number. But, while using def, we needed to define a function with a name cube and needed to pass a value to it. After execution, we also needed to return the result from where the function was called using the return keyword. Using Lambda : Lambda definition does not include a “return” statement, it always contains an expression which is returned. We can also put a lambda definition anywhere a function is expected, and we don’t have to assign it to a variable at all. This is the simplicity of lambda functions.
Example of Lambda Function in python
1.
# Program to show the use of lambda functions
double = lambda x: x * 2 print(double(5))
In the above program, lambda x: x * 2 is the lambda function. Here x is the argument and x * 2 is the expression that gets evaluated and returned.
This function has no name. It returns a function object which is assigned to the identifier double . We can now call it as a normal function. The statement
double = lambda x: x * 2
is nearly the same as:
def double(x):
return x * 2
2.
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
Example use with filter()
The filter() function in Python takes in a function and a list as arguments.
The function is called with all the items in the list and a new list is returned which contains items for which the function evaluates to True .
Here is an example use of filter() function to filter out only even numbers from a list.
# Program to filter out only the even items from a list
my_list = [1, 5, 4, 6, 8, 11, 3, 12]
new_list = list(filter(lambda x: (x%2 == 0) , my_list))
print(new_list)
Example use with map()
The map() function in Python takes in a function and a list.
The function is called with all the items in the list and a new list is returned which contains items returned by that function for each item.
# Program to double each item in a list using map()
my_list = [1, 5, 4, 6, 8, 11, 3, 12]
new_list = list(map(lambda x: x * 2 , my_list))
print(new_list)
example with reduce
The reduce() function in Python takes in a function and a list as an argument. The function is called with a lambda function and a list and a new reduced result is returned. This performs a repetitive operation over the pairs of the list. This is a part of functools module. Example:
from functools import reduce
li = [5, 8, 10, 20, 50, 100]
sum = reduce((lambda x, y: x + y), li)
print (sum)
multi-key sort using lambda
data = [[1,3,4],[1,5,6],[1,4,1],[2,4,5],[2,1,7],[2,1,2]]
data = sorted(data,key=lambda l:(l[1], l[2], l[3]), reverse=False)
print data
Or you can achieve the same using itemgetter (which is faster and avoids a Python function call)(not lambda just a bonus code):
import operator
data = sorted(data, key = operator.itemgetter(1, 2, 3))
And notice that here you can use sort instead of using sorted and then reassigning:
data.sort(key = operator.itemgetter(1, 2, 3))
suggested links:
https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions
https://realpython.com/python-lambda/
https://www.geeksforgeeks.org/python-lambda-anonymous-functions-filter-map-reduce/
https://www.programiz.com/python-programming/anonymous-function | https://medium.com/dev-genius/anonymous-functions-in-python-lambda-5b860e9b4c0f | ['Sooraj Parakkattil'] | 2020-06-22 19:42:06.975000+00:00 | ['Coding', 'Programming', 'Software Development', 'Introduction', 'Python'] |
Dopamine: You’re Soaking in It | Soaring Solo: The Art of Thriving Alone
Dopamine: You’re Soaking in It
The key to short-circuiting unhealthy desires
Photo by Rod Long on Unsplash
If you want something, there’s a reason. A small fraction of the time it’s biological: you are hungry, thirsty, tired or have to use the bathroom. The majority of the time, however, it’s all about the dopamine.
Dopamine is a neurotransmitter that gives us an overall sense of pleasure and relaxation. As humans, we are constantly fluctuating between a state of desire and satiety. When we say we “feel good,” we usually mean that all our desires are met. (Note that this is very different from having all your needs met, as it is quite possible to have everything you need for survival, even to be quite comfortable and secure, but to still have desires.) Dopamine is simply the “feel good” chemical that gives us the sense, for the moment, that all is right with the world.
We are wired for survival, but at a caveman level. The activities that bring us pleasure are the very things that ensured our survival individually and as a species back when food was scarce and procreation was a pressing concern. However, some of these activities are no longer survival issues now that food is plentiful, procreation is better controlled, and our numbers are in no danger of declining. Evolution takes time, and while our brains and their capabilities have advanced, dopamine is still what motivates our behavior. Add to this the fact that we have a much greater variety of sources of dopamine than we did in the Paleolithic Era, and abuse and addiction are the result.
There are plenty of ways to get a dopamine hit: drugs, alcohol, caffeine, nicotine, gambling, sex, exercise, massage, physical touch, hugs, love, connection, and food (more specifically, what I like to call the four junk food groups — sweet, salty, fried and pizza). This is by no means an exhaustive list. In fact, many of the items on the list in my article The Habit Notebook are examples of healthy ways to get your dopamine hit.
Take sugar for example. Our caveman brains still motivate us to eat the things that are calorie-dense and bioavailable, thereby providing quick energy. In the Paleolithic Era, sugar was most common in fruit, which meant that it was often scarce and only available during certain times of the year. There was never any danger of over-consuming sugar — or any other food for that matter. With the current convenience and availability of products with high sugar content — in addition to the sinister concoction high-fructose corn syrup — we are eating and drinking sugar in alarming amounts, with Americans now consuming 135 pounds per person per year, on average — that’s 2.5 pounds a week.
Product development of the processed foods we eat centers mainly around how to get us to want more, which centers around dopamine, and the easiest (legal) way to trigger the release of dopamine is sugar. Ever wonder why you crave McDonalds when it’s not even that good? Yep, sugar. A Big Mac has nine grams of sugar — about the same as a peach. A Quarter Pounder contains ten grams. And that “healthy” chicken sandwich? Eleven grams.
But the issue isn’t so much about amounts as it is about the way processed sugar triggers the desire for more. Even a tiny amount of processed sugar will light up your dopamine receptors so that the caveman brain sends pleasure signals and encourages you to keep doing what you’re doing. But sugar from fruit is accompanied by fiber, which slows down its absorption. Sure, the dopamine hit isn’t quite as strong as it would be if you took forty-seven grams of processed sugar straight to the face (which is what you’d get with a package of Skittles), but it does elevate your dopamine levels in a way that is healthy and sustainable — and without the sugar crash that tends to lead to consumption of even more sugar, perhaps this time in the form of soda (10 grams of sugar per 12-ounce can).
We’re not likely to binge on a bowl of peaches because we’d get pretty full after two or three. Not so with processed sugar, as it takes up very little space in our stomachs, so the combination of a huge dopamine hit and lots of stomach space makes it almost impossible to resist once you’ve started. Also, scans have shown that brain activity after consuming sugar are similar to those after doing cocaine.
Consuming sugar all the time is like any drug — you develop a resistance and need more and more just to get the same high. When you quit for a few weeks, your sensitivity to sugar increases again and you’re able to achieve the same level of pleasure you used to get from large quantities of sugar from a healthy treat like fruit instead.
I tend to harp on sugar because it’s a serious problem. Of the addictive substances available legally and illegally, it is by far the most socially acceptable. And the worst culprit is soda, comprising a full one-third of the 2.5 pounds of sugar we consume weekly. This more than any other offender is the reason our society suffers from such alarming levels of obesity and diabetes. If you can, switch to unsweetened iced tea or water instead. It only takes a few weeks to kick the habit.
But it’s not just about sugar. It’s about anything you’re craving. If you’re craving something, your body wants a dopamine hit. Give it one. It doesn’t have to be the one you’re craving. Want sugar? Have some fruit. Want a cigarette? Go for a walk. Want alcohol? Spend time with a friend. Your brain won’t shut up about it until it gets the dopamine it needs, but you choose what method will provide it.
Video games have emerged in the past few decades as a source of dopamine, resulting in compulsive behavior for some. Passing a new level or defeating a particularly difficult foe provides the player with a sense of accomplishment and productivity — albeit a false one — and that stimulates dopamine production and increases feelings of pleasure and relaxation. (The loading page for the game Candy Crush plays on this fact with the advice “Relax and swipe the stress away.”) A good alternative to this particular habit is to create. Whether it’s painting, drawing, writing, woodworking, knitting, making jewelry, pottery, etc., find the art that speaks to you and make something. Yes, it takes longer, but it will provide the authentic sense of accomplishment and productivity you’re seeking.
If you don’t have a Person — someone who is in your corner, whether friend, relative or significant other — understand that you are at a great disadvantage by virtue of being alone. One of the most consistent and healthful sources of dopamine is having a Person. A partner can be a source of a variety of dopamine hits — hugging, communication, connection, laughter, kissing, sex, and the list goes on. Without a Person, however, you are charged with the task of maintaining healthy levels of dopamine without succumbing to addictive sources. Fear not, there is still plenty you can do to get your “wants” met in a healthy way.
Tune in next time for part two of this essay on dopamine where we’ll discuss ultimate high — the dopamine-oxytocin “cocktail” — as well as the specific tools you can use to encourage yourself opt for the healthful sources of these feel-good chemicals.
Adapted from the forthcoming book Soaring Solo: The Art of Thriving Alone by T.A. Pace. | https://medium.com/illumination/dopamine-youre-soaking-in-it-ab8697b436cd | ['T.A. Pace'] | 2020-12-04 16:57:50.608000+00:00 | ['Alone', 'Self Improvement', 'Personal Development', 'Diet', 'Psychology'] |
CryptovationX: Media Coverage | Since CryptovationX was founded, the company has rapidly gained a widespread media coverage including TV, newspaper, online media, and press conferences.
Pondet Ananchai, CEO, was interviewed by Thai National News (TNN) on the cryptocurrency regulations in China. As the market went down and many investors had lost their fortune, we had spotted more arbitraging opportunities. By arbitraging, it does not matter if a market goes up or down, as long as there is asymmetric price on different exchange.
On February 12th, Pondet was invited to live interview on TNN24 along with Dr. Bhume Bhumiratana, technology advisor to SEC Thailand. The program involved discussions around the topic of blockchain technology and cryptocurrency on various aspects: regulations, investments, and recommendations for Thailand. (Recorded video is available on youtube: https://www.youtube.com/watch?v=2BoBAoW_l94)
Three-days later, Cryptovation hosted the ICO launch press conference and introduced CXA as the cryptocurrency for Robo-advisory and ‘Wealth for All’ Initiative at Intercontinental Hotel. The event involved discussions about blockchain, cryptocurrency, ICO, Artificial Intelligence, and the future investment opportunities.
CryptovationX was also published on Bangkok Post, the leading English newspaper in Thailand, about the ICO launch which quickly gained popularity among cryptocurrency and blockchain community in Thailand. You can read the full article at https://www.bangkokpost.com/business/news/1412859/.
Recently after the cryptocurrency regulation was released, Pondet was also invited to join the live interview ‘CRYPTO Talk’. He had summarized the overall impact of this regulatory framework for crypto-related businesses. He believes that it is a positive action because this means Thailand has officially recognized the status of cryptocurrency and it became the first country in Southeast Asia that has outlined a lawful act on cryptocurrency. However, tax regulation will cause some negative effects on capital gain for both foreign investors and local investors who does not want to pay tax as much as 15%. Moreover, he mentioned that CryptovationX project will fully comply with Thai law since the company is operating in Thailand.
More Information:
Website: CryptovationX.io
Telegram: t.me/CryptovationX_CXA
Facebook: facebook.com/CryptovationX
Twitter: twitter.com/CryptovationX
Medium: medium.com/CryptovationX
Instagram: instagram.com/CryptovationX
#CryptovationX #CXA #RoboAdvisory #Airdrop
‘The Best Friend for Crypto Investors’ | https://medium.com/cryptovationx/cryptovationx-media-coverage-2753991f29df | [] | 2018-07-02 14:02:12.320000+00:00 | ['Artificial Intelligence', 'Bitcoin', 'Blockchain', 'Cryptocurrency', 'Thailand'] |
Our Love | How much our love has grown.
Your hugs, smiles, and cute face.
You make me happy than I have ever known. | https://medium.com/3-lines-story/our-love-2f2936ced348 | ['Pawan Kumar'] | 2020-06-07 09:59:18.275000+00:00 | ['Storytelling', 'Poetry', 'Relationships', 'Dating', 'Love'] |
Accelerator Perception Report | Accelerator Perception Report
Executive Summary
In December 2018, Impact Africa Network conducted a survey with entrepreneurs in Nairobi with the aim of understanding perceptions of, and experiences with startup accelerators. The questions we were seeking to answer were:
Are entrepreneurs in Nairobi experiencing accelerator fatigue?
2. Beyond help fundraising do entrepreneurs see any value in accelerators?
3. How well do entrepreneurs understand accelerators?
4. What are the top ecosystem gaps entrepreneurs struggle with?
Click here for the report | https://medium.com/impact-africa-network/accelerator-perception-report-cfad664269de | ['Mark Karake'] | 2019-03-18 09:38:30.513000+00:00 | ['Nairobi', 'Startup', 'Venture Capital', 'Africa', 'Accelerator'] |
The researcher’s journey: leveling up as a user researcher | I. Process mastery
At this execution-focused stage, your questions around projects will focus on the nuts and bolts:
Who? - Target audience What? - Method When? - When do you want it?
When a product manager says “we should do a usability test” or a designer “needs to talk to customers,” it’s a chance to hone basic skills. Growth comes through experience and reflection, forcing yourself to ask “why do we want to do this?”, “what are we really trying to find out?”, and “is this the right way to get there?”
II. Technical competence
Every interview in and of itself can be a hurdle, and it’s hard to see the forest (project) for the trees (each instance of execution). As a junior level researcher, you must become competent in executing on the basics:
Recruiting
Interviewing
Interview note-taking
Interview debriefing
Observation
Data collection
Surveying
User testing
Simple reporting
Research synthesis outputs are facts, incidents, and simple behavioral insights. You’re ready to move on when these basic pieces can be smartly combined and deployed to ensure a successful project.
III. Organizational influence
Junior researchers strive to empower the organization with insights, and answer on-hand questions. Your influence develops on the strength of that execution, and the sense of judgment that you hone as you learn what your users need and how they do their work:
Credible reporting
Fair, honest judgement of design and product
Interaction-level and usability authority
Failure to have an impact (e.g. aforementioned report on January 3, 2011) is especially instructive: “It’s so clear to me that X is true, and I believe we should do Y. But nobody else sees it — what’s happening?” There’s an outside-in perspective flip that precedes growth, akin to the ideas underlying the practice of service design. It’s not about the great studies that you can do, it’s about finding out what the team needs to push work forward. | https://uxdesign.cc/the-researchers-journey-leveling-up-as-a-user-researcher-a85cd35b53f5 | ['Dave Hora'] | 2020-06-17 20:02:24.316000+00:00 | ['Marketing', 'Customer Experience', 'User Research', 'UX', 'Design Process'] |
Thoughts on “The Word is Murder” by Anthony Horowitz | If Anthony Horowitz isn’t a more familiar name to you, he should be. Fans of the BBC known him firstly as the creator of Foyle’s War, among the best TV mysteries ever. Others know him as the author of the Alex Rider series of YA spy novels. On occasion, he turns his busy pen to more adult novels. His last, Magpie Murders, was a witty delight about the mores and conventions of the modern mystery, wrapped nicely around a good puzzle.
Now he’s sent two new books out the gate at once. One of them is Forever and a Day (a James Bond novel). What concerns us here is The Word is Murder, a standalone follow-up to Magpie.
In Magpie, Horowitz cleverly touched on the tensions between fictional crime and real-life crime. In The Word is Murder, he goes farther and deeper with excellent results. It's that rarest of books in this hyper-serious era — a genuine tour-de-force and a great excuse to stay put in your reading chair.
The novel opens with the last day in the life of Diana Cowper, a middle-aged Englishwoman and theatre enthusiast. She starts her day arranging her own funeral. By the end of it, she’s been murdered, strangled with a curtain rope.
Scotland Yard, as you’d expect, is baffled by a murder whose victim seems to have anticipated her own demise. For help, they turn reluctantly to Daniel Hawthorne, a former investigator, to help crack the case. Dismissed in disgrace from the force some years before, Hawthorne now freelances as a technical advisor to both BBC mystery shows and the Yard, two relationships driven by necessity and steeped in mutual distaste.
Hawthorne brings his own agenda to the hunt for Ms. Cowper’s murderer: He plans to publish a book about himself to settle some scores and restore his tattered reputation. No writer himself, this modern Sherlock Holmes needs a Watson to follow alongside as he brilliantly solves the murder of Diana Cowper, revealing his genius to the world.
For his Watson, Hawthorne reaches out to a writer, a former client whom he advised on a previous BBC mystery show, a real one in fact, known by the title Injustice. Its writer was a noted screenwriter and novelist, a guy named . . . Anthony Horowitz.
For Horowitz, as for everyone else at Scotland Yard and beyond, once around the track with Hawthorne was enough. Even so, he finds himself drawn to Hawthorne's offer and the crime in question.
The collaboration is a rocky one from the start as Hawthorne rips up Horowitz’s first chapter (which you’ll have just read). His demanding presence knocks Horowitz’s writing life off its keel, culminating in a disastrous encounter between Horowitz, Hawthorne and two real-life pop-culture titans, a hilarious moment that will be an absolute hoot if it ever makes it to the screen.
Despite the chaos brought by Hawthorne, there’s nothing like murder to get a genre writer’s juices flowing, and Horowitz, fascinated by his rather repellent but brilliant client, slowly becomes determined to stick with the case, no matter how obnoxious Hawthorne becomes or how great the dangers that lie in wait.
Anthony Horowitz clearly had a swell time weaving himself into his own novel. He makes the most of it as he deftly maneuvers that weird territory between the real-life creator of Foyle’s War and the meta-fictional Horowitz who becomes entangled in a mystery novel. What’s factual and what’s not is one of the book’s entertaining puzzles.
The Word is Murder could have wound up a dispensable bit of self-referential post-modernism but instead provides a vivid, funny look at the joys and frustrations of a successful writer’s life. It also explores the problems inherent in the creation of literary characters. Saints make for pasteboard heroes in fiction, but, we all wonder, how much bad can we put up with in our good guys?
Hawthorne, you should know, is a flaming bastard, a bigot who’s smart about many things but ignorant of much else, a man sometimes deeply “unwoke.” He may be the intellectual equal of Sherlock Holmes, but he makes Holmes seem like Philip Marlowe as played by George Clooney.
Horowitz never answers this question — as a novelist myself, I’d say there really isn’t an answer beyond “whatever I think I can get away with.” But he makes it stick nicely in the memory, long after events draw to their suspenseful, action-packed finish.
While he can’t provide an answer regarding Hawthorne, Horowitz does provide an antidote with his charming and genial self-portrait. We learn a fair bit about Anthony Horowitz, the successful writer, as he gracefully takes us behind the scenes of TV, theatre, and movie productions and even into the back parlors of the funeral business.
The Word is Murder is a fun read that leaves you thinking. I can’t wait for the movie. And I bet Horowitz can’t either. Read carefully, and you’ll find a clue as to whom he wants cast as Hawthorne. I don’t know about you, but I’ll be watching.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Thomas Burchfield is the author of Butchertown, a ripping 1920s gangster thriller that was praised as “incendiary” by David Corbett (The Long-Lost Love Letters of Doc Holliday; The Art of Character). His contemporary vampire novel Dragon’s Ark won the IPPY, NIEA, and Halloween Book Festival awards for horror in 2012. He’s also author of the original screenplays Whackers, The Uglies, Now Speaks the Devil and Dracula: Endless Night (e-book editions only). Published by Ambler House Publishing, all are available at Amazon, Barnes and Noble, Powell’s Books, and other retailers. His reviews have appeared in Bright Lights Film Journal and The Strand, and he recently published a two-part look at the life and career of the great film villain (and spaghetti western star) Lee Van Cleef in Filmfax. He lives in Northern California with his wife, Elizabeth.
https://thomburchfield.medium.com/thoughts-on-the-word-is-murder-by-anthony-horowitz-884ff07ced09 | Thomas Burchfield | January 16, 2019